Security

Mozilla ranks dozens of popular ‘smart’ gift ideas on creepiness and security

If you’re planning on picking up some cool new smart device for a loved one this holiday season, it might be worth your while to check whether it’s one of the good ones or not. Not just in the quality of the camera or step tracking, but the security and privacy practices of the companies that will collect (and sell) the data it produces. Mozilla has produced a handy resource ranking 70 of the latest items, from Amazon Echos to smart teddy bears.

Each of the dozens of toys and devices is graded on a number of measures: what data does it collect? Is that data encrypted when it is transmitted? Who is it shared with? Are you required to change the default password? And what’s the worst-case scenario if something goes wrong?

Some of the security risks are inherent to the product — for example, security cameras can potentially see things you’d rather they didn’t — but others are oversights on the part of the company: security practices like honoring account deletion requests, not sharing data with third parties, and so on.

At the top of the list are items getting most of it right — this Mycroft smart speaker, for instance, uses open-source software, and the company behind it makes all the right choices. Their privacy policy is even easy to read! Lots of gadgets seem just fine, really. This list doesn’t just trash everything.

On the other hand, you have something like this Dobby drone. They don’t seem to even have a privacy policy — bad news when you’re installing an app that records your location, HD footage and other stuff! Similarly, this Fredi baby monitor comes with a bad password you don’t have to change, and has no automatic security updates. Are you kidding me? Stay far, far away.

Altogether, 33 of the products met Mozilla’s recently proposed “minimum security standards” for smart devices (and got a nice badge); 7 failed; and the rest fell somewhere in-between. In addition to these official measures there’s a crowdsourced (hopefully not to be gamed) “creep-o-meter” where prospective buyers can indicate how creepy they find a device. But why is BB-8 creepy? I’d take that particular metric with a grain of salt.

Powered by WPeMatico

Google’s Project Fi gets an improved VPN service

Google’s Project Fi wireless service is getting a major update today that introduces an optional always-on VPN service and a smarter way to switch between Wi-Fi and cellular connections.

By default, Fi already uses a VPN service to protect users when they connect to the roughly two million supported Wi-Fi hotspots. Now, Google is expanding this to cellular connections, as well. “When you enable our enhanced network, all of your mobile and Wi-Fi traffic will be encrypted and securely sent through our virtual private network (VPN) on every network you connect to, so you’ll have the peace of mind of knowing that others can’t see your online activity,” the team writes in today’s announcement.

Google notes that the VPN also shields all of your traffic from Google itself and that it isn’t tied to your Google account or phone number.

The VPN is part of what Google calls its “enhanced network” and the second part of this announcement is that this network now also allows for a faster switch between Wi-Fi and mobile networks. When you enable this — and both of these features are currently in beta and only available on Fi-compatible phones that run Android Pie — your phone will automatically detect when your Wi-Fi connection gets weaker and fill in those gaps with cellular data. The company says that in its testing, this new system reduces a user’s time without a working connection by up to 40 percent.
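For the curious, the gap-filling behavior Google describes can be pictured as a simple hysteresis rule. The thresholds and logic below are our own illustration, not Google’s actual implementation:

```python
# Illustrative sketch of Wi-Fi/cellular handoff with hysteresis.
# The dBm thresholds here are assumptions for illustration only.

WEAK_DBM = -75    # below this, start filling gaps with cellular
STRONG_DBM = -65  # above this, return to Wi-Fi only

def choose_link(wifi_dbm: int, on_cellular: bool) -> str:
    """Pick a link, using hysteresis (two different thresholds) so the
    phone doesn't flap between networks when the signal hovers near
    a single cutoff point."""
    if on_cellular:
        # Only go back to Wi-Fi once the signal is comfortably strong.
        return "wifi" if wifi_dbm >= STRONG_DBM else "cellular"
    # Only leave Wi-Fi once the signal is clearly weak.
    return "cellular" if wifi_dbm <= WEAK_DBM else "wifi"
```

The gap between the two thresholds is what keeps the connection stable while the phone "fills in" weak Wi-Fi with cellular data.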

These new features will start rolling out to Fi users later this week. They are off by default, so you’ll have to head to the Fi Network Tools in the Project Fi app and turn them on to get started. One thing to keep in mind here: Google says your data usage will likely increase by about 10 percent when you use the VPN.

Utah man pleads guilty to causing 2013 gaming service outages

A Utah man has pleaded guilty to computer hacking charges, after admitting to knocking several gaming services offline five years ago.

Austin Thompson, 23, launched several denial-of-service attacks against EA’s Origin, Sony PlayStation and Valve’s Steam gaming services during the December holiday season in 2013.

At the time, those denial-of-service attacks made it near-impossible for some gamers — many of whom had bought new consoles or games in the run-up to Christmas — to play titles like League of Legends and Dota 2, because those games required access to the network.

Specifics of Thompson’s plea deal were not publicly available at the time of writing, but prosecutors said Thompson — aged 18 at the time of the attacks — flooded the gaming giants’ networks “with enough internet traffic to take them offline.”

Thompson would take to his Twitter account, @DerpTrolling, to announce his targets ahead of time, and posted screenshots of downed services in the aftermath of his attacks. Thompson’s attacks caused upwards of $95,000 in damages, prosecutors said.

“The attacks took down game servers and related computers around the world, often for hours at a time,” said Adam Braverman, U.S. attorney for the Southern District of California, in a statement.

“Denial-of-service attacks cost businesses millions of dollars annually,” said Braverman. “We are committed to finding and prosecuting those who disrupt businesses, often for nothing more than ego.”

Thompson faces up to 10 years in prison and is scheduled to be sentenced in March.

Security researchers have busted the encryption in several popular Crucial and Samsung SSDs

Researchers at Radboud University have found critical security flaws in several popular Crucial and Samsung solid state drives (SSDs), which they say can be easily exploited to recover encrypted data without knowing the password.

The researchers, who detailed their findings in a new paper out Monday, reverse engineered the firmware of several drives to find a “pattern of critical issues” across the device makers.

In the case of one drive, the master password used to decrypt the drive’s data was just an empty string, and the flaw could be easily exploited by flipping a single bit in the drive’s memory. Another drive could be unlocked with “any password” by crippling the drive’s password validation checks.
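To make those two flaw classes concrete, here’s a toy sketch — the class and field names are invented for illustration, and real drives implement this in firmware, not Python:

```python
# Toy model of the two flaw classes the researchers describe:
# (1) an empty master password, and (2) a validation check that
# can be disabled by flipping a single bit in drive memory.

class ToyDrive:
    def __init__(self):
        self.master_password = "s3cret"
        self.validation_enabled = True  # one bit in drive memory

    def unlock(self, password: str) -> bool:
        if not self.validation_enabled:
            return True          # crippled check: "any password" works
        return password == self.master_password

drive = ToyDrive()
drive.master_password = ""       # flaw 1: master password is ""
print(drive.unlock(""))          # True — empty string opens the drive

drive2 = ToyDrive()
drive2.validation_enabled = False  # flaw 2: flip the validation bit
print(drive2.unlock("anything"))   # True — any password works
```

In both cases the user’s own password never factors into the actual decryption, which is why the data can be recovered without it.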

That wouldn’t be much of a problem if an affected drive also used software encryption to secure its data. But the researchers found that in the case of Windows computers, often the default policy for BitLocker’s software-based drive encryption is to trust the drive — and therefore rely entirely on a device’s hardware encryption to protect the data. Yet, as the researchers found, if the hardware encryption is buggy, BitLocker isn’t doing much to prevent data theft.

In other words, users “should not rely solely on hardware encryption as offered by SSDs for confidentiality,” the researchers said.

Alan Woodward, a professor at the University of Surrey, said that the greatest risk to users is the drive’s security “failing silently.”

“You might think you’ve done the right thing enabling BitLocker but then a third-party fault undermines your security, but you never know and never would know,” he said.

Matthew Green, a cryptography professor at Johns Hopkins, described the BitLocker flaw in a tweet as “like jumping out of a plane with an umbrella instead of a parachute.”

The researchers said that their findings are not yet finalized — pending a peer review. But the research was made public after disclosing the bugs to the drive makers in April.

Crucial’s MX100, MX200 and MX300 drives, Samsung’s T3 and T5 USB external disks and Samsung 840 EVO and 850 EVO internal SSDs are known to be affected, but the researchers warned that many other drives may also be at risk.

The researchers criticized the device makers’ proprietary, closed-source cryptography, which they said — and demonstrated — is “often shown to be much weaker in practice” than open-source, auditable cryptographic implementations. “Manufacturers that take security seriously should publish their crypto schemes and corresponding code so that security claims can be independently verified,” they wrote.

The researchers recommend using software-based encryption, like the open-source software VeraCrypt.

In an advisory, Samsung also recommended that users install encryption software to prevent any “potential breach of self-encrypting SSDs.” Crucial’s owner Micron is said to have a fix on the way, according to an advisory from the Netherlands’ National Cyber Security Centre, though the advisory did not say when.

Micron did not immediately respond to a request for comment.

Civil servant who watched porn at work blamed for infecting a US government network with malware

A U.S. government network was infected with malware thanks to one employee’s “extensive history” of watching porn on his work computer, investigators have found.

The audit, carried out by the U.S. Department of the Interior’s inspector general, found that a U.S. Geological Survey (USGS) network at the EROS Center, a satellite imaging facility in South Dakota, was infected after an unnamed employee visited thousands of porn pages that contained malware, which downloaded to his laptop and “exploited the USGS’ network.” Investigators found that many of the porn images were “subsequently saved to an unauthorized USB device and personal Android cell phone,” which was connected to the employee’s government-issued computer.

Investigators found that his Android cell phone “was also infected with malware.”

The findings were made public in a report earlier this month but buried on the U.S. government’s oversight website and went largely unreported.

It’s bad enough in this day and age that a government watchdog has to remind civil servants to not watch porn at work — let alone on their work laptop. The inspector general didn’t say what the employee’s fate was, but ripped into the Department of the Interior’s policies for letting him get that far in the first place.

“We identified two vulnerabilities in the USGS’ IT security posture: web-site access and open USB ports,” the report said.

There is a (slightly) bright side. The EROS Center, which monitors and archives images of the planet’s land surface, doesn’t operate any classified networks, a spokesperson for Interior’s inspector general told TechCrunch in an email, ruling out any significant harm to national security. But the spokesperson wouldn’t say what kind of malware was used — only that, “the malware helps enable data exfiltration and is also associated with ransomware attacks.”

Investigators recommended that USGS enforce a “strong blacklist policy” of known unauthorized websites and “regularly monitor employee web usage history.”

The report also said the agency should lock down its USB drive policy, restricting employees from using removable media on government devices, but it’s not known if the recommendations have yet gone into place. USGS did not return a request for comment.

Smart home makers hoard your data, but won’t say if the police come for it

A decade ago, it was almost inconceivable that nearly every household item could be hooked up to the internet. These days, it’s near impossible to avoid smart home gadgets, and they’re vacuuming up a ton of new data that we’d never normally think about.

Thermostats know the temperature of your house, and smart cameras and sensors know when someone’s walking around your home. Smart assistants know what you’re asking for, and smart doorbells know who’s coming and going. And thanks to the cloud, that data is available to you from anywhere — you can check in on your pets from your phone or make sure your robot vacuum cleaned the house.

Because the data is stored or accessible by the smart home tech makers, law enforcement and government agencies have increasingly sought data from the companies to solve crimes.

And device makers won’t say if your smart home gadgets have been used to spy on you.

For years, tech companies have published transparency reports — a semi-regular disclosure of the number of demands or requests a company gets from the government for user data. Google was first in 2010. Other tech companies followed in the wake of Edward Snowden’s revelations that the government had enlisted tech companies’ aid in spying on their users. Even telcos, implicated in wiretapping and turning over Americans’ phone records, began to publish their figures to try to rebuild their reputations.

As the smart home revolution began to thrive, police saw new opportunities to obtain data where they hadn’t before. Police sought Echo data from Amazon to help solve a murder. Fitbit data was used to charge a 90-year-old man with the murder of his stepdaughter. And recently, Nest was compelled to turn over surveillance footage that led to gang members pleading guilty to identity theft.

Yet, Nest — a division of Google — is the only major smart home device maker that has published how many data demands it receives.

As first noted by Forbes last week, Nest’s little-known transparency report doesn’t reveal much — only that it’s turned over user data about 300 times since mid-2015 on over 500 Nest users. Nest also said it hasn’t to date received a secret order for user data on national security grounds, such as in cases of investigating terrorism or espionage. Nest’s transparency report is woefully vague compared to some of the more detailed reports by Apple, Google and Microsoft, which break out their data requests by lawful request, by region and often by the kind of data the government demands.

As Forbes said, “a smart home is a surveilled home.” But at what scale?

We asked some of the most well-known smart home makers on the market if they plan to release a transparency report, or disclose the number of demands they receive for data from their smart home devices.

For the most part, we received fairly dismal responses.

What the big four tech giants said

Amazon did not respond to requests for comment when asked if it will break out the number of demands it receives for Echo data, but a spokesperson told me last year that while its reports include Echo data, it would not break out those figures.

Facebook said that its transparency report section will include “any requests related to Portal,” its new hardware screen with a camera and a microphone. Although the device is new, a spokesperson did not say whether the company will break out the hardware figures separately.

Google pointed us to Nest’s transparency report but did not comment on its own efforts in the hardware space — notably its Google Home products.

And Apple said that there’s no need to break out its smart home figures — such as its HomePod — because there would be nothing to report. The company said user requests made to HomePod are given a random identifier that cannot be tied to a person.

What the smaller but notable smart home players said

August, a smart lock maker, said it “does not currently have a transparency report and we have never received any National Security Letters or orders for user content or non-content information under the Foreign Intelligence Surveillance Act (FISA),” but did not comment on the number of subpoenas, warrants and court orders it receives. “August does comply with all laws and when faced with a court order or warrant, we always analyze the request before responding,” a spokesperson said.

Roomba maker iRobot said it “has not received any demands from governments for customer data,” but wouldn’t say if it planned to issue a transparency report in the future.

Both Arlo, the former Netgear smart home division, and Signify, formerly Philips Lighting, said they do not have transparency reports. Arlo didn’t comment on its future plans, and Signify said it has no plans to publish one. 

Ring, a smart doorbell and security device maker, did not answer our questions on why it doesn’t have a transparency report, but said it “will not release user information without a valid and binding legal demand properly served on us” and that Ring “objects to overbroad or otherwise inappropriate demands as a matter of course.” When pressed, a spokesperson said it plans to release a transparency report in the future, but did not say when.

Spokespeople for Honeywell and Canary — both of which have smart home security products — did not comment by our deadline.

And, Samsung, a maker of smart sensors, trackers and internet-connected televisions and other appliances, did not respond to a request for comment.

Only Ecobee, a maker of smart switches and sensors, said it plans to publish its first transparency report “at the end of 2018.” A spokesperson confirmed that, “prior to 2018, Ecobee had not been requested nor required to disclose any data to government entities.”

All in all, that paints a fairly dire picture for anyone worried that the gadgets in their home, even when not working for them, could be working for the government.

As helpful and useful as smart home gadgets can be, few fully understand the breadth of data that the devices collect — even when we’re not using them. Your smart TV may not have a camera to spy on you, but it knows what you’ve watched and when — data that police have used to secure the conviction of a sex offender. Even records of when a suspect pressed the button on his home alarm key fob were enough to help convict him of murder.

Two years ago, former U.S. director of national intelligence James Clapper said the government was looking at smart home devices as a new foothold for intelligence agencies to conduct surveillance. And it’s only going to become more common as the number of internet-connected devices grows. Gartner said more than 20 billion devices will be connected to the internet by 2020.

The chances that the government is spying on you through the internet-connected camera in your living room or your thermostat are slim — but it’s naive to think that it can’t.

But the smart home makers wouldn’t want you to know that. At least, most of them.

Buggy software in popular connected storage drives can let hackers read private data

Security researchers have found flaws in four popular connected storage drives that they say could let hackers access a user’s private and sensitive data.

Researchers Paulos Yibelo and Daniel Eshetu said the software running on three of the devices they tested — the NetGear Stora, Seagate Home and Medion LifeCloud — can allow an attacker to remotely read, change and delete data without requiring a password.

Yibelo, who shared the research with TechCrunch this week and posted the findings Friday, said that many other devices may be at risk.

The software, Hipserv, built by tech company Axentra, was largely to blame for three of the four flaws they found. Hipserv is Linux-based, and uses several web technologies — including PHP — to power the web interface. But the researchers found that bugs could let them read files on the drive without any authentication. It also meant they could run any command they wanted as “root” — the built-in user account with the highest level of access — making the data on the device vulnerable to prying eyes or destruction.
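For readers unfamiliar with the bug class, here’s a hedged illustration — the function and parameter are invented, and Hipserv’s real interface is PHP, but the shape of the flaw is the same: unauthenticated user input flowing straight into a shell command running with root privileges:

```python
# Sketch of the bug *class* described: an endpoint that builds a
# shell command from unauthenticated user input. The handler name
# and parameter are invented for illustration.

import subprocess

def vulnerable_handler(filename: str) -> str:
    # No authentication check, and the input is concatenated into a
    # shell command — on the real devices, one running as root.
    result = subprocess.run("ls " + filename, shell=True,
                            capture_output=True, text=True)
    return result.stdout

# A crafted "filename" smuggles in a second command after the ";":
print(vulnerable_handler("/tmp; echo INJECTED"))
```

On an affected drive, the injected command could just as easily read, change or delete any file — which is exactly the access the researchers reported.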

We contacted Axentra for comment on Thursday but did not hear back by the time of writing.

A Netgear spokesperson said that the Stora is “no longer a supported product… because it has been discontinued and is an end of life product.” Seagate did not comment by our deadline, but we’ll update if that changes. Lenovo, which now owns Medion, did not respond to a request for comment.

The researchers also reported a separate bug affecting WD My Book Live drives, which can allow an attacker to remotely gain root access.

A spokesperson for WD said that the vulnerability report affects devices originally introduced in 2010 and discontinued in 2014, and “no longer covered under our device software support lifecycle.” WD added: “We encourage users who wish to continue operating these legacy products to configure their firewall to prevent remote access to these devices, and to take measures to ensure that only trusted devices on the local network have access to the device.”

In all four vulnerabilities, the researchers said that an attacker only needs to know the IP address of an affected drive. That isn’t so difficult in this day and age, thanks to sites like Shodan, a search engine for publicly available devices and databases, and similar search and indexing services.

Depending on where you look, the number of affected devices varies. Shodan puts the number at 311,705, but ZoomEye puts the figure at closer to 1.8 million devices.

Although the researchers described the bugs in moderate detail, they said they have no plans to release any exploit code, to prevent attackers from taking advantage of the flaws.

Their advice: If you’re running a cloud drive, “make sure to remove your device from the internet.”

Here’s how Google is revamping Gmail and Android security

Eager to change the conversation from its years-long exposure of user data via Google+ to the bright, shining future the company is providing, Google has announced some changes to the way permissions are approved for Android apps. The new process will be slower, more deliberate and, hopefully, more secure.

The changes are part of “Project Strobe,” a “root-and-branch review of third-party developer access to Google account and Android device data and our philosophy around apps’ data access.” Essentially they decided it was time to update the complex and likely not entirely cohesive set of rules and practices around those third-party developers and API access.

One of those roots (or perhaps branches) was the bug discovered inside Google+, which theoretically (the company can’t tell if it was abused or not) exposed non-public profile data to apps that should have received only a user’s public profile. This, combined with the fact that Google+ never really justified its own existence in the first place, led to the service essentially being shut down. “The consumer version of Google+ currently has low usage and engagement,” Google admitted. “90 percent of Google+ user sessions are less than five seconds.”

But the team doing the review has plenty of other suggestions to improve the process of informed consent to sharing data with third parties.

The first change is the most user-facing. When an application wants to access your Google account data — say your Gmail, Calendar and Drive contents for a third-party productivity app — you’ll have to approve each one of those separately. You’ll also have the opportunity to deny access to one or more of those requests, so if you never plan on using the Drive functionality, you can just nix it and the app will never get that permission.

These permissions can also be delayed and gated behind the actions that require them. For instance, if this theoretical app wanted to give you the opportunity to take a picture to add to an email, it wouldn’t have to ask up front when you download it. Instead, when you tap the option to attach a picture, it would ask permission to access the camera then and there. Google went into a little more detail on this in a post on its developer blog.
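The per-scope, ask-when-needed model can be sketched in a few lines. The scope names below echo Google’s OAuth scopes, but the flow itself is our simplification:

```python
# Sketch of per-scope consent with deferred requests: each scope is
# approved or denied individually, and a scope can be requested later,
# at the moment the user actually triggers the feature that needs it.

class Account:
    def __init__(self):
        self.granted = set()

    def request_scopes(self, scopes, user_approves):
        # Each scope gets its own approve/deny decision.
        for scope in scopes:
            if user_approves(scope):
                self.granted.add(scope)

    def can(self, scope):
        return scope in self.granted

acct = Account()

# Up front, the user approves Gmail and Calendar but denies Drive —
# the app keeps working, just without the Drive feature.
acct.request_scopes(["gmail", "calendar", "drive"],
                    lambda s: s != "drive")
print(acct.can("drive"))   # False

# Later, tapping "attach a photo" triggers the camera request
# then and there, rather than at install time.
acct.request_scopes(["camera"], lambda s: True)
print(acct.can("camera"))  # True
```

The key design point is that denial of one scope doesn’t block the others, which is exactly the behavior Google describes.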

Notably there is only the option to “deny” or “allow,” but no “deny this time” or “allow this time,” which I find to be useful when you’re not totally on board with the permission in question. You can always revert the setting manually, but it’s nice to have the option to say “okay, just this once, strange app.”

The changes will start rolling out this month, so don’t be surprised if things look a little different next time you download a game or update an app.

The second and third changes have to do with limiting which data from your Gmail and messaging can be accessed by apps, and which apps can be granted access in the first place.

Specifically, Google is restricting access to these sensitive data troves to apps “directly enhancing email functionality” for Gmail and your default calling and messaging apps for call logs and SMS data.

There are some edge cases where this might be annoying to power users; some have more than one messaging app that falls back to SMS or integrates SMS replies, and this might require those apps to take a new approach. And apps that want access to these things may have trouble convincing Google’s review authorities that they qualify.

Developers also will need to review and agree to a new set of rules governing what Gmail data can be used, how they can use it and the measures they must have in place to protect it. For example, apps are not allowed to “transfer or sell the data for other purposes such as targeting ads, market research, email campaign tracking, and other unrelated purposes.” That probably puts a few business models out of the running.

Apps looking to handle Gmail data will also have to submit a report detailing “application penetration testing, external network penetration testing, account deletion verification, reviews of incident response plans, vulnerability disclosure programs, and information security policies.” No fly-by-night operations permitted, clearly.

There also will be additional scrutiny on what permissions developers ask for to make sure it matches up with what their app requires. If you ask for Contacts access but don’t actually use it for anything, you’ll be asked to remove that, as it only increases risk.

These various new requirements will go into effect next year, with application review (a multi-week process) starting on January 9; tardy developers will see their apps stop working at the end of March if they don’t comply.

The relatively short timeline here suggests that some apps may in fact shut down temporarily or permanently due to the rigors of the review process. Don’t be surprised if early next year you get an update saying service may be interrupted due to Google review policies or the like.

These changes are just the first handful issuing from the recommendations of Project Strobe; we can expect more to appear over the next few months, though perhaps not such striking ones. To say Gmail and Android apps are widely used is something of an understatement, so it’s understandable that they would be focused on first, but there are many other policies and services the company will no doubt find reason to improve.

Chinese chip spying report shows the supply chain remains the ultimate weakness

Thursday’s explosive story by Bloomberg reveals detailed allegations that the Chinese military embedded tiny chips into servers, which made their way into data centers operated by dozens of major U.S. companies.

We covered the story earlier, including denials by Apple, Amazon and Supermicro — the server maker that was reportedly targeted by the Chinese government. Apple didn’t respond to a request for comment. Amazon said in a blog post that it “employs stringent security standards across our supply chain.” The FBI did not return a request for comment and declined to comment to Bloomberg, and the Office for the Director of National Intelligence declined to comment. This is a complex story that rests on more than a dozen anonymous sources — many of whom are sharing classified or highly sensitive information, making on-the-record comments impossible without repercussions. Despite the companies’ denials, Bloomberg is putting its faith in the reader to trust the reporting.

Much of the story can be summed up with this one line from a former U.S. official: “Attacking Supermicro motherboards is like attacking Windows. It’s like attacking the whole world.”

It’s a fair point. Supermicro is one of the biggest tech companies you’ve probably never heard of. It’s a computing supergiant based in San Jose, Calif., with manufacturing operations across the world — including China, where it builds most of its motherboards. Those motherboards trickle throughout the rest of the world’s tech — and were used in Amazon’s data center servers that power its Amazon Web Services cloud and in Apple’s iCloud.

One government official speaking to Bloomberg said China’s goal was “long-term access to high-value corporate secrets and sensitive government networks,” which fits into the playbook of China’s long-running effort to steal intellectual property.

“No consumer data is known to have been stolen,” said Bloomberg.

Infiltrating Supermicro, if true, will have a long-lasting ripple effect on the wider tech industry and how companies approach their own supply chains. Make no mistake — introducing any kind of external tech in your data center isn’t taken lightly by any tech company. Fear of corporate and state-sponsored espionage has been rife for years. It’s chief among the reasons why the U.S. and Australia have effectively banned some Chinese telecom giants — like ZTE — from operating on their networks.

Having a key part of your manufacturing process infiltrated — effectively hacked — puts every believed-to-be-secure supply chain into question.

With nearly every consumer electronics or automobile, manufacturers have to procure different parts and components from various sources across the globe. Ensuring the integrity of each component is near impossible. But because so many components are sourced from or assembled in China, it’s far easier for Beijing than any other country to infiltrate without anyone noticing.

The big question now is how to secure the supply chain?

Companies have long seen supply chain threats as a major risk factor. Apple and Amazon are down more than 1 percent in early Thursday trading and Supermicro is down more than 35 percent (at the time of writing) following the news. But companies are acutely aware that pulling out of China will cost them more. Labor and assembly are far cheaper in China, and specialist parts and specific components often can’t be found elsewhere.

Instead, locking down the existing supply chain is the only viable option.

Security giant CrowdStrike recently found that the vast majority — nine out of 10 companies — have suffered a software supply chain attack, where a supplier or part manufacturer was hit by ransomware, resulting in a shutdown of operations.

But protecting the hardware supply chain is a different task altogether — not least for the logistical challenge.

Several companies have already identified the risk of manufacturing attacks and taken steps to mitigate it. BlackBerry was one of the first companies to introduce a root of trust in its phones — a security feature that cryptographically signs the components in each device, effectively preventing the device’s hardware from being tampered with. Google’s new Titan security key tries to prevent manufacturing-level attacks by baking the encryption into the hardware chips before the key is assembled.
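A root of trust boils down to verifying each component’s firmware against a signature made at manufacture time. The sketch below uses a symmetric HMAC purely to stay self-contained; real schemes use asymmetric keys held in tamper-resistant hardware:

```python
# Toy "root of trust": each component's firmware is signed at the
# factory, and boot refuses any component whose signature fails to
# verify. HMAC stands in for the asymmetric signatures real
# implementations use; the key would live in a secure element.

import hashlib
import hmac

FACTORY_KEY = b"factory-secret"  # illustrative; never hardcode keys

def sign(firmware: bytes) -> bytes:
    return hmac.new(FACTORY_KEY, firmware, hashlib.sha256).digest()

def verify(firmware: bytes, signature: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(firmware), signature)

fw = b"baseband v1.2"
sig = sign(fw)
print(verify(fw, sig))                   # True: untampered part boots
print(verify(fw + b"\x00implant", sig))  # False: modified part rejected
```

Any modification to the firmware — including an implant added mid-supply-chain — changes the hash and fails verification at boot.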

Though a start, it's not a one-size-fits-all solution. Former NSA hacker Jake Williams, founder of Rendition Infosec, said that even those hardware security mitigations may not have been enough to protect against the Chinese if the implanted chips had direct memory access.

“They can modify memory directly after the secure boot process is finished,” he told TechCrunch.

Some have even pointed to blockchain as a possible solution. By cryptographically signing — like in root of trust — each step of the manufacturing process, blockchain can be used to track goods, chips and components throughout the chain.

For now, though, manufacturers often have to act reactively and deal with threats as they emerge.

According to Bloomberg, “since the implanted chips were designed to ping anonymous computers on the internet for further instructions, operatives could hack those computers to identify others who’d been affected.”

Williams said that the report highlights the need for network security monitoring. “While your average organization lacks the resources to discover a hardware implant (such as those discovered to be used by the [Chinese government]), they can see evidence of attackers on the network,” he said.

“It’s important to remember that the malicious chip isn’t magic — to be useful, it must still communicate with a remote server to receive commands and exfiltrate data,” he said. “This is where investigators will be able to discover a compromise.”
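Williams' point — that an implant must phone home and can be caught on the wire — is the basis of beacon detection: repeated connections to one destination at near-constant intervals stand out against human-driven traffic in flow logs. The heuristic, thresholds, and sample data below are invented for illustration; production tools use far more robust statistics.

```python
from statistics import pstdev

def looks_like_beacon(timestamps, max_jitter=2.0, min_events=5):
    """Flag a connection series whose inter-arrival times barely vary."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter

# Connection times (seconds) from one host to one external IP.
implant = [0, 300, 601, 899, 1200, 1502]    # checks in roughly every 5 minutes
browsing = [0, 14, 230, 1900, 1960, 4000]   # bursty, human-driven traffic

assert looks_like_beacon(implant)
assert not looks_like_beacon(browsing)
```

Real implants add jitter and blend into common protocols precisely to defeat this kind of check, which is why monitoring looks at many signals, not one.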

The intelligence community is said to be still investigating the Chinese spying effort, some three years after it first opened a probe. The investigation is believed to be classified — and no U.S. intelligence officials have talked on the record, even to assuage fears.


Despite objection, Congress passes bill that lets U.S. authorities shoot down private drones

Posted by | american civil liberties union, automotive, Department of Homeland Security, Federal Aviation Administration, Gadgets, hardware, law enforcement, privacy, Security, senate, technology, unmanned aerial vehicles | No Comments

U.S. authorities will soon have the authority to shoot down private drones if they are considered a threat — a move decried by civil liberties and rights groups.

The Senate passed the FAA Reauthorization Act on Wednesday, months after an earlier House vote in April. The bill renews funding for the Federal Aviation Administration (FAA) until 2023 and includes several provisions designed to modernize U.S. aviation rules — from making commercial flights more comfortable for passengers to new provisions to act against privately owned drones.

But critics say the new authority that gives the government the right to “disrupt,” “exercise control,” or “seize or otherwise confiscate” drones that are deemed a “credible threat” is dangerous and doesn’t include enough safeguards.

Federal authorities would not need to first obtain a warrant — an authority rights groups say could be easily abused, making it possible for Homeland Security and the Justice Department, and their various law enforcement and immigration agencies, to shoot down anyone’s drone for almost any reason they deem justifiable.

Drones, or unmanned aerial vehicles, have rocketed in popularity, used by everyone from amateur pilots and explorers to journalists reporting from the skies. But there’s also a growing list of threats, from hapless hobbyists accidentally crashing drones on the grounds of the White House to so-called Islamic State terrorists using them on the battlefield.

Both the American Civil Liberties Union and the Electronic Frontier Foundation have denounced the bill.

“These provisions give the government virtually carte blanche to surveil, seize, or even shoot a drone out of the sky — whether owned by journalists or commercial entities — with no oversight or due process,” an ACLU spokesperson told TechCrunch. “They grant new powers to the Justice Department and the Department of Homeland Security to spy on Americans without a warrant,” and they “undermine the use of drones by journalists, which have enabled reporting on critical issues like hurricane damage and protests at Standing Rock.”

“Flying of drones can raise security and privacy concerns, and there may be situations where government action is needed to mitigate these threats,” the ACLU said in a previous blog post. “But this bill is the wrong approach.”

The EFF agreed, arguing the bill endangers the First and Fourth Amendment rights of freedom of speech and the protection from warrantless device seizures.

“If lawmakers want to give the government the power to hack or destroy private drones, then Congress and the public should have the opportunity to debate how best to provide adequate oversight and limit those powers to protect our right to use drones for journalism, activism, and recreation,” the EFF said.

Other privacy groups, including the Electronic Privacy Information Center, denounced the passing of the bill without “baseline privacy safeguards.”

The bill will go to the president’s desk, where it’s expected to be signed into law.
