Security

No technical reason to exclude Huawei as 5G supplier, says UK committee


A UK parliamentary committee has concluded there are no technical grounds for excluding Chinese network kit vendor Huawei from the country’s 5G networks.

In a letter from the chair of the Science & Technology Committee to the UK’s digital minister Jeremy Wright, the committee says: “We have found no evidence from our work to suggest that the complete exclusion of Huawei from the UK’s telecommunications networks would, from a technical point of view, constitute a proportionate response to the potential security threat posed by foreign suppliers.”

The committee does, however, go on to recommend that the government mandate the exclusion of Huawei from the core of 5G networks, noting that UK mobile network operators have "mostly" done so already — but on a voluntary basis.

If the government places a formal requirement on operators not to use Huawei for core supply, the committee urges it to provide "clear criteria" for the exclusion so that it can be applied to other suppliers in future.

Reached for a response to the recommendations, a government spokesperson told us: “The security and resilience of the UK’s telecoms networks is of paramount importance. We have robust procedures in place to manage risks to national security and are committed to the highest possible security standards.”

The spokesperson for the Department for Digital, Culture, Media and Sport added: "The Telecoms Supply Chain Review will be announced in due course. We have been clear throughout the process that all network operators will need to comply with the Government's decision."

In recent years the US administration has been putting pressure on allies around the world to entirely exclude Huawei from 5G networks — claiming the Chinese company poses a national security risk.

Australia announced last year that it was banning Huawei and fellow Chinese vendor ZTE from providing kit for its 5G networks. In Europe, though, there has been no rush to follow the US lead and slam the door on Chinese tech giants.

In April leaked information from a UK Cabinet meeting suggested the government had settled on a policy of granting Huawei access as a supplier for some non-core parts of domestic 5G networks, while requiring they be excluded from supplying components for use in network cores.

On this somewhat fuzzy issue of delineating core vs non-core elements of 5G networks, the committee writes that it “heard unanimously and clearly” from witnesses that there will still be a distinction between the two in the next-gen networks.

It also cites testimony by the technical director of the UK’s National Cyber Security Centre (NCSC), Dr Ian Levy, who told it “geography matters in 5G”, and pointed out Australia and the UK have very different “laydowns” — meaning “we may have exactly the same technical understanding, but come to very different conclusions”.

In a statement responding to the committee's letter, Huawei SVP Victor Zhang welcomed the committee's "key conclusion" before going on to take a thinly veiled swipe at the US — writing: "We are reassured that the UK, unlike others, is taking an evidence based approach to network security. Huawei complies with the laws and regulations in all the markets where we operate."

The committee’s assessment is not all comfortable reading for Huawei, though, with the letter also flagging the damning conclusions of the most recent Huawei Oversight Board report which found “serious and systematic defects” in its software engineering and cyber security competence — and urging the government to monitor Huawei’s response to the raised security concerns, and to “be prepared to act to restrict the use of Huawei equipment if progress is unsatisfactory”.

Huawei has previously pledged to spend $2BN addressing security shortcomings related to its UK business — a figure it was forced to qualify as an “initial budget” after that same Oversight Board report.

“It is clear that Huawei must improve the standard of its cybersecurity,” the committee warns.

It also suggests the government consult on whether telecoms regulator Ofcom needs stronger powers to be able to force network suppliers to clean up their security act, writing that: "While it is reassuring to hear that network operators share this point of view and are ready to use commercial pressure to encourage this, there is currently limited regulatory power to enforce this."

Another committee recommendation is for the NCSC to be consulted on whether similar security evaluation mechanisms should be established for other 5G vendors — such as Ericsson and Nokia, the two Europe-based kit vendors which, unlike Huawei, are expected to supply core 5G kit.

“It is worth noting that an assurance system comparable to the Huawei Cyber Security Evaluation Centre does not exist for other vendors. The shortcomings in Huawei’s cyber security reported by the Centre cannot therefore be directly compared to the cyber security of other vendors,” it notes.

The committee dubs the broader issue of 5G security "critical", adding that "all steps must be taken to ensure that the risks are as low as reasonably possible".

Where “essential services” that make use of 5G networks are concerned, the committee says witnesses were clear such services must be able to continue to operate safely even if the network connection is disrupted. Government must ensure measures are put in place to safeguard operation in the event of cyber attacks, floods, power cuts and other comparable events, it adds. 

While the committee concludes there is no technical reason to limit Huawei’s access to UK 5G, the letter does make a point of highlighting other considerations, most notably human rights abuses, emphasizing its conclusion does not factor them in at all — and pointing out: “There may well be geopolitical or ethical grounds… to enact a ban on Huawei’s equipment”.

It adds that Huawei’s global cyber security and privacy officer, John Suffolk, confirmed that a third party had supplied Huawei services to Xinjiang’s Public Security Bureau, despite Huawei forbidding its own employees from misusing IT and comms tech to carry out surveillance of users.

The committee suggests Huawei technology may therefore be being used to “permit the appalling treatment of Muslims in Western China”.


‘World’s first Bluetooth hair straighteners’ can be easily hacked


Here’s a thing that should have never been a thing: Bluetooth-connected hair straighteners.

Glamoriser, a U.K. firm that bills itself as the maker of the "world's first Bluetooth hair straighteners," lets owners link the device to an app to set certain heat and style settings. The app can also be used to remotely switch off the straighteners within Bluetooth range.

Big problem, though. These straighteners can be hacked.

Security researchers at Pen Test Partners bought a pair and tested them out. They found that it was easy to send malicious Bluetooth commands within range to remotely control an owner’s straighteners.

The researchers demonstrated that they could send any of several commands over Bluetooth, such as setting the temperature anywhere between the device's lower and upper limits — 122°F and 455°F, respectively — as well as the shut-down time. Because the straighteners have no authentication, an attacker can remotely alter and override the temperature of the straighteners and how long they stay on — up to a limit of 20 minutes.

“As there is no pairing or bonding established over [Bluetooth] when connecting a phone, anyone in range with the app can take control of the straighteners,” said Stuart Kennedy in his blog post, shared first with TechCrunch.

There is a caveat, said Kennedy: the straighteners allow only one concurrent connection. Only if the owner hasn't connected their phone, or has gone out of range, can an attacker target the device.
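Pen Test Partners didn't publish its exact tooling, but the class of bug it describes, unauthenticated writes to a Bluetooth Low Energy characteristic, is easy to sketch. Here is a minimal, hypothetical illustration using the Python bleak library; the address, characteristic UUID and command bytes are placeholders rather than the straighteners' real protocol:

```python
# Hypothetical sketch of the class of attack described above, written with
# the "bleak" Bluetooth Low Energy library. The address, characteristic UUID
# and command bytes are placeholders, NOT the straighteners' real protocol.
import asyncio
from bleak import BleakClient

DEVICE_ADDR = "AA:BB:CC:DD:EE:FF"  # hypothetical: found via a BLE scan
CONTROL_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"  # hypothetical characteristic

async def send_command(payload: bytes) -> None:
    # No pairing or bonding is required, so any device in Bluetooth range
    # can connect and write straight to the control characteristic.
    async with BleakClient(DEVICE_ADDR) as client:
        await client.write_gatt_char(CONTROL_UUID, payload, response=True)

# e.g. a made-up command frame meaning "max temperature, 20-minute timeout"
asyncio.run(send_command(bytes([0x01, 0xEB, 0x14])))
```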

Here at TechCrunch we're all for setting things on fire "for journalism," but in this case the numbers speak for themselves. If, per the researchers' findings, the straighteners could be overridden to the maximum temperature of 455°F with the timeout maxed out at 20 minutes, that's a prime condition for a fire — or at the very least burn damage.

It’s estimated that as many as 650,000 house fires in the U.K. are caused by hair straighteners and curling irons left on. In some cases it can take more than a half-hour for these heated devices to cool down to safe levels. U.K. fire and rescue services have called on owners to physically pull the plug on their devices to prevent fires and damage.

Glamoriser did not respond to a request for comment prior to publication. The app hasn’t been updated since June 2018, suggesting a fix has yet to be put in place.


Apple disables Walkie Talkie app due to vulnerability that could allow iPhone eavesdropping


Apple has disabled the Apple Watch Walkie Talkie app due to an unspecified vulnerability that could allow a person to listen to another customer’s iPhone without consent, the company told TechCrunch this evening.

Apple has apologized for the bug and for the inconvenience of being unable to use the feature while a fix is made.

The Walkie Talkie app on Apple Watch allows two users who have accepted an invite from each other to receive audio chats via a “push to talk” interface reminiscent of the PTT buttons on older cell phones.

A statement from Apple reads:

We were just made aware of a vulnerability related to the Walkie-Talkie app on the Apple Watch and have disabled the function as we quickly fix the issue. We apologize to our customers for the inconvenience and will restore the functionality as soon as possible. Although we are not aware of any use of the vulnerability against a customer and specific conditions and sequences of events are required to exploit it, we take the security and privacy of our customers extremely seriously. We concluded that disabling the app was the right course of action as this bug could allow someone to listen through another customer’s iPhone without consent.  We apologize again for this issue and the inconvenience.

Apple was alerted to the bug directly via its "report a vulnerability" portal and says there is no current evidence that it was exploited in the wild.

The company is temporarily disabling the feature entirely until a fix can be made and rolled out to devices. The Walkie Talkie app will remain installed on devices, but will not function until it has been updated with the fix.

Earlier this year a bug was discovered in the group calling feature of FaceTime that allowed people to listen in before a call was accepted. It turned out that the teen who discovered the bug, Grant Thompson, had attempted to contact Apple about the issue but was unable to get a response. Apple fixed the bug and eventually awarded Thompson a bug bounty. This time around, Apple appears to be listening more closely to the reports that come in via its vulnerability tips line and has disabled the feature.

Earlier today, Apple quietly pushed a Mac update to remove a feature of the Zoom conference app that allowed it to work around Mac restrictions to provide a smoother call initiation experience — but that also allowed emails and websites to add a user to an active video call without their permission.


File-storage app 4shared caught serving invisible ads and making purchases without consent


With more than 100 million installs, file-sharing service 4shared is one of the most popular apps in the Android app store.

But security researchers say the app secretly displays invisible ads and subscribes users to paid services, racking up charges without the user's knowledge — or their permission — collectively costing victims millions of dollars.

“It all happens in the background… nothing appears on the screen,” said Guy Krief, chief executive of London-based Upstream, which shared its research exclusively with TechCrunch.

The researchers say the app contains suspicious third-party code that allowed the app to automate clicks and make fraudulent purchases. They said the component, built by Hong Kong-based Elephant Data, downloads code which is “directly responsible” for generating the automated clicks without the user’s knowledge. The code also sets a cookie to determine if a device has previously been used to make a purchase, likely as a way to hide the activity.

Upstream also said the code deliberately obfuscates the web addresses it accesses and uses redirection chains to hide the suspicious activity.
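Upstream hasn't published its detection code, but the basic analysis step it describes, walking a redirection chain hop by hop so the hidden endpoints become visible, is simple to sketch in Python with the requests library. The starting URL below is a placeholder, not an address from Upstream's research:

```python
# Hypothetical sketch of tracing a redirection chain so that obfuscated
# endpoints become visible. The starting URL is a placeholder, not an
# address Upstream observed.
import requests

def trace_redirects(url: str, max_hops: int = 10) -> list[str]:
    hops = [url]
    for _ in range(max_hops):
        # Fetch without following redirects so every hop is recorded.
        resp = requests.get(hops[-1], allow_redirects=False, timeout=10)
        location = resp.headers.get("Location")
        if location is None:
            break  # no further redirect: final destination reached
        hops.append(requests.compat.urljoin(hops[-1], location))
    return hops

for hop in trace_redirects("http://ad-redirector.example/click?id=123"):
    print(hop)
```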

Over the past few weeks, Upstream said it has blocked more than 114 million suspicious transactions originating from two million unique devices, according to data from its proprietary security platform. The company said those transactions would have cost consumers money had they not been blocked. Upstream only has visibility in certain parts of the world — Brazil, Indonesia and Malaysia to name a few — suggesting the observed suspicious transactions are likely a fraction of the total.

Then in mid-April, 4shared’s app suddenly disappeared from Google Play and was replaced with a near-identical app with the suspicious components removed.

At the time of writing, 4shared’s new app has more than 10 million users.

Irin Len, a spokesperson for 4shared, told TechCrunch that the company was “unaware” of the fraudulent ad activity in its app until we reached out, but confirmed the company no longer works with Elephant Data.

Len said the old app was removed by Google "without reason," but the company's suspicions quickly fell on the third-party components, which it removed before resubmitting the app for approval. And because the old app was pulled from Google Play, 4shared said it wasn't allowed to push an update to existing users to remove the suspicious components from their devices.

Google did not respond to TechCrunch’s request for comment.

We sent Elephant Data several questions and follow-up emails prior to publication but we did not hear back.

4shared, owned by New IT Solutions based in the British Virgin Islands, makes a brief reference to Elephant Data in its privacy policy but doesn’t explicitly say what the service does. 4shared said since it’s unable to control or disable Elephant Data’s components in its old app, “we’re bound to keep the detailed overview of which data may be processed and how it may be shared” in its privacy policy.

Little else is known about Elephant Data, except that it bills itself as a “market intelligence” solution designed to “maximize ad revenue.”

The ad firm has drawn criticism in several threads on Reddit, one of which accused the company of operating a "scam" while another called the offering "dodgy." One developer said he removed the components from his app after it began to suffer from battery-life issues, but Elephant Data was "still collecting data" from users who hadn't updated their apps.

The developer said Google also banned his app, forcing him to resubmit an entirely new version of his app to the store.

It’s the latest app in recent months to be accused of using invisible ads to generate fraudulent revenue. In May, BuzzFeed News reported similar suspicious behavior and fraudulent purchases in Chinese video app VidMate.


Security flaws in a popular smart home hub let hackers unlock front doors


When is a smart home not so smart? When it can be hacked.

That’s exactly what security researchers Chase Dardaman and Jason Wheeler did with one of the Zipato smart hubs. In new research published Tuesday and shared with TechCrunch, Dardaman and Wheeler found three security flaws which, when chained together, could be abused to open a front door with a smart lock.

Smart home technology has come under increasing scrutiny in the past year. Although convenient to some, security experts have long warned that adding an internet connection to a device increases the attack surface, making the devices less secure than their traditional counterparts. The smart home hubs that control a home’s smart devices, like water meters and even the front door lock, can be abused to allow landlords entry to a tenant’s home whenever they like.

In January, security expert Lesley Carhart wrote about her landlord’s decision to install smart locks — forcing her to look for a new home. Other renters and tenants have faced similar pressure from their landlords and even sued to retain the right to use a physical key.

Dardaman and Wheeler began looking into the ZipaMicro, a popular smart home hub developed by Croatian firm Zipato, some months ago, but only released their findings once the flaws had been fixed.

The researchers found they could extract the hub’s private SSH key for “root” — the user account with the highest level of access — from the memory card on the device. Anyone with the private key could access a device without needing a password, said Wheeler.

They later discovered that the private SSH key was hardcoded in every hub sold to customers — putting at risk every home with the same hub installed.

Using that private key, the researchers downloaded a file from the device containing scrambled passwords used to access the hub. They found that the smart hub uses a “pass-the-hash” authentication system, which doesn’t require knowing the user’s plaintext password, only the scrambled version. By taking the scrambled password and passing it to the smart hub, the researchers could trick the device into thinking they were the homeowner.
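To make the chain concrete: below is a minimal, hypothetical Python sketch of why pass-the-hash authentication is dangerous. The hashing scheme and password are placeholders (the hub's real scheme isn't reproduced here); the point is that whoever holds the scrambled password needs nothing else.

```python
# Hypothetical sketch of a "pass-the-hash" check, NOT Zipato's actual code.
# The hashing scheme is a stand-in; the point is that the server compares
# scrambled passwords directly, so the stored hash works as a credential.
import hashlib

# As it might appear in the password file the researchers downloaded:
stored_hash = hashlib.sha1(b"homeowner-plaintext-password").hexdigest()

def authenticate(submitted_hash: str) -> bool:
    # The plaintext password is never required, only its hash.
    return submitted_hash == stored_hash

# An attacker who read the file via the hardcoded root SSH key can simply
# replay the hash and be treated as the homeowner:
assert authenticate(stored_hash)
```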

All an attacker had to do was send a command to tell the lock to open or close. With just a few lines of code, the researchers built a script that locked and unlocked a smart lock connected to a vulnerable smart hub.

The proof-of-concept code letting the hackers unlock a smart lock (Image: Chase Dardaman, Jason Wheeler)

Worse, Dardaman said that any apartment building that registered one main account for all of its units would let an attacker "open any door" using that same password hash.

The researchers conceded that their findings weren't a perfect skeleton key into everyone's homes. In order to exploit the flaws, an attacker would need to be on the same Wi-Fi network as the vulnerable smart hub. Any hub connected directly to the internet, however, would be remotely exploitable, Dardaman said. The researchers found five such vulnerable devices using Shodan, a search engine for publicly available devices and databases.

Zipato says it has 112,000 devices in 20,000 households, but the exact number of vulnerable hubs isn’t known.

We asked SmartRent, a Zipato customer and one of the largest smart home automation providers, which said fewer than 5% of its apartment-owning customers were affected by the vulnerable technology. A spokesperson wouldn’t quantify the figure further. SmartRent said it had more than 20,000 installations in mid-February, just weeks before the researchers’ disclosure.

For its part, Zipato fixed the vulnerabilities within a few weeks of receiving the researchers’ disclosure.

Zipato’s chief executive Sebastian Popovic told TechCrunch that each smart hub now comes with a unique private SSH key and other security improvements. Zipato has also since discontinued the ZipaMicro hub in favor of one of its newer products.

Smart home tech isn’t likely to go away any time soon. Figures from research firm IDC estimate more than 832 million smart home devices will be sold in 2019, just as states and countries crack down on poor security in internet-connected devices.

That’s also likely to bring more scrutiny to smart home tech by hackers and security researchers alike.

“We want to show that there is a risk to this kind of tech, and apartment buildings or even individual consumers need to know that these are not necessarily safer than a traditional door lock,” said Dardaman.


FTC, Justice Dept. take coordinated action against robocallers


Federal authorities have announced their latest crackdown on illegal robocallers — taking close to a hundred actions against several companies and individuals blamed for the recent barrage of spam calls.

In the so-called "Operation Call It Quits," the Federal Trade Commission brought four cases — two filed on its behalf by the Justice Department — and announced settlements in three others, against operations said to be responsible for making more than a billion illegal robocalls.

Several state and local authorities also brought actions as part of the operation, officials said.

Each year, billions of automatically dialed or spoofed phone calls trick millions into picking up the phone. At best an annoyance, at worst they trick unsuspecting victims into turning over cash or buying fake or misleading products. So far the FTC has fined robocallers more than $200 million, but has collected less than 0.01% of those fines because of the agency's limited enforcement powers.

In this new wave of action, the FTC said it will send a strong signal to the robocalling industry.

Andrew Smith, director of the FTC’s Bureau of Consumer Protection, said Americans are “fed up” with the billions of robocalls received every year. “Today’s joint effort shows that combatting this scourge remains a top priority for law enforcement agencies around the nation,” he said.

It’s the second time the FTC has acted in as many months. In May, the agency also took action against four companies accused of making “billions” of robocalls.

The FTC said its latest action brings the number of robocall violators it has pursued to 145.

Several of the cases involved shuttering operations that offer consumers “bogus” credit card interest rate reduction services, which the FTC said specifically targeted seniors. Other cases involved the use of illegal robocalls to promote money-making schemes.

Other cases included an action against Lifewatch, a company pitching medical alert systems, which the FTC contended used spoofed caller ID information to trick victims into picking up the phone. The company settled for $25.3 million. Meanwhile, Redwood Scientific settled for $18.2 million, a judgment suspended due to defendant Danielle Cadiz's inability to pay, for "deceptively" marketing dentistry products, according to the FTC's complaint.

The robocalling epidemic has caught the attention of the Federal Communications Commission, which regulates the telecoms and internet industries. Last month, its commissioners proposed a new rule that would make it easier for carriers to block robocalls.


LTE flaws let hackers ‘easily’ spoof presidential alerts


Security vulnerabilities in LTE can allow hackers to “easily” spoof presidential alerts sent to mobile phones in the event of a national emergency.

Using off-the-shelf equipment and open-source software, a working exploit made it possible to send a simulated alert to every phone in a 50,000-seat football stadium with little effort, with the potential of causing “cascades of panic,” said researchers at the University of Colorado Boulder in a paper out this week.

Their attack worked in nine out of 10 tests, they said.

Last year the Federal Emergency Management Agency sent out the first “presidential alert” test using the Wireless Emergency Alert (WEA) system. It was part of an effort to test the new state-of-the-art system to allow any president to send out a message to the bulk of the U.S. population in the event of a disaster or civil emergency.

But the system — which also sends out weather warnings and AMBER alerts — isn’t perfect. Last year amid tensions between the U.S. and North Korea, an erroneous alert warned residents of Hawaii of an inbound ballistic missile threat. The message mistakenly said the alert was “not a drill.”

Although no system is completely secure, many of the issues over the years have been the result of human error. But the researchers said the LTE network used to transmit the broadcast message is the system's biggest weak spot.

Because the system uses LTE to send the message and not a traditional text message, each cell tower blasts out an alert on a specific channel to all devices in range. A false alert can be sent to every device in range if that channel is identified.

Making matters worse, there’s no way for devices to verify the authenticity of received alerts.

The researchers said fixing the vulnerabilities would “require a large collaborative effort between carriers, government stakeholders and cell phone manufacturers.” They added that adding digital signatures to each broadcast alert is not a “magic solution,” but would make it far more difficult to send spoofed messages.
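To illustrate what per-broadcast signatures would buy, here is a minimal sketch using Ed25519 from Python's cryptography package. It models the researchers' proposal only at the level of the idea; as they note, the genuinely hard part, distributing and updating keys across carriers and handsets, is not addressed here.

```python
# Illustration of the signed-broadcast idea using Ed25519 from the
# "cryptography" package. This is a sketch of the proposal, not a real WEA
# implementation; key distribution to handsets is the hard part it ignores.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

authority_key = Ed25519PrivateKey.generate()  # held by the alert originator
handset_pubkey = authority_key.public_key()   # provisioned onto handsets

alert = b"Presidential Alert: THIS IS A TEST"
signature = authority_key.sign(alert)

def accept_broadcast(message: bytes, sig: bytes) -> bool:
    # A handset would accept only broadcasts that verify against the alert
    # authority's public key; a rogue base station cannot forge these.
    try:
        handset_pubkey.verify(sig, message)
        return True
    except InvalidSignature:
        return False

assert accept_broadcast(alert, signature)
assert not accept_broadcast(b"spoofed: missile inbound", signature)
```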

A similar vulnerability in LTE was discovered last year, allowing researchers to not only send emergency alerts but also eavesdrop on a victim’s text messages and track their location.


The real risk of Facebook’s Libra coin is crooked developers


Everyone’s worried about Mark Zuckerberg controlling the next currency, but I’m more concerned about a crypto Cambridge Analytica.

Today Facebook announced Libra, its forthcoming stablecoin designed to let you shop and send money overseas with almost zero transaction fees. Immediately, critics started harping about the dangers of centralizing control of tomorrow’s money in the hands of a company with a poor track record of privacy and security.

Facebook anticipated this, though, and created a subsidiary called Calibra to run its crypto dealings and keep all transaction data separate from your social data. Facebook shares control of Libra with 27 other Libra Association founding members, and as many as 100 total when the token launches in the first half of 2020. Each member gets just one vote on the Libra council, so Facebook can’t hijack the token’s governance even though it invented it.

With privacy fears and centralized control issues at least somewhat addressed, there’s always the issue of security. Facebook naturally has a huge target on its back for hackers. Not just because Libra could hold so much value to steal, but because plenty of trolls would get off on screwing up Facebook’s currency. That’s why Facebook open-sourced the Libra Blockchain and is offering a prototype in a pre-launch testnet. This developer beta plus a bug bounty program run in partnership with HackerOne is meant to surface all the flaws and vulnerabilities before Libra goes live with real money connected.

Yet that leaves one giant vector for abuse of Libra: the developer platform.

“Essential to the spirit of Libra . . . the Libra Blockchain will be open to everyone: any consumer, developer, or business can use the Libra network, build products on top of it, and add value through their services. Open access ensures low barriers to entry and innovation and encourages healthy competition that benefits consumers,” Facebook explained in its white paper and Libra launch documents. It’s even building a whole coding language called Move for making Libra apps.

Apparently Facebook has already forgotten how allowing anyone to build on the Facebook app platform and its low barriers to “innovation” are exactly what opened the door for Cambridge Analytica to hijack 87 million people’s personal data and use it for political ad targeting.

But in this case, it won’t be users’ interests and birthdays that get grabbed. It could be hundreds or thousands of dollars’ worth of Libra currency that’s stolen. A shady developer could build a wallet that just cleans out a user’s account or funnels their coins to the wrong recipient, mines their purchase history for marketing data or uses them to launder money. Digital risks become a lot less abstract when real-world assets are at stake.

In the wake of the Cambridge Analytica scandal, Facebook raced to lock down its app platform, restrict APIs, more heavily vet new developers and audit ones that look shady. So you’d imagine the Libra Association would be planning to thoroughly scrutinize any developer trying to build a Libra wallet, exchange or other related app, right? “There are no plans for the Libra Association to take a role in actively vetting [developers],” Calibra’s head of product Kevin Weil surprisingly told me. “The minute that you start limiting it is the minute you start walking back to the system you have today with a closed ecosystem and a smaller number of competitors, and you start to see fees rise.”

That translates to “the minute we start responsibly verifying Libra app developers, things start to get expensive, complicated or agitating to cryptocurrency purists. That might hurt growth and adoption.” You know what will hurt growth of Libra a lot worse? A sob story about some migrant family or a small business getting all their Libra stolen. And that blame is going to land squarely on Facebook, not some amorphous Libra Association.


Inevitably, some unsavvy users won’t understand the difference between Facebook’s own wallet app Calibra and any other app built for the currency. “Libra is Facebook’s cryptocurrency. They wouldn’t let me get robbed,” some will surely say. And on Calibra they’d be right. It’s a custodial wallet that will refund you if your Libra are stolen and it offers 24/7 customer support via chat to help you regain access to your account.

Yet the Libra Blockchain itself is irreversible. Outside of custodial wallets like Calibra, there's no getting your stolen or mis-sent money back. There's likely no customer support. And there are plenty of crooked crypto developers happy to prey on the inexperienced. Indeed, $1.7 billion in cryptocurrency was stolen last year alone, according to CipherTrace via CNBC. "As with anything, there's fraud and there are scams in the existing financial ecosystem today . . .  that's going to be true of Libra too. There's nothing special or magical that prevents that," says Weil, who concluded "I think those pros massively outweigh the cons."

Until now, the blockchain world was mostly inhabited by technologists, except for when skyrocketing values convinced average citizens to invest in Bitcoin just before prices crashed. Now Facebook wants to bring its family of apps’ 2.7 billion users into the world of cryptocurrency. That’s deeply worrisome.

Facebook founder and CEO Mark Zuckerberg arrives to testify during a Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee joint hearing about Facebook on Capitol Hill in Washington, DC, April 10, 2018. (Photo: SAUL LOEB/AFP/Getty Images)

Regulators are already bristling, but perhaps for the wrong reasons. Democrat Senator Sherrod Brown tweeted that “We cannot allow Facebook to run a risky new cryptocurrency out of a Swiss bank account without oversight.” And French Finance Minister Bruno Le Maire told Europe 1 radio that Libra can’t be allowed to “become a sovereign currency.”

Most harshly, Rep. Maxine Waters issued a statement saying, “Given the company’s troubled past, I am requesting that Facebook agree to a moratorium on any movement forward on developing a cryptocurrency until Congress and regulators have the opportunity to examine these issues and take action.”

Yet Facebook has just one vote in controlling the currency, and the Libra Association preempted these criticisms, writing, “We welcome public inquiry and accountability. We are committed to a dialogue with regulators and policymakers. We share policymakers’ interest in the ongoing stability of national currencies.”

That’s why as lawmakers confer about how to regulate Libra, I hope they remember what triggered the last round of Facebook execs having to appear before Congress and Parliament. A totally open, unvetted Libra developer platform in the name of “innovation” over safety is a ticking time bomb. Governments should insist the Libra Association thoroughly audit developers and maintain the power to ban bad actors. In this strange new crypto world, the public can’t be expected to perfectly protect itself from Cambridge Analytica 2.$.



Every secure messaging app needs a self-destruct button


The growing presence of encrypted communications apps makes a lot of communities safer and stronger. But the possibility of physical device seizure and government coercion is growing as well, which is why every such app should have some kind of self-destruct mode to protect its user and their contacts.

End-to-end encryption like that in Signal and WhatsApp is great at preventing governments and other malicious actors from accessing your messages while they are in transit. But as with nearly all cybersecurity matters, physical access to the device, the user or both changes things considerably.

For example, take this Hong Kong citizen who was forced to unlock their phone and reveal their followers and other messaging data to police. It’s one thing to do this with a court order to see if, say, a person was secretly cyberstalking someone in violation of a restraining order. It’s quite another to use as a dragnet for political dissidents.

@telegram @durov an HK citizen who runs a Telegram channel detained by the police was forced to unlock his phone and reveal his channel followers. Could you please add an option such that channel subscribers cannot be seen under extreme circumstances? Much appreciate. https://t.co/tj4UQztuZ2

— Lo Sinofobo (@tnzqo7f9) June 12, 2019

This particular protestor ran a Telegram channel that had a number of followers. But it could just as easily be a Slack room for organizing a protest, or a Facebook group, or anything else. For groups under threat from oppressive government regimes it could be a disaster if the contents or contacts from any of these were revealed to the police.

Just as you should be able to choose exactly what you say to police, you should be able to choose how much your phone can say as well. Secure messaging apps should be the vanguard of this capability.

There are already some dedicated “panic button” type apps, and Apple has thoughtfully developed an “emergency mode” (activated by hitting the power button five times quickly) that locks the phone to biometrics and will wipe it if it is not unlocked within a certain period of time. That’s effective against “Apple pickers” trying to steal a phone or during border or police stops where you don’t want to show ownership by unlocking the phone with your face.

Those are useful and we need more like them — but secure messaging apps are a special case. So what should they do?

The best-case scenario, where you have all the time in the world and internet access, isn’t really an important one. You can always delete your account and data voluntarily. What needs work is deleting your account under pressure.

The next best-case scenario is that you have perhaps a few seconds or at most a minute to delete or otherwise protect your account. Signal is very good about this: The deletion option is front and center in the options screen, and you don’t have to input any data. WhatsApp and Telegram require you to put in your phone number, which is not ideal — fail to do this correctly and your data is retained.

Signal, left, lets you get on with it. You’ll need to enter your number in WhatsApp (right) and Telegram.

Obviously it’s also important that these apps don’t let users accidentally and irreversibly delete their account. But perhaps there’s a middle road whereby you can temporarily lock it for a preset time period, after which it deletes itself if not unlocked manually. Telegram does have self-destructing accounts, but the shortest time you can delete after is a month.

What really needs improvement is emergency deletion when your phone is no longer in your control. This could be a case of device seizure by police, or perhaps being forced to unlock the phone after you have been arrested. Whatever the case, there need to be options for a user to delete their account outside the ordinary means.

Here are a couple options that could work:

  • Trusted remote deletion: Selected contacts are given the ability via a one-time code or other method to wipe each other’s accounts or chats remotely, no questions asked and no notification created. This would let, for instance, a friend who knows you’ve been arrested remotely remove any sensitive data from your device.
  • Self-destruct timer: Like Telegram’s feature, but better. If you’re going to a protest, or have been “randomly” selected for additional screening or questioning, you can just tell the app to delete itself after a certain duration (as little as a minute perhaps) or at a certain time of the day. Deactivate any time you like, or stall for the five required minutes for it to trigger.
  • Poison PIN: In addition to a normal unlock PIN, users can set a poison PIN that when entered has a variety of user-selectable effects: delete certain apps, clear contacts, send prewritten messages, unlock or temporarily hard-lock the device, etc. (a rough sketch of the idea follows this list).
  • Customizable panic button: Apple’s emergency mode is great, but it would be nice to be able to attach conditions like the poison PIN’s. Sometimes all someone can do is smash that button.
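To make the poison PIN option concrete, here is a rough, purely hypothetical sketch; nothing like it ships in the apps discussed here, and the PINs and wipe actions are placeholders:

```python
# Purely hypothetical sketch of the poison PIN idea; no shipping app works
# this way today. PINs and wipe actions are placeholders.
NORMAL_PIN = "482915"
POISON_PIN = "110911"

def wipe_sensitive_data() -> None:
    # Stand-ins for the user-selectable effects listed above.
    print("deleting chats, clearing contacts, sending prewritten messages...")

def unlock(entered_pin: str) -> bool:
    if entered_pin == POISON_PIN:
        # Unlock normally so a coercer sees nothing unusual, while the
        # sensitive data is quietly destroyed in the background.
        wipe_sensitive_data()
        return True
    return entered_pin == NORMAL_PIN
```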

Obviously these open new avenues for calamity and abuse as well, which is why they will need to be explained carefully and perhaps initially hidden in “advanced options” and the like. But overall I think we’ll be safer with them available.

Eventually these roles may be filled by dedicated apps or by the developers of the operating systems on which they run, but it makes sense for the most security-forward app class out there to be the first in the field.


Google opens its Android security-key tech to iPhone and iPad users


Google will now allow iPhone and iPad owners to use their Android security key to verify sign-ins, the company said Wednesday.

Last month, the search and mobile giant said it developed a new Bluetooth-based protocol that will allow modern Android 7.0 devices and later to act as a security key for two-factor authentication. Since then, Google said 100,000 users are already using their Android phones as a security key.

Until now, the technology was limited to Chrome sign-ins. Google says Apple device owners can now get the same protections without having to plug anything in.

Signing in to a Google account on an iPad using an Android 7.0 device (Image: Google)

Security keys are an important security step for users who are particularly at risk of advanced attacks. They're designed to thwart even the smartest and most resourceful attackers, like nation-state hackers. Instead of a security key that you keep on your key ring, newer Android devices have the technology built in. When you log in to your account, you are prompted to authenticate with your key. Even if someone steals your password, they can't log in without your authenticating device. Even phishing pages won't work, because the key's response is cryptographically bound to the legitimate site's web origin.
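The reason phishing fails is origin binding: the key signs the server's challenge together with the web origin the browser reports, so a signature produced for a look-alike domain never verifies at the real site. A minimal sketch of the idea, with Ed25519 standing in for the real FIDO2 mechanics:

```python
# Sketch of FIDO2-style origin binding, not the actual WebAuthn wire format.
# Ed25519 stands in for whatever algorithm a given key actually uses.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()  # kept inside the Android phone

def authenticator_sign(challenge: bytes, origin: str) -> bytes:
    # The browser, not the user, supplies the origin, so a look-alike
    # domain cannot claim to be "accounts.google.com".
    return device_key.sign(challenge + origin.encode())

challenge = os.urandom(32)  # issued by the genuine server
# A signature minted on a phishing page embeds the phishing origin...
sig = authenticator_sign(challenge, "https://accounts.goog1e.example")

try:
    # ...so verification against the genuine origin fails.
    device_key.public_key().verify(sig, challenge + b"https://accounts.google.com")
    print("accepted")
except InvalidSignature:
    print("rejected: origin mismatch defeats the phishing page")
```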

For the most part, security keys are a last line of defense. Google admitted last month that its standalone Titan security keys were vulnerable to a pairing bug, potentially putting them at risk of hijacking. The company offered a free replacement for any affected device.

The security key technology is also FIDO2 compliant, a secure and flexible standard that allows various devices running different operating systems to communicate with each other for authentication.

For the Android security key to work, iPhone and iPad users need the Google Smart Lock app installed. For now, Google said the Android security key will be limited to sign-ins to Google accounts only.

Powered by WPeMatico