
After account hacks, Twitch streamers take security into their own hands


Twitch has an account hacking problem.

After the breach of popular browser game Town of Salem in January, some 7.8 million stolen passwords quickly became the weakest link not only for the game but gamers’ other accounts. The passwords were stored using a long-deprecated scrambling algorithm, making them easily cracked.
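The failure mode here is a fast, unsalted hash that attackers can crack wholesale once a database leaks. As an illustrative sketch (the report doesn't name Town of Salem's exact algorithm, so the "legacy" scheme below is a stand-in), the contrast with a salted, deliberately slow key-derivation function looks like this:

```python
import hashlib
import os

def legacy_hash(password: str) -> str:
    # Unsalted MD5: a long-deprecated scheme. It computes in microseconds,
    # so leaked hashes fall quickly to rainbow tables and GPU brute force.
    return hashlib.md5(password.encode()).hexdigest()

def modern_hash(password: str, salt=None, iterations=600_000) -> str:
    # Salted, deliberately slow PBKDF2 from the stdlib; bcrypt, scrypt or
    # Argon2 are the usual production choices but need third-party packages.
    salt = salt or os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${dk.hex()}"
```

A unique salt per user defeats precomputed tables, and the iteration count makes each guess roughly a million times more expensive than a single MD5 call.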

It didn’t take long for security researcher and gamer Matthew Jakubowski to see the aftermath.

In the weeks following, the main subreddit for Amazon-owned game streaming site Twitch — of which Jakubowski is a moderator — was flooded with complaints about account hijacks. One after the other, users said their accounts had been hacked. Many of the hijacked accounts had used their Town of Salem password for their Twitch account.

Jakubowski blamed the attacks on automated account takeovers — bots that cycle through password lists stolen from breached sites, including Town of Salem.

“Twitch knows it’s a problem — but this has been going on for months and there’s no end in sight,” Jakubowski told TechCrunch.

Credential stuffing is a security problem that requires participation from both tech companies and their users. Hackers take lists of usernames and passwords stolen from breached sites and brute-force their way into accounts elsewhere. Customers of DoorDash and Chipotle have in recent months complained of account breaches, but both companies have denied that their own systems were hacked, offered little help to their users, shown no effort to bolster their security, and instead washed their hands of any responsibility.

Jakubowski, working with fellow security researcher Johnny Xmas, said that if Twitch stopped accepting email addresses as login credentials and incentivized users to set up two-factor authentication, it would all but eliminate the problem.

The Russia connection

In new research out Tuesday, Jakubowski and Xmas said Russian hackers are a likely culprit.

The researchers found attackers would run massive lists of stolen credentials against Twitch’s login systems using widely available automation tools. With no discernible system to prevent automated logins, the attackers can hack into Twitch accounts at speed. Once logged in, the attackers then change the password to gain persistent access to the account. Even if they’re caught, some users are claiming a turnaround time of four weeks for Twitch support to get their accounts back.
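Even a blunt server-side throttle raises the cost of this kind of automation. A minimal sketch (hypothetical code, not Twitch's actual defenses, which the researchers say are absent or ineffective here) might count recent failures per username or source IP:

```python
import time
from collections import defaultdict

class LoginThrottle:
    """Naive fixed-window throttle for failed login attempts.

    Real anti-bot systems use far richer signals (IP reputation, device
    fingerprints, behavioural analysis), but even this blunts the
    thousands-of-attempts-per-minute pattern credential-stuffing tools need.
    """

    def __init__(self, max_failures=5, window_seconds=300):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(list)  # key -> list of failure timestamps

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        # Drop failures that have aged out of the window, then check the count.
        recent = [t for t in self.failures[key] if now - t < self.window]
        self.failures[key] = recent
        return len(recent) < self.max_failures

    def record_failure(self, key, now=None):
        now = time.time() if now is None else now
        self.failures[key].append(now)
```

Keying on the username (not just the IP) matters, since stuffing campaigns rotate through large proxy pools to spread attempts across many addresses.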

On accounts with a stored payment card — or an associated Amazon Prime membership — the attackers use the hijacked account to follow streaming channels they run, or to pay the small subscription fee, of which Twitch takes a cut. Twitch also has its own virtual currency — bits — to help streamers solicit donations, which the attackers can abuse to funnel funds into their own coffers.

When the attacker’s streaming account hits the payout limit, the attacker cashes out.

The researchers said the attackers stream prerecorded gameplay footage on their own Twitch channels, often using Russian words and names.

“You’ll see these Russian accounts that will stream what appears to be old video game footage — you’ll never see a face or hear anybody talking but you’ll get tons of people subscribing and following in the channel,” said Xmas. “You’ll get people donating bits when nothing is going on in there — even when the channel isn’t streaming,” he said.

This activity helps cloak the attackers’ account takeover and pay-to-follow activity, said Xmas, though the attackers keep subscriber counts low enough to garner payouts from Twitch without drawing attention.

“If it’s something easy enough for [Jakubowski] to stumble across, it should be easy for Twitch to handle,” said Xmas. “But Twitch is staying silent and users are constantly being defrauded.”

Two-factor all the things

Twitch, unlike other sites and services with a credential stuffing problem, already lets its 15 million daily users set up two-factor authentication on their accounts, putting much of the onus to stay secure on the users themselves.

Twitch partners, like Jakubowski, and affiliates are required to set up two-factor on their accounts.
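For readers wondering what those authenticator-app codes actually are under the hood, most implement RFC 6238 time-based one-time passwords (TOTP): an HMAC over the current 30-second interval, truncated to six digits. A compact stdlib-only sketch (illustrative, not Twitch's implementation):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP with HMAC-SHA1, the scheme most authenticator apps use."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): low nibble of last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because codes rotate every 30 seconds, a stolen password alone stops being enough — which is exactly why the researchers want it on every account, not just partners’.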

But the researchers say Twitch should do more to incentivize ordinary users — the primary target for account hijackers and fraudsters — to secure their accounts.

“I think [Twitch] doesn’t want that extra step between a valid user trying to pay for something and adding friction to that process,” said Jakubowski.


“Two-factor is important — everyone knows it’s important but users still aren’t using it because it’s inconvenient,” said Xmas. “That’s the bottom line: Twitch doesn’t want to inconvenience people because that loses Twitch money,” he said.

Recognizing there was still a lack of awareness around password security, and with no help from Twitch, Jakubowski and Xmas took matters into their own hands. The pair teamed up to write a comprehensive Twitch user security guide to explain why seemingly unremarkable accounts are a target for hackers, and hosted a Reddit “ask me anything” to let users ask questions and get instant feedback.

Even during Jakubowski’s streaming sessions, he doesn’t waste a chance to warn his viewers about the security problem — often fielding other security-related questions from his fans.

“Every 10 minutes or so, I’ll remind people watching to set up two-factor,” he said.

“The hackers have no idea how valuable an account is until they log in,” said Jakubowski. “They’re just going to try everyone — and take a shotgun approach,” he said.

Xmas said users “don’t realize” how vulnerable they are. “They don’t understand why their account — which they don’t even use to stream — is desirable to hackers,” he said. “If you have a payment card associated with your account, that’s what they want.”

Carrot and the stick

Jakubowski said that convincing the users is the big challenge.

Twitch could encourage users with free perks — like badges or emotes — costing the company nothing, the researchers said. Twitch lets users collect badges to flair their accounts. World of Warcraft maker Blizzard offers perks for setting up two-factor, and Epic Games offers similar incentives to their gamers.

“Rewarding users for implementing two-factor would go a huge way,” said Xmas. “It’s incredible to see how effective that is.”

The two said the company could also integrate third-party leaked credential monitoring services, like Have I Been Pwned, to warn users if their passwords have been leaked or exposed. And, among other fixes, the researchers say removing two-factor by text message would reduce SIM swapping attacks. Xmas, who serves as director of field engineering at anti-bot startup Kasada — which TechCrunch profiled earlier this year — said Twitch could invest in systems that detect bot activity to prevent automated logins.
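Have I Been Pwned’s Pwned Passwords service exposes a k-anonymity “range” API for exactly this: the client sends only the first five hex characters of a password’s SHA-1 hash and matches the returned suffixes locally, so the full password (or even its full hash) never leaves the client. A sketch of how a site might consume it (the `fetch` injection is our own testing convenience, not part of the API):

```python
import hashlib
from urllib.request import urlopen

RANGE_URL = "https://api.pwnedpasswords.com/range/"

def split_hash(password: str):
    # k-anonymity: only the first 5 hex chars of the SHA-1 leave the client.
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password: str, fetch=None) -> int:
    """Return how many times `password` appears in the Pwned Passwords corpus.

    The API returns lines of the form "SUFFIX:COUNT" for every known hash
    sharing the 5-character prefix; matching happens entirely client-side.
    """
    prefix, suffix = split_hash(password)
    if fetch is None:
        fetch = lambda p: urlopen(RANGE_URL + p).read().decode()
    for line in fetch(prefix).splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A login or signup flow could call this and warn — or force a reset — when the count is non-zero, which is precisely the early warning the researchers say Twitch users never got after Town of Salem.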

Twitch, when reached prior to publication, did not comment.

Jakubowski said until Twitch acts, streamers can do their part by encouraging their viewers to switch on the security feature. “Streamers are influencers — more users are likely to switch on two-factor if they hear it from a streamer,” he said.

“Getting more streamers to get on board with security will hopefully go a much longer way,” he said.


Google turns your Android phone into a security key


Your Android phone could soon replace your hardware security key to provide two-factor authentication access to your accounts. As the company announced at its Cloud Next conference today, it has developed a Bluetooth-based protocol that will be able to talk to its Chrome browser and provide a standards-based second factor for access to its services, similar to modern security keys.

It’s no secret that two-factor authentication remains one of the best ways to secure your online accounts. Typically, that second factor comes to you in the form of a push notification, text message or through an authentication app like the Google Authenticator. There’s always the risk of somebody intercepting those numbers or phishing your account and then quickly using your second factor to log in, though. Because a physical security key also ensures that you are on the right site before it exchanges the key, it’s almost impossible to phish this second factor. The key simply isn’t going to produce a token on the wrong site.
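That origin-binding property can be sketched with symmetric crypto. Real FIDO2/WebAuthn authenticators use per-site asymmetric key pairs and signed challenges; this HMAC toy only illustrates why an assertion minted for one origin is useless on another, unlike a six-digit code a victim can be tricked into retyping:

```python
import hmac
import hashlib
import os

def sign_assertion(key: bytes, origin: str, challenge: bytes) -> bytes:
    # The authenticator signs the origin it actually sees, so an assertion
    # captured on a look-alike domain never verifies on the real one.
    return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

def verify(key: bytes, expected_origin: str, challenge: bytes,
           assertion: bytes) -> bool:
    expected = sign_assertion(key, expected_origin, challenge)
    return hmac.compare_digest(expected, assertion)

# Usage: the server issues a fresh random challenge per login attempt.
challenge = os.urandom(32)
```

A phishing page can proxy the challenge, but the assertion it harvests embeds the phishing origin, so the legitimate site rejects it.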

Because Google is using the same standard here, just with different hardware, that phishing protection remains intact when you use your phone, too.

Bluetooth security keys aren’t a new thing, of course, and Google’s own Titan keys include a Bluetooth version (though they remain somewhat controversial). The user experience for those keys is a bit messy, though, since you have to connect the key and the device first. Google, however, says that it has done away with all of this thanks to a new protocol that uses Bluetooth but doesn’t necessitate the usual Bluetooth connection setup process. Sadly, though, the company didn’t quite go into details as to how this would work.

Google says this new feature will work with all Android 7+ devices that have Bluetooth and location services enabled. Pixel 3 phones, which include Google’s Titan M tamper-resistant security chip, get some extra protections, but the company is mostly positioning this as a bonus and not a necessity.

As far as the setup goes, the whole process isn’t all that different from setting up a security key (and you’ll still want to have a second or third key handy in case you ever lose or destroy your phone). You’ll be able to use this new feature for both work and private Google accounts.

For now, this also only works in combination with Chrome. The hope, though, is to establish a new standard that will then be integrated into other browsers as well. It’s only been a week or two since Google enabled support for logging into its own services with security keys on Edge and Firefox. That was a step forward. Now that Google offers a new, more convenient option, it’ll likely be a while before these competing browsers offer support, too — once again giving Google a bit of an edge.


UK report blasts Huawei for network security incompetence


The latest report by a UK oversight body set up to evaluate Chinese networking giant Huawei’s approach to security has dialled up pressure on the company, giving a damning assessment of what it describes as “serious and systematic defects” in its software engineering and cyber security competence.

The report falls short, though, of calling for an outright ban on Huawei equipment in domestic networks — an option U.S. president Trump continues dangling across the pond.

The report, prepared for the National Security Advisor of the UK by the Huawei Cyber Security Evaluation Centre (HCSEC) Oversight Board, also identifies new “significant technical issues” which it says lead to new risks for UK telecommunications networks using Huawei kit.

The HCSEC was set up by Huawei in 2010, under what the oversight board couches as “a set of arrangements with the UK government”, to provide information to state agencies on its products and strategies in order that security risks could be evaluated.

And last year, under pressure from UK security agencies concerned about technical deficiencies in its products, Huawei pledged to spend $2BN to try to address long-running concerns about its products in the country.

But the report throws doubt on its ability to address UK concerns — with the board writing that it has “not yet seen anything to give it confidence in Huawei’s capacity to successfully complete the elements of its transformation programme that it has proposed as a means of addressing these underlying defects”.

So it sounds like $2BN isn’t going to be nearly enough to fix Huawei’s security problem in just one European country.

The board also writes that it will require “sustained evidence” of better software engineering and cyber security “quality”, verified by HCSEC and the UK’s National Cyber Security Centre (NCSC), if there’s to be any possibility of it reaching a different assessment of the company’s ability to reboot its security credentials.

Another damning assessment contained in the report is that Huawei has made “no material progress” on issues raised by last year’s report.

All the issues identified by the security evaluation process relate to “basic engineering competence and cyber security hygiene”, which the board notes gives rise to vulnerabilities capable of being exploited by “a range of actors”.

It adds that the NCSC does not believe the defects found are a result of Chinese state interference.

This year’s report is the fifth the oversight board has produced since it was established in 2014, and it comes at a time of acute scrutiny for Huawei, as 5G network rollouts are ramping up globally — pushing governments to address head on suspicions attached to the Chinese giant and consider whether to trust it with critical next-gen infrastructure.

“The Oversight Board advises that it will be difficult to appropriately risk-manage future products in the context of UK deployments, until the underlying defects in Huawei’s software engineering and cyber security processes are remediated,” the report warns in one of several key conclusions that make very uncomfortable reading for Huawei.

“Overall, the Oversight Board can only provide limited assurance that all risks to UK national security from Huawei’s involvement in the UK’s critical networks can be sufficiently mitigated long-term,” it adds in summary.

Reached for its response to the report, a Huawei UK spokesperson sent us a statement in which it describes the $2BN earmarked for security improvements related to UK products as an “initial budget”.

It writes:

The 2019 OB [oversight board] report details some concerns about Huawei’s software engineering capabilities. We understand these concerns and take them very seriously. The issues identified in the OB report provide vital input for the ongoing transformation of our software engineering capabilities. In November last year Huawei’s Board of Directors issued a resolution to carry out a companywide transformation programme aimed at enhancing our software engineering capabilities, with an initial budget of US$2BN.

A high-level plan for the programme has been developed and we will continue to work with UK operators and the NCSC during its implementation to meet the requirements created as cloud, digitization, and software-defined everything become more prevalent. To ensure the ongoing security of global telecom networks, the industry, regulators, and governments need to work together on higher common standards for cyber security assurance and evaluation.

Seeking to find something positive to salvage from the report’s savaging, Huawei suggests it demonstrates the continued effectiveness of the HCSEC as a structure to evaluate and mitigate security risk — flagging a description where the board writes that it’s “arguably the toughest and most rigorous in the world”, and which Huawei claims shows at least there hasn’t been any increase in vulnerability of UK networks since the last report.

Though the report does identify new issues that open up fresh problems — albeit the underlying issues were presumably there last year too, just lying undiscovered.

The board’s withering assessment certainly amps up the pressure on Huawei which has been aggressively battling U.S.-led suspicion of its kit — claiming in a telecoms conference speech last month that “the U.S. security accusation of our 5G has no evidence”, for instance.

At the same time it has been appealing for the industry to work together to come up with collective processes for evaluating the security and trustworthiness of network kit.

And earlier this month it opened another cyber security transparency center — this time at the heart of Europe in Brussels, where the company has been lobbying policymakers to help establish security standards to foster collective trust. Though there’s little doubt that’s a long game.

Meanwhile, critics of Huawei can now point to impatience rising in the U.K., despite comments by the head of the NCSC, Ciaran Martin, last month — who said then that security agencies believe the risk of using Huawei kit can be managed, suggesting the government won’t push for an outright ban.

The report does not literally overturn that view but it does blast out a very loud and alarming warning about the difficulty for UK operators to “appropriately” risk-manage what’s branded defective and vulnerable Huawei kit. Including flagging the risk of future products — which the board suggests will be increasingly complex to manage. All of which could well just push operators to seek alternatives.

On the mitigation front, the board writes that — “in extremis” — the NCSC could order Huawei to carry out specific fixes for equipment currently installed in the UK. Though it also warns that such a step would be difficult, and could for example require hardware replacement which may not mesh with operators’ “natural” asset management and upgrade cycles, emphasizing it does not offer a sustainable solution to the underlying technical issues.

“Given both the shortfalls in good software engineering and cyber security practice and the currently unknown trajectory of Huawei’s R&D processes through their announced transformation plan, it is highly likely that security risk management of products that are new to the UK or new major releases of software for products currently in the UK will be more difficult,” the board writes in a concluding section discussing the UK national security risk.

“On the basis of the work already carried out by HCSEC, the NCSC considers it highly likely that there would be new software engineering and cyber security issues in products HCSEC has not yet examined.”

It also describes the number and severity of vulnerabilities plus architectural and build issues discovered by a relatively small team in the HCSEC as “a particular concern”.

“If an attacker has knowledge of these vulnerabilities and sufficient access to exploit them, they may be able to affect the operation of the network, in some cases causing it to cease operating correctly,” it warns. “Other impacts could include being able to access user traffic or reconfiguration of the network elements.”

In another section on mitigating risks of using Huawei kit, the board notes that “architectural controls” in place in most UK operators can limit the ability of attackers to exploit any vulnerable network elements not explicitly exposed to the public Internet — adding that such controls, combined with good opsec generally, will “remain critically important in the coming years to manage the residual risks caused by the engineering defects identified”.

In other highlights from the report the board does have some positive things to say, writing that an NCSC technical review of its capabilities showed improvements in 2018, while another independent audit of HCSEC’s ability to operate independently of Huawei HQ once again found “no high or medium priority findings”.

“The audit report identified one low-rated finding, relating to delivery of information and equipment within agreed Service Level Agreements. Ernst & Young concluded that there were no major concerns and the Oversight Board is satisfied that HCSEC is operating in line with the 2010 arrangements between HMG and the company,” it further notes.

Last month the European Commission said it was preparing to step in to ensure a “common approach” across the European Union where 5G network security is concerned — warning of the risk of fragmentation across the single market. Though it has so far steered clear of any bans.

Earlier this week it issued a set of recommendations for Member States, combining legislative and policy measures to assess 5G network security risks and help strengthen preventive measures.

Among the operational measures it suggests Member States take is to complete a national risk assessment of 5G network infrastructures by the end of June 2019, and follow that by updating existing security requirements for network providers — including conditions for ensuring the security of public networks.

“These measures should include reinforced obligations on suppliers and operators to ensure the security of the networks,” it recommends. “The national risk assessments and measures should consider various risk factors, such as technical risks and risks linked to the behaviour of suppliers or operators, including those from third countries. National risk assessments will be a central element towards building a coordinated EU risk assessment.”  

At an EU level the Commission said Member States should share information on network security, saying this “coordinated work should support Member States’ actions at national level and provide guidance to the Commission for possible further steps at EU level” — leaving the door open for further action.

While the EU’s executive body has not pushed for a pan-EU ban on any 5G vendors it did restate Member States’ right to exclude companies from their markets for national security reasons if they fail to comply with their own standards and legal framework.


Law enforcement needs to protect citizens and their data

Robert Anderson
Contributor

Robert Anderson served for 21 years in the FBI, retiring as executive assistant director of the Criminal, Cyber, Response and Services Branch. He is currently an advisor at The Chertoff Group and the chief executive of Cyber Defense Labs.

Over the past several years, the law enforcement community has grown increasingly concerned about the conduct of digital investigations as technology providers enhance the security protections of their offerings—what some of my former colleagues refer to as “going dark.”

Data once readily accessible to law enforcement is now encrypted, protecting consumers’ data from hackers and criminals. However, these efforts have also had what Android’s security chief called the “unintended side effect” of also making this data inaccessible to law enforcement. Consequently, many in the law enforcement community want the ability to compel providers to allow them to bypass these protections, often citing physical and national security concerns.

I know first-hand the challenges facing law enforcement, but these concerns must be addressed in a broader security context, one that takes into consideration the privacy and security needs of industry and our citizens in addition to those raised by law enforcement.

Perhaps the best example of the law enforcement community’s preferred solution is Australia’s recently passed Assistance and Access Bill, an overly broad law that allows Australian authorities to compel service providers, such as Google and Facebook, to re-engineer their products and bypass encryption protections to allow law enforcement to access customer data.

While the bill includes limited restrictions on law enforcement requests, the vague definitions and concentrated authorities give the Australian government sweeping powers that ultimately undermine the security and privacy of the very citizens they aim to protect. Major tech companies, such as Apple and Facebook, agree and have been working to resist the Australian legislation and a similar bill in the UK.

Image: Bryce Durbin/TechCrunch

Newly created encryption backdoors and work-arounds will become the target of criminals, hackers, and hostile nation states, offering new opportunities for data compromise and attack through the newly created tools and the flawed code that inevitably accompanies some of them. These vulnerabilities undermine providers’ efforts to secure their customers’ data, creating new and powerful vulnerabilities even as companies struggle to address existing ones.

And these vulnerabilities would not only impact private citizens, but governments as well, including services and devices used by the law enforcement and national security communities. This comes amidst government efforts to significantly increase corporate responsibility for the security of customer data through laws such as the EU’s General Data Protection Regulation. Who will consumers, or the government, blame when a government-mandated backdoor is used by hackers to compromise user data? Who will be responsible for the damage?

Companies have a fiduciary responsibility to protect their customers’ data, which not only includes personally identifiable information (PII), but their intellectual property, financial data, and national security secrets.

Worse, the vulnerabilities created under laws such as the Assistance and Access Bill would be subject almost exclusively to the decisions of law enforcement authorities, leaving companies unable to make their own decisions about the security of their products. How can we expect a company to protect customer data when their most fundamental security decisions are out of their hands?


Image: Bryce Durbin/TechCrunch

Thus far law enforcement has chosen to downplay, if not ignore, these concerns—focusing singularly on getting the information they need. This is understandable—a law enforcement officer should use every power available to them to solve a case, just as I did when I served as a State Trooper and as an FBI Special Agent, including when I served as Executive Assistant Director (EAD) overseeing the San Bernardino terror attack case during my final months in 2015.

Decisions regarding these types of sweeping powers should not and cannot be left solely to law enforcement. It is up to the private sector, and our government, to weigh competing security and privacy interests. Our government cannot sacrifice the ability of companies and citizens to properly secure their data and systems’ security in the name of often vague physical and national security concerns, especially when there are other ways to remedy the concerns of law enforcement.

That said, these security responsibilities cut both ways. Recent data breaches demonstrate that many companies have a long way to go to adequately protect their customers’ data. Companies cannot reasonably cry foul over the negative security impacts of proposed law enforcement data access while continuing to neglect and undermine the security of their own users’ data.

Providers and the law enforcement community should be held to robust security standards that ensure the security of our citizens and their data—we need legal restrictions on how government accesses private data and on how private companies collect and use the same data.

There may not be an easy answer to the “going dark” issue, but it is time for all of us, in government and the private sector, to understand that enhanced data security through properly implemented encryption and data use policies is in everyone’s best interest.

The “extraordinary” access sought by law enforcement cannot exist in a vacuum—it will have far-reaching and significant impacts well beyond the narrow confines of a single investigation. It is time for a serious conversation between law enforcement and the private sector to recognize that their security interests are two sides of the same coin.


Huawei opens a cybersecurity transparency center in the heart of Europe


5G kit maker Huawei opened a Cyber Security Transparency center in Brussels yesterday as the Chinese tech giant continues to try to neutralize suspicion in Western markets that its networking gear could be used for espionage by the Chinese state.

Huawei announced its plan to open a European transparency center last year but giving a speech at an opening ceremony for the center yesterday the company’s rotating CEO, Ken Hu, said: “Looking at the events from the past few months, it’s clear that this facility is now more critical than ever.”

Huawei said the center, which will demonstrate the company’s security solutions in areas including 5G, IoT and cloud, aims to provide a platform to enhance communication and “joint innovation” with all stakeholders, as well as providing a “technical verification and evaluation platform for our customers”.

“Huawei will work with industry partners to explore and promote the development of security standards and verification mechanisms, to facilitate technological innovation in cyber security across the industry,” it said in a press release.

“To build a trustworthy environment, we need to work together,” Hu also said in his speech. “Both trust and distrust should be based on facts, not feelings, not speculation, and not baseless rumour.

“We believe that facts must be verifiable, and verification must be based on standards. So, to start, we need to work together on unified standards. Based on a common set of standards, technical verification and legal verification can lay the foundation for building trust. This must be a collaborative effort, because no single vendor, government, or telco operator can do it alone.”

The company made a similar plea at Mobile World Congress last week when its rotating chairman, Guo Ping, used a keynote speech to claim its kit is secure and will never contain backdoors. He also pressed the telco industry to work together on creating standards and structures to enable trust.

“Government and the mobile operators should work together to agree what this assurance testing and certification rating for Europe will be,” he urged. “Let experts decide whether networks are safe or not.”

Also speaking at MWC last week the EC’s digital commissioner, Mariya Gabriel, suggested the executive is prepared to take steps to prevent security concerns at the EU Member State level from fragmenting 5G rollouts across the Single Market.

She told delegates at the flagship industry conference that Europe must have “a common approach to this challenge” and “we need to bring it on the table soon”.

Though she did not suggest exactly how the Commission might act.

A spokesman for the Commission confirmed that EC VP Andrus Ansip and Huawei’s Hu met in person yesterday to discuss issues around cybersecurity, 5G and the Digital Single Market — adding that the meeting was held at the request of Hu.

“The Vice-President emphasised that the EU is an open rules based market to all players who fulfil EU rules,” the spokesman told us. “Specific concerns by European citizens should be addressed. We have rules in place which address security issues. We have EU procurement rules in place, and we have the investment screening proposal to protect European interests.”

“The VP also mentioned the need for reciprocity in respective market openness,” he added, further noting: “The College of the European Commission will hold today an orientation debate on China where this issue will come back.”

In a tweet following the meeting Ansip also said: “Agreed that understanding local security concerns, being open and transparent, and cooperating with countries and regulators would be preconditions for increasing trust in the context of 5G security.”

Met with @Huawei rotating CEO Ken Hu to discuss #5G and #cybersecurity.

Agreed that understanding local security concerns, being open and transparent, and cooperating with countries and regulators would be preconditions for increasing trust in the context of 5G security. pic.twitter.com/ltATdnnzvL

— Andrus Ansip (@Ansip_EU) March 4, 2019

Reuters reports Hu saying the pair had discussed the possibility of setting up a cybersecurity standard along the lines of Europe’s updated privacy framework, the General Data Protection Regulation (GDPR).

Although the Commission did not respond when we asked it to confirm that discussion point.

GDPR was multiple years in the making before European institutions agreed on a final text that could come into force. So if the Commission is keen to act “soon” — per Gabriel’s comments on 5G security — to fashion supportive guardrails for next-gen network rollouts, a full-blown regulation seems an unlikely template.

More likely, GDPR is being used by Huawei as a byword for creating consensus around rules that work across an ecosystem of many players by providing standards that different businesses can latch onto in an effort to keep moving.

Hu referenced GDPR directly in his speech yesterday, lauding it as “a shining example” of Europe’s “strong experience in driving unified standards and regulation” — so the company is clearly well-versed in how to flatter hosts.

“It sets clear standards, defines responsibilities for all parties, and applies equally to all companies operating in Europe,” he went on. “As a result, GDPR has become the golden standard for privacy protection around the world. We believe that European regulators can also lead the way on similar mechanisms for cyber security.”

Hu ended his speech with a further industry-wide plea, saying: “We also commit to working more closely with all stakeholders in Europe to build a system of trust based on objective facts and verification. This is the cornerstone of a secure digital environment for all.”

Huawei’s appetite to do business in Europe is not in doubt, though.

The question is whether Europe’s telcos and governments can be convinced to swallow any doubts they might have about spying risks and commit to working with the Chinese kit giant as they roll out a new generation of critical infrastructure.

Powered by WPeMatico

Europe is prepared to rule over 5G cybersecurity

Posted by | 5g, artificial intelligence, Australia, barcelona, broadband, China, computer security, EC, Emerging-Technologies, Europe, european commission, european union, Germany, huawei, Internet of Things, Mariya Gabriel, Mobile, mwc 2019, network technology, New Zealand, Security, telecommunications, trump, UK government, United Kingdom, United States, zte | No Comments

The European Commission’s digital commissioner has warned the mobile industry to expect it to act over security concerns attached to Chinese network equipment makers.

The Commission is considering a de facto ban on kit made by Chinese companies, including Huawei, in the face of security and espionage concerns, per Reuters.

Appearing on stage at the Mobile World Congress trade show in Barcelona today, Mariya Gabriel, European commissioner for digital economy and society, flagged network “cybersecurity” during her scheduled keynote, warning delegates it’s stating the obvious to say that “when 5G services become mission critical 5G networks need to be secure”.

Geopolitical concerns between the West and China are being accelerated and pushed to the fore as the era of 5G network upgrades approaches, as well as by ongoing tensions between the U.S. and China over trade.

“I’m well aware of the unrest among all of you key actors in the telecoms sector caused by the ongoing discussions around the cybersecurity of 5G,” Gabriel continued, fleshing out the Commission’s current thinking. “Let me reassure you: The Commission takes your view very seriously. Because you need to run these systems every day. Nobody is helped by premature decisions based on partial analysis of the facts.

“However it is also clear that Europe has to have a common approach to this challenge. And we need to bring it on the table soon. Otherwise there is a risk that fragmentation rises because of diverging decisions taken by Member States trying to protect themselves.”

“We all know that this fragmentation damages the digital single market. So therefore we are working on this important matter with priority. And to the Commission we will take steps soon,” she added.

The theme of this year’s show is “intelligent connectivity”; the notion that the incoming 5G networks will not only create links between people and (many, many more) things but understand the connections they’re making at a greater depth and resolution than has been possible before, leveraging the big data generated by many more connections to power automated decision-making in near real time, with low latency another touted 5G benefit (as well as many more connections per cell).

Futuristic scenarios being floated include connected cars neatly pulling to the sides of the road ahead of an ambulance rushing a patient to hospital — or indeed medical operations being aided and even directed remotely in real-time via 5G networks supporting high resolution real-time video streaming.

But for every touted benefit there are easy to envisage risks to network technology that’s being designed to connect everything all of the time — thereby creating a new and more powerful layer of critical infrastructure society will be relying upon.

Last fall the Australian government issued new security guidelines for 5G networks that essentially block Chinese companies such as Huawei and ZTE from providing equipment to operators — justifying the move by saying that differences in the way 5G operates compared to previous network generations introduce new risks to national security.

New Zealand followed suit shortly after, saying kit from the Chinese companies posed a significant risk to national security.

While in the U.S. President Trump has made 5G network security a national security priority since 2017, and a bill was passed last fall banning Chinese companies from supplying certain components and services to government agencies.

The ban is due to take effect over two years, but lawmakers have been pressuring local carriers to drop 5G collaborations with companies such as Huawei.

In Europe the picture is so far more mixed. A UK government report last summer investigating Huawei’s broadband and mobile infrastructure raised further doubts, and last month Germany was reported to be mulling a 5G ban on the Chinese kit maker.

But more recently the two EU Member States have been reported to no longer be leaning towards a total ban — apparently believing any risk can be managed and mitigated by oversight and/or partial restrictions.

It remains to be seen how the Commission could step in to try to harmonize security actions taken by Member States around nascent 5G networks. But it appears prepared to set rules.

That said, Gabriel gave no hint of the Commission’s thinking today, beyond repeating its preferred position of less fragmentation, more harmonization to avoid collateral damage to the overarching Digital Single Market initiative — i.e. if Member States start fragmenting into a patchwork based on varying security concerns.

We’ve reached out to the Commission for further comment and will update this story with any additional context.

During the keynote she was careful to talk up the transformative potential of 5G connectivity while also saying innovation must work in lock-step with European “values”.

“Europe has to keep pace with other regions and early movers while making sure that its citizens and businesses benefit swiftly from the new infrastructures and the many applications that will be built on top of them,” she said.

“Digital is helping us and we need to reap its opportunities, mitigate its risks and make sure it is respectful of our values as much as driven by innovation. Innovation and values. Two key words. That is the vision we have delivered in terms of the defence for our citizens in Europe. Together we have decided to construct a Digital Single Market that reflects the values and principles upon which the European Union has been built.”

Her speech also focused on AI, with the commissioner highlighting various EC initiatives to invest in and support private sector investment in artificial intelligence — saying it’s targeting €20BN in “AI-directed investment” across the private and public sector by 2020, with the goal for the next decade being “to reach the same amount as an annual average” — and calling on the private sector to “contribute to ensure that Europe reaches the level of investment needed for it to become a world stage leader also in AI”.

But again she stressed the need for technology developments to be thoughtfully managed so they reflect the underlying society rather than negatively disrupting it. The goal should be what she dubbed “human-centric AI”.

“When we talk about AI and new technologies development for us Europeans it is not only about investing. It is mainly about shaping AI in a way that reflects our European values and principles. An ethical approach to AI is key to enable competitiveness — it will generate user trust and help facilitate its uptake,” she said.

“Trust is the key word. There is no other way. It is only by ensuring trustworthiness that Europe will position itself as a leader in cutting edge, secure and ethical AI. And that European citizens will enjoy AI’s benefits.”


Lenovo Watch X was riddled with security bugs, researcher says

Posted by | api, Bluetooth, China, computer security, computing, encryption, Gadgets, lenovo, Password, smartwatches, spokesperson, Wearables, web server, Zhongguancun | No Comments

Lenovo’s Watch X was widely panned as “absolutely terrible.” As it turns out, so was its security.

The low-end, $50 Watch X was one of Lenovo’s cheapest smartwatches. Available only in the China market, anyone who wants one has to buy it directly from the mainland. Lucky for Erez Yalon, head of security research at Checkmarx, an application security testing company, he was given one by a friend. It didn’t take him long to find several vulnerabilities that allowed him to change users’ passwords, hijack accounts and spoof phone calls.

Because the smartwatch wasn’t using any encryption to send data from the app to the server, Yalon said he was able to see his registered email address and password sent in plain text, as well as data about how he was using the watch, like how many steps he was taking.

“The entire API was unencrypted,” said Yalon in an email to TechCrunch. “All data was transferred in plain-text.”

The API that helps power the watch was easily abused, he found, allowing him to reset anyone’s password simply by knowing a person’s username. That could’ve given him access to anyone’s account, he said.
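The flaw class Yalon describes can be sketched in a few lines. This is a hypothetical illustration — not Lenovo's actual API, and the function and field names are invented — contrasting a reset endpoint that trusts the username alone with one that also demands a secret token only the account owner should hold (e.g. one emailed out of band):

```python
# Hypothetical sketch of the password-reset flaw class described above.
# Names (USERS, reset_token, etc.) are invented for illustration.

USERS = {"alice": {"password": "old-secret", "reset_token": "tok-1234"}}

def reset_password_vulnerable(username, new_password):
    # Trusts the username alone: anyone who knows it can
    # overwrite the password and take over the account.
    user = USERS.get(username)
    if user is None:
        return False
    user["password"] = new_password
    return True

def reset_password_fixed(username, reset_token, new_password):
    # Also requires a secret reset token tied to the account,
    # so knowing a username is no longer enough.
    user = USERS.get(username)
    if user is None or user["reset_token"] != reset_token:
        return False
    user["password"] = new_password
    return True

# An attacker who knows only the username succeeds against the first
# endpoint and fails against the second.
assert reset_password_vulnerable("alice", "pwned")
assert not reset_password_fixed("alice", "wrong-token", "pwned2")
```

Transport encryption (the TLS fix Yalon recommends below) addresses the snooping, but the authorization check is what closes the takeover path.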

Not only that, he found that the watch was sharing his precise geolocation with a server in China. Given the watch’s exclusivity to China, it might not be a red flag to natives. But Yalon said the watch had “already pinpointed my location” before he had even registered his account.

Yalon’s research wasn’t just limited to the leaky API. He found that the Bluetooth-enabled smartwatch could also be manipulated from nearby, by sending crafted Bluetooth requests. Using a small script, he demonstrated how easy it was to spoof a phone call on the watch.

Using a similar malicious Bluetooth command, he could also set the alarm to go off — again and again. “The function allows adding multiple alarms, as often as every minute,” he said.

Lenovo didn’t have much to say about the vulnerabilities, besides confirming their existence.

“The Watch X was designed for the China market and is only available from Lenovo to limited sales channels in China,” said spokesperson Andrew Barron. “Our [security] team has been working with the [original device manufacturer] that makes the watch to address the vulnerabilities identified by a researcher and all fixes are due to be completed this week.”

Yalon said that encrypting the traffic between the watch, the Android app and its web server would prevent snooping and help reduce manipulation.

“Fixing the API permissions eliminates the ability of malicious users to send commands to the watch, spoof calls, and set alarms,” he said.


Fortnite bugs put accounts at risk of takeover

Posted by | computer security, cryptography, fortnite, Gaming, Hack, hacking, Password, Prevention, Security, security breaches, software testing, spokesperson, vulnerability | No Comments

With one click, any semi-skilled hacker could have silently taken over a Fortnite account, according to a cybersecurity firm that says the bug is now fixed.

Researchers at Check Point say the three vulnerabilities chained together could have affected any of its 200 million players. The flaws, if exploited, would have stolen the account access token set on the gamer’s device once they entered their password.

Once stolen, that token could be used to impersonate the gamer and log in as if they were the account holder, without needing their password.

The researchers say the flaw lies in how Epic Games, the maker of Fortnite, handles login requests. They said they could send any user a crafted link that appears to come from Epic Games’ own domain and steal the access token needed to break into an account.

Check Point’s Oded Vanunu explains how the bug works. (Image: supplied)

“It’s important to remember that the URL is coming from an Epic Games domain, so it’s transparent to the user and any security filter will not suspect anything,” said Oded Vanunu, Check Point’s head of products vulnerability research, in an email to TechCrunch.

Here’s how it works: The user clicks on a link pointing to an epicgames.com subdomain, where the hacker has embedded a link to malicious code hosted on their own server by exploiting a cross-site scripting weakness in the subdomain. Once the malicious script loads, unbeknownst to the Fortnite player, it steals their account token and sends it back to the hacker.

“If the victim user is not logged into the game, he or she would have to log in first,” said Vanunu. “Once that person is logged in, the account can be stolen.”
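One standard defense against this class of bug is validating where a login flow is allowed to send users (and tokens). The sketch below is hypothetical — it is not Epic's code, and the allowlist hosts are placeholders — showing an allowlist check that rejects redirect targets outside trusted domains:

```python
# Hypothetical sketch of redirect-target validation for a login flow.
# ALLOWED_HOSTS is a placeholder allowlist, not Epic's real config.

from urllib.parse import urlparse

ALLOWED_HOSTS = {"epicgames.com", "accounts.epicgames.com"}

def safe_redirect_target(url):
    # Only forward users (and any tokens) to hosts we explicitly
    # trust over HTTPS; attacker servers and javascript: URLs fail.
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

assert safe_redirect_target("https://accounts.epicgames.com/login")
assert not safe_redirect_target("https://evil.example.com/steal")
assert not safe_redirect_target("javascript:alert(1)")
```

Exact-host matching matters here: a substring or prefix check (e.g. accepting anything containing “epicgames.com”) would still let `epicgames.com.evil.example` through.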

Epic Games has since fixed the vulnerability.

“We were made aware of the vulnerabilities and they were soon addressed,” said Nick Chester, a spokesperson for Epic Games. “We thank Check Point for bringing this to our attention.”

“As always, we encourage players to protect their accounts by not re-using passwords and using strong passwords, and not sharing account information with others,” he said.

When asked, Epic Games would not say if user data or accounts were compromised as a result of this vulnerability.


Civil servant who watched porn at work blamed for infecting a US government network with malware

Posted by | Android, computer security, computing, cybercrime, Cyberwarfare, Government, malware, national security, Prevention, ransomware, Removable media, Security, security breaches, spokesperson, U.S. government, United States | No Comments

A U.S. government network was infected with malware thanks to one employee’s “extensive history” of watching porn on his work computer, investigators have found.

The audit, carried out by the U.S. Department of the Interior’s inspector general, found that a U.S. Geological Survey (USGS) network at the EROS Center, a satellite imaging facility in South Dakota, was infected after an unnamed employee visited thousands of porn pages that contained malware, which downloaded to his laptop and “exploited the USGS’ network.” Investigators found that many of the porn images were “subsequently saved to an unauthorized USB device and personal Android cell phone,” which was connected to the employee’s government-issued computer.

Investigators found that his Android cell phone “was also infected with malware.”

The findings were made public in a report earlier this month, but the report was buried on the U.S. government’s oversight website and went largely unreported.

It’s bad enough in this day and age that a government watchdog has to remind civil servants to not watch porn at work — let alone on their work laptop. The inspector general didn’t say what the employee’s fate was, but ripped into the Department of the Interior’s policies for letting him get that far in the first place.

“We identified two vulnerabilities in the USGS’ IT security posture: web-site access and open USB ports,” the report said.

There is a (slightly) bright side. The EROS Center, which monitors and archives images of the planet’s land surface, doesn’t operate any classified networks, a spokesperson for Interior’s inspector general told TechCrunch in an email, ruling out any significant harm to national security. But the spokesperson wouldn’t say what kind of malware was used — only that “the malware helps enable data exfiltration and is also associated with ransomware attacks.”

Investigators recommended that USGS enforce a “strong blacklist policy” of known unauthorized websites and “regularly monitor employee web usage history.”

The report also said the agency should lock down its USB drive policy, restricting employees from using removable media on government devices, but it’s not known if the recommendations have yet gone into place. USGS did not return a request for comment.


Smart home makers hoard your data, but won’t say if the police come for it

Posted by | Amazon, Apple, computer security, Facebook, Gadgets, Google, Government, hardware, Internet of Things, law enforcement, national security, privacy, Security, smart home devices, television, transparency report | No Comments

A decade ago, it was almost inconceivable that nearly every household item could be hooked up to the internet. These days, it’s near impossible to avoid a smart home gadget, and they’re vacuuming up a ton of new data that we’d never normally think about.

Thermostats know the temperature of your house, and smart cameras and sensors know when someone’s walking around your home. Smart assistants know what you’re asking for, and smart doorbells know who’s coming and going. And thanks to the cloud, that data is available to you from anywhere — you can check in on your pets from your phone or make sure your robot vacuum cleaned the house.

Because the data is stored or accessible by the smart home tech makers, law enforcement and government agencies have increasingly sought data from the companies to solve crimes.

And device makers won’t say if your smart home gadgets have been used to spy on you.

For years, tech companies have published transparency reports — a semi-regular disclosure of the number of demands or requests a company gets from the government for user data. Google was first in 2010. Other tech companies followed in the wake of Edward Snowden’s revelations that the government had enlisted tech companies’ aid in spying on their users. Even telcos, implicated in wiretapping and turning over Americans’ phone records, began to publish their figures to try to rebuild their reputations.

As the smart home revolution began to thrive, police saw new opportunities to obtain data where they hadn’t before. Police sought Echo data from Amazon to help solve a murder. Fitbit data was used to charge a 90-year-old man with the murder of his stepdaughter. And recently, Nest was compelled to turn over surveillance footage that led to gang members pleading guilty to identity theft.

Yet, Nest — a division of Google — is the only major smart home device maker that has published how many data demands it receives.

As first noted by Forbes last week, Nest’s little-known transparency report doesn’t reveal much — only that it’s turned over user data about 300 times since mid-2015 on over 500 Nest users. Nest also said it hasn’t to date received a secret order for user data on national security grounds, such as in cases of investigating terrorism or espionage. Nest’s transparency report is woefully vague compared to some of the more detailed reports by Apple, Google and Microsoft, which break out their data requests by lawful request, by region and often by the kind of data the government demands.

As Forbes said, “a smart home is a surveilled home.” But at what scale?

We asked some of the most well-known smart home makers on the market if they plan to release a transparency report, or disclose the number of demands they receive for data from their smart home devices.

For the most part, we received fairly dismal responses.

What the big four tech giants said

Amazon did not respond to requests for comment when asked if it will break out the number of demands it receives for Echo data, but a spokesperson told me last year that while its reports include Echo data, it would not break out those figures.

Facebook said that its transparency report section will include “any requests related to Portal,” its new hardware screen with a camera and a microphone. Although the device is new, a spokesperson did not comment on if the company will break out the hardware figures separately.

Google pointed us to Nest’s transparency report but did not comment on its own efforts in the hardware space — notably its Google Home products.

And Apple said that there’s no need to break out its smart home figures — such as its HomePod — because there would be nothing to report. The company said user requests made to HomePod are given a random identifier that cannot be tied to a person.

What the smaller but notable smart home players said

August, a smart lock maker, said it “does not currently have a transparency report and we have never received any National Security Letters or orders for user content or non-content information under the Foreign Intelligence Surveillance Act (FISA),” but did not comment on the number of subpoenas, warrants and court orders it receives. “August does comply with all laws and when faced with a court order or warrant, we always analyze the request before responding,” a spokesperson said.

Roomba maker iRobot said it “has not received any demands from governments for customer data,” but wouldn’t say if it planned to issue a transparency report in the future.

Both Arlo, the former Netgear smart home division, and Signify, formerly Philips Lighting, said they do not have transparency reports. Arlo didn’t comment on its future plans, and Signify said it has no plans to publish one. 

Ring, a smart doorbell and security device maker, did not answer our questions on why it doesn’t have a transparency report, but said it “will not release user information without a valid and binding legal demand properly served on us” and that Ring “objects to overbroad or otherwise inappropriate demands as a matter of course.” When pressed, a spokesperson said it plans to release a transparency report in the future, but did not say when.

Spokespeople for Honeywell and Canary — both of which have smart home security products — did not comment by our deadline.

And, Samsung, a maker of smart sensors, trackers and internet-connected televisions and other appliances, did not respond to a request for comment.

Only Ecobee, a maker of smart switches and sensors, said it plans to publish its first transparency report “at the end of 2018.” A spokesperson confirmed that, “prior to 2018, Ecobee had not been requested nor required to disclose any data to government entities.”

All in all, that paints a fairly dire picture for anyone thinking that when the gadgets in your home aren’t working for you, they could be helping the government.

As helpful and useful as smart home gadgets can be, few fully understand the breadth of data that the devices collect — even when we’re not using them. Your smart TV may not have a camera to spy on you, but it knows what you’ve watched and when — data that police used to secure the conviction of a sex offender. Even data from the moment a murder suspect pushed the button on his home alarm key fob was enough to help convict him.

Two years ago, former U.S. director of national intelligence James Clapper said the government was looking at smart home devices as a new foothold for intelligence agencies to conduct surveillance. And it’s only going to become more common as the number of internet-connected devices grows. Gartner said more than 20 billion devices will be connected to the internet by 2020.

The chances that the government is spying on you through the internet-connected camera in your living room or your thermostat are slim — but it’s naive to think that it can’t.

But the smart home makers wouldn’t want you to know that. At least, most of them.
