T-Mobile customers report outage, can’t make calls or send text messages

T-Mobile customers across the U.S. say they can’t make calls or send text messages following an apparent outage — although mobile data appears to be unaffected.

We tested with a T-Mobile phone in the office. Both calls to and from the T-Mobile phone failed. When we tried to send a text message, it said the message could not be sent. The outage began around 3pm PT (6pm ET).

Users took to social media to complain about the outage. It’s not clear how many customers are affected, but reports have come in from across the U.S.

A T-Mobile support account said the cell giant has “engaged our engineers and are working on a resolution.”

In a tweet two hours into the outage, chief executive John Legere acknowledged the outage, adding that the company has “already started to see signs of recovery.”

T-Mobile is the third largest cell carrier after Verizon (which owns TechCrunch) and AT&T. The company had its proposed $26.5 billion merger with Sprint approved by the Federal Communications Commission, despite a stream of state attorneys general lining up to block the deal.

Updated with acknowledgement by chief executive John Legere.

Powered by WPeMatico

Toolkit for digital abuse could help victims protect themselves

Domestic abuse comes in digital forms as well as physical and emotional, but a lack of tools to address this kind of behavior leaves many victims unprotected and desperate for help. This Cornell project aims to define and detect digital abuse in a systematic way.

Digital abuse may be many things: hacking the victim’s computer, using knowledge of passwords or personal data to impersonate them or interfere with their presence online, accessing photos to track their location, and so on. As with other forms of abuse, there are as many patterns as there are people who suffer from it.

But with something like emotional abuse, there are decades of studies and clinical approaches to address how to categorize and cope with it. Not so with newer phenomena like being hacked or stalked via social media. That means there’s no standard playbook for them, and both the abused and those helping them are left scrambling for answers.

“Prior to this work, people were reporting that the abusers were very sophisticated hackers, and clients were receiving inconsistent advice. Some people were saying, ‘Throw your device out.’ Other people were saying, ‘Delete the app.’ But there wasn’t a clear understanding of how this abuse was happening and why it was happening,” explained Diana Freed, a doctoral student at Cornell Tech and co-author of a new paper about digital abuse.

“They were making their best efforts, but there was no uniform way to address this,” said co-author Sam Havron. “They were using Google to try to help clients with their abuse situations.”

Investigating this problem with the help of a National Science Foundation grant to examine the role of tech in domestic abuse, they and faculty collaborators at Cornell and NYU came up with a new approach.

First, there’s a standardized questionnaire to characterize the type of tech-based abuse being experienced. It may not occur to someone who isn’t tech-savvy that their partner may know their passwords, or that there are social media settings they can use to prevent that partner from seeing their posts. This information and other data are added to a sort of digital presence diagram the team calls the “technograph,” which helps the victim visualize their technological assets and exposure.

The team also created a device they call the IPV Spyware Discovery, or ISDi. It’s basically spyware-scanning software loaded on a device of its own, so it can check the victim’s device without installing anything on it. This is important because an abuser may have installed tracking software that would alert them if the victim tries to remove it. Sound extreme? Not to people fighting a custody battle who can’t seem to escape the all-seeing eye of an abusive ex. And these spying tools are readily available for purchase.

“It’s consistent, it’s data-driven and it takes into account at each phase what the abuser will know if the client makes changes. This is giving people a more accurate way to make decisions and providing them with a comprehensive understanding of how things are happening,” explained Freed.

Even if the abuse can’t be instantly counteracted, it can be helpful simply to understand it and know that there are some steps that can be taken to help.

The authors have been piloting their work at New York’s Family Justice Centers, and following some testing have released the complete set of documents and tools for anyone to use.

This isn’t the team’s first piece of work on the topic — you can read their other papers and learn more about their ongoing research at the Intimate Partner Violence Tech Research program site.

Most EU cookie ‘consent’ notices are meaningless or manipulative, study finds

New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.

As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.

But many don’t — even now, more than a year later.

The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked in tricks, who can blame them?)

The researchers conclude that if consent to drop cookies were being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.

The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.

The researchers ran a number of studies, gathering screengrabs of ~5,000 cookie notices from leading websites and compiling a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play, in order to paint a picture of current implementations.

They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs, which the researchers tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.

Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.

A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.

And while they found that nearly all cookie notices (92%) contained a link to the site’s privacy policy, only a minority mention the specific purpose of the data collection (39%) or who can access the data (21%).

The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.

Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.

“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.

“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”

The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.

This is where consent gets manipulated — to flip visitors’ preference for privacy.

They found that the more choices offered in a cookie notice, the more likely visitors were to decline the use of cookies. (Which is an interesting finding in light of the vendor laundry lists frequently baked into the so-called “transparency and consent framework” which the industry association, the Internet Advertising Bureau (IAB), has pushed as the standard for its members to use to gather GDPR consents.)

“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”

Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:

Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.
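The “CV” the paper quotes is Cramér’s V, an effect size derived from a chi-squared test on a contingency table of condition versus choice. A minimal stdlib-only sketch — the counts below are purely illustrative, not the paper’s raw data:

```python
import math

def cramers_v(table):
    """Cramér's V for an r x c contingency table (list of rows of counts)."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # Pearson chi-squared statistic against the independence expectation.
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(table), len(table[0])) - 1
    return math.sqrt(chi2 / (n * k))

# Hypothetical counts: rows = nudged / not nudged, cols = accepted / declined.
table = [[508, 492],
         [392, 608]]
print(round(cramers_v(table), 3))  # → 0.117
```

A V of 0 means the nudge made no difference to what users chose; 1 means it fully determined the outcome, so the 0.50 reported above is a strong association.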

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).

That figure rises only slightly, to between 1% and 4%, for visitors who would enable some cookie categories in the same privacy-by-default scenario.

“Our results… indicate that the privacy-by-default and purposed-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1 % of active consent for the use of third parties,” they write in conclusion.

They do flag some limitations with the study, pointing out that the dataset behind the 0.1% figure is biased — the nationality of its visitors is not generally representative of public Internet users, and the data comes from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS — saying its data indicated a higher ‘accept all’ click rate, but still only marginally higher: just 5.6%.

Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.

This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.

Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.

It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.

You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)

Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.

For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)

While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.

(Those of you reading TechCrunch back in January 2018 may also remember this sage plain-English advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)

Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.

Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android immediately after GDPR started being applied in May last year.

While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.

So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.

But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)

The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.

However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.

Here’s what the ICO says on Recital 25 of the ePrivacy Directive:

  • ‘specific website content’ means that you should not make ‘general access’ subject to conditions requiring users to accept non-essential cookies – you can only limit certain content if the user does not consent;
  • the term ‘legitimate purpose’ refers to facilitating the provision of an information society service – ie, a service the user explicitly requests. This does not include third parties such as analytics services or online advertising;

So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.

It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor in the costs of data protection compliance to their calculations — as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)

Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.

“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame, it was not intended to fix this specific problem. The uncertainty of the current situation is caused [by] the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subject to a lot of ‘consent theatre’ (i.e. noisy yet non-compliant cookie notices) — which in turn is causing knock-on problems of consumer mistrust and consent fatigue with all these useless pop-ups. Which works against the core aims of the EU’s data protection framework.

“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”

“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies. 

Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.

“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless  something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.

“And the conflict of interest for Google with Chrome are obvious.”

The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.

At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.

They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.

Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).

And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.

Much has been made at the highest levels in Europe of being able to point to US counterparts, caught on the hop by ongoing tech privacy and security scandals, while EU policymakers savor the schadenfreude of seeing their US counterparts being forced to ask publicly whether it’s time for America to have its own GDPR.

With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.

However they are also aware the world is watching closely and critically — which makes enforcement a very key piece. It must slot in too. They need the GDPR to work on paper and be seen to be working in practice.

So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.

A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”

“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added. 

“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”

All of which suggests that the movement, when it comes, must come from a reforming adtech industry.

With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.

GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.

The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.

Google to auction slots on Android default search ‘choice screen’ in Europe next year, rivals cry ‘pay-to-play’ foul

Starting early next year Google will present Android users in Europe with a search engine choice screen when handsets bundle its own search service by default.

In a blog post announcing the latest change to flow from the European Union’s record-breaking $5 billion antitrust enforcement against Android last year — when the Commission found Google had imposed illegal restrictions on device makers (OEMs) and carriers using its dominant smartphone platform — the company says new Android phones will be shown the choice screen once during set-up (or again after any factory reset).

The screen will display a selection of three rival search engines alongside its own.

OEMs will still be able to offer Android devices in Europe that bundle a non-Google search engine by default (though per Google’s reworked licensing terms they have to pay it to do so). In those instances Google said the choice screen will not be displayed.

Google says rival search engines will be selected for display on the default choice screen, per market, via a first-price, sealed-bid annual auction — with any winners (and/or eligible search providers) being displayed in a random order alongside its own.

Search engines that win the auction will secure one of three open slots on the choice screen, with Google’s own search engine always occupying one of the four total slots.

“In each country auction, search providers will state the price that they are willing to pay each time a user selects them from the choice screen in the given country,” it writes. “Each country will have a minimum bid threshold. The three highest bidders that meet or exceed the bid threshold for a given country will appear in the choice screen for that country.”

If there aren’t enough bids to surface three winners per auction then Google says it will randomly select from a pool of eligible search providers, which it is also inviting to apply to participate in the choice screen. (Eligibility criteria can be found here.)
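The selection rule described above — keep bids at or above the country’s threshold, award slots to the three highest, and fill any empty slots randomly from the eligible pool — can be sketched roughly like this (the function, provider names and bid values are illustrative, not Google’s actual implementation):

```python
import random

def pick_choice_screen_slots(bids, eligible, threshold, rng=random):
    """Return the four search options shown on one country's choice screen.

    bids: dict mapping provider -> sealed bid amount for that country
    eligible: providers that applied and meet the eligibility criteria
    threshold: the country's minimum qualifying bid
    """
    # Keep only bids that meet or exceed the country's minimum.
    qualifying = {p: b for p, b in bids.items() if b >= threshold}
    # The three highest qualifying bidders win slots.
    winners = sorted(qualifying, key=qualifying.get, reverse=True)[:3]
    # Unfilled slots go to randomly drawn eligible (non-winning) providers.
    remaining = [p for p in eligible if p not in winners]
    while len(winners) < 3 and remaining:
        winners.append(remaining.pop(rng.randrange(len(remaining))))
    # Google always occupies the fourth slot; display order is randomized.
    slots = winners + ["Google"]
    rng.shuffle(slots)
    return slots

# Example: with a 0.04 minimum bid, providers A, B and D qualify and win.
print(sorted(pick_choice_screen_slots(
    {"A": 0.05, "B": 0.10, "C": 0.02, "D": 0.20},
    ["A", "B", "C", "D", "E"],
    threshold=0.04)))  # → ['A', 'B', 'D', 'Google']
```

Note that because bidders pay per user selection rather than a flat fee, a provider’s per-click economics — not its popularity with users — determine whether it appears at all, which is the crux of rivals’ “pay-to-play” objection below.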

“Next year, we’ll introduce a new way for Android users to select a search provider to power a search box on their home screen and as the default in Chrome (if installed),” it writes. “Search providers can apply to be part of the new choice screen, which will appear when someone is setting up a new Android smartphone or tablet in Europe.”

“As always, people can continue to customize and personalize their devices at any time after set up. This includes selecting which apps to download, changing how apps are arranged on the screen, and switching the default search provider in apps like Google Chrome,” it adds.

Google’s blog post makes no mention of whether the choice screen will be pushed to the installed base of Android devices. But a spokeswoman told us the implementation requires technical changes that means it can only be supported on new devices.

Default selections on a dominant platform are of course hugely important for gaining or sustaining market share. And it’s only since competition authorities dialed up their scrutiny that the company has started to make some shifts in how it bundles its own services in dominant products such as Android and Chrome.

Earlier this year Google quietly added rival pro-privacy search engine DuckDuckGo as one of the default choices offered by its Chrome browser, for example.

In April it also began rolling out choice screens to both new and existing Android users in Europe — offering a prompt to download additional search apps and browsers.

In the latter case, each screen shows five apps in total, including whatever search and browser is already installed. Apps not already installed are included based on their market popularity and shown in a random order.

French pro-privacy search engine Qwant told us that since the rollout of the app service choice screen to Android devices the share of Qwant users using its search engine on mobile has leapt up from around 2% to more than a quarter (26%) of its total user base.

Qwant co-founder and CEO Eric Léandri said the app choice screen shows that competing against Google on search is possible — but only “thanks to the European Commission” stepping in and forcing the unbundling.

However, he raised serious concerns about the sealed-bid auction structure Google has announced for the default search choice. Many of the bidders for the slots will also be using Google advertising and technology, he pointed out, while the sealed structure of the auction means no one outside Google will know what prices are being submitted as bids — making it impossible for rivals to know whether the selections Google makes are fair.

Even Google’s own FAQ swings abruptly from claims of the auction it has devised being “a fair and objective method” for determining which search providers get slots, to a flat “no” and “no” on any transparency on bid amounts or the number of providers it deems eligible per market…

“Even if Google is Google some people can choose something else if they have the choice. But now that Google knows it, it wants to stop the process,” Léandri told TechCrunch.

“It is not up to Google to now charge its competitors for its faulty behavior and the amount of the fine, through an auction system that will benefit neither European consumers nor free competition, which should not be distorted by such process,” Qwant added in an emailed press statement. “The proposed bidding process would be open to so-called search engines that derive their results and revenues from Google, thereby creating an unacceptable distortion and a high risk of manipulation, inequity or disloyalty of the auction.”

“The decision of the European Commission must benefit European consumers by ensuring the conditions of a freedom of choice based on the intrinsic merits of each engine and the expectations of citizens, especially regarding the protection of their personal data, and not on their ability to fund Google or to be financed by it,” it also said.

In a further complaint, Léandri said Google is requiring bidders in the choice screen auction to sign an NDA in order to participate — which Qwant argues would throw a legal obstacle in the way of it being able to participate, considering it is a complainant in the EU’s antitrust case (ongoing because Google is appealing).

“Qwant cannot accept that the auction process is subject to a non-disclosure agreement as imposed by Google while its complaint is still pending,” it writes. “Such a confidentiality agreement has no other possible justification than the desire to silence its competitors on the anomalies they would see. This, again, is an unacceptable abuse of its dominant position.”

We’ve reached out to the Commission with questions about Google’s choice screen auction. Update: A Commission spokesperson told us:

The decision provides rival search providers the possibility to strike exclusive pre-installation deals with smartphone and tablet manufacturers. This was not possible before.

In order to ensure the effective implementation of the decision, Google also agreed to implement a choice screen. We have seen in the past that a choice screen can be an effective way to promote user choice.

We will be closely monitoring the implementation of the choice screen mechanism, including listening to relevant feedback from the market, in particular in relation to the presentation and mechanics of the choice screen and to the selection mechanism of rival search providers. The Commission is committed to a full and effective implementation of the decision.

DuckDuckGo founder Gabriel Weinberg has also been quick to point to flaws in the auction structure — writing on Twitter: “A ‘ballot box’ screen could be an excellent way to increase meaningful consumer choice if designed properly. Unfortunately, Google’s announcement today will not meaningfully deliver consumer choice.

“A pay-to-play auction with only 4 slots means consumers won’t get all the choices they deserve, and Google will profit at the expense of the competition. We encourage regulators to work directly with Google, us, and others to ensure the best system for consumers.”

Powered by WPeMatico

Siri recordings ‘regularly’ sent to Apple contractors for analysis, claims whistleblower

Posted by | Apple, artificial intelligence, Gadgets, Mobile, privacy, siri | No Comments

Apple has joined the dubious company of Google and Amazon in secretly sharing with contractors audio recordings of its users, confirming the practice to The Guardian after a whistleblower brought it to the outlet. The person said that Siri queries are routinely sent to human listeners for closer analysis, something not disclosed in Apple’s privacy policy.

The recordings are reportedly not associated with an Apple ID, but can be several seconds long, include content of a personal nature and are paired with other revealing data, like location, app data and contact details.

Like the other companies, Apple says this data is collected and analyzed by humans to improve its services, and that all analysis is done in a secure facility by workers bound by confidentiality agreements. And like the other companies, Apple failed to say that it does this until forced to.

Apple told The Guardian that less than 1% of daily queries are sent, which is cold comfort when the company also constantly talks up the volume of Siri queries. Hundreds of millions of devices use the feature regularly, so even a conservative fraction of 1% quickly adds up to hundreds of thousands of recordings.

This “small portion” of Siri requests is apparently randomly chosen, and as the whistleblower notes, it includes “countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on.”

Some of these activations of Siri will have been accidental, which is one of the things listeners are trained to listen for and identify. Accidentally recorded queries can be many seconds long and contain a great deal of personal information, even if it is not directly tied to a digital identity.

Only in the last month has it come out that Google likewise sends clips to be analyzed, and that Amazon, which we knew recorded Alexa queries, retains that audio indefinitely.

Apple’s privacy policy states regarding non-personal information (under which Siri queries would fall):

We may collect and store details of how you use our services, including search queries. This information may be used to improve the relevancy of results provided by our services. Except in limited instances to ensure quality of our services over the Internet, such information will not be associated with your IP address.

It’s conceivable that the phrase “search queries” is inclusive of recordings of search queries. And it does say that it shares some data with third parties. But nowhere is it stated simply that questions you ask your phone may be recorded and shared with a stranger. Nor is there any way for users to opt out of this practice.

Given Apple’s focus on privacy and transparency, this seems like a major, and obviously a deliberate, oversight. I’ve contacted Apple for more details and will update this post when I hear back.

iOS 13: Here are the new security and privacy features you might’ve missed

Posted by | Android, Apple, Apps, Bluetooth, cloud applications, computing, hardware, iOS, iPad, iPhone, privacy, safari, Security, smartphones, social media, tablet computers, technology, webmail, wi-fi | No Comments

In just a few weeks Apple’s new iOS 13, the thirteenth major iteration of its popular iPhone software, will be out — along with new iPhones and a new iPad version, the aptly named iPadOS. We’ve taken iOS 13 for a spin over the past few weeks — with a focus on the new security and privacy features — to see what’s new and how it all works.

Here’s what you need to know.

You’ll start to see reminders about apps that track your location

Ever wonder which apps track your location? Wonder no more. iOS 13 will periodically remind you about apps that are tracking your location in the background. Every so often it will tell you how many times an app has tracked where you’ve been in a recent period of time, along with a small map of the location points. From this screen you can “always allow” the app to track your location or have the option to limit the tracking.

You can grant an app your location just once

To give you more control over what data apps have access to, iOS 13 now lets you give apps access to your location just once. Previously your only options were “always,” “never” or “while using,” meaning an app could collect your real-time location the entire time you were using it. Now you can grant an app access on a per-use basis, which is particularly helpful for the privacy-minded folks.

And apps wanting access to Bluetooth can be declined access

Apps wanting to access Bluetooth will also ask for your consent. Although apps can use Bluetooth to connect to gadgets, like fitness bands and watches, Bluetooth-enabled tracking devices known as beacons can be used to monitor your whereabouts. These beacons are found everywhere — from stores to shopping malls. They can grab your device’s unique Bluetooth identifier and track your physical location between places, building up a picture of where you go and what you do — often for targeting you with ads. Blocking Bluetooth connections from apps that clearly don’t need it will help protect your privacy.

Find My gets a new name — and offline tracking

Find My, the new app name for locating your friends and lost devices, now comes with offline tracking. Previously, if you lost your laptop, you had to rely on its last Wi-Fi-connected location. Now it broadcasts its location over Bluetooth, which is securely uploaded to Apple’s servers via nearby cellular-connected iPhones and other Apple devices. The location data is cryptographically scrambled and anonymized to prevent anyone other than the device owner, including Apple, from tracking your lost devices.

Your apps will no longer be able to snoop on your contacts’ notes

Another area that Apple is trying to button down is your contacts. Apps have to ask for your permission before they can access your contacts. But in doing so they were also able to read the personal notes you wrote on each contact, like a home alarm code or a PIN for phone banking, for example. Now, apps will no longer be able to see what’s in each “notes” field in a user’s contacts.

Sign In With Apple lets you use a fake relay email address

This is one of the cooler features coming soon: Apple’s new sign-in option lets users sign in to apps and services with one tap, without having to turn over any sensitive or private information. Any app that supports third-party sign-in must offer Sign In With Apple as an option. Users can choose to share their real email with the app maker, or use a private “relay” email, which hides a user’s real address so the app only sees a unique Apple-generated email instead. Apple says it doesn’t collect users’ data, making it a more privacy-minded solution. It works across all devices, including Android devices and websites.

You can silence unknown callers

Here’s one way you can cut down on disruptive spam calls: iOS 13 will let you send unknown callers straight to voicemail. Anyone who’s not in your contacts list will be treated as an unknown caller.

You can strip location metadata from your photos

Every time you take a photo, your iPhone stores the precise location where it was taken as metadata in the photo file. That can reveal sensitive or private locations, such as your home or office, if you share those photos on social media or other platforms, many of which don’t strip the data on upload. Now you can do it yourself: with a few taps, you can remove the location data from a photo before sharing it.
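The metadata in question is EXIF data embedded in the JPEG file itself, with GPS coordinates stored in APP1 marker segments near the start of the file. As a rough illustration of what stripping that involves at the file-format level (this is an assumption-laden sketch, not a description of how iOS implements the feature), here is a Python function that drops the APP1 segments from a JPEG byte string:

```python
import io
import struct

def strip_exif(jpeg_bytes):
    """Return a copy of a JPEG with its APP1 (EXIF/XMP) segments removed.

    EXIF metadata, including GPS coordinates, lives in APP1 (0xFFE1)
    marker segments near the start of the file.
    """
    src = io.BytesIO(jpeg_bytes)
    out = io.BytesIO()
    soi = src.read(2)
    if soi != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out.write(soi)
    while True:
        marker = src.read(2)
        if len(marker) < 2 or marker[0] != 0xFF:
            # Malformed or truncated stream: copy the remainder verbatim.
            out.write(marker + src.read())
            break
        if marker[1] == 0xDA:
            # SOS marker: entropy-coded image data follows; copy it all.
            out.write(marker + src.read())
            break
        # Every other pre-SOS segment carries a 2-byte big-endian length
        # that includes the length field itself.
        (length,) = struct.unpack(">H", src.read(2))
        payload = src.read(length - 2)
        if marker[1] != 0xE1:  # keep everything except APP1 segments
            out.write(marker + struct.pack(">H", length) + payload)
    return out.getvalue()
```

Real tools are usually more surgical, rewriting the EXIF block to keep useful tags like orientation while deleting only the GPS entries; the hypothetical `strip_exif` here drops the whole APP1 segment for simplicity.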

And Safari gets better anti-tracking features

Apple continues to advance its new anti-tracking technologies in its native Safari browser, like preventing cross-site tracking and browser fingerprinting. These features make it far more difficult for ads to track users across the web. iOS 13 has its cross-site tracking technology enabled by default so users are protected from the very beginning.

FaceApp gets federal attention as Sen. Schumer raises alarm on data use

Posted by | Apps, artificial intelligence, charles schumer, FaceApp, Government, Mobile, privacy, TC | No Comments

It’s been hard to get away from FaceApp over the last few days, whether it’s your friends posting weird selfies using the app’s aging and other filters, or the brief furore over its apparent (but not actual) circumvention of permissions on iPhones. Now even the Senate is getting in on the fun: Sen. Chuck Schumer (D-NY) has asked the FBI and the FTC to look into the app’s data handling practices.

“I write today to express my concerns regarding FaceApp,” he writes in a letter sent to FBI Director Christopher Wray and FTC Chairman Joseph Simons. I’ve excerpted his main concerns below:

In order to operate the application, users must provide the company full and irrevocable access to their personal photos and data. According to its privacy policy, users grant FaceApp license to use or publish content shared with the application, including their username or even their real name, without notifying them or providing compensation.

Furthermore, it is unclear how long FaceApp retains a user’s data or how a user may ensure their data is deleted after usage. These forms of “dark patterns,” which manifest in opaque disclosures and broader user authorizations, can be misleading to consumers and may even constitute a deceptive trade practices. Thus, I have serious concerns regarding both the protection of the data that is being aggregated as well as whether users are aware of who may have access to it.

In particular, FaceApp’s location in Russia raises questions regarding how and when the company provides access to the data of U.S. citizens to third parties, including potentially foreign governments.

For the cave-dwellers among you (and among whom I normally would proudly count myself) FaceApp is a selfie app that uses AI-esque techniques to apply various changes to faces, making them look older or younger, adding accessories, and, infamously, changing their race. That didn’t go over so well.

There’s been a surge in popularity over the last week, but it was also noticed that the app seemed to be able to access your photos whether you said it could or not. It turns out that this is actually a normal capability of iOS, but it was being deployed here in somewhat of a sneaky manner and not as intended. And arguably it was a mistake on Apple’s part to let this method of selecting a single photo go against the “never” preference for photo access that a user had set.

Fortunately the Senator’s team is not worried about this or even the unfounded (we checked) concerns that FaceApp was secretly sending your data off in the background. It isn’t. But it very much does send your data to Russia when you tell it to give you an old face, or a hipster face, or whatever. Because the computers that do the actual photo manipulation are located there — these filters are being applied in the cloud, not directly on your phone.

His concerns are over the lack of transparency that user data is being sent out to servers who knows where, to be kept for who knows how long, and sold to who knows whom. Fortunately the obliging FaceApp managed to answer most of these questions before the Senator’s letter was ever posted.

The answers to his questions, should we choose to believe them, are that user data is not in fact sent to Russia, the company doesn’t track users and usually can’t, doesn’t sell data to third parties, and deletes “most” photos within 48 hours.

Although the “dark patterns” of which the Senator speaks are indeed an issue, and although it would have been much better if FaceApp had said up front what it does with your data, this is hardly an attempt by a Russian adversary to build up a database of U.S. citizens.

While it is good to see Congress engaging with digital privacy, asking the FBI and FTC to look into a single app seems unproductive when that app is not doing much that a hundred others, American and otherwise, have been doing for years. Cloud-based processing and storage of user data is commonplace — though usually disclosed a little better.

Certainly as Sen. Schumer suggests, the FTC should make sure that “there are adequate safeguards in place to protect the privacy of Americans…and if not, that the public be made aware of the risks associated with the use of this application or others similar to it.” But this seems the wrong nail to hang that on. We see surreptitious slurping of contact lists, deceptive deletion promises, third-party sharing of poorly anonymized data, and other bad practices in apps and services all the time — if the federal government wants to intervene, let’s have it. But let’s have a law or a regulation, not a strongly worded letter written after the fact.

Schumer Faceapp Letter by TechCrunch on Scribd

Facebook’s regulation dodge: Let us, or China will

Posted by | Apps, blockchain, China, cryptocurrency, David Marcus, eCommerce, Facebook, Facebook Regulation, Finance, Government, Libra, Mark Zuckerberg, Mobile, Nick Clegg, payments, Policy, privacy, Sheryl Sandberg, TC | No Comments

Facebook is leaning on fears of China exporting its authoritarian social values to counter arguments that it should be broken up or slowed down. Its top executives have each claimed that if the U.S. limits its size, blocks its acquisitions or bans its cryptocurrency, Chinese companies absent these restrictions will win abroad, bringing more power and data to their government. CEO Mark Zuckerberg, COO Sheryl Sandberg and VP of communications Nick Clegg have all expressed this position.

The latest incarnation of this talking point came in today’s and yesterday’s congressional hearings over Libra, the Facebook-spearheaded digital currency it hopes to launch in the first half of 2020. David Marcus, who heads Facebook’s blockchain subsidiary Calibra, wrote in his prepared remarks to the House Financial Services Committee today that (emphasis added):

I believe that if America does not lead innovation in the digital currency and payments area, others will. If we fail to act, we could soon see a digital currency controlled by others whose values are dramatically different.

WASHINGTON, DC – JULY 16: Head of Facebook’s Calibra David Marcus testifies during a hearing before Senate Banking, Housing and Urban Affairs Committee July 16, 2019 on Capitol Hill in Washington, DC. The committee held the hearing on “Examining Facebook’s Proposed Digital Currency and Data Privacy Considerations.” (Photo by Alex Wong/Getty Images)

Marcus also told the Senate Banking Subcommittee yesterday that “I believe if we stay put we’re going to be in a situation in 10, 15 years where half the world is on a blockchain technology that is out of reach of our national-security apparatus.”

This argument is designed to counter House-drafted “Keep Big Tech Out of Finance” legislation that Reuters reports would declare that companies like Facebook that earn over $25 billion in annual revenue “may not establish, maintain, or operate a digital asset . . .  that is intended to be widely used as medium of exchange, unit of account, store of value, or any other similar function.”

The message Facebook is trying to deliver is that cryptocurrencies are inevitable. Blocking Libra would just open the door to even less scrupulous actors controlling the technology. Facebook’s position here isn’t limited to cryptocurrencies, though.

The concept crystallized exactly a year ago when Zuckerberg said in an interview with Recode’s Kara Swisher, “I think you have this question from a policy perspective, which is, do we want American companies to be exporting across the world?” (emphasis added):

We grew up here, I think we share a lot of values that I think people hold very dear here, and I think it’s generally very good that we’re doing this, both for security reasons and from a values perspective. Because I think that the alternative, frankly, is going to be the Chinese companies. If we adopt a stance which is that, ‘Okay, we’re gonna, as a country, decide that we wanna clip the wings of these companies and make it so that it’s harder for them to operate in different places, where they have to be smaller,’ then there are plenty of other companies out that are willing and able to take the place of the work that we’re doing.

When asked if he specifically meant Chinese companies, Zuckerberg doubled down, saying (emphasis added):

Yeah. And they do not share the values that we have. I think you can bet that if the government hears word that it’s election interference or terrorism, I don’t think Chinese companies are going to wanna cooperate as much and try to aid the national interest there.

WASHINGTON, DC – APRIL 10: Facebook co-founder, Chairman and CEO Mark Zuckerberg testifies before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC. Zuckerberg, 33, was called to testify after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

This April, Zuckerberg went deeper when he described how Facebook would refuse to comply with data localization laws in countries with poor track records on human rights. The CEO explained the risk of data being stored in other countries, which is precisely what might happen if regulators hamper Facebook and innovation happens elsewhere. Zuckerberg told philosopher Yuval Harari that (emphasis added):

When I look towards the future, one of the things that I just get very worried about is the values that I just laid out [for the internet and data] are not values that all countries share. And when you get into some of the more authoritarian countries and their data policies, they’re very different from the kind of regulatory frameworks that across Europe and across a lot of other places, people are talking about or put into place . . . And the most likely alternative to each country adopting something that encodes the freedoms and rights of something like GDPR, in my mind, is the authoritarian model, which is currently being spread, which says every company needs to store everyone’s data locally in data centers and then, if I’m a government, I can send my military there and get access to whatever data I want and take that for surveillance or military.

I just think that that’s a really bad future. And that’s not the direction, as someone who’s building one of these internet services, or just as a citizen of the world, I want to see the world going. If a government can get access to your data, then it can identify who you are and go lock you up and hurt you and your family and cause real physical harm in ways that are just really deep.

Facebook’s newly hired head of communications, Nick Clegg, told reporters back in January that (emphasis added):

These are of course legitimate questions, but we don’t hear so much about China, which combines astonishing ingenuity with the ability to process data on a vast scale without the legal and regulatory constraints on privacy and data protection that we require on both sides of the Atlantic . . .  [and this data could be] put to more sinister surveillance ends, as we’ve seen with the Chinese government’s controversial social credit system.

In response to Facebook co-founder Chris Hughes’ call that Facebook should be broken up, Clegg wrote in May that “Facebook shouldn’t be broken up — but it does need to be held to account. Anyone worried about the challenges we face in an online world should look at getting the rules of the internet right, not dismantling successful American companies.”

He hammered home the alternative the next month during a speech in Berlin (emphasis added):

If we in Europe and America don’t turn off the white noise and begin to work together, we will sleepwalk into a new era where the internet is no longer a universal space but a series of silos where different countries set their own rules and authoritarian regimes soak up their citizens’ data while restricting their freedom . . . If the West doesn’t engage with this question quickly and emphatically, it may be that it isn’t ours to answer. The common rules created in our hemisphere can become the example the rest of the world follows.

COO Sheryl Sandberg made the point most directly in an interview with CNBC in May (emphasis added):

You could break us up, you could break other tech companies up, but you actually don’t address the underlying issues people are concerned about . . . While people are concerned with the size and power of tech companies, there’s also a concern in the United States about the size and power of Chinese tech companies and the … realization that those companies are not going to be broken up.

WASHINGTON, DC – SEPTEMBER 5: Facebook chief operating officer Sheryl Sandberg testifies during a Senate Intelligence Committee hearing concerning foreign influence operations’ use of social media platforms, on Capitol Hill, September 5, 2018 in Washington, DC. Twitter CEO Jack Dorsey and Facebook chief operating officer Sheryl Sandberg faced questions about how foreign operatives use their platforms in attempts to influence and manipulate public opinion. (Photo by Drew Angerer/Getty Images)

Scared tactics

Indeed, China does not share the United States’ values on individual freedoms and privacy. And yes, breaking up Facebook could weaken its products like WhatsApp, providing more opportunities for apps like Chinese tech giant Tencent’s WeChat to proliferate.

But letting Facebook off the hook won’t solve the problems China’s influence poses to an open and just internet. Framing the issue as “strong regulation lets China win” creates a false dichotomy. There are more constructive approaches if Zuckerberg seriously wants to work with the government on exporting freedom via the web. And the distrust Facebook has accrued through the mistakes it’s made in the absence of proper regulation arguably does plenty to hurt the perception of how American ideals are spread through its tech companies.

Breaking up Facebook may not be the answer, especially if it’s done in retaliation for its wrong-doings instead of as a coherent way to prevent more in the future. To that end, a better approach might be stopping future acquisitions of large or rapidly growing social networks, forcing it to offer true data portability so existing users have the freedom to switch to competitors, applying proper oversight of its privacy policies and requiring a slow rollout of Libra with testing in each phase to ensure it doesn’t screw consumers, enable terrorists or jeopardize the world economy.

Resorting to scare tactics shows that it’s Facebook that’s scared. Years of a growth-over-safety strategy might finally be catching up with it. The $5 billion FTC fine is a slap on the wrist for a company that profits more than that per quarter, but a break-up would do real damage. Instead of fear-mongering, Facebook would be better served by working with regulators in good faith while focusing more on preempting abuse. Perhaps it’s politically savvy to invoke the threat of China to stoke the worries of government officials, and it might even be effective. That doesn’t make it right.

Highlights from Facebook’s Libra Senate hearing

Posted by | Apps, blockchain, cryptocurrency, David Marcus, eCommerce, Education, Facebook, Finance, Government, Libra, Libra Association, Mobile, payments, Policy, privacy, Social, TC, U.S. Senate | No Comments

Facebook will only build its own Calibra cryptocurrency wallet into Messenger and WhatsApp, and will refuse to embed competing wallets, the head of Calibra, David Marcus, told the Senate Banking Committee today. While some, like Senator Brown, blustered that “Facebook is dangerous!,” others surfaced pointed questions about Libra’s risks.

Calibra will be interoperable, so users can send money back and forth with other wallets, and Marcus committed to data portability so users can switch entirely to a competitor. But solely embedding Facebook’s own wallet into its leading messaging apps could give the company a sizable advantage over banks, PayPal, Coinbase or any other potential wallet developer.

Other highlights from the “Examining Facebook’s Proposed Digital Currency and Data Privacy Considerations” hearing included Marcus saying:

  • The U.S. should “absolutely” lead the world in rule-making for cryptocurrencies
  • The Libra Association chose to be headquartered in Switzerland “not to evade any responsibilities of oversight” but because it’s where international financial groups like the Bank for International Settlements are based; Calibra will still be regulated by the U.S. Department of the Treasury’s Financial Crimes Enforcement Network
  • “Yes,” Libra will comply with all U.S. regulations and not launch until the U.S. lawmakers’ concerns have been answered
  • “You will not have to trust Facebook” because it’s only one of 28 current and potentially 100 or more Libra Association members and it won’t have special privileges
  • “Yes I would” accept compensation from Facebook in the form of Libra as a show of trust in the currency
  • It is “not the intention at all” for Calibra to sell or directly monetize user data, though if it offered additional financial services in partnership with other financial organizations it would ask consent to use their data specifically for those purposes
  • Facebook’s core revenue model around Libra is that more online commerce will lead businesses to spend more on Facebook ads
  • When repeatedly asked why Facebook is pushing Libra to happen, Marcus noted that blockchain technology is inevitable and if the U.S. doesn’t lead in building and regulating it, the tech will come from places “out of reach of our national security apparatus,” raising the spectre of China

But Marcus also didn’t clearly answer some critical questions about Libra and Calibra, and may be asked again when he testifies before the House Financial Services Committee tomorrow.

Unanswered Questions

Chairman Crapo asked if Facebook would collect data about Calibra transactions that happen on Facebook, such as when users buy products from businesses they discover through the social network. Marcus merely noted that Facebook would still let users pay with credit cards and other methods as well as Calibra. That means that even though Facebook might not know how much money is in someone’s Calibra wallet or see their outside transactions, it might know how much they paid, and for what, when a transaction happens over its social networks.

Senator Tillis asked how much Facebook has invested in the formation of Libra. TechCrunch has also asked specifically how much Facebook has invested in the Libra Investment Token that will earn it a share of interest earned from the fiat currencies in the Libra Reserve. Marcus said Facebook and Calibra hadn’t determined exactly how much they would invest in the project. Marcus also didn’t clearly answer Senator Toomey’s question of why the Libra Association is considered a not-for-profit organization if it will pay out interest to members.

Senator Menendez asked if the Libra Association would freeze the assets if terrorist organizations were identified. Marcus said that Calibra and other custodial wallets that actually hold users’ Libra could do that, and that regulated off-ramps could block them from converting Libra into fiat. But this answer underscores that there may be no way for the Libra Association to stop transfers between terrorists’ non-custodial wallets, especially if local governments where those terrorists operate don’t step in.

Perhaps the most worrying moment of the hearing was when Senator Sinema brought up TechCrunch’s article citing that “The real risk of Libra is crooked developers.” There I wrote that Facebook’s VP of product Kevin Weil told me that “There are no plans for the Libra Association to take a role in actively vetting [developers],” which I believe leaves the door open to a crypto Cambridge Analytica situation where shady developers steal users’ money, not just their data.

Senator Sinema asked if an Arizonan was scammed out of their Libra by a Pakistani developer via a Thai exchange and a Spanish wallet, would that U.S. citizen be entitled to protection to recuperate their lost funds. Marcus responded that U.S. citizens would likely use American Libra wallets that are subject to protections and that the Libra Association will work to educate users on how to avoid scams. But Sinema stressed that if Libra is designed to assist the poor who are often less educated, they could be especially vulnerable to scammers.

Here @SenatorSinema cites my article warning that we need protection from Facebook Libra’s unvetted developers https://t.co/gYeYIQVFLj pic.twitter.com/QSqVDztpCU

— Josh Constine (@JoshConstine) July 16, 2019

Crypto openness versus a dangerous Wild West

Overall, the hearing was surprisingly coherent. Many Senators showed strong base knowledge of how Libra worked and asked the right questions. Marcus was generally forthcoming, beyond the topics of how much Facebook has invested in the Libra project and what data it will glean from transactions atop its social network.

Some of the top concerns, such as terrorist money laundering, encompass the entire cryptocurrency ecosystem and can’t be solved even by strong rules around Libra. Little regard was given to how Libra could improve remittance or cut transaction fees that see corporations profit off families and small businesses.

Still, if Libra actually becomes popular and evolves as an open ecosystem full of unvetted developers, the currency could be used to facilitate scams. Precisely because of the lack of trust in Facebook that many Senators harped on, consumers could go seeking Libra wallet alternatives to the company that might push them into the hands of evildoers. The Libra Association may need to shift the balance further toward safety and away from cryptocurrency’s prevailing philosophy of openness. Otherwise, the frontiers of this Wild West could prove dangerous, even if its civilized regions are well-regulated.
No technical reason to exclude Huawei as 5G supplier, says UK committee

Posted by | 5g, Asia, Australia, China, cyber security, Ericsson, Europe, huawei, human rights, Ian Levy, Internet of Things, jeremy wright, Mobile, National Cyber Security Centre, national security, Nokia, privacy, Security, TC, telecommunications, United Kingdom, United States, zte | No Comments

A UK parliamentary committee has concluded there are no technical grounds for excluding Chinese network kit vendor Huawei from the country’s 5G networks.

In a letter from the chair of the Science & Technology Committee to the UK’s digital minister Jeremy Wright, the committee says: “We have found no evidence from our work to suggest that the complete exclusion of Huawei from the UK’s telecommunications networks would, from a technical point of view, constitute a proportionate response to the potential security threat posed by foreign suppliers.”

The committee does, however, go on to recommend the government mandate the exclusion of Huawei from the core of 5G networks, noting that UK mobile network operators have "mostly" done so already, albeit on a voluntary basis.

If the government places a formal requirement on operators not to use Huawei for core supply, the committee urges it to provide "clear criteria" for the exclusion so that it could be applied to other suppliers in future.

Reached for a response to the recommendations, a government spokesperson told us: “The security and resilience of the UK’s telecoms networks is of paramount importance. We have robust procedures in place to manage risks to national security and are committed to the highest possible security standards.”

The spokesperson for the Department for Digital, Culture, Media and Sport added: "The Telecoms Supply Chain Review will be announced in due course. We have been clear throughout the process that all network operators will need to comply with the Government's decision."

In recent years the US administration has been putting pressure on allies around the world to entirely exclude Huawei from 5G networks — claiming the Chinese company poses a national security risk.

Australia announced last year that it was banning Huawei and another Chinese vendor, ZTE, from providing kit for its 5G networks. In Europe, though, there has not been a rush to follow the US lead and slam the door on Chinese tech giants.

In April leaked information from a UK Cabinet meeting suggested the government had settled on a policy of granting Huawei access as a supplier for some non-core parts of domestic 5G networks, while requiring they be excluded from supplying components for use in network cores.

On this somewhat fuzzy issue of delineating core vs non-core elements of 5G networks, the committee writes that it “heard unanimously and clearly” from witnesses that there will still be a distinction between the two in the next-gen networks.

It also cites testimony by the technical director of the UK’s National Cyber Security Centre (NCSC), Dr Ian Levy, who told it “geography matters in 5G”, and pointed out Australia and the UK have very different “laydowns” — meaning “we may have exactly the same technical understanding, but come to very different conclusions”.

In a response statement to the committee's letter, Huawei SVP Victor Zhang welcomed the committee's "key conclusion" before going on to take a thinly veiled swipe at the US — writing: "We are reassured that the UK, unlike others, is taking an evidence based approach to network security. Huawei complies with the laws and regulations in all the markets where we operate."

The committee’s assessment is not all comfortable reading for Huawei, though, with the letter also flagging the damning conclusions of the most recent Huawei Oversight Board report which found “serious and systematic defects” in its software engineering and cyber security competence — and urging the government to monitor Huawei’s response to the raised security concerns, and to “be prepared to act to restrict the use of Huawei equipment if progress is unsatisfactory”.

Huawei has previously pledged to spend $2BN addressing security shortcomings related to its UK business — a figure it was forced to qualify as an “initial budget” after that same Oversight Board report.

“It is clear that Huawei must improve the standard of its cybersecurity,” the committee warns.

It also suggests the government consults on whether telecoms regulator Ofcom needs stronger powers to be able to force network suppliers to clean up their security act, writing that: “While it is reassuring to hear that network operators share this point of view and are ready to use commercial pressure to encourage this, there is currently limited regulatory power to enforce this.”

Another committee recommendation is for the NCSC to be consulted on whether similar security evaluation mechanisms should be established for other 5G vendors, such as Ericsson and Nokia: two Europe-based kit vendors which, unlike Huawei, are expected to be supplying core 5G.

“It is worth noting that an assurance system comparable to the Huawei Cyber Security Evaluation Centre does not exist for other vendors. The shortcomings in Huawei’s cyber security reported by the Centre cannot therefore be directly compared to the cyber security of other vendors,” it notes.

On the issue of 5G security generally, the committee dubs this "critical", adding that "all steps must be taken to ensure that the risks are as low as reasonably possible".

Where "essential services" that make use of 5G networks are concerned, the committee says witnesses were clear that such services must be able to continue to operate safely even if the network connection is disrupted. The government must ensure measures are put in place to safeguard operation in the event of cyber attacks, floods, power cuts and other comparable events, it adds.

While the committee concludes there is no technical reason to limit Huawei's access to UK 5G, the letter does make a point of highlighting other considerations, most notably human rights abuses, emphasizing that its conclusion does not factor them in at all — and pointing out: "There may well be geopolitical or ethical grounds… to enact a ban on Huawei's equipment".

It adds that Huawei’s global cyber security and privacy officer, John Suffolk, confirmed that a third party had supplied Huawei services to Xinjiang’s Public Security Bureau, despite Huawei forbidding its own employees from misusing IT and comms tech to carry out surveillance of users.

The committee suggests Huawei technology may therefore be being used to "permit the appalling treatment of Muslims in Western China".