General Data Protection Regulation

Most EU cookie ‘consent’ notices are meaningless or manipulative, study finds

New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.

As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.

But many don’t — even now, more than a year later.

The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked-in tricks, who can blame them?)

The researchers conclude that if consent to drop cookies were being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.

The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.

The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites to compile a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play in order to paint a picture of current implementations.

They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs which the researchers tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.

Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.

A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.

And while they found that nearly all cookie notices (92%) contained a link to the site’s privacy policy, only 39% mention the specific purpose of the data collection, and just 21% say who can access the data.

The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.

Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.

“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.

“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”

The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.

This is where consent gets manipulated — to flip visitors’ preference for privacy.

They found that the more choices offered in a cookie notice, the more likely visitors were to decline the use of cookies. (Which is an interesting finding in light of the vendor laundry lists frequently baked into the so-called “transparency and consent framework” which the industry association, the Internet Advertising Bureau (IAB), has pushed as the standard for its members to use to gather GDPR consents.)

“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”

Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:

Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category- and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.
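The “CV” the researchers report is Cramér’s V, a standard chi-square-based measure of association between two categorical variables (here, nudge condition vs. user choice), running from 0 (no association) to 1 (perfect association). As a rough illustration with invented counts (not the study’s data), it can be computed in pure Python:

```python
import math

def cramers_v(table):
    """Cramér's V effect size for a contingency table (list of rows)."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # Pearson chi-square statistic against the independence model.
    chi2 = sum(
        (obs - row_totals[i] * col_totals[j] / n) ** 2
        / (row_totals[i] * col_totals[j] / n)
        for i, row in enumerate(table)
        for j, obs in enumerate(row)
    )
    k = min(len(table), len(table[0]))  # smaller table dimension
    return math.sqrt(chi2 / (n * (k - 1)))

# Invented example: accept vs. decline clicks under nudge / no-nudge.
nudge_table = [[508, 492],   # nudge condition: accept, decline
               [392, 608]]   # non-nudge condition: accept, decline
print(round(cramers_v(nudge_table), 2))  # → 0.12
```

On this scale the study’s overall value of 0.50 for nudging is a strong association.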

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).

That figure rises only fractionally, to between 1 and 4%, for those who would enable one or more cookie categories in the same privacy-by-default scenario.

“Our results… indicate that the privacy-by-default and purposed-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1 % of active consent for the use of third parties,” they write in conclusion.

They do flag some limitations with the study, pointing out that the dataset behind the 0.1% figure is biased: the nationality of visitors is not generally representative of public Internet users, and the data comes from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS — saying its data indicated a higher ‘accept all’ click rate, but still only marginally higher: just 5.6%.

Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.

This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.
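That ordering requirement — no tracking before an explicit opt-in — is simple to express in code. Here is a minimal, hypothetical sketch (the cookie names and purposes are invented, not from any real implementation) of a server-side filter that sets non-essential cookies only for purposes the visitor has consented to, with no record treated as no consent (privacy by default):

```python
# Hypothetical cookie names a site considers strictly necessary.
ESSENTIAL = {"session_id"}

def cookies_to_set(requested, consent):
    """Filter cookies down to those that may legally be set.

    `requested` maps cookie name -> (value, purpose).
    `consent` maps purpose -> bool, recorded only from an explicit,
    per-purpose opt-in; absence of a record means no consent.
    """
    return {
        name: value
        for name, (value, purpose) in requested.items()
        if name in ESSENTIAL or consent.get(purpose, False)
    }

requested = {
    "session_id": ("abc123", "essential"),
    "_ad_tracker": ("uid-42", "advertising"),
}

# Before any opt-in, only the essential cookie survives.
print(cookies_to_set(requested, {}))  # → {'session_id': 'abc123'}
# After an explicit opt-in to advertising, the tracker may be set too.
print(cookies_to_set(requested, {"advertising": True}))
```

A real consent-management implementation would also need to record when and how consent was given, and make withdrawing it as easy as granting it.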

Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.

It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.

You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)

Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.

For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)

While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.

(Those of you reading TechCrunch back in January 2018 may also remember this sage plain English advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)

Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.

Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android immediately after GDPR started being applied in May last year.

While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.

So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.

But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)

The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.

However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.

Here’s what the ICO says on Recital 25 of the ePrivacy Directive:

  • ‘specific website content’ means that you should not make ‘general access’ subject to conditions requiring users to accept non-essential cookies – you can only limit certain content if the user does not consent;
  • the term ‘legitimate purpose’ refers to facilitating the provision of an information society service – ie, a service the user explicitly requests. This does not include third parties such as analytics services or online advertising;

So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.

It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor the costs of data protection compliance into their calculations — as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)

Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.

“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame; it was not intended to fix this specific problem. The uncertainty of the current situation is caused by the delay of the ePrivacy Regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subjected to a lot of ‘consent theatre’ (i.e. noisy yet non-compliant cookie notices), which in turn is causing knock-on problems of consumer mistrust and consent fatigue with all these useless pop-ups, working against the core aims of the EU’s data protection framework.

“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”

“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies. 

Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.

“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.

“And the conflict of interest for Google with Chrome are obvious.”

The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.

At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.

They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.

Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).

And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.

Much has been made at the highest levels in Europe of being able to point to US counterparts, caught on the hop by ongoing tech privacy and security scandals, while EU policymakers savor the schadenfreude of seeing their US counterparts being forced to ask publicly whether it’s time for America to have its own GDPR.

With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.

However they are also aware the world is watching closely and critically, which makes enforcement a key piece that must also slot into place. They need the GDPR to work on paper and be seen to be working in practice.

So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.

A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”

“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added. 

“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”

All of which suggests that the movement, when it comes, must come from a reforming adtech industry.

With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.

GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.

The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.

Law enforcement needs to protect citizens and their data

Robert Anderson
Contributor

Robert Anderson served for 21 years in the FBI, retiring as executive assistant director of the Criminal, Cyber, Response and Services Branch. He is currently an advisor at The Chertoff Group and the chief executive of Cyber Defense Labs.

Over the past several years, the law enforcement community has grown increasingly concerned about the conduct of digital investigations as technology providers enhance the security protections of their offerings—what some of my former colleagues refer to as “going dark.”

Data once readily accessible to law enforcement is now encrypted, protecting consumers’ data from hackers and criminals. However, these efforts have also had what Android’s security chief called the “unintended side effect” of also making this data inaccessible to law enforcement. Consequently, many in the law enforcement community want the ability to compel providers to allow them to bypass these protections, often citing physical and national security concerns.

I know first-hand the challenges facing law enforcement, but these concerns must be addressed in a broader security context, one that takes into consideration the privacy and security needs of industry and our citizens in addition to those raised by law enforcement.

Perhaps the best example of the law enforcement community’s preferred solution is Australia’s recently passed Assistance and Access Bill, an overly broad law that allows Australian authorities to compel service providers, such as Google and Facebook, to re-engineer their products and bypass encryption protections to allow law enforcement to access customer data.

While the bill includes limited restrictions on law enforcement requests, the vague definitions and concentrated authorities give the Australian government sweeping powers that ultimately undermine the security and privacy of the very citizens they aim to protect. Major tech companies, such as Apple and Facebook, agree and have been working to resist the Australian legislation and a similar bill in the UK.

Image: Bryce Durbin/TechCrunch

Newly created encryption backdoors and work-arounds will become the target of criminals, hackers, and hostile nation states, offering new opportunities for data compromise and attack through the newly created tools and the flawed code that inevitably accompanies some of them. These vulnerabilities undermine providers’ efforts to secure their customers’ data, creating new and powerful vulnerabilities even as companies struggle to address existing ones.

And these vulnerabilities would not only impact private citizens, but governments as well, including services and devices used by the law enforcement and national security communities. This comes amidst government efforts to significantly increase corporate responsibility for the security of customer data through laws such as the EU’s General Data Protection Regulation. Who will consumers, or the government, blame when a government-mandated backdoor is used by hackers to compromise user data? Who will be responsible for the damage?

Companies have a fiduciary responsibility to protect their customers’ data, which includes not only personally identifiable information (PII) but also their intellectual property, financial data, and national security secrets.

Worse, the vulnerabilities created under laws such as the Assistance and Access Bill would be subject almost exclusively to the decisions of law enforcement authorities, leaving companies unable to make their own decisions about the security of their products. How can we expect a company to protect customer data when their most fundamental security decisions are out of their hands?

Image: Bryce Durbin/TechCrunch

Thus far law enforcement has chosen to downplay, if not ignore, these concerns—focusing singularly on getting the information they need. This is understandable—a law enforcement officer should use every power available to them to solve a case, just as I did when I served as a State Trooper and as an FBI Special Agent, including when I served as Executive Assistant Director (EAD) overseeing the San Bernardino terror attack case during my final months in 2015.

Decisions regarding these types of sweeping powers should not and cannot be left solely to law enforcement. It is up to the private sector, and our government, to weigh competing security and privacy interests. Our government cannot sacrifice the ability of companies and citizens to properly secure their data and systems’ security in the name of often vague physical and national security concerns, especially when there are other ways to remedy the concerns of law enforcement.

That said, these security responsibilities cut both ways. Recent data breaches demonstrate that many companies have a long way to go to adequately protect their customers’ data. Companies cannot reasonably cry foul over the negative security impacts of proposed law enforcement data access while continuing to neglect and undermine the security of their own users’ data.

Providers and the law enforcement community should be held to robust security standards that ensure the security of our citizens and their data—we need legal restrictions on how government accesses private data and on how private companies collect and use the same data.

There may not be an easy answer to the “going dark” issue, but it is time for all of us, in government and the private sector, to understand that enhanced data security through properly implemented encryption and data use policies is in everyone’s best interest.

The “extraordinary” access sought by law enforcement cannot exist in a vacuum—it will have far-reaching and significant impacts well beyond the narrow confines of a single investigation. It is time for a serious conversation between law enforcement and the private sector to recognize that their security interests are two sides of the same coin.

The other smartphone business

With the smartphone operating system market sewn up by Google’s Android platform, which has a close to 90% share globally, leaving Apple’s iOS a slender (but lucrative) premium top-slice, a little company called Jolla and its Linux-based Sailfish OS is a rare sight indeed: A self-styled ‘independent alternative’ that’s still somehow in business.

The Finnish startup’s b2b licensing sales pitch is intended to appeal to corporates and governments that want to be able to control their own destiny where device software is concerned.

And in a world increasingly riven with geopolitical tensions that pitch is starting to look rather prescient.

Political uncertainties around trade, high tech espionage risks and data privacy are translating into “opportunities” for the independent platform player — and helping to put wind in Jolla’s sails long after the plucky Sailfish team quit their day jobs for startup life.

Building an alternative to Google Android

Jolla was founded back in 2011 by a band of Nokia staffers who left the company determined to carry on development of mobile Linux as the European tech giant abandoned its own experiments in favor of pivoting to Microsoft’s Windows Phone platform. (Fatally, as it would turn out.)

Nokia exited mobile entirely in 2013, selling the division to Microsoft. It only returned to the smartphone market in 2017, via a brand-licensing arrangement, offering made-in-China handsets running — you guessed it — Google’s Android OS.

If the lesson of the Jolla founders’ former employer is ‘resistance to Google is futile’ they weren’t about to swallow that. The Finns had other ideas.

Indeed, Jolla’s indie vision for Sailfish OS is to support a whole shoal of differently branded, regionally flavored and independently minded (non-Google-led) ecosystems all swimming around in parallel. Though getting there means not just surviving but thriving — and doing so in spite of the market being so thoroughly dominated by the U.S. tech giant.

TechCrunch spoke to Jolla ahead of this year’s Mobile World Congress tradeshow where co-founder and CEO, Sami Pienimäki, was taking meetings on the sidelines. He told us his hope is for Jolla to have a partner booth of its own next year — touting, in truly modest Finnish fashion, an MWC calendar “maybe fuller than ever” with meetings with “all sorts of entities and governmental representatives”.

Jolla co-founder, Sami Pienimaki, showing off a Jolla-branded handset in May 2013, back when the company was trying to attack the consumer smartphone space. 
(Photo credit: KIMMO MANTYLA/AFP/Getty Images)

Even a modestly upbeat tone signals major progress here because an alternative smartphone platform licensing business is — to put it equally mildly — an incredibly difficult tech business furrow to plough.

Jolla almost died at the end of 2015 when the company hit a funding crisis. But the plucky Finns kept paddling, jettisoning their early pursuit of consumer hardware (Pienimäki describes attempting to openly compete with Google in the consumer smartphone space as essentially “suicidal” at this point) to narrow their focus to a b2b licensing play.

The early b2b sales pitch targeted BRIC markets, with Jolla hitting the road to seek buy-in for a platform it said could be moulded to corporate or government needs while still retaining the option of Android app compatibility.

Then in late 2016 signs of a breakthrough: Sailfish gained certification in Russia for government and corporate use.

Its licensing partner in the Russian market was soon touting the ability to go “absolutely Google-free!”.

Buy-in from Russia

Since then the platform has gained the backing of Russian telco Rostelecom, which acquired Jolla’s local licensing customer last year (as well as becoming a strategic investor in Jolla itself in March 2018 — “to ensure there is a mutual interest to drive the global Sailfish OS agenda”, as Pienimäki puts it).

Rostelecom is using the brand name ‘Aurora OS‘ for Sailfish in the market which Pienimäki says is “exactly our strategy” — likening it to how Google’s Android has been skinned with different user experiences by major OEMs such as Samsung and Huawei.

“What we offer for our customers is a fully independent, regional licence and a tool chain so that they can develop exactly this kind of solution,” he tells TechCrunch. “We have come to a maturity point together with Rostelecom in the Russian market, and it was a natural move to plan together that they will take a local identity and proudly carry forward the Sailfish OS ecosystem development in Russia under their local identity.”

“It’s fully compatible with Sailfish operating system, it’s based on Sailfish OS and it’s our joint interest, of course, to make it fly,” he adds. “So that as we, hopefully, are able to extend this and come out to public with other similar set-ups in different countries those of course — eventually, if they come to such a fruition and maturity — will then likely as well have their own identities but still remain compatible with the global Sailfish OS.”

Jolla says the Russian government plans to switch all circa 8M state officials to the platform by the end of 2021 — under a project expected to cost RUB 160.2 billion (~$2.4BN). (A cut of which will go to Jolla in licensing fees.)

It also says Sailfish-powered smartphones will be “recommended to municipal administrations of various levels,” with the Russian state planning to allocate a further RUB 71.3 billion (~$1.1BN) from the federal budget for that. So there’s scope for deepening the state’s Sailfish uptake.

Russian Post is one early customer for Jolla’s locally licensed Sailfish flavor. Having piloted devices last year, Pienimäki says it’s now moving to a full commercial deployment across the whole organization — which has around 300,000 employees (to give a sense of how many Sailfish powered devices could end up in the hands of state postal workers in Russia).

A rugged Sailfish-powered device piloted by Russian Post

Jolla is not yet breaking out end users for Sailfish OS per market but Pienimäki says that overall the company is now “clearly above” 100k (and below 500k) devices globally.

That’s still of course a fantastically tiny number if you compare it to the consumer devices market — top ranked Android smartphone maker Samsung sold around 70M handsets in last year’s holiday quarter, for instance — but Jolla is in the b2b OS licensing business, not the handset making business. So it doesn’t need hundreds of millions of Sailfish devices to ship annually to turn a profit.

Scaling a royalty licensing business to hundreds of thousands of users sums to “good business”, says Pienimäki, describing Jolla’s business model for Sailfish as “practically a royalty per device”.

“The success we have had in the Russian market has populated us a lot of interesting new opening elsewhere around the world,” he continues. “This experience and all the technology we have built together with Open Mobile Platform [Jolla’s Sailfish licensing partner in Russia which was acquired by Rostelecom] to enable that case — that enables a number of other cases. The deployment plan that Rostelecom has for this is very big. And this is now really happening and we are happy about it.”

Jolla’s “Russia operation” is now beginning “a mass deployment phase”, he adds, predicting it will “quickly ramp up the volume to very sizeable”. So Sailfish is poised to scale.

Step 3… profit?

While Jolla has yet to turn a full-year profit, Pienimäki says several standalone months of 2018 were profitable, and he’s no longer worried about whether the business is sustainable — asserting: “We don’t have any more financial obstacles or threats anymore.”

It’s quite the turnaround of fortunes, given Jolla’s near-death experience a few years ago when it almost ran out of money, after failing to close a $10.6M Series C round, and had to let go of half its staff.

It did manage to claw in a little funding at the end of 2015 to keep going, albeit as a much leaner fish. But bagging Russia as an early adopter of its ‘independent’ mobile Linux ecosystem looks to have been the key tipping point for Jolla to be able to deliver on the hard-graft ecosystem-building work it’s been doing all along the way. And Pienimäki now expresses easy confidence that profitability will flow “fairly quickly” from here on in.

“It’s not an easy road. It takes time,” he says of the ecosystem-building company Jolla hard-pivoted to at its point of acute financial distress. “The development of this kind of business — it requires patience and negotiation times, and setting up the ecosystem and ecosystem partners. It really requires patience and takes a lot of time. And now we have come to this point where actually there starts to be an ecosystem which will then extend and start to carry its own identity as well.”

In further signs of Jolla’s growing confidence he says it hired more than ten people last year and moved to new and slightly more spacious offices — a reflection of the business expanding.

“It’s looking very good and nice for us,” Pienimäki continues. “Let’s say we are not taking too much pressure, with our investors and board, that what is the day that we are profitable. It’s not so important anymore… It’s clear that that is soon coming — that very day. But at the same time the most important is that the business case behind is proven and it is under aggressive deployment by our customers.”

The main focus for the moment is on supporting deployments to ramp up in Russia, he says, emphasizing: “That’s where we have to focus.” (Literally he says “not screwing up” — and with so much at stake you can see why nailing the Russia case is Jolla’s top priority.)

While the Russian state has been the entity most keen to embrace an alternative (non-U.S.-led) mobile OS — perhaps unsurprisingly — it’s not the only place in the world where Jolla has irons in the fire.

Another licensing partner, Bolivian IT services company Jalasoft, has co-developed a Sailfish-powered smartphone called Accione.

Jalasoft’s ‘liberty’-touting Accione Sailfish smartphone

It bills the handset on its website as being “designed for Latinos by Latinos”. “The digitalization of the economy is inevitable and, if we do not control the foundation of this digitalization, we have no future,” it adds.

Jalasoft founder and CEO Jorge Lopez says the company’s decision to invest effort in kicking the tyres of Jolla’s alternative mobile ecosystem is about gaining control — or seeking “technological liberation”, as the website blurb puts it.

“With Sailfish OS we have control of the implementation, while with Android it is the opposite,” Lopez tells TechCrunch. “We are working on developing smart buildings and we need a private OS that is not Android or iOS. This is mainly because our product will allow the end user to control the whole building and doing this with Android or iOS a hackable OS will bring concerns on security.”

Lopez says Jalasoft is using Accione as its development platform — “to gather customer feedback and to further develop our solution” — so the project clearly remains in an early phase, and he says that no more devices are likely to be announced this year.

But Jolla can point to more seeds being sown with the potential, with work, determination and patience, to sprout into another sizeable crop of Sailfish-powered devices down the line.

Complexity in China

Even more ambitiously, Jolla is also targeting China, where it has taken in investment to form a local consortium to develop a Chinese Sailfish ecosystem.

Although Pienimäki cautions there’s still much work to be done to bring Sailfish to market in China.

“We completed a major pilot with our licensing customer, Sailfish China Consortium, in 2017-18,” he says, giving an update on progress to date. “The public in market solution is not there yet. That is something that we are working together with the customer — hopefully we can see it later this year on the market. But these things take time. And let’s say that we’ve been somewhat surprised at how complex this kind of decision-making can be.”

“It wasn’t easy in Russia — it took three years of tight collaboration together with our Russian partners to find a way. But somehow it feels that it’s going to take even more in China. And I’m not necessarily talking about calendar time — but complexity,” he adds.

While there’s no guarantee of success for Jolla in China, the potential win is so big, given the size of the market, that even if it can only carve out a tiny slice, such as the business or corporate sector, it’s still worth going after. And he points to the existence of a couple of native mobile Linux operating systems he reckons could make “very lucrative partners”.

That said, the get-to-market challenge for Jolla in China is clearly distinctly different vs the rest of the world. This is because Android has developed into an independent (i.e. rather than Google-led) ecosystem in China as a result of state restrictions on the Internet and Internet companies. So the question is what could Sailfish offer that forked Android doesn’t already?

An Oppo Android powered smartphone on show at MWC 2017

Again, Jolla is taking the long view that ultimately there will be appetite — and perhaps also a state-led push — for a technology platform bulwark against political uncertainty in U.S.-China relations.

“What has happened now, in particular last year, is — because of the open trade war between the nations — many of the technology vendors, and also I would say the Chinese government, has started to gradually tighten their perspective on the fact that ‘hey simply it cannot be a long term strategy to just keep forking Android’. Because in the end of the day it’s somebody else’s asset. So this is something that truly creates us the opportunity,” he suggests.

“Openly competing with the fact that there are very successful Android forks in China, that’s going to be extremely difficult. But — let’s say — tapping into the fact that there are powers in that nation that wish that there would be something else than forking Android, combined with the fact that there is already something homegrown in China which is not forking Android — I think that’s the recipe that can be successful.”

Not all Jolla’s Sailfish bets have paid off, of course. An earlier foray by an Indian licensing partner into the consumer handset market petered out. Albeit, the episode does reinforce its decision to zero in on government and corporate licensing.

“We got excellent business connections,” says Pienimäki of India, suggesting also that it’s still a ‘watch this space’ for Jolla. The company has a “second move” in train in the market that he hopes to be able to talk about publicly later this year.

It’s also pitching Sailfish in Africa. And in markets where target customers might not have their own extensive in-house IT capability to plug into Sailfish co-development work Pienimäki says it’s offering a full solution — “a ready made package”, together with partners, including device management, VPN, secure messaging and secure email — which he argues “can be still very lucrative business cases”.

Looking ahead and beyond mobile, Pienimäki suggests the automotive industry could be an interesting target for Sailfish in the future — though not literally plugging the platform into cars; but rather licensing its technologies where appropriate — arguing car makers are also keen to control the tech that’s going into their cars.

“They really want to make sure that they own the cockpit. It’s their property, it’s their brand and they want to own it — and for a reason,” he suggests, pointing to the clutch of major investments from car companies in startups and technologies in recent years.

“This is definitely an interesting area. We are not directly there ourself — and we are not capable to extend ourself there but we are discussing with partners who are in that very business whether they could utilize our technologies there. That would then be more or less like a technology licensing arrangement.”

A trust balancing model

While Jolla looks to be approaching a tipping point as a business, in terms of being able to profit from licensing an alternative mobile platform, it remains a tiny and, some might say, inconsequential player on the global mobile stage.

Yet its focus on building and maintaining trusted management and technology architectures also looks timely — again, given how geopolitical spats are intervening to disrupt technology business as usual.

Chinese giant Huawei used an MWC keynote speech last month to reject U.S.-led allegations that its 5G networking technology could be repurposed as a spying tool by the Chinese state. And just this week it opened a cybersecurity transparency center in Brussels, to try to bolster trust in its kit and services — urging industry players to work together on agreeing standards and structures that everyone can trust.

In recent years U.S.-led suspicions attached to Russia have also caused major headaches for security veteran Kaspersky — leading the company to announce its own trust and transparency program and decentralize some of its infrastructure, including by spinning up servers in Europe last year.

Businesses finding ways to maintain and deepen the digital economy in spite of a little — or even a lot — of cross-border mistrust may well prove to be the biggest technology challenge of all moving forward.

Especially as next-gen 5G networks get rolled out — and their touted ‘intelligent connectivity’ reaches out to transform many more types of industries, bringing new risks and regulatory complexity.

The geopolitical problem linked to all this boils down to how to trust increasingly complex technologies without any one entity being able to own and control all the pieces. And Jolla’s business looks interesting in light of that because it’s selling the promise of neutral independence to all its customers, wherever they hail from — be it Russia, LatAm, China, Africa or elsewhere — which makes its ability to secure customer trust not just important but vital to its success.

Indeed, you could argue its customers are likely to rank above average on the ‘paranoid’ scale, given their dedicated search for an alternative (non-U.S.-led) mobile OS in the first place.

“It’s one of the number one questions we get,” admits Pienimäki, discussing Jolla’s trust balancing act — aka how it manages and maintains confidence in Sailfish’s independence, even as it takes business backing and code contributions from a state like Russia.

“We tell about our reference case in Russia and people quickly ask ‘hey okay, how can I trust that there is no blackbox inside’,” he continues, adding: “This is exactly the core question and this is exactly the problem we have been able to build a solution for.”

Jolla’s solution sums to one line: “We create a transparent platform and on top of fully transparent platform you can create secure solutions,” as Pienimäki puts it.

“The way it goes is that Jolla with Sailfish OS is always offering the transparent Sailfish operating system core, on source code level, all the time live, available for all the customers. So all the customers constantly, in real-time, have access to our source code. Most of it’s in public open source, and the proprietary parts are also constantly available from our internal infrastructure. For all the customers, at the same time in real-time,” he says, fleshing out how it keeps customers on board with a continually co-developing software platform.

“The contributions we take from these customers are always on source code level only. We don’t take any binary blobs inside our software. We take only source code level contributions which we ourselves authorize, integrate and then we make available for all the customers at the very same moment. So that loopback in a way creates us the transparency.

“So if you want to be suspicion of the contributions of the other guys, so to say, you can always read it on the source code. It’s real-time. Always available for all the customers at the same time. That’s the model we have created.”

“It’s honestly quite a unique model,” he adds. “Nobody is really offering such a co-development model in the operating system business.

“Practically how Android works is that Google, who’s leading the Android development, makes the next release of Android software, then releases it under Android Open Source and then people start to backport it — so that’s like ‘source, open’ in a way, not ‘open source’.”

Sailfish’s community of users also have real-time access to and visibility of all the contributions — which he dubs “real democracy”.

“People can actually follow it from the code-line all the time,” he argues. “This is really the core of our existence and how we can offer it to Russia and other countries without creating like suspicion elements each side. And that is very important.

“That is the only way we can continue and extend this regional licensing and we can offer it independently from Finland and from our own company.”

With global trade and technology both looking increasingly vulnerable to cross-border mistrust, Jolla’s approach to collaborative transparency may offer something of a model if other businesses and industries find they need to adapt themselves in order for trade and innovation to keep moving forward in uncertain political times.

Antitrust and privacy uplift

Last but not least there’s regulatory intervention to consider.

A European Commission antitrust decision against Google’s Android platform last year caused headlines around the world when the company was slapped with a $5BN fine.

More importantly for Android rivals Google was also ordered to change its practices — leading to amended licensing terms for the platform in Europe last fall. And Pienimäki says Jolla was a “key contributor” to the Commission case against Android.

European competition commissioner Margrethe Vestager, on April 15, 2015 in Brussels, as the Commission said it would open an antitrust investigation into Google’s Android operating system. (Photo credit: JOHN THYS/AFP/Getty Images)

The new Android licensing terms make it (at least theoretically) possible for new types of less-heavily-Google-flavored Android devices to be developed for Europe. Though there have been complaints the licensing tweaks don’t go far enough to reset Google’s competitive Android advantage.

Asked whether Jolla has seen any positive impacts on its business following the Commission’s antitrust decision, Pienimäki responds positively, recounting how — “one or two weeks after the ruling” — Jolla received an inbound enquiry from a company in France that had felt hamstrung by Google requiring its services to be bundled with Android but was now hoping “to realize a project in a special sector”.

The company, which he isn’t disclosing at this stage, is interested in “using Sailfish and then having selected Android applications running in Sailfish but no connection with the Google services”.

“We’ve been there for five years helping the European Union authorities [to build the case] and explain how difficult it is to create competitive solutions in the smartphone market in general,” he continues. “Be it consumer or be it anything else. That’s definitely important for us and I don’t see this at all limited to the consumer sector. The very same thing has been a problem for corporate clients, for companies who provide specialized mobile device solutions for different kind of corporations and even governments.”

While he couches the Android ruling as a “very important” moment for Jolla’s business last year, he also says he hopes the Commission will intervene further to level the smartphone playing field.

“What I’m after here, and what I would really love to see, is that within the European Union we utilize Linux-based, open platform solution which is made in Europe,” he says. “That’s why we’ve been pushing this [antitrust action]. This is part of that. But in bigger scheme this is very good.”

He is also very happy with Europe’s General Data Protection Regulation (GDPR) — which came into force last May, plugging in a long overdue update to the bloc’s privacy rules with a much beefed up enforcement regime.

GDPR has been good for Jolla’s business, according to Pienimäki, who says interest is flowing its way from customers who now perceive a risk to using Android if customer data flows outside Europe and they cannot guarantee adequate privacy protections are in place.

“Already last spring… we have had plenty of different customer discussions with European companies who are really afraid that ‘hey I cannot offer this solution to my government or to my corporate customer in my country because I cannot guarantee if I use Android that this data doesn’t go outside the European Union’,” he says.

“You can’t indemnify in a way that. And that’s been really good for us as well.”

Powered by WPeMatico

Huawei opens a cybersecurity transparency center in the heart of Europe

Posted by | 5g, Asia, Brussels, China, computer security, cybersecurity, EC, Europe, General Data Protection Regulation, huawei, Internet of Things, Mobile, Network Security, Security, telecommunications | No Comments

5G kit maker Huawei opened a Cyber Security Transparency center in Brussels yesterday as the Chinese tech giant continues to try to neutralize suspicion in Western markets that its networking gear could be used for espionage by the Chinese state.

Huawei announced its plan to open a European transparency center last year but, giving a speech at an opening ceremony for the center yesterday, the company’s rotating CEO, Ken Hu, said: “Looking at the events from the past few months, it’s clear that this facility is now more critical than ever.”

Huawei said the center, which will demonstrate the company’s security solutions in areas including 5G, IoT and cloud, aims to provide a platform to enhance communication and “joint innovation” with all stakeholders, as well as providing a “technical verification and evaluation platform for our customers”.

“Huawei will work with industry partners to explore and promote the development of security standards and verification mechanisms, to facilitate technological innovation in cyber security across the industry,” it said in a press release.

“To build a trustworthy environment, we need to work together,” Hu also said in his speech. “Both trust and distrust should be based on facts, not feelings, not speculation, and not baseless rumour.

“We believe that facts must be verifiable, and verification must be based on standards. So, to start, we need to work together on unified standards. Based on a common set of standards, technical verification and legal verification can lay the foundation for building trust. This must be a collaborative effort, because no single vendor, government, or telco operator can do it alone.”

The company made a similar plea at Mobile World Congress last week when its rotating chairman, Guo Ping, used a keynote speech to claim its kit is secure and will never contain backdoors. He also pressed the telco industry to work together on creating standards and structures to enable trust.

“Government and the mobile operators should work together to agree what this assurance testing and certification rating for Europe will be,” he urged. “Let experts decide whether networks are safe or not.”

Also speaking at MWC last week the EC’s digital commissioner, Mariya Gabriel, suggested the executive is prepared to take steps to prevent security concerns at the EU Member State level from fragmenting 5G rollouts across the Single Market.

She told delegates at the flagship industry conference that Europe must have “a common approach to this challenge” and “we need to bring it on the table soon”.

Though she did not suggest exactly how the Commission might act.

A spokesman for the Commission confirmed that EC VP Andrus Ansip and Huawei’s Hu met in person yesterday to discuss issues around cybersecurity, 5G and the Digital Single Market — adding that the meeting was held at the request of Hu.

“The Vice-President emphasised that the EU is an open rules based market to all players who fulfil EU rules,” the spokesman told us. “Specific concerns by European citizens should be addressed. We have rules in place which address security issues. We have EU procurement rules in place, and we have the investment screening proposal to protect European interests.”

“The VP also mentioned the need for reciprocity in respective market openness,” he added, further noting: “The College of the European Commission will hold today an orientation debate on China where this issue will come back.”

In a tweet following the meeting Ansip also said: “Agreed that understanding local security concerns, being open and transparent, and cooperating with countries and regulators would be preconditions for increasing trust in the context of 5G security.”

Reuters reports Hu saying the pair had discussed the possibility of setting up a cybersecurity standard along the lines of Europe’s updated privacy framework, the General Data Protection Regulation (GDPR).

Although the Commission did not respond when we asked it to confirm that discussion point.

GDPR was multiple years in the making before European institutions agreed on a final text that could come into force. So if the Commission is keen to act “soon” — per Gabriel’s comments on 5G security — to fashion supportive guardrails for next-gen network rollouts, a full blown regulation seems an unlikely template.

More likely GDPR is being used by Huawei as a byword for creating consensus around rules that work across an ecosystem of many players, by providing standards that different businesses can latch onto in an effort to keep moving.

Hu referenced GDPR directly in his speech yesterday, lauding it as “a shining example” of Europe’s “strong experience in driving unified standards and regulation” — so the company is clearly well-versed in how to flatter hosts.

“It sets clear standards, defines responsibilities for all parties, and applies equally to all companies operating in Europe,” he went on. “As a result, GDPR has become the golden standard for privacy protection around the world. We believe that European regulators can also lead the way on similar mechanisms for cyber security.”

Hu ended his speech with a further industry-wide plea, saying: “We also commit to working more closely with all stakeholders in Europe to build a system of trust based on objective facts and verification. This is the cornerstone of a secure digital environment for all.”

Huawei’s appetite to do business in Europe is not in doubt, though.

The question is whether Europe’s telcos and governments can be convinced to swallow any doubts they might have about spying risks and commit to working with the Chinese kit giant as they roll out a new generation of critical infrastructure.

Europe agrees platform rules to tackle unfair business practices

Posted by | Amazon, Android, antitrust, competition, e-commerce, eBay, EC, eCommerce, Europe, european commission, european parliament, european union, General Data Protection Regulation, Google, google search, Google Shopping, Margrethe Vestager, microsoft store, online marketplaces, online platforms, search engine, search engines, search results | No Comments

The European Union’s political institutions have reached agreement over new rules designed to boost transparency around online platform businesses and curb unfair practices to support traders and other businesses that rely on digital intermediaries for discovery and sales.

The European Commission proposed a regulation for fairness and transparency in online platform trading last April. And late yesterday the European Parliament, Council of the EU and Commission reached a political deal on regulating the business environment of platforms, announcing the accord in a press release today.

The political agreement paves the way for adoption and publication of the regulation, likely later this year. The rules will apply 12 months after that point.

Online platform intermediaries such as ecommerce marketplaces and search engines are covered by the new rules if they provide services to businesses established in the EU and which offer goods or services to consumers located in the EU.

The Commission estimates there are some 7,000 such platforms and marketplaces which will be covered by the regulation, noting this includes “world giants as well as very small start-ups”.

Under the new rules, sudden and unexpected account suspensions will be banned — with the Commission saying platforms will have to provide “clear reasons” for any termination and also possibilities for appeal.

Terms and conditions must also be “easily available and provided in plain and intelligible language”.

There must also be advance notice of changes — of at least 15 days, with longer notice periods applying for more complex changes.

For search engines the focus is on ranking transparency. And on that front dominant search engine Google has attracted more than its fair share of criticism in Europe from a range of rivals (not all of whom are European).

In 2017, the search giant was also slapped with a $2.7BN antitrust fine related to its price comparison service, Google Shopping. The EC found Google had systematically given prominent placement to its own search comparison service while also demoting rival services in search results. (Google rejects the findings and is appealing.)

Given the history of criticism of Google’s platform business practices, and the multi-year regulatory tug of war over anti-competitive impacts, the new transparency provisions look intended to make it harder for a dominant search player to use its market power against rivals.

Changing the online marketplace

The importance of legislating for platform fairness was flagged by the Commission’s antitrust chief, Margrethe Vestager, last summer — when she handed Google another very large fine ($5BN) for anti-competitive behavior related to its mobile platform Android.

Vestager said then she wasn’t sure breaking Google up would be an effective competition fix, preferring to push for remedies to support “more players to have a real go”, as her Android decision attempts to do. But she also stressed the importance of “legislation that will ensure that you have transparency and fairness in the business to platform relationship”.

If businesses have legal means to find out why, for example, their traffic has stopped and what they can do to get it back that will “change the marketplace, and it will change the way we are protected as consumers but also as businesses”, she argued.

Just such a change is now in sight thanks to EU political accord on the issue.

The regulation represents the first such rules for online platforms in Europe and — commissioners contend — anywhere in the world.

“Our target is to outlaw some of the most unfair practices and create a benchmark for transparency, at the same time safeguarding the great advantages of online platforms both for consumers and for businesses,” said Andrus Ansip, VP for the EU’s Digital Single Market initiative in a statement.

Elżbieta Bieńkowska, commissioner for internal market, industry, entrepreneurship, and SMEs, added that the rules are “especially designed with the millions of SMEs in mind”.

“Many of them do not have the bargaining muscle to enter into a dispute with a big platform, but with these new rules they have a new safety net and will no longer worry about being randomly kicked off a platform, or intransparent ranking in search results,” she said in another supporting statement.

In a factsheet about the new rules, the Commission specifies they cover third-party ecommerce market places (e.g. Amazon Marketplace, eBay, Fnac Marketplace, etc.); app stores (e.g. Google Play, Apple App Store, Microsoft Store etc.); social media for business (e.g. Facebook pages, Instagram used by makers/artists etc.); and price comparison tools (e.g. Skyscanner, Google Shopping etc.).

The regulation does not target every online platform. For example, it does not cover online advertising (or b2b ad exchanges), payment services, SEO services or services that do not intermediate direct transactions between businesses and consumers.

The Commission also notes that online retailers that sell their own brand products and/or don’t rely on third party sellers on their own platform are also excluded from the regulation, such as retailers of brands or supermarkets.

Where transparency is concerned, the rules require that regulated marketplaces and search engines disclose the main parameters they use to rank goods and services on their site “to help sellers understand how to optimise their presence” — with the Commission saying the aim is to support sellers without allowing gaming of the ranking system.

Some platform business practices will also require mandatory disclosure — such as for platforms that not only provide a marketplace for sellers but sell on their platform themselves, as does Amazon for example.

The ecommerce giant’s use of merchant data remains under scrutiny in the EU. Vestager revealed a preliminary antitrust probe of Amazon last fall — when she said her department was gathering information to “try to get a full picture”. She said her concern is dual platforms could gain an unfair advantage as a consequence of access to merchants’ data.

And, again, the incoming transparency rules look intended to shrink that risk — requiring what the Commission couches as exhaustive disclosure of “any advantage” a platform may give to their own products over others.

“They must also disclose what data they collect, and how they use it — and in particular how such data is shared with other business partners they have,” it continues, noting also that: “Where personal data is concerned, the rules of the GDPR [General Data Protection Regulation] apply.”

(GDPR of course places further transparency requirements on platforms by, for example, empowering individuals to request any personal data held on them, as well as the reasons why their information is being processed.)

The platform regulation also includes new avenues for dispute resolution by requiring platforms set up an internal complaint-handling system to assist business users.

“Only the smallest platforms in terms of head count or turnover will be exempt from this obligation,” the Commission notes. (The exemption limit is set at fewer than 50 staff and less than €10M revenue.)

It also says: “Platforms will have to provide businesses with more options to resolve a potential problem through mediators. This will help resolve more issues out of court, saving businesses time and money.”

But, at the same time, the new rules allow business associations to take platforms to court to stop any non-compliance — mirroring a provision in the GDPR which also allows for collective enforcement and redress of individual privacy rights (where Member States adopt it).

“This will help overcome fear of retaliation, and lower the cost of court cases for individual businesses, when the new rules are not followed,” the Commission argues.

“In addition, Member States can appoint public authorities with enforcement powers, if they wish, and businesses can turn to those authorities.”

One component of the regulation that appears to be being left up to EU Member States is penalties for non-compliance — with no clear regime of fines set out (as there is in GDPR). So the platform regulation may turn out to have rather more bark than bite, at least initially.

“Member States shall need to take measures that are sufficiently dissuasive to ensure that the online intermediation platforms and search engines comply with the requirements in the Regulation,” the Commission writes in a section of its factsheet dealing with how to make sure platforms respect the new rules.

It also points again to the provision allowing business associations or organisations to take action in national courts on behalf of members — saying this offers a legal route to “stop or prohibit non-compliance with one or more of the requirements of the Regulation”. So, er, expect lawsuits.

The Commission says the rules will be subject to review within 18 months after they come into force — in a bid to ensure the regulation keeps pace with fast-paced tech developments.

A dedicated Online Platform Observatory has been established in the EU for the purpose of “monitoring the evolution of the market and the effective implementation of the rules”, it adds.


Is Europe closing in on an antitrust fix for surveillance technologists?

Posted by | Android, antitrust, competition law, data protection, data protection law, DCMS committee, digital media, EC, Europe, european commission, european union, Facebook, General Data Protection Regulation, Germany, Giovanni Buttarelli, Google, instagram, Margrethe Vestager, Messenger, photo sharing, privacy, Social, social media, social networks, surveillance capitalism, TC, terms of service, United Kingdom, United States | No Comments

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still manipulate the outcome it wants from users but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external sources), in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. And nor, therefore, surveillance oversight.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer carry a suggestive punch. Regulators are routinely asked whether it’s time. As the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled, wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags; called to attend awkward grillings by hard-grafting committees, or verbally savaged at the highest profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

#GDPR allows imposing a permanent ban on data processing. This is the nuclear option. Much more severe than any fine you can imagine, in most cases. https://t.co/X772NvU51S

— Lukasz Olejnik (@lukOlejnik) January 28, 2019

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.


This early GDPR adtech strike puts the spotlight on consent

Posted by | Advertising Tech, Android, Apps, artificial intelligence, China, data processing, data protection, Europe, european union, Facebook, Fidzup, GDPR, General Data Protection Regulation, Google, location based services, mobile advertising, mobile device, online advertising, privacy, retail, smartphone, TC, terms of service | No Comments

What does consent as a valid legal basis for processing personal data look like under Europe’s updated privacy rules? It may sound like an abstract concern but for online services that rely on things being done with user data in order to monetize free-to-access content this is a key question now the region’s General Data Protection Regulation is firmly fixed in place.

The GDPR is actually clear about consent. But if you haven’t bothered to read the text of the regulation, and instead just go and look at some of the self-styled consent management platforms (CMPs) floating around the web since May 25, you’d probably have trouble guessing it.

Confusing and/or incomplete consent flows aren’t yet extinct, sadly. But it’s fair to say those that don’t offer full opt-in choice are on borrowed time.

Because if your service or app relies on obtaining consent to process EU users’ personal data — as many free at the point-of-use, ad-supported apps do — then the GDPR states consent must be freely given, specific, informed and unambiguous.

That means you can’t bundle multiple uses for personal data under a single opt-in.

Nor can you obfuscate consent behind opaque wording that doesn’t actually specify the thing you’re going to do with the data.

You also have to offer users the choice not to consent. So you cannot pre-tick all the consent boxes that you really wish your users would freely choose — because you have to actually let them do that.

It’s not rocket science but the pushback from certain quarters of the adtech industry has been as awfully predictable as it’s horribly frustrating.

This has not gone unnoticed by consumers either. Europe’s Internet users have been filing consent-based complaints thick and fast this year. And a lot of what is being claimed as ‘GDPR compliant’ right now likely is not.

So, some six months in, we’re essentially in a holding pattern waiting for the regulatory hammers to come down.

But if you look closely there are some early enforcement actions that show some consent fog is starting to shift.

Yes, we’re still waiting on the outcomes of major consent-related complaints against tech giants. (And stockpile popcorn to watch that space for sure.)

But late last month French data protection watchdog, the CNIL, announced the closure of a formal warning it issued this summer against drive-to-store adtech firm, Fidzup — saying it was satisfied it was now GDPR compliant.

Such a regulatory stamp of approval is obviously rare this early in the new legal regime.

So while Fidzup is no adtech giant its experience still makes an interesting case study — showing how the consent line was being crossed; how, working with CNIL, it was able to fix that; and what being on the right side of the law means for a (relatively) small-scale adtech business that relies on consent to enable a location-based mobile marketing business.

From zero to GDPR hero?

Fidzup’s service works like this: It installs kit inside (or on) partner retailers’ physical stores to detect the presence of individual users’ smartphones. At the same time it provides an SDK to mobile developers to track app users’ locations, collecting and sharing the advertising ID and wi-fi ID of users’ smartphones (which, along with location, are judged personal data under GDPR).

Those two elements — detectors in physical stores; and a personal data-gathering SDK in mobile apps — come together to power Fidzup’s retail-focused, location-based ad service which pushes ads to mobile users when they’re near a partner store. The system also enables it to track ad-to-store conversions for its retail partners.
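The attribution mechanism described above can be sketched in a few lines. This is purely an illustrative reconstruction, not Fidzup’s actual implementation: it assumes the in-store detectors log the advertising IDs of nearby devices, the SDK logs which IDs were served a campaign ad, and matching the two sets yields ad-to-store conversions. All names and data shapes here are invented.

```python
# Hypothetical sketch of drive-to-store attribution: count devices that
# both received a campaign ad (SDK-side log) and were later detected
# near the partner store (detector-side log), matched on advertising ID.

def ad_to_store_conversions(ad_impressions, store_detections):
    """Return the number of devices that appear in both logs."""
    served = {imp["ad_id"] for imp in ad_impressions}
    visited = {det["ad_id"] for det in store_detections}
    return len(served & visited)

# Invented example data: ads were served to devices A, B and C;
# devices B, C and D were later detected in-store.
impressions = [{"ad_id": "A"}, {"ad_id": "B"}, {"ad_id": "C"}]
detections = [{"ad_id": "B"}, {"ad_id": "C"}, {"ad_id": "D"}]
print(ad_to_store_conversions(impressions, detections))  # 2
```

In practice such matching would also need time windows and campaign scoping, but the set intersection is the core of the conversion count.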

The problem Fidzup had, back in July, was that after an audit of its business the CNIL deemed it did not have proper consent to process users’ geolocation data to target them with ads.

Fidzup says it had thought its business was GDPR compliant because it took the view that app publishers were the data processors gathering consent on its behalf; the CNIL warning was a wake up call that this interpretation was incorrect — and that it was responsible for the data processing and so also for collecting consents.

The regulator found that when a smartphone user installed an app containing Fidzup’s SDK they were not informed that their location and mobile device ID data would be used for ad targeting, nor the partners Fidzup was sharing their data with.

CNIL also said users should have been clearly informed before any data was collected — so they could choose whether to consent — instead of, as was the case, information being given after the fact via general app conditions (or in-store posters).

It also found users had no choice to download the apps without also getting Fidzup’s SDK, with use of such an app automatically resulting in data transmission to partners.

Fidzup had also only been asking users to consent to the processing of their geolocation data for the specific app they had downloaded — not for the targeted ad purposes with retail partners that are the substance of the firm’s business.

So there was a string of issues. And when Fidzup was hit with the warning the stakes were high, even with no monetary penalty attached. Because unless it could fix the core consent problem, the 2014-founded startup might have faced going out of business. Or having to change its line of business entirely.

Instead it decided to try and fix the consent problem by building a GDPR-compliant CMP — spending around five months liaising with the regulator, and finally getting a green light late last month.

A core piece of the challenge, as co-founder and CEO Olivier Magnan-Saurin tells it, was how to handle multiple partners in this CMP because its business entails passing data along the chain of partners — each new use and partner requiring opt-in consent.

“The first challenge was to design a window and a banner for multiple data buyers,” he tells TechCrunch. “So that’s what we did. The challenge was to have something okay for the CNIL and GDPR in terms of wording, UX etc. And, at the same time, some things that the publisher will allow to and will accept to implement in his source code to display to his users because he doesn’t want to scare them or to lose too much.

“Because they get money from the data that we buy from them. So they wanted to get the maximum money that they can, because it’s very difficult for them to live without the data revenue. So the challenge was to reconcile the need from the CNIL and the GDPR and from the publishers to get something acceptable for everyone.”

As a quick related aside, it’s worth noting that Fidzup does not work with the thousands of partners an ad exchange or demand-side platform most likely would.

Magnan-Saurin tells us its CMP lists 460 partners. So while that’s still a lengthy list to have to put in front of consumers — it’s not, for example, the 32,000 partners of another French adtech firm, Vectaury, which has also recently been on the receiving end of an invalid consent ruling from the CNIL.

In turn, that suggests the ‘Fidzup fix’, if we can call it that, only scales so far; adtech firms that are routinely passing millions of people’s data around thousands of partners look to have much more existential problems under GDPR — as we’ve reported previously re: the Vectaury decision.

No consent without choice

Returning to Fidzup, its fix essentially boils down to actually offering people a choice over each and every data processing purpose, unless it’s strictly necessary for delivering the core app service the consumer was intending to use.

Which also means giving app users the ability to opt out of ads entirely — and not be penalized by losing access to the app’s features.

In short, you can’t bundle consent. So Fidzup’s CMP unbundles all the data purposes and partners to offer users the option to consent or not.

“You can unselect or select each purpose,” says Magnan-Saurin of the now compliant CMP. “And if you want only to send data for, I don’t know, personalized ads but you don’t want to send the data to analyze if you go to a store or not, you can. You can unselect or select each consent. You can also see all the buyers who buy the data. So you can say okay I’m okay to send the data to every buyer but I can also select only a few or none of them.”
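The unbundled consent model described above can be expressed as a simple data structure: one opt-in flag per processing purpose and one per data buyer, with everything defaulting to off (no pre-ticked boxes), and data flowing only where both flags are set. This is a minimal illustrative sketch — the class and purpose names are hypothetical, not Fidzup’s CMP API.

```python
# Illustrative per-purpose, per-partner consent record for a CMP.
# GDPR-style defaults: nothing is pre-selected; the user must opt in
# to each purpose and each receiving partner individually.

class ConsentRecord:
    def __init__(self, purposes, partners):
        self.purposes = {p: False for p in purposes}
        self.partners = {p: False for p in partners}

    def grant_purpose(self, purpose):
        self.purposes[purpose] = True

    def grant_partner(self, partner):
        self.partners[partner] = True

    def may_send(self, purpose, partner):
        # Data may only be transmitted if the user opted in to both
        # this processing purpose and this specific data buyer.
        return self.purposes.get(purpose, False) and self.partners.get(partner, False)

# Hypothetical purposes and buyers mirroring the interview examples.
consent = ConsentRecord(
    purposes=["personalized_ads", "store_visit_analysis"],
    partners=["buyer_a", "buyer_b"],
)
consent.grant_purpose("personalized_ads")
consent.grant_partner("buyer_a")
print(consent.may_send("personalized_ads", "buyer_a"))      # True
print(consent.may_send("store_visit_analysis", "buyer_a"))  # False
```

The design point is exactly the one Magnan-Saurin describes: a user can accept personalized ads but refuse store-visit analysis, or accept some buyers and not others, without one choice forcing the rest.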

“What the CNIL ask is very complicated to read, I think, for the final user,” he continues. “Yes it’s very precise and you can choose everything etc. But it’s very complete and you have to spend some time to read everything. So we were [hoping] for something much shorter… but now okay we have something between the initial asking for the CNIL — which was like a big book — and our consent collection before the warning which was too short with not the right information. But still it’s quite long to read.”

Fidzup’s CNIL-approved, GDPR-compliant consent management platform

“Of course, as a user, I can refuse everything. Say no, I don’t want my data to be collected, I don’t want to send my data. And I have to be able, as a user, to use the app in the same way as if I accept or refuse the data collection,” he adds.

He says the CNIL was very clear on the latter point — telling it they could not require collection of geolocation data for ad targeting for usage of the app.

“You have to provide the same service to the user if he accepts or not to share his data,” he emphasizes. “So now the app and the geolocation features [of the app] works also if you refuse to send the data to advertisers.”

This is especially interesting in light of the ‘forced consent’ complaints filed against tech giants Facebook and Google earlier this year.

These complaints argue the companies should (but currently do not) offer an opt-out of targeted advertising, because behavioural ads are not strictly necessary for their core services (i.e. social networking, messaging, a smartphone platform etc).

Indeed, data gathering for such non-core service purposes should require an affirmative opt-in under GDPR. (An additional GDPR complaint against Android has also since attacked how consent is gathered, arguing it’s manipulative and deceptive.)

Asked whether, based on his experience working with the CNIL to achieve GDPR compliance, it seems fair that a small adtech firm like Fidzup has had to offer an opt-out when a tech giant like Facebook seemingly doesn’t, Magnan-Saurin tells TechCrunch: “I’m not a lawyer but based on what the CNIL asked us to be in compliance with the GDPR law I’m not sure that what I see on Facebook as a user is 100% GDPR compliant.”

“It’s better than one year ago but [I’m still not sure],” he adds. “Again it’s only my feeling as a user, based on the experience I have with the French CNIL and the GDPR law.”

Facebook of course maintains its approach is 100% GDPR compliant.

Even as data privacy experts aren’t so sure.

One thing is clear: If the tech giant was forced to offer an opt out for data processing for ads it would clearly take a big chunk out of its business — as a sub-set of users would undoubtedly say no to Zuckerberg’s “ads”. (And if European Facebook users got an ads opt out you can bet Americans would very soon and very loudly demand the same, so…)

Bridging the privacy gap

In Fidzup’s case, complying with GDPR has had a major impact on its business because offering a genuine choice means it’s not always able to obtain consent. Magnan-Saurin says there is essentially now a limit on the number of device users advertisers can reach because not everyone opts in for ads.

Although, since it’s been using the new CMP, he says a majority are still opting in (or, at least, this is the case so far) — showing one consent chart report with a ~70:30 opt-in rate, for example.

He expresses the change like this: “No one in the world can say okay I have 100% of the smartphones in my data base because the consent collection is more complete. No one in the world, even Facebook or Google, could say okay, 100% of the smartphones are okay to collect from them geolocation data. That’s a huge change.”

“Before that there was a race to the higher reach. The biggest number of smartphones in your database,” he continues. “Today that’s not the point.”

Now he says the point for adtech businesses with EU users is figuring out how to extrapolate from the percentage of user data they can (legally) collect to the 100% they can’t.

And that’s what Fidzup has been working on this year, developing machine learning algorithms to try to bridge the data gap so it can still offer its retail partners accurate predictions for tracking ad to store conversions.

“We have algorithms based on the few thousand stores that we equip, based on the few hundred mobile advertising campaigns that we have run, and we can understand for a store in London in… sports, fashion, for example, how many visits we can expect from the campaign based on what we can measure with the right consent,” he says. “That’s the first and main change in our market; the quantity of data that we can get in our database.”

“Now the challenge is to be as accurate as we can be without having 100% of real data — with the consent, and the real picture,” he adds. “The accuracy is less… but not that much. We have a very, very high standard of quality on that… So now we can assure the retailers that with our machine learning system they have nearly the same quality as they had before.

“Of course it’s not exactly the same… but it’s very close.”
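The gap-bridging idea above has a simple statistical baseline worth spelling out: if opt-in is roughly independent of whether someone visits the store, visits measured among consenting users can be scaled up by the opt-in rate to estimate total campaign-driven visits. Fidzup says it uses machine learning trained on its equipped stores and past campaigns; this naive inverse-propensity estimate only illustrates the underlying extrapolation, and all numbers are invented.

```python
# Minimal sketch of extrapolating from the consenting subset to the
# whole audience. Assumes opting in is independent of visiting the
# store, which real systems would need to correct for.

def estimate_total_visits(measured_visits, opt_in_rate):
    """Estimate visits across all users from visits measured among
    the consenting subset."""
    if not 0 < opt_in_rate <= 1:
        raise ValueError("opt-in rate must be in (0, 1]")
    return measured_visits / opt_in_rate

# With the ~70:30 opt-in split mentioned above and 350 measured visits:
print(round(estimate_total_visits(350, 0.7)))  # 500
```

The accuracy trade-off Magnan-Saurin describes follows directly: the lower the opt-in rate, the more the estimate leans on modelling assumptions rather than measured data.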

Having a CMP that’s had regulatory ‘sign-off’, as it were, is something Fidzup is also now hoping to turn into a new bit of additional business.

“The second change is more like an opportunity,” he suggests. “All the work that we have done with CNIL and our publishers we have transferred it to a new product, a CMP, and we offer today to all the publishers who ask to use our consent management platform. So for us it’s a new product — we didn’t have it before. And today we are the only — to my knowledge — the only company and the only CMP validated by the CNIL and GDPR compliant so that’s useful for all the publishers in the world.”

It’s not currently charging publishers to use the CMP but will be seeing whether it can turn it into a paid product early next year.

How, then, after months of compliance work, does Fidzup feel about GDPR? Does Magnan-Saurin believe the regulation is making life harder for startups than for tech giants — as certain lobby groups have claimed, arguing the law risks entrenching the dominance of better-resourced incumbents? Or does he see opportunities?

In Magnan-Saurin’s view, six months into GDPR, European startups are at an R&D disadvantage vs tech giants because U.S. companies like Facebook and Google are not (yet) subject to a similarly comprehensive privacy regulation at home — so it’s easier for them to bag up user data for whatever purpose they like.

Though it’s also true that U.S. lawmakers are now paying earnest attention to the privacy policy area at a federal level. (And Google’s CEO faced a number of tough questions from Congress on that front just this week.)

“The fact is Facebook-Google they own like 90% of the revenue in mobile advertising in the world. And they are American. So basically they can do all their research and development on, for example, American users without any GDPR regulation,” he says. “And then apply a pattern of GDPR compliance and apply the new product, the new algorithm, everywhere in the world.

“As a European startup I can’t do that. Because I’m a European. So once I begin the research and development I have to be GDPR compliant so it’s going to be longer for Fidzup to develop the same thing as an American… But now we can see that GDPR might be beginning a ‘world thing’ — and maybe Facebook and Google will apply the GDPR compliance everywhere in the world. Could be. But it’s their own choice. Which means, for the example of the R&D, they could do their own research without applying the law because for now U.S. doesn’t care about the GDPR law, so you’re not outlawed if you do R&D without applying GDPR in the U.S. That’s the main difference.”

He suggests some European startups might relocate R&D efforts outside the region to try to work around the legal complexity around privacy.

“If the law is meant to bring the big players to better compliance with privacy I think — yes, maybe it goes in this way. But the first to suffer is the European companies, and it becomes an asset for the U.S. and maybe the Chinese… companies because they can be quicker in their innovation cycles,” he suggests. “That’s a fact. So what could happen is maybe investors will not invest that much money in Europe than in U.S. or in China on the marketing, advertising data subject topics. Maybe even the French companies will put all the R&D in the U.S. and destroy some jobs in Europe because it’s too complicated to do research on that topics. Could be impacts. We don’t know yet.”

But the fact that GDPR enforcement has — perhaps inevitably — started small, with so far only a handful of warnings against relative data minnows rather than any swift action against the industry-dominating adtech giants, is being felt as yet another inequality at the startup coalface.

“What’s sure is that the CNIL started to send warnings not to Google or Facebook but to startups. That’s what I can see,” he says. “Because maybe it’s easier to see I’m working on GDPR and everything but the fact is the law is not as complicated for Facebook and Google as it is for the small and European companies.”

Powered by WPeMatico

Popular avatar app Boomoji exposed millions of users’ contact lists and location data

Posted by | Android, california, database, General Data Protection Regulation, privacy, Security, social media, Software, spokesperson, web browser | No Comments

Popular animated avatar creator app Boomoji, with more than five million users across the world, exposed the personal data of its entire user base after it failed to put passwords on two of its internet-facing databases.

The China-based app developer left two ElasticSearch databases online without passwords — a U.S.-based database for its international customers, and a Hong Kong-based database containing mostly Chinese users’ data, kept there in an effort to comply with China’s data security laws, which require Chinese citizens’ data to be stored on servers inside the country.

Anyone who knew where to look could access, edit or delete the databases using their web browser. And, because the databases were listed on Shodan, a search engine for exposed devices and databases, they were easily found with a few keywords.
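An unauthenticated ElasticSearch instance answers plain HTTP requests, which is why a web browser was enough to read the data. A minimal sketch of that kind of check is below; the `/_cat/indices` endpoint is ElasticSearch's real API, but the host is a placeholder, and probing systems you lack permission to test is unlawful.

```python
import json
import urllib.request


def parse_cat_indices(payload: str) -> list:
    """Parse the JSON body returned by ElasticSearch's
    /_cat/indices?format=json endpoint into a list of index names."""
    return [row["index"] for row in json.loads(payload)]


def list_open_indices(host: str, port: int = 9200, timeout: float = 5.0) -> list:
    """Return index names if the instance answers without credentials;
    raises urllib.error.URLError if it is unreachable or protected."""
    url = f"http://{host}:{port}/_cat/indices?format=json"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return parse_cat_indices(resp.read().decode())


# A firewalled or password-protected instance refuses the request;
# an exposed one simply lists its indices, e.g.:
sample = '[{"index": "users"}, {"index": "contacts"}]'
print(parse_cat_indices(sample))  # ['users', 'contacts']
```

Putting the instance behind a firewall or enabling authentication is all it takes to make this request fail — which is what made these exposures so avoidable.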

After TechCrunch reached out, Boomoji pulled the two databases offline. “These two accounts were made by us for testing purposes,” said an unnamed Boomoji spokesperson in an email.

But that isn’t true.

The database contained records on all of the company’s iOS and Android users — some 5.3 million users as of this week. Each record contained their username, gender, country and phone type.

Each record also included a user’s unique Boomoji ID, which was linked to other tables in the database. Those other tables included whether (and which) school a user attends — a feature Boomoji touts as a way for users to get in touch with fellow students. That unique ID was also linked to the precise geolocation of more than 375,000 users who had allowed the app to know their location at any given time.

Worse, the database contained every phone book entry of every user who had allowed the app access to their contacts.

One table had more than 125 million contacts, including their names (as written in a user’s phone book) and their phone numbers. Each record was linked to a Boomoji user’s unique ID, making it relatively easy to know whose contact list belonged to whom.

Even if you never used the app, anyone who had your phone number stored on their device and who used the app more than likely uploaded your number to Boomoji’s database. To our knowledge, there’s no way to opt out or have your information deleted.

Given Boomoji’s response, we verified the contents of the database by downloading the app on a dedicated iPhone, registered with a throwaway phone number and loaded with a few dummy but easy-to-search contact list entries. To find friends, the app matches your contacts against those registered with the app in its database. When we were prompted to allow the app access to our contacts list, the entire dummy contact list was uploaded instantly — and was viewable in the database.

So long as the app was installed and had access to the contacts, new phone numbers would be automatically uploaded.
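The friend-finding step described above amounts to a server-side lookup of uploaded phone numbers against registered users. A minimal sketch, with hypothetical data (Boomoji's actual implementation is unknown):

```python
def match_contacts(uploaded_numbers: list, registered_users: dict) -> dict:
    """Return the registered users whose phone numbers appear in the
    uploaded contact list; registered_users maps phone number -> username."""
    return {num: registered_users[num]
            for num in uploaded_numbers if num in registered_users}


registered = {"+15551230001": "alice", "+15551230002": "bob"}
print(match_contacts(["+15551230001", "+15559990000"], registered))
# {'+15551230001': 'alice'}
```

A more privacy-conscious design would upload salted hashes rather than raw numbers — though even hashing offers limited protection for phone numbers, whose small keyspace makes them feasible to brute-force. Boomoji did neither: the numbers sat in plaintext.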

Yet, none of the data was encrypted. All of the data was stored in plaintext.

Although Boomoji is based in China, it claims to follow California state law, where data protection and privacy rules are some of the strongest in the U.S. We asked Boomoji if it has or plans to inform California’s attorney general of the exposure as required by state law, but the company did not answer.

Given the vast amount of European users’ information in the database, the company may also face penalties under the EU’s General Data Protection Regulation, which can impose fines of up to four percent of the company’s global annual revenue for serious breaches.

Given its China-based presence, however, it’s not clear what actionable repercussions the company could face.

This is the latest in a series of exposures involving instances of ElasticSearch, popular open-source search and database software. In recent weeks, several high-profile data exposures have been reported as a result of companies’ failure to practice basic data security — including Urban Massage exposing its own customer database, Mindbody-owned FitMetrix forgetting to put a password on its servers, and communications company Voxox leaking phone numbers and two-factor codes of millions of unsuspecting users.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755–8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.


Google faces GDPR complaint over ‘deceptive’ location tracking

Posted by | Android, Apps, Europe, european union, GDPR, General Data Protection Regulation, Google, google search, Mobile, Norwegian Consumer Council, privacy, smartphones, TC | No Comments

A group of European consumer watchdogs has filed a privacy complaint against Google — arguing the company uses manipulative tactics in order to keep tracking web users’ locations for ad-targeting purposes.

The consumer organizations are making the complaint under the EU’s new data protection framework, GDPR, which regulators can use to levy major fines for compliance breaches — of up to 4 percent of a company’s global annual turnover.

Under GDPR, a consent-based legal basis for processing personal data (e.g. person’s location) must be specific, informed and freely given.

In their complaint, the groups, which include Norway’s Consumer Council, argue that Google does not have a proper legal basis to track users through “Location History” and “Web & App Activity” — settings which are integrated into all Google accounts and which, they assert, are particularly difficult for users of Android-based smartphones to avoid.

The Google mobile OS remains the dominant smartphone platform globally, as well as across Europe.

“Google is processing incredibly detailed and extensive personal data without proper legal grounds, and the data has been acquired through manipulation techniques,” said Gro Mette Moen, acting head of the Norwegian Consumer Council’s digital services unit, in a statement.

“When we carry our phones, Google is recording where we go, down to which floor we are on and how we are moving. This can be combined with other information about us, such as what we search for, and what websites we visit. Such information can in turn be used for things such as targeted advertising meant to affect us when we are receptive or vulnerable.”

Responding to the complaint, a Google spokesperson sent TechCrunch the following statement:

Location History is turned off by default, and you can edit, delete, or pause it at any time. If it’s on, it helps improve services like predicted traffic on your commute. If you pause it, we make clear that — depending on your individual phone and app settings — we might still collect and use location data to improve your Google experience. We enable you to control location data in other ways too, including in a different Google setting called Web & App Activity, and on your device. We’re constantly working to improve our controls, and we’ll be reading this report closely to see if there are things we can take on board.

Earlier this year the Norwegian watchdog produced a damning report calling out dark pattern design tricks being deployed by Google and Facebook meant to manipulate users by nudging them toward “privacy intrusive options.” It also examined Microsoft’s consent flows, but judged the company to be leaning less heavily on such unfair tactics.

Among the underhand techniques that the Google-targeted GDPR complaint, which draws on the earlier report, calls out are allegations of deceptive click-flow, with the groups noting that a “location history” setting can be enabled during Android set-up without a user being aware of it; key settings being both buried in menus (hidden) and enabled by default; users being presented at the decision point with insufficient and misleading information; repeat nudges to enable location tracking even after a user has previously turned it off; and the bundling of “invasive location tracking” with other unrelated Google services, such as photo sorting by location.

GDPR remains in the early implementation phase — just six months since the regulation came into force across Europe. But a large chunk of the first wave of complaints has been focused on consent, according to Europe’s data protection supervisor, who also told us in October that more than 42,000 complaints had been lodged in total since the regulation came into force.

Where Google is concerned, the location complaint is by no means the only GDPR — or GDPR consent-related — complaint it’s facing.

Another complaint, filed back in May also by a consumer-focused organization, took aim at what it dubbed the use of “forced consent” by Google and Facebook — pointing out that the companies were offering users no choice but to have their personal data processed to make use of certain services, yet the GDPR requires consent to be freely given.


Facebook, Google face first GDPR complaints over ‘forced consent’

Posted by | Advertising Tech, Android, data protection, Europe, european union, Facebook, General Data Protection Regulation, Google, instagram, lawsuit, Mark Zuckerberg, Max Schrems, privacy, Social, social network, social networking, terms of service, WhatsApp | No Comments

After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and long time Facebook privacy critic, Max Schrems, has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response. Update: Facebook has now sent the following statement, attributed to its chief privacy officer, Erin Egan: “We have prepared for the past 18 months to ensure we meet the requirements of the GDPR. We have made our policies clearer, our privacy settings easier to find and introduced better tools for people to access, download, and delete their information. Our work to improve people’s privacy doesn’t stop on May 25th. For example, we’re building Clear History: a way for everyone to see the websites and apps that send us information when you use them, clear this information from your account, and turn off our ability to store it associated with your account going forward.”

Schrems most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the power imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record of defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation as a data protection rights champion.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of GDPR, major violations of the law can attract fines as large as 4% of a company’s global revenue, which, in the case of Facebook or Google, implies they could be on the hook for more than a billion euros apiece — if they are deemed to have violated the law, as the complaints argue.

That said, given how freshly fixed in place the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to make amends to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an on stage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community a lot of philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less able to use the kind of strong-arm ‘take it or leave it’ tactics that big tech can unilaterally apply to extract ‘consent’, as a consequence of the reach and power of their platforms — arguing there’s an underlying competition concern that GDPR could also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”
