privacy

Firefox for iOS gets persistent private browsing tabs

Firefox for iOS is getting an update today that brings a new layout for its menu and settings, as well as new organization options for the New Tab feature, to iPhone and iPad users. More importantly, it also introduces persistent Private Browsing tabs, which keep your private browsing tabs alive across sessions.

Typically, when you exit Firefox, your private browsing sessions exit, too. Now, when you relaunch Firefox, you’ll be right back in your private browsing sessions. And while it’s important to remember that private browsing doesn’t render you anonymous, it does automatically erase your cookies, passwords and browsing history. Sometimes you want those to persist across sessions, though, given that it’s annoying to have to re-enter your passwords every time you quit the app, for example, and now Firefox lets you do that until you actively exit private browsing mode.

“Keeping your private browsing preferences seamless is just another way we’re making it simple and easy to give you back control of the privacy of your online experience,” Mozilla explains in today’s announcement.

With this update, users also get different options for organizing the view they see when they open a new tab. You can now choose between having new tabs open to your bookmarks list, Firefox Home (which features your top sites and recommendations from the Mozilla-owned Pocket), a list of your recent history or a custom URL (your own homepage, for example). Or, if you prefer, you can simply opt for a blank page.

As for the new settings and menu layout, Mozilla notes that these now closely mirror the Firefox desktop version. That means you can now access your bookmarks, history, Reading List and downloads from the Library menu item, for example.

Is Europe closing in on an antitrust fix for surveillance technologists?

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still manipulate the outcome it wants from users but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook’s (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being imposed on Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external) sources, in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. Nor, therefore, would it have the same capacity for surveillance.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants are no longer fringe. Regulators are routinely asked whether it’s time, as the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags; called to attend awkward grillings by hard-grafting committees, or viciously taken to task at the highest-profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

#GDPR allows imposing a permanent ban on data processing. This is the nuclear option. Much more severe than any fine you can imagine, in most cases. https://t.co/X772NvU51S

— Lukasz Olejnik (@lukOlejnik) January 28, 2019

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

Many popular iPhone apps secretly record your screen without asking

Many major companies, like Air Canada, Hollister and Expedia, are recording every tap and swipe you make on their iPhone apps. In most cases you won’t even realize it. And they don’t need to ask for permission.

You can assume that most apps are collecting data on you. Some even monetize your data without your knowledge. But TechCrunch has found several popular iPhone apps, from hoteliers, travel sites, airlines, cell phone carriers, banks and financiers, that don’t ask or make it clear — if at all — that they know exactly how you’re using their apps.

Worse, even though these apps are meant to mask certain fields, some inadvertently expose sensitive data.

Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allow developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play the recordings back to see how users interacted with the app, to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers.
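To make the mechanism concrete, below is a minimal, hypothetical Swift sketch of how a session replay SDK might capture frames and black out views the host app has flagged as sensitive. The names (SessionRecorder, maskedViews, captureFrame) are invented for illustration and are not Glassbox’s actual API; the sketch also hints at why masking can fail, since redaction only applies to fields a developer remembered to register.

```swift
import UIKit

// Hypothetical sketch of a session replay recorder; SessionRecorder and
// maskedViews are invented names, not any vendor's real API.
final class SessionRecorder {
    // Views the host app has flagged as sensitive (e.g. card number fields).
    private let maskedViews = NSHashTable<UIView>.weakObjects()

    func mask(_ view: UIView) {
        maskedViews.add(view)
    }

    // Capture one frame of the window, blacking out every registered view.
    func captureFrame(of window: UIWindow) -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: window.bounds)
        return renderer.image { context in
            window.drawHierarchy(in: window.bounds, afterScreenUpdates: false)
            for view in maskedViews.allObjects {
                // Redaction only covers views someone remembered to register;
                // anything missed ships to the analytics backend in the clear.
                let frame = view.convert(view.bounds, to: window)
                context.cgContext.setFillColor(UIColor.black.cgColor)
                context.cgContext.fill(frame)
            }
        }
    }
}
```

In a real SDK, a timer or touch-event hook would call something like captureFrame repeatedly and upload the images, which is why everything visible on screen, masked or not, ends up on an analytics server.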

Or, as Glassbox said in a recent tweet: “Imagine if your website or mobile app could see exactly what your customers do in real time, and why they did it?”

The App Analyst, a mobile expert who writes about his analyses of popular apps on his eponymous blog, recently found Air Canada’s iPhone app wasn’t properly masking the session replays when they were sent, exposing passport numbers and credit card data in each replay session. Just weeks earlier, Air Canada said its app had a data breach, exposing 20,000 profiles.

“This gives Air Canada employees — and anyone else capable of accessing the screenshot database — [the ability] to see unencrypted credit card and password information,” he told TechCrunch.

In the case of Air Canada’s app, although the fields are masked, the masking didn’t always stick (Image: The App Analyst/supplied)

We asked The App Analyst to look at a sample of apps that Glassbox had listed on its website as customers. Using Charles Proxy, a man-in-the-middle tool used to intercept the data sent from the app, the researcher could examine what data was going out of the device.

Not every app was leaking masked data, but none of the apps we examined said they were recording a user’s screen — let alone disclosed that the recordings were sent back to each company or directly to Glassbox’s cloud.

That could be a problem if any one of Glassbox’s customers isn’t properly masking data, he said in an email. “Since this data is often sent back to Glassbox servers I wouldn’t be shocked if they have already had instances of them capturing sensitive banking information and passwords,” he said.

The App Analyst said that while Hollister and Abercrombie & Fitch sent their session replays to Glassbox, others like Expedia and Hotels.com opted to capture and send session replay data back to a server on their own domain. He said that the data was “mostly obfuscated,” but did see in some cases email addresses and postal codes. The researcher said Singapore Airlines also collected session replay data but sent it back to Glassbox’s cloud.

Without analyzing the data for each app, it’s impossible to know if an app is recording your screen to capture how you’re using it. We couldn’t even find any mention of the practice in the small print of their privacy policies.

Apps that are submitted to Apple’s App Store must have a privacy policy, but none of the apps we reviewed make it clear in their policies that they record a user’s screen. Glassbox doesn’t require any special permission from Apple or from the user, so there’s no way a user would know.

Expedia’s policy makes no mention of recording your screen, nor does Hotels.com’s policy. And in Air Canada’s case, we couldn’t spot a single line in its iOS terms and conditions or privacy policy that suggests the iPhone app sends screen data back to the airline. And in Singapore Airlines’ privacy policy, there’s no mention, either.

We asked all of the companies to point us to exactly where in their privacy policies they disclose that their apps capture what a user does on their phone.

Only Abercrombie responded, confirming that Glassbox “helps support a seamless shopping experience, enabling us to identify and address any issues customers might encounter in their digital experience.” The spokesperson pointed to Abercrombie’s privacy policy, which makes no mention of session replays; neither does its sister brand Hollister’s policy.

“I think users should take an active role in how they share their data, and the first step to this is having companies be forthright in sharing how they collect their users’ data and who they share it with,” said The App Analyst.

When asked, Glassbox said it doesn’t require its customers to mention its use in their privacy policies.

“Glassbox has a unique capability to reconstruct the mobile application view in a visual format, which is another view of analytics. Glassbox SDK can interact with our customers’ native app only and technically cannot break the boundary of the app,” the spokesperson said. When the system keyboard covers part of the native app, for example, “Glassbox does not have access to it.”

Glassbox is one of many session replay services on the market. Appsee actively markets its “user recording” technology that lets developers “see your app through your user’s eyes,” while UXCam says it lets developers “watch recordings of your users’ sessions, including all their gestures and triggered events.” Most went under the radar until Mixpanel sparked anger for mistakenly harvesting passwords after masking safeguards failed.

It’s not an industry that’s likely to go away any time soon — companies rely on this kind of session replay data to understand why things break, which can be costly in high-revenue situations.

But the fact that the app developers don’t publicize it just goes to show how creepy even they know it is.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755–8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.

Senator Warner calls on Zuckerberg to support market research consent rules

In response to TechCrunch’s investigation of Facebook paying teens and adults to install a VPN that lets it analyze all their phone’s traffic, Senator Mark Warner (D-VA) has sent a letter to Mark Zuckerberg. It admonishes Facebook for not spelling out exactly which data the Facebook Research app was collecting or giving users adequate information necessary to determine if they should accept payment in exchange for selling their privacy. Following our report, Apple banned Facebook’s Research app from iOS and shut down its internal employee-only workplace apps too as punishment, causing mayhem in Facebook’s office.

Warner wrote to Zuckerberg, “In both the case of Onavo and the Facebook Research project, I have concerns that users were not appropriately informed about the extent of Facebook’s data-gathering and the commercial purposes of this data collection. Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me.”

Warner is working on writing new laws to govern data collection initiatives like Facebook Research. He asks Zuckerberg, “Will you commit to supporting legislation requiring individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users?”

Senator Blumenthal’s fierce statement

Meanwhile, Senator Richard Blumenthal (D-CT) provided TechCrunch with a fiery statement regarding our investigation. He calls Facebook’s behavior anti-competitive (which could fuel calls to regulate or break up the company), says the FTC must address the issue and notes that he’s planning to work with Congress to safeguard teens’ privacy:

“Wiretapping teens is not research, and it should never be permissible. This is yet another astonishing example of Facebook’s complete disregard for data privacy and eagerness to engage in anti-competitive behavior. Instead of learning its lesson when it was caught spying on consumers using the supposedly ‘private’ Onavo VPN app, Facebook rebranded the intrusive app and circumvented Apple’s attempts to protect iPhone users. Facebook continues to demonstrate its eagerness to look over everyone’s shoulder and watch everything they do in order to make money. 

Mark Zuckerberg’s empty promises are not enough. The FTC needs to step up to the plate, and the Onavo app should be part of its investigation. I will also be writing to Apple and Google on Facebook’s egregious behavior, and working in Congress to make sure that teens are protected from Big Tech’s privacy intrusions.”

Senator Markey says stop surveilling teens

And finally, Senator Edward J. Markey (D-MA) requests that Facebook stop recruiting teens for its Research program, and notes he’ll push his “Do Not Track Kids” act in Congress:

“It is inherently manipulative to offer teens money in exchange for their personal information when younger users don’t have a clear understanding how much data they’re handing over and how sensitive it is. I strongly urge Facebook to immediately cease its recruitment of teens for its Research Program and explicitly prohibit minors from participating. Congress also needs to pass legislation that updates children’s online privacy rules for the 21st century. I will be reintroducing my ‘Do Not Track Kids Act’ to update the Children’s Online Privacy Protection Act by instituting key privacy safeguards for teens. 

But my concerns also extend to adult users. I am alarmed by reports that Facebook is not providing participants with complete information about the extent of the information that the company can access through this program. Consumers deserve simple and clear explanations of what data is being collected and how it is being used.”

The senators’ statements do go a bit overboard. Though Facebook Research was aggressively competitive and potentially misleading, Blumenthal calling it “anti-competitive” is a stretch. And Warner’s questioning of whether “any user reasonably understood that they were giving Facebook root device access through the enterprise certificate,” or understood that Facebook uses the data to track competitors, oversteps the bounds. Surely some savvy technologists did, but the real question is whether all the teens and everyone else understood.

Facebook isn’t the only one paying users to analyze all their phone data. TechCrunch found that Google had a similar program called Screenwise Meter. Though it was more upfront about it, Google also appears to have violated Apple’s employee-only Enterprise Certificate rules. We may be seeing the start of an industry-wide crackdown on market research surveillance apps that dangle gift cards in front of users to get them to give up a massive amount of privacy.

Warner’s full letter to Zuckerberg can be found below:

Dear Mr. Zuckerberg: 

I write to express concerns about allegations of Facebook’s latest efforts to monitor user activity. On January 29th, TechCrunch revealed that under the auspices of partnerships with beta testing firms, Facebook had begun paying users aged 13 to 35 to install an enterprise certificate, allowing Facebook to intercept all internet traffic to and from user devices. According to subsequent reporting by TechCrunch, Facebook relied on intermediaries that often “did not disclose Facebook’s involvement until users had begun the signup process.” Moreover, the advertisements used to recruit participants and the “Project Disclosure” make no mention of Facebook or the commercial purposes to which this data was allegedly put.

This arrangement comes in the wake of revelations that Facebook had previously engaged in similar efforts through a virtual private network (VPN) app, Onavo, that it owned and operated. According to a series of articles by the Wall Street Journal, Facebook used Onavo to scout emerging competitors by monitoring user activity – acquiring competitors in order to neutralize them as competitive threats, and in cases when that did not work, monitor usage patterns to inform Facebook’s own efforts to copy the features and innovations driving adoption of competitors’ apps. In 2017, my staff contacted Facebook with questions about how Facebook was promoting Onavo through its Facebook app – in particular, framing the app as a VPN that would “protect” users while omitting any reference to the main purpose of the app: allowing Facebook to gather market data on competitors.

Revelations in 2017 and 2018 prompted Apple to remove Onavo from its App Store in 2018 after concluding that the app violated its terms of service prohibitions on monitoring activity of other apps on a user’s device, as well as a requirement to make clear what user data will be collected and how it will be used. In both the case of Onavo and the Facebook Research project, I have concerns that users were not appropriately informed about the extent of Facebook’s data-gathering and the commercial purposes of this data collection.

Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me. As you recall, I wrote the Federal Trade Commission in 2014 in the wake of revelations that Facebook had undertaken a behavioral experiment on hundreds of thousands of users, without obtaining their informed consent. In submitted questions to your Chief Operating Officer, Sheryl Sandberg, I once again raised these concerns, asking if Facebook provided for “individualized, informed consent” in all research projects with human subjects – and whether users had the ability to opt out of such research. In response, we learned that Facebook does not rely on individualized, informed consent (noting that users consent under the terms of the general Data Policy) and that users have no opportunity to opt out of being enrolled in research studies of their activity. In large part for this reason, I am working on legislation to require individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users. 

Fair, robust competition serves as an impetus for innovation, product differentiation, and wider consumer choice. For these reasons, I request that you respond to the following questions: 

1. Do you think any user reasonably understood that they were giving Facebook root device access through the enterprise certificate? What specific steps did you take to ensure that users were properly informed of this access? 

2. Do you think any user reasonably understood that Facebook was using this data for commercial purposes, including to track competitors?

3. Will you release all participants from the confidentiality agreements Facebook made them sign?

4. As you know, I have begun working on legislation that would require large platforms such as Facebook to provide users, on a continual basis, with an estimate of the overall value of their data to the service provider. In this instance, Facebook seems to have developed valuations for at least some uses of the data that was collected (such as market research). This further emphasizes the need for users to understand fully what data is collected by Facebook, the full range of ways in which it is used, and how much it is worth to the company. Will you commit to supporting this legislation and exploring methods for valuing user data holistically?

5. Will you commit to supporting legislation requiring individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users?

I look forward to receiving your responses within the next two weeks. If you should have any questions or concerns, please contact my office at 202-224-2023.

Apple bans Facebook’s Research app that paid users for data

In the wake of TechCrunch’s investigation yesterday, Apple blocked Facebook’s Research VPN app before the social network could voluntarily shut it down. The Research app asked users for root network access to all data passing through their phone in exchange for $20 per month. Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store.

TechCrunch had reported that Facebook was breaking Apple’s policy that the Enterprise system is only for distributing internal corporate apps to employees, not paid external testers. That was actually before Facebook released a statement last night saying that it had shut down the iOS version of the Research program without mentioning that it was forced by Apple to do so.

TechCrunch’s investigation discovered that Facebook had been quietly operating the Research program on iOS and Android since 2016, recently under the name Project Atlas. It recruited 13 to 35 year olds, 5 percent of whom were teenagers, with ads on Instagram and Snapchat, and paid them a monthly fee plus referral bonuses to install Facebook’s Research app, including the VPN that routes traffic to Facebook, and to ‘Trust’ the company with root network access to their phone. That lets Facebook pull in a user’s web browsing activity, what apps are on their phone and how they use them, and even decrypt their encrypted traffic. Facebook went so far as to ask users to screenshot and submit their Amazon order history. Facebook uses all this data to track competitors, assess trends and plan its product roadmap.

Facebook was forced to remove its similar Onavo Protect app in August last year after Apple changed its policies to prohibit the VPN app’s data collection practices. But Facebook never shut down the Research app, which had the same functionality and was running in parallel. In fact, TechCrunch commissioned security expert Will Strafach to dig into the Facebook Research app, and we found that it featured tons of similar code and references to Onavo Protect. That means Facebook was purposefully disobeying the spirit of Apple’s 2018 privacy policy change while also abusing the Enterprise Certificate program.

Sources tell us that Apple revoking Facebook’s Enterprise Certificate has broken all of the company’s legitimate employee-only apps. Those include pre-launch internal-testing versions of Facebook and Instagram, as well as the employee apps for coordinating office collaboration, commutes, seeing the day’s lunch schedule, and more. That’s causing mayhem at Facebook, disrupting their daily work flow and ability to do product development. We predicted yesterday that Apple could take this drastic step to punish Facebook much harder than just removing its Research app. The disruption will translate into a huge loss of productivity for Facebook’s 33,000 employees.

[Update: Facebook later confirmed to TechCrunch that its internal apps were broken by Apple’s punishment and that it’s in talks with Apple to try to resolve the issue and get their employee tools running again.]

For reference, Facebook’s main iOS app still functions normally. Also, you can’t get paid for installing Onavo Protect on Android, only for the Facebook Research app. And Facebook isn’t the only one violating Apple’s Enterprise Certificate policy, as TechCrunch discovered Google’s Screenwise Meter surveillance app breaks the rules too.

This morning, Apple informed us it had banned Facebook’s Research app yesterday before the social network seemingly pulled it voluntarily. Apple provided us with this strongly worded statement condemning the social network’s behavior:

“We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”

That comes in direct contradiction to Facebook’s initial response to our investigation. Facebook claimed it was in alignment with Apple’s Enterprise Certificate policy and that the program was no different than a focus group.

Seven hours later, a Facebook spokesperson said it was pulling its Research program from iOS without mentioning that Apple forced it to do so, and issued this statement disputing the characterization of our story:

“Key facts about this market research program are being ignored. Despite early reports, there was nothing ‘secret’ about this; it was literally called the Facebook Research App. It wasn’t ‘spying’ as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.”

We dispute those characterizations by Facebook. As we wrote last night, Facebook did not publicly promote the Research VPN itself and used intermediaries that often didn’t disclose Facebook’s involvement until users had begun the signup process. While users were given clear instructions and warnings, the program never stressed, nor fully disclosed, the extent of the data Facebook can collect through the VPN. A small fraction of the users paid may have been teens, but we stand by the newsworthiness of Facebook’s choice not to exclude minors from this data collection initiative.

Senator Mark Warner has since called on Facebook CEO Mark Zuckerberg to support legislation requiring individual informed consent for market research initiatives like Facebook Research. Meanwhile, Senator Richard Blumenthal issued a fierce statement that “Wiretapping teens is not research, and it should never be permissible.”

The situation will surely worsen the relationship between Facebook and Apple after years of mounting animosity between the tech giants. Apple’s Tim Cook has repeatedly criticized Facebook’s data collection practices, and Zuckerberg has countered that Facebook offers its products for free to everyone rather than making products only a few can afford, as Apple does. Flared tensions could see Facebook receive less promotion in the App Store, fewer integrations into iOS, and more jabs from Cook. Meanwhile, the world sees Facebook as having been caught red-handed threatening user privacy and breaking Apple policy.

India’s largest bank SBI leaked account data on millions of customers

India’s largest bank has secured an unprotected server that allowed anyone to access financial information on millions of its customers, like bank balances and recent transactions.

The server, hosted in a regional Mumbai-based data center, stored two months of data from SBI Quick, a text message and call-based system that customers of the government-owned State Bank of India (SBI) use to request basic information about their bank accounts. SBI is the largest bank in the country and a highly ranked company in the Fortune 500.

But the bank had not protected the server with a password, allowing anyone who knew where to look to access millions of customers’ information.

It’s not known how long the server was exposed, but it was open long enough to be discovered by a security researcher, who told TechCrunch of the leak but did not want to be named for the story.

SBI Quick allows SBI’s banking customers to text the bank, or give it a missed call, to retrieve information about their finances and accounts by return text message. It’s ideal for the millions of the banking giant’s customers who don’t use smartphones or have limited data service. By using predefined keywords, like “BAL” for a customer’s current balance, the service recognizes the customer’s registered phone number and sends back the current amount in that customer’s bank account. The system can also be used to send back the last five transactions, block an ATM card and make inquiries about home or car loans.
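To illustrate the keyword-driven flow described above, here is a minimal, hypothetical Swift sketch of how such a service might map an incoming message to a reply. Only the “BAL” keyword comes from the description; the Account type, the lookup table and the “TXN” keyword are invented for the example and are not SBI’s actual implementation.

```swift
import Foundation

// Hypothetical sketch of an SMS-banking keyword dispatcher.
struct Account {
    let balance: Decimal
    let recentTransactions: [String]
}

func respond(toKeyword keyword: String,
             fromNumber number: String,
             accounts: [String: Account]) -> String {
    // The service identifies the customer by their registered phone number.
    guard let account = accounts[number] else {
        return "This number is not registered for SMS banking."
    }
    switch keyword.uppercased() {
    case "BAL":   // current balance (keyword named in the article)
        return "Available balance: INR \(account.balance)"
    case "TXN":   // hypothetical keyword: last five transactions
        return account.recentTransactions.prefix(5).joined(separator: "; ")
    default:
        return "Unrecognised keyword."
    }
}
```

The relevant point for the leak is that the reply text, the registered phone number and partial account details all flow through a back end like this, so an unprotected message store exposes them together.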

It was the back-end text message system that was exposed, TechCrunch can confirm, storing millions of text messages each day.

A redacted example of some of the banking and credit information found in the database (Image: TechCrunch)

The passwordless database allowed us to see all of the text messages going to customers in real time, including their phone numbers, bank balances and recent transactions. The database also contained each customer’s partial bank account number. Some messages noted when a check had been cashed, and many of the bank’s sent messages included a link to download SBI’s YONO app for internet banking.

The bank sent out close to three million text messages on Monday alone.

The database also had daily archives, each containing millions of text messages, going back to December, giving anyone with access a detailed view into millions of customers’ finances.

We verified the data by asking India-based security researcher Karan Saini to send a text message to the system. Within seconds, we found his phone number in the database, including the text message he received back.

“The data available could potentially be used to profile and target individuals that are known to have high account balances,” said Saini in a message to TechCrunch. Saini previously found a data leak in India’s Aadhaar, the country’s national identity database, and a two-factor bypass bug in Uber’s ridesharing app.

Saini said that knowing a phone number “could be used to aid social engineering attacks — which is one of the most common attack vectors in the country with regard to financial fraud.”

SBI claims more than 500 million customers across the globe, with 740 million accounts.

Just days earlier, SBI accused Aadhaar’s authority, UIDAI, of mishandling citizen data that allowed fake Aadhaar identity cards to be created, despite numerous security lapses and misuse of the system. UIDAI denied the report, saying there was “no security breach” of its system. (UIDAI often uses the term “fake news” to describe coverage it doesn’t like.)

TechCrunch reached out to SBI and India’s National Critical Information Infrastructure Protection Centre, which receives vulnerability reports for the banking sector. The database was secured overnight.

Despite several emails, SBI did not comment prior to publication.

Facebook pays teens to install VPN that spies on them

Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms.

Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits, and it has no plans to stop.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

Facebook’s Research app requires users to ‘Trust’ it with extensive access to their data

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.” It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.

The strategy shows how far Facebook is willing to go and how much it’s willing to pay to protect its dominance — even at the risk of breaking the rules of Apple’s iOS platform on which it depends. Apple could seek to block Facebook from continuing to distribute its Research app, or even revoke its permission to offer employee-only apps, and the situation could further chill relations between the tech giants. Apple’s Tim Cook has repeatedly criticized Facebook’s data collection practices. Facebook disobeying iOS policies to slurp up more information could become a new talking point. TechCrunch has spoken to Apple and it’s aware of the issue, but the company did not provide a statement before press time.

Facebook’s Research program is referred to as Project Atlas on sign-up sites that don’t mention Facebook’s involvement

“The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”

Facebook’s surveillance app

Facebook first got into the data-sniffing business when it acquired Onavo for around $120 million in 2014. The VPN app helped users track and minimize their mobile data plan usage, but also gave Facebook deep analytics about what other apps they were using. Internal documents acquired by Charlie Warzel and Ryan Mac of BuzzFeed News reveal that Facebook was able to leverage Onavo to learn that WhatsApp was sending more than twice as many messages per day as Facebook Messenger. Onavo allowed Facebook to spot WhatsApp’s meteoric rise and justify paying $19 billion to buy the chat startup in 2014. WhatsApp has since tripled its user base, demonstrating the power of Onavo’s foresight.

Over the years since, Onavo clued Facebook in to what apps to copy, features to build and flops to avoid. By 2018, Facebook was promoting the Onavo app in a Protect bookmark of the main Facebook app in hopes of scoring more users to snoop on. Facebook also launched the Onavo Bolt app that let you lock apps behind a passcode or fingerprint while it surveils you, but Facebook shut down the app the day it was discovered following privacy criticism. Onavo’s main app remains available on Google Play and has been installed more than 10 million times.

The backlash heated up after security expert Strafach detailed in March how Onavo Protect was reporting to Facebook when a user’s screen was on or off, and its Wi-Fi and cellular data usage in bytes even when the VPN was turned off. In June, Apple updated its developer policies to ban collecting data about usage of other apps or data that’s not necessary for an app to function. Apple proceeded to inform Facebook in August that Onavo Protect violated those data collection policies and that the social network needed to remove it from the App Store, which it did, Deepa Seetharaman of the WSJ reported.

But that didn’t stop Facebook’s data collection.

Project Atlas

TechCrunch recently received a tip that despite Onavo Protect being banished by Apple, Facebook was paying users to sideload a similar VPN app under the Facebook Research moniker from outside of the App Store. We investigated, and learned Facebook was working with three app beta testing services to distribute the Facebook Research app: BetaBound, uTest and Applause. Facebook began distributing the Research VPN app in 2016. It has been referred to as Project Atlas since at least mid-2018, around when backlash to Onavo Protect magnified and Apple instituted its new rules that prohibited Onavo. [Update: Previously, a similar program was called Project Kodiak.] Facebook didn’t want to stop collecting data on people’s phone usage and so the Research program continued, in disregard for Apple banning Onavo Protect.

Facebook’s Research App on iOS

Ads (shown below) for the program run by uTest on Instagram and Snapchat sought teens 13-17 years old for a “paid social media research study.” The sign-up page for the Facebook Research program administered by Applause doesn’t mention Facebook, but seeks users “Age: 13-35 (parental consent required for ages 13-17).” If minors try to sign up, they’re asked to get their parents’ permission with a form that reveals Facebook’s involvement and says “There are no known risks associated with the project, however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of apps. You will be compensated by Applause for your child’s participation.” For kids short on cash, the payments could coerce them to sell their privacy to Facebook.

The Applause site explains what data could be collected by the Facebook Research app (emphasis mine):

“By installing the software, you’re giving our client permission to collect data from your phone that will help them understand how you browse the internet, and how you use the features in the apps you’ve installed . . . This means you’re letting our client collect information such as which apps are on your phone, how and when you use them, data about your activities and content within those apps, as well as how other people interact with you or your content within those apps. You are also letting our client collect information about your internet browsing activity (including the websites you visit and data that is exchanged between your device and those websites) and your use of other online services. There are some instances when our client will collect this information even where the app uses encryption, or from within secure browser sessions.”

Meanwhile, the BetaBound sign-up page with a URL ending in “Atlas” explains that “For $20 per month (via e-gift cards), you will install an app on your phone and let it run in the background.” It also offers $20 per friend you refer. That site also doesn’t initially mention Facebook, but the instruction manual for installing Facebook Research reveals the company’s involvement.

Facebook’s intermediary uTest ran ads on Snapchat and Instagram, luring teens to the Research program with the promise of money

 

Facebook seems to have purposefully avoided TestFlight, Apple’s official beta testing system, which requires apps to be reviewed by Apple and is limited to 10,000 participants. Instead, the instruction manual reveals that users download the app from r.facebook-program.com and are told to install an Enterprise Developer Certificate and VPN and “Trust” Facebook with root access to the data their phone transmits. Apple requires that developers agree to only use this certificate system for distributing internal corporate apps to their own employees. Randomly recruiting testers and paying them a monthly fee appears to violate the spirit of that rule.

Security expert Will Strafach found Facebook’s Research app contains lots of code from Onavo Protect, the Facebook-owned app Apple banned last year

Once installed, users just had to keep the VPN running and sending data to Facebook to get paid. The Applause-administered program requested that users screenshot their Amazon orders page. This data could potentially help Facebook tie browsing habits and usage of other apps with purchase preferences and behavior. That information could be harnessed to pinpoint ad targeting and understand which types of users buy what.

TechCrunch commissioned Strafach to analyze the Facebook Research app and find out where it was sending data. He confirmed that data is routed to “vpn-sjc1.v.facebook-program.com”, which is associated with Onavo’s IP address, and that the facebook-program.com domain is registered to Facebook, according to MarkMonitor. The app can update itself without interacting with the App Store, and is linked to the email address PeopleJourney@fb.com. He also discovered that the Enterprise Certificate indicates Facebook renewed it on June 27th, 2018 — weeks after Apple announced its new rules that prohibited the similar Onavo Protect app.

“It is tricky to know what data Facebook is actually saving (without access to their servers). The only information that is knowable here is what access Facebook is capable of based on the code in the app. And it paints a very worrisome picture,” Strafach explains. “They might respond and claim to only actually retain/save very specific limited data, and that could be true, it really boils down to how much you trust Facebook’s word on it. The most charitable narrative of this situation would be that Facebook did not think too hard about the level of access they were granting to themselves . . . which is a startling level of carelessness in itself if that is the case.”

“Flagrant defiance of Apple’s rules”

In response to TechCrunch’s inquiry, a Facebook spokesperson confirmed it’s running the program to learn how people use their phones and other services. The spokesperson told us “Like many companies, we invite people to participate in research that helps us identify things we can be doing better. Since this research is aimed at helping Facebook understand how people use their mobile devices, we’ve provided extensive information about the type of data we collect and how they can participate. We don’t share this information with others and people can stop participating at any time.”

Facebook’s Research app requires Root Certificate access, which lets Facebook gather almost any piece of data transmitted by your phone

Facebook’s spokesperson claimed that the Facebook Research app was in line with Apple’s Enterprise Certificate program, but didn’t explain how in the face of evidence to the contrary. They said Facebook first launched its Research app program in 2016. They tried to liken the program to a focus group and said Nielsen and comScore run similar programs, yet neither of those ask people to install a VPN or provide root access to the network. The spokesperson confirmed the Facebook Research program does recruit teens but also other age groups from around the world. They claimed that Onavo and Facebook Research are separate programs, but admitted the same team supports both as an explanation for why their code was so similar.

Facebook’s Research program requested users screenshot their Amazon order history to provide it with purchase data

However, Facebook’s claim that it doesn’t violate Apple’s Enterprise Certificate policy is directly contradicted by the terms of that policy. Those include that developers “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing”. The policy also states that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers” unless under direct supervision of employees or on company premises. Given Facebook’s customers are using the Enterprise Certificate-powered app without supervision, it appears Facebook is in violation.

Facebook disobeying Apple so directly could hurt their relationship. “The code in this iOS app strongly indicates that it is simply a poorly re-branded build of the banned Onavo app, now using an Enterprise Certificate owned by Facebook in direct violation of Apple’s rules, allowing Facebook to distribute this app without Apple review to as many users as they want,” Strafach tells us. ONV prefixes and mentions of graph.onavo.com, “onavoApp://” and “onavoProtect://” custom URL schemes litter the app. “This is an egregious violation on many fronts, and I hope that Apple will act expeditiously in revoking the signing certificate to render the app inoperable.”

Facebook is particularly interested in what teens do on their phones, as the demographic has increasingly abandoned the social network in favor of Snapchat, YouTube and Facebook’s own acquisition, Instagram. Insights into the popularity among teens of Chinese music video app TikTok and of meme sharing led Facebook to launch a clone called Lasso and begin developing a meme-browsing feature called LOL, TechCrunch first reported. But Facebook’s desire for data about teens riles critics at a time when the company has been battered in the press. Analysts on tomorrow’s Facebook earnings call should inquire about what other ways the company has to collect competitive intelligence.

Last year when Tim Cook was asked what he’d do in Mark Zuckerberg’s position in the wake of the Cambridge Analytica scandal, he said “I wouldn’t be in this situation . . . The truth is we could make a ton of money if we monetized our customer, if our customer was our product. We’ve elected not to do that.” Zuckerberg told Ezra Klein that he felt Cook’s comment was “extremely glib.”

Now it’s clear that even after Apple’s warnings and the removal of Onavo Protect, Facebook is still aggressively collecting data on its competitors via Apple’s iOS platform. “I have never seen such open and flagrant defiance of Apple’s rules by an App Store developer,” Strafach concluded. If Apple shuts the Research program down, Facebook will either have to invent new ways to surveil our behavior amidst a climate of privacy scrutiny, or be left in the dark.

Additional reporting by Zack Whittaker.


Apple Pay is coming to Target, Taco Bell, Speedway and two other US chains

Posted by | 7-eleven, Apple, apple inc, Apple Pay, costco, cvs, Germany, jack in the box, Mobile, mobile payments, payments, privacy, Security, taco bell, Target | No Comments

A little more retail momentum for Apple Pay: Apple has announced another clutch of U.S. retailers will soon support its eponymous mobile payment tech — most notably discount retailer Target.

Apple Pay is rolling out to Target stores now, according to Apple, which says it will be available in all 1,850 of its U.S. retail locations “in the coming weeks.”

Also signing up to Apple Pay are fast food chains Taco Bell and Jack in the Box; Speedway convenience stores; and Hy-Vee supermarkets in the Midwest.

“With the addition of these national retailers, 74 of the top 100 merchants in the US and 65 per cent of all retail locations across the country will support Apple Pay,” notes Apple in a press release.

Speedway customers can use Apple Pay at all of its approximately 3,000 locations across the Midwest, East Coast and Southeast from today, according to Apple, as well as at Hy-Vee’s more than 245 stores in the Midwest.

It says the payment tech is also rolling out to more than 7,000 Taco Bell and 2,200 Jack in the Box locations “in the next few months.”

Back in the summer Apple announced it had signed up longtime holdout CVS, with the pharmacy introducing Apple Pay across its ~8,400 standalone locations last year.

Also signing up then: 7-Eleven, which Apple says rolled out support for Apple Pay to 95 percent of its U.S. convenience stores in 2018.

Last year retail giant Costco also completed the rollout of Apple Pay to its more than 500 U.S. warehouses.

Meanwhile, in December, Apple Pay finally launched in Germany, where Apple said it would be accepted at a range of “supermarkets, boutiques, restaurants and hotels and many other places” at launch, though “cash only” remains a common demand from the country’s small businesses.

Update: In a blog post about the Apple Pay launch, Target confirmed that holders of its Target REDcard credit or debit cards cannot use those store payment cards with Apple Pay.

The retail giant also said it will soon support contactless mobile payment technologies on the Android smartphone platform, naming Google Pay and Samsung Pay specifically, as well as supporting contactless payment cards from Mastercard, Visa, American Express and Discover.

“Offering guests more ways to conveniently and quickly pay is just another way we’re making it easier than ever to shop Target,” said Target’s chief information officer, Mike McNamara, in a statement.


Google starts pulling unvetted Android apps that access call logs and SMS messages

Posted by | Android, Apps, computing, Google, Google Play, google search, Mobile, privacy, product management, Security, smartphones, SMS | No Comments

Google is removing apps from Google Play that request permission to access call logs and SMS text message data but haven’t been manually vetted by Google staff.

The search and mobile giant said it is part of a move to cut down on apps that have access to sensitive calling and texting data.

Google said in October that Android apps will no longer be allowed to use the legacy permissions as part of a wider push for developers to use newer, more secure and privacy-minded APIs. Many apps request access to call logs and texting data to verify two-factor authentication codes, for social sharing, or to replace the phone dialer. But Google acknowledged that this level of access can be, and has been, abused by developers who misuse the permissions to gather sensitive data — or mishandle it altogether.

“Our new policy is designed to ensure that apps asking for these permissions need full and ongoing access to the sensitive data in order to accomplish the app’s primary use case, and that users will understand why this data would be required for the app to function,” wrote Paul Bankhead, Google’s director of product management for Google Play.

Any developer wanting to retain the ability to ask a user’s permission for calling and texting data has to fill out a permissions declaration.

Google will review the app and why it needs to retain access, and will weigh several considerations, including why the developer is requesting access, the user benefit of the feature that’s requesting access and the risks associated with having access to call and texting data.

Bankhead conceded that under the new policy, some use cases will “no longer be allowed,” rendering some apps obsolete.

So far, tens of thousands of developers have either submitted new versions of their apps that remove the need for call and texting permissions, Google said, or have submitted a permissions declaration.

Developers with a submitted declaration have until March 9 to receive approval or remove the permissions. In the meantime, Google has a full list of permitted use cases for the call log and text message permissions, as well as alternatives.
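
For developers unsure whether their app is affected before that deadline, a quick audit of the manifest is a reasonable first step. Below is a minimal sketch, assuming a standard AndroidManifest.xml with <uses-permission> entries; the permission list covers the commonly cited SMS and Call Log permissions and may not match Google’s full list exactly.

```python
# A minimal sketch (not Google's tooling): scan an AndroidManifest.xml for the
# call log and SMS permissions that now require a Play permissions declaration.
import sys
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Commonly cited SMS and Call Log permissions; assumed list, may not be exhaustive.
RESTRICTED = {
    "android.permission.READ_SMS",
    "android.permission.SEND_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.RECEIVE_MMS",
    "android.permission.READ_CALL_LOG",
    "android.permission.WRITE_CALL_LOG",
    "android.permission.PROCESS_OUTGOING_CALLS",
}

def restricted_permissions(manifest_path: str) -> list[str]:
    """Return the restricted permissions a manifest requests, sorted."""
    root = ET.parse(manifest_path).getroot()
    requested = {
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    }
    return sorted(requested & RESTRICTED)

if __name__ == "__main__":
    # e.g. python audit_manifest.py app/src/main/AndroidManifest.xml
    hits = restricted_permissions(sys.argv[1])
    if hits:
        print("Likely needs a permissions declaration (or permission removal):")
        for perm in hits:
            print("  -", perm)
    else:
        print("No restricted call log / SMS permissions requested.")
```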

The last two years alone have seen several high-profile cases of Android apps or other services leaking or exposing call and text data. In late 2017, popular Android keyboard ai.type exposed a massive database covering 31 million users, including 374 million phone numbers.


Wrest control from a snooping smart speaker with this teachable ‘parasite’

Posted by | Advertising Tech, Alexa, artificial intelligence, connected devices, Europe, Gadgets, GitHub, Google, google home, hardware, Home Automation, Internet of Things, IoT, neural network, privacy, Security, smart assistant, smart speaker, Speaker | No Comments

What do you get when you put one internet-connected device on top of another? A little more control than you would otherwise have, in the case of Alias, the “teachable ‘parasite’” — an IoT project and smart speaker topper made by two designers, Bjørn Karmann and Tore Knudsen.

The Raspberry Pi-powered, fungus-inspired blob’s mission is to whisper sweet nonsense into Amazon Alexa’s (or Google Home’s) always-on ear so it can’t accidentally snoop on your home.

Project Alias from Bjørn Karmann on Vimeo.

Alias will only stop feeding noise into its host’s speakers when it hears its own wake command — which can be whatever you like.

The middleman IoT device has its own local neural network, allowing its owner to christen it with a name (or sound) of their choosing via a training interface in a companion app.

The open-source TensorFlow library was used for building the name training component.

So instead of having to say “Alexa” or “Ok Google” to talk to a commercial smart speaker — and thus being stuck parroting a big tech brand name in your own home, not to mention being saddled with a device that’s always vulnerable to vocal pranks (and worse: accidental wiretapping) — you get to control what the wake word is, thereby taking back a modicum of control over a natively privacy-hostile technology.

This means you could rename Alexa “Bezosallseeingeye,” or refer to your Google Home as “Carelesswhispers.” Whatever floats your boat.

Once Alias hears its custom wake command it will stop feeding noise into the host speaker — enabling the underlying smart assistant to hear and respond to commands as normal.
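
Conceptually, that behavior is a simple loop, sketched below in Python. Every function name and threshold here is a placeholder chosen for illustration; the project’s real implementation runs on a Raspberry Pi with a locally trained TensorFlow model and its own audio pipeline, and the actual source is linked further down.

```python
# Conceptual sketch of Alias's masking loop -- placeholder functions only,
# not the project's actual code (see the GitHub repo referenced below).
import time

WAKE_THRESHOLD = 0.8   # assumed confidence cutoff for the custom wake word
LISTEN_WINDOW_S = 8    # assumed pause so the assistant can hear the command

def read_mic_chunk() -> bytes:
    """Placeholder: capture a short buffer from Alias's own microphones."""
    return b""

def custom_wake_score(chunk: bytes) -> float:
    """Placeholder: run the locally trained keyword model on the buffer."""
    return 0.0

def play_masking_noise() -> None:
    """Placeholder: whisper noise into the host speaker's microphones."""

def wake_host_assistant() -> None:
    """Placeholder: activate the underlying assistant (e.g. by playing its real wake word)."""

def run() -> None:
    while True:
        if custom_wake_score(read_mic_chunk()) >= WAKE_THRESHOLD:
            # Custom name heard: stop masking so the assistant can hear
            # and respond to the user's command as normal.
            wake_host_assistant()
            time.sleep(LISTEN_WINDOW_S)
        else:
            play_masking_noise()
```

The design point the sketch tries to capture is that the wake word detection runs locally, so the host assistant hears nothing but noise until the custom name is spoken.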

“We looked at how cordyceps fungus and viruses can appropriate and control insects to fulfill their own agendas and were inspired to create our own parasite for smart home systems,” explain Karmann and Knudsen in a write-up of the project here. “Therefore we started Project Alias to demonstrate how maker-culture can be used to redefine our relationship with smart home technologies, by delegating more power from the designers to the end users of the products.”

Alias offers a glimpse of a richly creative custom future for IoT, as the means of producing custom but still powerful connected technology products becomes more affordable and accessible.

And so also perhaps a partial answer to IoT’s privacy problem, for those who don’t want to abstain entirely. (Albeit, on the security front, more custom and controllable IoT does increase the hackable surface area — so that’s another element to bear in mind; more custom controls for greater privacy does not necessarily mesh with robust device security.)

If you’re hankering after your own Alexa-disrupting blob-topper, the pair have uploaded a build guide to Instructables and put the source code on GitHub. So fill yer boots.

Project Alias is of course not a solution to the underlying tracking problem of smart assistants — which harvest insights gleaned from voice commands to further flesh out interest profiles of users, including for ad targeting purposes.

That would require either proper privacy regulation or, er, a new kind of software virus that infiltrates the host system and prevents it from accessing user data. And — unlike this creative physical IoT add-on — that kind of tech would not be at all legal.
