
Dating and fertility apps among those snitching to ‘out of control’ ad tech, report finds


The latest report to warn that surveillance capitalism is out of control — and “free” digital services can in fact be very costly to people’s privacy and rights — comes courtesy of the Norwegian Consumer Council, which has published an analysis of how popular apps are sharing user data with the behavioral ad industry.

It suggests smartphone users have little hope of escaping ad tech’s pervasive profiling machinery — short of not using a smartphone at all.

A majority of the apps that were tested for the report were found to transmit data to “unexpected third parties” — with users not being clearly informed about who was getting their information and what they were doing with it. Most of the apps also did not provide any meaningful options or on-board settings for users to prevent or reduce the sharing of data with third parties.

“The evidence keeps mounting against the commercial surveillance systems at the heart of online advertising,” the Council writes, dubbing the current situation “completely out of control, harming consumers, societies, and businesses,” and calling for curbs to prevalent practices in which app users’ personal data is broadcast and spread “with few restraints.” 

“The multitude of violations of fundamental rights are happening at a rate of billions of times per second, all in the name of profiling and targeting advertising. It is time for a serious debate about whether the surveillance-driven advertising systems that have taken over the internet, and which are economic drivers of misinformation online, is a fair trade-off for the possibility of showing slightly more relevant ads.

“The comprehensive digital surveillance happening across the ad tech industry may lead to harm to both individuals, to trust in the digital economy, and to democratic institutions,” it also warns.

In the report, app users’ data is documented being shared with tech giants such as Facebook, Google and Twitter — which operate their own mobile ad platforms and/or other key infrastructure related to the collection and sharing of smartphone users’ data for ad targeting purposes — but also with scores of other faceless entities that the average consumer is unlikely to have heard of.

The Council commissioned a data flow analysis of 10 popular apps running on Google’s Android smartphone platform — generating a snapshot of the privacy black hole that mobile users inexorably tumble into when they try to go about their digital business, despite the existence (in Europe) of a legal framework that’s supposed to protect people by giving citizens a swathe of rights over their personal data.

Among the findings are a makeup filter app sharing the precise GPS coordinates of its users; ovulation, period and mood-tracking apps sharing users’ intimate personal data with Facebook and Google (among others); dating apps exchanging user data with each other, and also sharing with third parties sensitive user info such as individuals’ sexual preferences (and real-time, device-specific tells such as sensor data from the gyroscope); and a games app for young children that was found to contain 25 embedded SDKs and that shared the Android Advertising ID of a test device with eight third parties.

The 10 apps whose data flows were analyzed for the report are the dating apps Grindr, Happn, OkCupid, and Tinder; fertility/period tracker apps Clue and MyDays; makeup app Perfect365; religious app Muslim: Qibla Finder; children’s app My Talking Tom 2; and the keyboard app Wave Keyboard.

“Altogether, Mnemonic [the company which the Council commissioned to conduct the technical analysis] observed data transmissions from the apps to 216 different domains belonging to a large number of companies. Based on their analysis of the apps and data transmissions, they have identified at least 135 companies related to advertising. One app, Perfect365, was observed communicating with at least 72 different such companies,” the report notes.

“Because of the scope of tests, size of the third parties that were observed receiving data, and popularity of the apps, we regard the findings from these tests to be representative of widespread practices in the adtech industry,” it adds.
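To give a sense of how such figures are produced, the aggregation step behind them can be sketched in a few lines. This is purely illustrative: the captured transmissions below are invented placeholders, not Mnemonic's actual data or tooling.

```python
# Illustrative toy only: counting unique third-party domains observed in
# captured app traffic. The (app, domain) pairs are invented placeholders.
from collections import defaultdict

observed = [
    ("Perfect365", "graph.facebook.com"),
    ("Perfect365", "ads.example-exchange.com"),
    ("Perfect365", "sdk.example-analytics.io"),
    ("Grindr", "graph.facebook.com"),
    ("Grindr", "ads.example-exchange.com"),
]

domains_per_app = defaultdict(set)
for app, domain in observed:
    domains_per_app[app].add(domain)

all_domains = set().union(*domains_per_app.values())

print(len(all_domains))                    # total unique domains observed
print(len(domains_per_app["Perfect365"]))  # domains contacted by one app
```

A real analysis would then map each domain to its owning company and classify it as ad-related or not, which is where the report's count of at least 135 advertising companies comes from.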

Aside from the usual suspect (ad)tech giants, less well-known entities seen receiving user data include location data brokers Fysical, Fluxloop, Placer, Places/Foursquare, Safegraph and Unacast; behavioral ad targeting players like Receptiv/Verve, Neura, Braze and LeanPlum; mobile app marketing analytics firms like AppsFlyer; and ad platforms and exchanges like AdColony, AT&T’s AppNexus, Bucksense, OpenX, PubNative, Smaato and Vungle.

In the report, the Forbrukerrådet concludes that the pervasive tracking which underpins the behavioral ad industry is all but impossible for smartphone users to escape — even if they are able to locate an on-device setting to opt out of behavioral ads.

This is because multiple identifiers are being attached to them and their devices, and also because of frequent sharing/syncing of identifiers by ad tech players across the industry. (It also points out that on the Android platform, the setting that lets users opt out of behavioral ads does not actually obscure the identifier — meaning users have to take it on trust that ad tech entities won’t just ignore their request and track them anyway.)
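The mechanism described here can be made concrete with a small sketch. This is an illustrative toy, not any vendor's real code: on Android the opt-out travels as a boolean flag alongside a still-readable identifier, so whether tracking actually stops depends entirely on the party receiving the data.

```python
# Illustrative toy, not any SDK's real code. On Android the opt-out is a
# boolean flag delivered alongside a still-readable advertising ID, so
# honoring it is left entirely to the recipient of the data.

device = {
    "advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",  # made-up ID
    "limit_ad_tracking": True,  # the user has opted out
}

class Tracker:
    def __init__(self, name):
        self.name = name
        self.profiles = {}  # advertising_id -> accumulated events

    def ingest(self, device, event, honor_opt_out):
        if honor_opt_out and device["limit_ad_tracking"]:
            return  # a well-behaved tracker drops the event
        self.profiles.setdefault(device["advertising_id"], []).append(event)

compliant = Tracker("compliant")
rogue = Tracker("rogue")
event = {"app": "makeup-filter", "gps": (59.91, 10.75)}
compliant.ingest(device, event, honor_opt_out=True)
rogue.ingest(device, event, honor_opt_out=False)

print(len(compliant.profiles))  # 0: the opt-out was honored
print(len(rogue.profiles))      # 1: the ID was still there to be taken
```

Because the identifier itself remains readable after opting out, nothing technical prevents the second path; only the recipient's policy does.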

The Council argues its findings suggest widespread breaches of Europe’s General Data Protection Regulation (GDPR), given that key principles of that pan-EU framework — such as data protection by design and default — are in stark conflict with the systematic, pervasive background profiling of app users it found (apps were, for instance, found sharing personal data by default, requiring users to actively seek out an obscure device setting to try to prevent being profiled).

“The extent of tracking and complexity of the ad tech industry is incomprehensible to consumers, meaning that individuals cannot make informed choices about how their personal data is collected, shared and used. Consequently, the massive commercial surveillance going on throughout the ad tech industry is systematically at odds with our fundamental rights and freedoms,” it also argues.

Where (user) consent is being relied upon as a legal basis to process personal data, GDPR requires that it be informed, freely given and specific.

But the Council’s analysis of the apps found them sorely lacking on that front.

“In the cases described in this report, none of the apps or third parties appear to fulfil the legal conditions for collecting valid consent,” it writes. “Data subjects are not informed of how their personal data is shared and used in a clear and understandable way, and there are no granular choices regarding use of data that is not necessary for the functionality of the consumer-facing services.”

It also dismisses another possible legal basis — known as legitimate interests — arguing app users “cannot have a reasonable expectation for the amount of data sharing and the variety of purposes their personal data is used for in these cases.”

The report points out that other forms of digital advertising (such as contextual advertising) which do not rely on third parties processing personal data are available — arguing that further undermines any ad tech industry claims of “legitimate interests” as a valid basis for helping themselves to smartphone users’ data.

“The large amount of personal data being sent to a variety of third parties, who all have their own purposes and policies for data processing, constitutes a widespread violation of data subjects’ privacy,” the Council argues. “Even if advertising is necessary to provide services free of charge, these violations of privacy are not strictly necessary in order to provide digital ads. Consequently, it seems unlikely that the legitimate interests that these companies may claim to have can be demonstrated to override the fundamental rights and freedoms of the data subject.”

The suggestion, therefore, is that “a large number of third parties that collect consumer data for purposes such as behavioural profiling, targeted advertising and real-time bidding, are in breach of the General Data Protection Regulation.”

The report also discusses the harms attached to such widespread violation of privacy — pointing out risks such as discrimination and manipulation of vulnerable individuals, as well as chilling effects on speech, added fuel for ad fraud and the torching of trust in the digital economy, among other society-afflicting ills being fueled by ad tech’s obsession with profiling everyone…

Some of the harm of this data exploitation stems from significant knowledge and power asymmetries that render consumers powerless. The overarching lack of transparency of the system makes consumers vulnerable to manipulation, particularly when unknown companies know almost everything about the individual consumer. However, even if regular consumers had comprehensive knowledge of the technologies and systems driving the adtech industry, there would still be very limited ways to stop or control the data exploitation.

Since the number and complexity of actors involved in digital marketing is staggering, consumers have no meaningful ways to resist or otherwise protect themselves from the effects of profiling. These effects include different forms of discrimination and exclusion, data being used for new and unknowable purposes, widespread fraud, and the chilling effects of massive commercial surveillance systems. In the long run, these issues are also contributing to the erosion of trust in the digital industry, which may have serious consequences for the digital economy.

To shift what it dubs the “significant power imbalance between consumers and third party companies,” the Council calls for an end to the current practices of “extensive tracking and profiling” — either by companies changing their practices to “respect consumers’ rights,” or — where they won’t — urging national regulators and enforcement authorities to “take active enforcement measures, to establish legal precedent to protect consumers against the illegal exploitation of personal data.”

It’s fair to say that enforcement of GDPR remains a work in progress at this stage, some 20 months after the regulation came into force back in May 2018, with scores of cross-border complaints yet to culminate in a decision (though there have been a couple of interesting ad tech and consent-related enforcement actions in France).

We reached out to Ireland’s Data Protection Commission (DPC) and the U.K.’s Information Commissioner’s Office (ICO) for comment on the Council’s report. The Irish regulator has multiple investigations ongoing into various aspects of ad tech and tech giants’ handling of online privacy, including a probe related to security concerns attached to Google’s ad exchange and the real-time bidding process which features in some programmatic advertising. It has previously suggested the first decisions from its hefty backlog of GDPR complaints will be coming early this year. But at the time of writing the DPC had not responded to our request for comment on the report.

A spokeswoman for the ICO — which last year put out its own warnings to the behavioral advertising industry, urging it to change its practices — sent us this statement, attributed to Simon McDougall, its executive director for technology and innovation, in which he says the regulator has been prioritizing engaging with the ad tech industry over its use of personal data and has called for change itself — but which does not once mention the word “enforcement”…

Over the past year we have prioritised engagement with the adtech industry on the use of personal data in programmatic advertising and real-time bidding.

Along the way we have seen increased debate and discussion, including reports like these, which factor into our approach where appropriate. We have also seen a general acknowledgment that things can’t continue as they have been.

Our 2019 update report into adtech highlights our concerns, and our revised guidance on the use of cookies gives greater clarity over what good looks like in this area.

Whilst industry has welcomed our report and recognises change is needed, there remains much more to be done to address the issues. Our engagement has substantiated many of the concerns we raised and, at the same time, we have also made some real progress.

Throughout the last year we have been clear that if change does not happen we would consider taking action. We will be saying more about our next steps soon – but as is the case with all of our powers, any future action will be proportionate and risk-based.

Powered by WPeMatico

At CES, companies slowly start to realize that privacy matters


Every year, Consumer Electronics Show attendees receive a branded backpack, but this year’s edition was special: made out of transparent plastic, the bag’s contents were visible without the wearer needing to unzip. It isn’t just a fashion decision. Over the years, security has become more intense and cumbersome, but attendees with transparent backpacks didn’t have to open their bags when entering.

That cheap backpack is a metaphor for an ongoing debate — how many of us are willing to exchange privacy for convenience?

Privacy was on everyone’s mind at this year’s CES in Las Vegas, from CEOs to policymakers, PR agencies and people in charge of programming the panels. For the first time in decades, Apple had a formal presence at the event; Senior Director of Global Privacy Jane Horvath spoke on a panel focused on privacy with other privacy leaders.


DuckDuckGo still critical of Google’s EU Android choice screen auction, after winning a universal slot


Google has announced which search engines have won an auction process it has devised for an Android “choice screen” — its response to an antitrust intervention by Europe’s competition regulator.

The prompt is shown to users of Android smartphones in the European Union as they set up a device, asking them to choose a search engine from a list of four that always includes Google’s own search engine.

In mid-2018 the European Commission fined Google $5 billion for antitrust violations attached to how it operates the Android platform, including how it bundles its own services with the dominant smartphone OS, and ordered it to remedy the infringements — while leaving it up to the tech giant to devise a fix.

Google responded by creating a choice screen for Android users to pick a search engine from a short list — with the initial choices seemingly based on local market share. But last summer it announced it would move to auctioning slots on the screen via a fixed sealed-bid auction process.
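The slot-allocation logic of such an auction can be sketched as follows. This is a hedged illustration: the bid figures and markets are invented, and Google's exact pricing rule isn't described here, so the toy simply awards the three paid slots per market to the highest sealed bids, with Google's own engine always present.

```python
# Illustrative sketch of per-market sealed-bid slot allocation. All bid
# amounts and markets are invented; Google's actual pricing rule may differ.
def choice_screen(bids_by_market, slots=3):
    """bids_by_market maps market -> {search_engine: sealed bid}.
    Returns market -> list of engines shown (Google always included)."""
    screens = {}
    for market, bids in bids_by_market.items():
        winners = sorted(bids, key=bids.get, reverse=True)[:slots]
        screens[market] = ["Google"] + winners
    return screens

bids = {  # hypothetical per-selection bids
    "DE": {"DuckDuckGo": 0.40, "Info.com": 0.35, "GMX": 0.30, "Bing": 0.10},
    "GB": {"DuckDuckGo": 0.45, "Info.com": 0.36, "Bing": 0.34, "Qwant": 0.20},
}
screens = choice_screen(bids)
print(screens["DE"])  # ['Google', 'DuckDuckGo', 'Info.com', 'GMX']
print(screens["GB"])  # ['Google', 'DuckDuckGo', 'Info.com', 'Bing']
```

Under this scheme a "universal slot" like DuckDuckGo's means placing a winning bid in every one of the 31 markets separately.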

The big winners of the initial auction, for the period March 1, 2020 to June 30, 2020, are pro-privacy search engine DuckDuckGo — which gets one of three paid-for slots in all 31 European markets — and a product called Info.com, which will also be shown as an option in all those markets. (Per Wikipedia, the latter is a veteran metasearch engine that provides results from multiple search engines and directories, including Google.)

French pro-privacy search engine Qwant will be shown as an option to Android users in eight European markets, while Russia’s Yandex will appear as an option in five markets in the east of the region.

Other search engines that will appear as choices in a minority of European markets are GMX, Seznam, Givero and PrivacyWall.

At a glance the big loser looks to be Microsoft’s Bing search engine — which will only appear as an option on the choice screen shown in the U.K.

Tree-planting search engine Ecosia does not appear anywhere on the list at all, despite appearing on some initial Android choice screens — having taken the decision to boycott the auction because it objects to Google’s “pay-to-play” approach.

“We believe this auction is at odds with the spirit of the July 2018 EU Commission ruling,” Ecosia CEO Christian Kroll told the BBC. “Internet users deserve a free choice over which search engine they use and the response of Google with this auction is an affront to our right to a free, open and federated internet. Why is Google able to pick and choose who gets default status on Android?”

It’s not the only search engine critical of Google’s move, with Qwant and DuckDuckGo both raising concerns immediately after Google announced it would shift to a paid auction last year.

Despite participating in the process — and winning a universal slot — DuckDuckGo told us it still does not agree with the pay-to-play approach.

“We believe a search preference menu is an excellent way to meaningfully increase consumer choice if designed properly. Our own research has reinforced this point and we look forward to the day when Android users in Europe will have the opportunity to easily make DuckDuckGo their default search engine while setting up their phones. However, we still believe a pay-to-play auction with only 4 slots isn’t right because it means consumers won’t get all the choices they deserve and Google will profit at the expense of the competition,” a spokesperson said in a statement.

A spokesperson for Qwant also told us: “Qwant has repeatedly called for all competitors to be granted access to the mobile market in an open manner, with the same chances for all to be chosen by users as their default search engine. We don’t believe it is fair from Google to require competing search engines to pay them for the chance to be offered as an alternative to Google, when Google was found to abuse its dominant position through its Android mobile system. Nevertheless, given the importance of the mobile market for any ambitious search engine, we had to participate in this first bidding process and are relieved that users finally have the possibility to choose Qwant as their default search engine on Android devices in some countries. We wished it was the case in all countries and that our competitors had all the same opportunity, since search engines should compete on their merits and not on their capability to pay Google for a slot in a choice screen.”

This report was updated with additional comment from Qwant.


How Ring is rethinking privacy and security


Ring is now a major player when it comes to consumer video doorbells, security cameras — and privacy protection.

Amazon acquired the company and promotes its devices heavily on its e-commerce websites. Ring has even become a cultural phenomenon with viral videos being shared on social networks and the RingTV section on the company’s website.

But that massive success has come with a few growing pains: as Motherboard found out, customers don’t have to use two-factor authentication, which means that anybody could connect to their security camera if they reuse the same password everywhere.

When it comes to privacy, Ring’s Neighbors app has attracted a ton of controversy. Some see it as a libertarian take on neighborhood watch that empowers citizens to monitor their communities using surveillance devices.

Others have questioned partnerships between Ring and local police to help law enforcement authorities request videos from Ring users.

In a wide-ranging interview, Ring founder Jamie Siminoff looked back at the past six months, expressed some regrets and defended his company’s vision. The interview was edited for clarity and brevity.


TechCrunch: Let’s talk about news first. You started mostly focused on security cameras, but you’ve expanded way beyond security cameras. And in particular, I think the light bulb that you introduced is pretty interesting. Do you want to go deeper in this area and go head to head against Philips Hue, for instance?

Jamie Siminoff: We try not to ever look at competition — like the company is going head to head with… we’ve always been a company that has invented around a mission of making neighborhoods safer.

Sometimes, that puts us into a place that would be competing with another company. But we try to look at the problem and then come up with a solution and not look at the market and try to come up with a competitive product.

No one was making — and I still don’t think there’s anyone making — a smart outdoor light bulb. We started doing the floodlight camera and we saw how important light was. We literally saw it through our camera. With motion detection, someone will come over a fence, see the light and jump back over. We literally could see the impact of light.

So you don’t think you would have done it if it wasn’t a light bulb that works outside as well as inside?

For sure. We’ve seen the advantage of linking all the lights around your home. When you walk up on a step light and that goes off, then everything goes off at the same time. It’s helpful for your own security and safety and convenience.

The light bulbs are just an extension of the floodlight. Now again, it can be used indoor because there’s no reason why it can’t be used indoor.

Following Amazon’s acquisition, do you think you have more budget, you can hire more people and you can go faster and release all these products?

It’s not a budget issue. Money was never a constraint. If you had good ideas, you could raise money — I think that’s Silicon Valley. So it’s not money. It’s knowledge and being able to reach a critical mass.

As a consumer electronics company, you need to have specialists in different areas. You can’t just get them with money, you kind of need to have a big enough thing. For example, wireless antennas. We had good wireless antennas. We did the best we thought we could do. But we get into Amazon and they have a group that’s super highly focused on each individual area of that. And we make much better antennas today.


Our reviews are up across the board, our products are more liked by our customers than they were before. To me, that’s a good measure — after Amazon, we have made more products and they’re more beloved by our customers. And I think part of that is that we can tap into resources more efficiently.

And would you say the teams are still very separate?

Amazon is kind of cool. I think it’s why a lot of companies that have been bought by Amazon stay for a long time. Amazon itself is almost an amalgamation of a lot of little startups. Internally, almost everyone is a startup CEO — there’s a lot of autonomy there.


Zuckerberg ditches annual challenges, but needs cynics to fix 2030


Mark Zuckerberg won’t be spending 2020 focused on wearing ties, learning Mandarin or just fixing Facebook. “Rather than having year-to-year challenges, I’ve tried to think about what I hope the world and my life will look like in 2030,” he wrote today on Facebook. As you might have guessed, though, Zuckerberg’s vision for an improved planet involves a lot more of Facebook’s family of apps.

His biggest proclamations in today’s notes include that:

  • AR – Phones will remain the primary computing platform for most of the decade but augmented reality could get devices out from between us so we can be present together — Facebook is building AR glasses
  • VR – Better virtual reality technology could address the housing crisis by letting people work from anywhere — Facebook is building Oculus
  • Privacy – The internet has created a global community where people find it hard to establish themselves as unique, so smaller online groups could make people feel special again — Facebook is building more private groups and messaging options
  • Regulation – The big questions facing technology are too thorny for private companies to address by themselves, and governments must step in around elections, content moderation, data portability and privacy — Facebook is trying to self-regulate on these and everywhere else to deter overly onerous lawmaking


These are all reasonable predictions and suggestions. However, Zuckerberg’s post does little to address how the broadening of Facebook’s services in the 2010s also contributed to a lot of the problems he presents:

  • Isolation – Constant passive feed scrolling on Facebook and Instagram has created a way to seem like you’re being social without having true back-and-forth interaction with friends
  • Gentrification – Facebook’s shuttled employees have driven up rents in cities around the world, especially the Bay Area
  • Envy – Facebook’s algorithms can make anyone without a glamorous, Instagram-worthy life look less important, while hackers can steal accounts and its moderation systems can accidentally suspend profiles with little recourse for most users
  • Negligence – The growth-first mentality led Facebook’s policies and safety to lag behind its impact, creating the kind of democracy, content, anti-competition and privacy questions it’s now asking the government to answer for it

Noticeably absent from Zuckerberg’s post are explicit mentions of some of Facebook’s more controversial products and initiatives. He writes about “decentralizing opportunity” by giving small businesses commerce tools, but never mentions cryptocurrency, blockchain or Libra directly. Instead he seems to suggest that Instagram store fronts, Messenger customer support and WhatsApp remittance might be sufficient. He also largely leaves out Portal, Facebook’s smart screen that could help distant families stay closer, but that some see as a surveillance and data collection tool.

I’m glad Zuckerberg is taking his role as a public figure and the steward of one of humanity’s fundamental utilities more seriously. His willingness to even think about some of these long-term issues instead of just quarterly profits is important. Optimism is necessary to create what doesn’t exist.

Still, if Zuckerberg wants 2030 to look better for the world, and for the world to look more kindly on Facebook, he may need to hire more skeptics and cynics that see a dystopic future instead — people who understand human impulses toward greed and vanity. Their foresight on where societal problems could arise from Facebook’s products could help temper Zuckerberg’s team of idealists to create a company that balances the potential of the future with the risks to the present.

Every new year of the last decade I set a personal challenge. My goal was to grow in new ways outside my day-to-day work…

Posted by Mark Zuckerberg on Thursday, January 9, 2020



ByteDance & TikTok have secretly built a deepfakes maker


TikTok parent company ByteDance has built technology to let you insert your face into videos starring someone else. TechCrunch has learned that ByteDance has developed an unreleased feature using life-like deepfakes technology that the app’s code refers to as Face Swap. Code in both TikTok and its Chinese sister app Douyin asks users to take a multi-angle biometric scan of their face, then choose from a selection of videos they want to add their face to and share.

With ByteDance’s new Face Swap feature, users scan themselves, pick a video and have their face overlaid on the body of someone in the clip

The deepfakes feature, if launched in Douyin and TikTok, could create a more controlled environment where face swapping technology plus a limited selection of source videos can be used for fun instead of spreading misinformation. It might also raise awareness of the technology so more people are aware that they shouldn’t believe everything they see online. But it’s also likely to heighten fears about what ByteDance could do with such sensitive biometric data — similar to what’s used to set up Face ID on iPhones.

Several other tech companies have recently tried to consumerize watered-down versions of deepfakes. The app Morphin lets you overlay a computerized rendering of your face on actors in GIFs. Snapchat offered a FaceSwap option for years that would switch the visages of two people in frame, or replace one on camera with one from your camera roll, and there are standalone apps that do that too, like Face Swap Live. Then last month, TechCrunch spotted Snapchat’s new Cameos for inserting a real selfie into video clips it provides, though the results aren’t meant to look confusingly realistic.

Most problematic has been Chinese deepfakes app Zao, which uses artificial intelligence to blend one person’s face into another’s body as they move and synchronize their expressions. Zao went viral in September despite privacy and security concerns about how users’ facial scans might be abused. Zao was previously blocked by China’s WeChat for presenting “security risks.” [Correction: While “Zao” is mentioned in the discovered code, it refers to the general concept rather than a partnership between ByteDance and Zao.]

But ByteDance could bring convincingly life-like deepfakes to TikTok and Douyin, two of the world’s most popular apps with over 1.5 billion downloads.

Zao in the Chinese iOS App Store


Hidden inside TikTok and Douyin

TechCrunch received a tip about the news from Israeli in-app market research startup Watchful.ai. The company had discovered code for the deepfakes feature in the latest version of TikTok and Douyin’s Android apps. Watchful.ai was able to activate the code in Douyin to generate screenshots of the feature, though it’s not currently available to the public.

First, users scan their face into TikTok. This also serves as an identity check to make sure you’re only submitting your own face so you can’t make unconsented deepfakes of anyone else using an existing photo or a single shot of their face. By asking you to blink, nod and open and close your mouth while in focus and proper lighting, Douyin can ensure you’re a live human and create a manipulable scan of your face that it can stretch and move to express different emotions or fill different scenes.

You’ll then be able to pick from videos ByteDance claims to have the rights to use, and it will replace the face of whoever is in the clip with your own. You can then share or download the deepfake video, though it will include an overlaid watermark the company claims will help distinguish the content as not being real. I received confidential access to videos made by Watchful using the feature, and the face swapping is quite seamless. The motion tracking, expressions and color blending all look very convincing.

Watchful also discovered unpublished updates to TikTok and Douyin’s terms of service that cover privacy and usage of the deepfakes feature. Inside the U.S. version of TikTok’s Android app, English text in the code explains the feature and some of its terms of use:

Your facial pattern will be used for this feature. Read the Drama Face Terms of Use and Privacy Policy for more details. Make sure you’ve read and agree to the Terms of Use and Privacy Policy before continuing. 1. To make this feature secure for everyone, real identity verification is required to make sure users themselves are using this feature with their own faces. For this reason, uploaded photos can’t be used; 2. Your facial pattern will only be used to generate face-change videos that are only visible to you before you post it. To better protect your personal information, identity verification is required if you use this feature later. 3. This feature complies with Internet Personal Information Protection Regulations for Minors. Underage users won’t be able to access this feature. 4. All video elements related to this feature provided by Douyin have acquired copyright authorization.

ZHEJIANG, CHINA – Oct. 18, 2019: Visitors at the booth of Douyin (TikTok) at the 2019 Smart Expo in Hangzhou, east China’s Zhejiang province. Two U.S. senators have sent a letter to the U.S. national intelligence agency saying TikTok could pose a threat to U.S. national security and should be investigated. Photograph by Costfoto / Barcroft Media via Getty Images.

A longer terms of use and privacy policy was also found in Chinese within Douyin. Translated into English, some highlights from the text include:

  • “The ‘face-changing’ effect presented by this function is a fictional image generated by the superimposition of our photos based on your photos. In order to show that the original work has been modified and the video generated using this function is not a real video, we will mark the video generated using this function. Do not erase the mark in any way.”

  • “The information collected during the aforementioned detection process and using your photos to generate face-changing videos is only used for live detection and matching during face-changing. It will not be used for other purposes . . . And matches are deleted immediately and your facial features are not stored.”

  • “When you use this function, you can only use the materials provided by us, you cannot upload the materials yourself. The materials we provide have been authorized by the copyright owner”.

  • “According to the ‘Children’s Internet Personal Information Protection Regulations’ and the relevant provisions of laws and regulations, in order to protect the personal information of children / youths, this function restricts the use of minors”.

We reached out to TikTok and Douyin for comment regarding the deepfakes feature, when it might launch, how the privacy of biometric scans is protected and the age limit. However, TikTok declined to answer those questions. Instead, a spokesperson insisted that “after checking with the teams I can confirm this is definitely not a function in TikTok, nor do we have any intention of introducing it. I think what you may be looking at is something slated for Douyin – your email includes screenshots that would be from Douyin, and a privacy policy that mentions Douyin. That said, we don’t work on Douyin here at TikTok.” They later told TechCrunch that “The inactive code fragments are being removed to eliminate any confusion,” which implicitly confirms that Face Swap code was found in TikTok.

A Douyin spokesperson tells TechCrunch “Douyin follows the laws and regulations of the jurisdictions in which it operates, which is China.” They denied that the Face Swap terms of service appear in TikTok despite TechCrunch reviewing code from the app showing those terms of service and the feature’s functionality.

This is suspicious, and doesn’t explain why code for the deepfakes feature, and special terms of service in English for it, appear in TikTok and not just Douyin, where the feature can already be activated and a longer terms of service was spotted. TikTok’s U.S. entity has previously denied complying with censorship requests from the Chinese government, in contradiction to sources who told The Washington Post that TikTok did censor some political and sexual content at China’s behest.

Consumerizing deepfakes

It’s possible that the deepfakes Face Swap feature never officially launches in China or the U.S. But it’s fully functional, even if unreleased, and demonstrates ByteDance’s willingness to embrace the controversial technology despite its reputation for misinformation and non-consensual pornography. At least it’s barring minors from the feature, only letting you face-swap yourself, and preventing users from uploading their own source videos. That avoids it being used to create dangerous misinformation like the slowed-down video making House Speaker Nancy Pelosi seem drunk, or clips of people saying things as if they were President Trump.

“It’s very rare to see a major social networking app restrict a new, advanced feature to their users 18 and over only,” Watchful.ai co-founder and CEO Itay Kahana tells TechCrunch. “These deepfake apps might seem like fun on the surface, but they should not be allowed to become trojan horses, compromising IP rights and personal data, especially personal data from minors who are overwhelmingly the heaviest users of TikTok to date.”

TikTok has already been banned by the U.S. Navy, and ByteDance’s acquisition and merger of Musical.ly into TikTok is under investigation by the Committee on Foreign Investment in the United States. Deepfake fears could further heighten scrutiny.

With the proper safeguards, though, face-changing technology could usher in a new era of user-generated content where the creator is always at the center of the action. It’s all part of a new trend of personalized media that could be big in 2020. Social media has evolved from selfies to Bitmoji to Animoji to Cameos, and now consumerized deepfakes. When there are infinite apps and videos and notifications to distract us, making us the star could be the best way to hold our attention.

Powered by WPeMatico

Many smart home device makers still won’t say if they give your data to the government

Posted by | arlo, Cloud, Gadgets, google nest, hardware, Internet of Things, law enforcement, privacy, Samsung, Security, smart devices, technology, transparency report | No Comments

A year ago, we asked some of the most prominent smart home device makers if they have given customer data to governments. The results were mixed.

The big three smart home device makers — Amazon, Facebook and Google (which includes Nest) — all disclosed in their transparency reports if and when governments demand customer data. Apple said it didn’t need a report, as the data it collects was anonymized.

As for the rest, none had published their government data-demand figures.

In the year that’s passed, the smart home market has grown rapidly, but the remaining device makers have made little to no progress on disclosing their figures. And in some cases, it got worse.

Smart home and other internet-connected devices may be convenient and accessible, but they collect vast amounts of information on you and your home. Smart locks know when someone enters your house, and smart doorbells can capture their face. Smart TVs know which programs you watch and some smart speakers know what you’re interested in. Many smart devices collect data when they’re not in use — and some collect data points you may not even think about, like your wireless network information, for example — and send them back to the manufacturers, ostensibly to make the gadgets — and your home — smarter.

Because the data is stored in the cloud by the device manufacturers, law enforcement and government agencies can demand those companies turn over that data to solve crimes.

But as the amount of data collection increases, companies are not being transparent about the data demands they receive. All we have are anecdotal reports — and there are plenty: Police obtained Amazon Echo data to help solve a murder; Fitbit turned over data that was used to charge a man with murder; Samsung helped catch a sex predator who watched child abuse imagery; Nest gave up surveillance footage to help jail gang members; and recent reporting on Amazon-owned Ring shows close links between the smart home device maker and law enforcement.

Here’s what we found.

Smart lock and doorbell maker August gave the exact same statement as last year, that it “does not currently have a transparency report and we have never received any National Security Letters or orders for user content or non-content information under the Foreign Intelligence Surveillance Act (FISA).” But August spokesperson Stephanie Ng would not comment on the number of non-national security requests — subpoenas, warrants and court orders — that the company has received, only that it complies with “all laws” when it receives a legal demand.

Roomba maker iRobot said, as it did last year, that it has “not received” any government demands for data. “iRobot does not plan to issue a transparency report at this time,” but it may consider publishing a report “should iRobot receive a government request for customer data.”

Arlo, a former Netgear smart home division that spun out in 2018, did not respond to a request for comment. Netgear, which still has some smart home technology, said it does “not publicly disclose a transparency report.”

Amazon-owned Ring, whose cooperation with law enforcement has drawn ire from lawmakers and faced questions over its ability to protect users’ privacy, said last year it planned to release a transparency report in the future, but did not say when. This time around, Ring spokesperson Yassi Shahmiri would not comment and stopped responding to repeated follow-up emails.

Honeywell spokesperson Megan McGovern would not comment and referred questions to Resideo, the smart home division Honeywell spun out a year ago. Resideo’s Bruce Anderson did not comment.

And just as last year, Samsung, a maker of smart devices and internet-connected televisions and other appliances, also did not respond to a request for comment.

On the whole, the companies’ responses were largely the same as last year.

But smart switch and sensor maker Ecobee, which last year promised to publish a transparency report “at the end of 2018,” did not follow through with its promise. When we asked why, Ecobee spokesperson Kristen Johnson did not respond to repeated requests for comment.

Based on the best available data, August, iRobot, Ring and the rest of the smart home device makers have hundreds of millions of users and customers around the world, with the potential to give governments vast troves of data — and users and customers are none the wiser.

Transparency reports may not be perfect, and some are less transparent than others. But if big companies — even after bruising headlines and claims of co-operation with surveillance states — disclose their figures, there’s little excuse for the smaller companies.

This time around, some companies fared better than their rivals. But for anyone mindful of their privacy, you can — and should — expect better.


Instagram still doesn’t age-check kids. That must change.

Posted by | Apps, coppa, Education, Facebook, Facebook age policy, Government, instagram, Instagram age policy, Mobile, Opinion, Policy, privacy, Snapchat, Social, TC, tiktok | No Comments

Instagram dodges child safety laws. By not asking users their age upon signup, it can feign ignorance about how old they are. That way, it can’t be held liable for $40,000 per violation of the Children’s Online Privacy Protection Act. The law bans online services from collecting personally identifiable information about kids under 13 without parental consent. Yet Instagram is surely stockpiling that sensitive info about underage users, shrouded by the excuse that it doesn’t know who’s who.

But here, ignorance isn’t bliss. It’s dangerous. User growth at all costs is no longer acceptable.

It’s time for Instagram to step up and assume responsibility for protecting children, even if that means excluding them. Instagram needs to ask users’ age at signup, work to verify they volunteer their accurate birthdate by all practical means, and enforce COPPA by removing users it knows are under 13. If it wants to allow tweens on its app, it needs to build a safe, dedicated experience where the app doesn’t suck in COPPA-restricted personal info.

Minimum Viable Responsibility

Instagram is woefully behind its peers. Both Snapchat and TikTok require you to enter your age as soon as you start the sign-up process. This should really be the minimum regulatory standard, and lawmakers should close the loophole allowing services to skirt compliance by not asking. If users register for an account, they should be required to enter an age of 13 or older.
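As a sketch of how little code such a minimum standard actually requires, here is a hypothetical age gate for a sign-up form. The names and the exact check are illustrative, not any app’s real implementation:

```python
from datetime import date

MIN_AGE = 13  # COPPA's threshold for collecting personal info without parental consent

def age_on(birthdate, today):
    """Whole years elapsed between birthdate and today."""
    years = today.year - birthdate.year
    # One year fewer if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def may_register(birthdate, today=None):
    """A sign-up flow would refuse to create the account when this returns False."""
    today = today or date.today()
    return age_on(birthdate, today) >= MIN_AGE
```

Of course, a self-reported birthdate only keeps out honest kids, which is why verification matters too.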

Instagram’s parent company Facebook has been asking for a birthdate during account registration since its earliest days. Sure, it adds one extra step to sign up, and impedes its growth numbers by discouraging kids from getting hooked early on the social network. But it also benefits Facebook’s business by letting it accurately age-target ads.

Most importantly, at least Facebook is making a baseline effort to keep out underage users. Of course, as kids do when they want something, some are going to lie about their age and say they’re old enough. Ideally, Facebook would go further and try to verify the accuracy of a user’s age using other available data, and Instagram should too.

Both Facebook and Instagram currently have moderators lock the accounts of any users they stumble across whom they suspect are under 13. Users must upload government-issued proof of age to regain control. That policy only went into effect last year after the UK’s Channel 4 reported a Facebook moderator was told to ignore seemingly underage users unless they explicitly declared they were too young or were reported for being under 13. An extreme approach would be to require this for all signups, though that might be expensive, slow, significantly hurt signup rates, and annoy of-age users.

Instagram is currently on the other end of the spectrum. Doing nothing around age-gating seems recklessly negligent. When asked for comment about why it doesn’t ask users’ ages, how it stops underage users from joining, and whether it’s in violation of COPPA, Instagram declined to comment. The fact that Instagram claims to not know users’ ages seems to be in direct contradiction to it offering marketers custom ad targeting by age, such as reaching just those that are 13.

Instagram Prototypes Age Checks

Luckily, this could all change soon.

Mobile researcher and frequent TechCrunch tipster Jane Manchun Wong has spotted code inside Instagram’s Android app showing it’s prototyping an age-gating feature that rejects users under 13. It’s also tinkering with requiring your Instagram and Facebook birthdates to match. Instagram gave me a “no comment” when I asked if these features would officially roll out to everyone.

Code in the app explains that “Providing your birthday helps us make sure you get the right Instagram experience. Only you will be able to see your birthday.” Beyond just deciding who to let in, Instagram could use this info to make sure users under 18 aren’t messaging with adult strangers, that users under 21 aren’t seeing ads for alcohol brands, and that potentially explicit content isn’t shown to minors.

Instagram’s inability to do any of this clashes with it and Facebook’s big talk this year about its commitment to safety. Instagram has worked to improve its approach to bullying, drug sales, self-harm, and election interference, yet there’s been not a word about age gating.

Meanwhile, underage users promote themselves on pages for hashtags like #12YearOld where it’s easy to find users who declare they’re that age right in their profile bio. It took me about 5 minutes to find creepy “You’re cute” comments from older men on seemingly underage girls’ photos. Clearly Instagram hasn’t been trying very hard to stop them from playing with the app.

Illegal Growth

I brought up the same unsettling situations on Musical.ly, now known as TikTok, to its CEO Alex Zhu on stage at TechCrunch Disrupt in 2016. I grilled Zhu about letting 10-year-olds flaunt their bodies on his app. He tried to claim parents run all of these kids’ accounts, and got frustrated as we dug deeper into Musical.ly’s failures here.

Thankfully, TikTok was eventually fined $5.7 million this year for violating COPPA and forced to change its ways. As part of its response, TikTok started showing an age gate to both new and existing users, removed all videos of users under 13, and restricted those users to a special TikTok Kids experience where they can’t post videos, comment, or provide any COPPA-restricted personal info.

If even a Chinese social media app that Facebook’s CEO has warned threatens free speech with censorship is doing a better job protecting kids than Instagram, something’s gotta give. Instagram could follow suit, building a special section of its apps just for kids where they’re quarantined from conversing with older users that might prey on them.

Perhaps Facebook and Instagram’s hands-off approach stems from the fact that CEO Mark Zuckerberg doesn’t think the ban on under-13-year-olds should exist. Back in 2011, he said “That will be a fight we take on at some point . . . My philosophy is that for education you need to start at a really, really young age.” He’s put that into practice with Messenger Kids which lets 6 to 12-year-olds chat with their friends if parents approve.

The Facebook family of apps’ ad-driven business model and earnings depend on constant user growth that could be inhibited by stringent age gating. It surely doesn’t want to admit to parents it’s let kids slide into Instagram, that advertisers were paying to reach children too young to buy anything, and to Wall Street that it might not have 2.8 billion legal users across its apps as it claims.

But given Facebook and Instagram’s privacy scandals, addictive qualities, and impact on democracy, it seems like proper age-gating should be a priority as well as the subject of more regulatory scrutiny and public concern. Society has woken up to the harms of social media, yet Instagram erects no guards to keep kids from experiencing those ills for themselves. Until it makes an honest effort to stop kids from joining, the rest of Instagram’s safety initiatives ring hollow.


Now even the FBI is warning about your smart TV’s security

Posted by | chromecast, digital television, Federal Bureau of Investigation, Gadgets, hardware, Internet of Things, Multimedia, privacy, Samsung, Security, smart tv, streaming services, technology, telecommunications | No Comments

If you just bought a smart TV on Black Friday or plan to buy one for Cyber Monday tomorrow, the FBI wants you to know a few things.

Smart TVs are like regular television sets but with an internet connection. With the advent and growth of Netflix, Hulu and other streaming services, most saw internet-connected televisions as a cord-cutter’s dream. But like anything that connects to the internet, smart TVs are open to security vulnerabilities and hackers. Not only that, many smart TVs come with a camera and a microphone. And as is the case with most other internet-connected devices, manufacturers often don’t make security a priority.

That’s the key takeaway from the FBI’s Portland field office, which just ahead of some of the biggest shopping days of the year posted a warning on its website about the risks that smart TVs pose.

“Beyond the risk that your TV manufacturer and app developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in the backdoor through your router,” wrote the FBI.

The FBI warned that hackers can take control of your unsecured smart TV and in worst cases, take control of the camera and microphone to watch and listen in.

Active attacks and exploits against smart TVs are rare, but not unheard of. Because every smart TV comes with its manufacturer’s own software and is at the mercy of an often unreliable and irregular security patching schedule, some devices are more vulnerable than others. Earlier this year, hackers showed it was possible to hijack Google’s Chromecast streaming stick and broadcast random videos to thousands of victims.

In fact, some of the biggest exploits targeting smart TVs in recent years were developed by the Central Intelligence Agency, but were stolen. The files were later published online by WikiLeaks.

But as much as the FBI’s warning responds to genuine fears, arguably a bigger issue, one that should cause as much if not greater concern, is how much tracking data is collected on smart TV owners.

The Washington Post earlier this year found that some of the most popular smart TV makers — including Samsung and LG — collect tons of information about what users are watching in order to help advertisers better target ads against their viewers and to suggest what to watch next, for example. The TV tracking problem became so problematic a few years ago that smart TV maker Vizio had to pay $2.2 million in fines after it was caught secretly collecting customer viewing data. Earlier this year, a separate class action suit over the tracking against Vizio was allowed to go ahead.

The FBI recommends placing black tape over an unused smart TV camera, keeping your smart TV up to date with the latest patches and fixes, and reading the privacy policy to better understand what your smart TV is capable of.

As convenient as it might be, the most secure smart TV might be one that isn’t connected to the internet at all.


Gift Guide: Essential security and privacy gifts to help protect your friends and family

Posted by | Apple, facial recognition, Gadgets, Gift Guide 2019, hardware, hong kong, journalist, online ads, Password, password manager, privacy, Security, webcam | No Comments

There’s no such thing as perfect privacy or security, but there’s a lot you can do to lock down your online life. And the holiday season is a great time to encourage others to do the same. Some people are more likely to take security into their own hands if they’re given a nudge along the way.

Here we have a selection of gift ideas — from helpful security solutions to unique and interesting gadgets that will keep your information safe, but without breaking the bank.

A hardware security key for two-factor

Your online accounts have everything about you and you’d want to keep them safe. Two-factor authentication is great, but for the more security minded there’s an even stronger solution. A security key is a physical hardware key that’s even stronger than having a two-factor code going to your phone. These keys plug into your USB port on your computer (or the charger port on your phone) to prove to online services, like Facebook, Google, and Twitter, that you are who you say you are. Google’s own data shows security keys offer near-unbeatable protection against even the most powerful and resourced nation-state hackers. Yubikeys are our favorite and come in all shapes and sizes. They’re also cheap. Google also has a range of its own branded Titan security keys, one of which also offers Bluetooth connectivity.

Price: from $20.
Available from: Yubico Store | Google Store

Webcam cover

Surveillance-focused malware, like remote access trojans, can infect computers and remotely switch on your webcam without your permission. Most computer webcams these days have an indicator light that shows you when the camera is active. But what if your camera is blocked, preventing any accidental exposure in the first place? Enter the simple but humble webcam blocker. It slides open when you need to access your camera, and slides to cover the lens when you don’t. Support local businesses and non-profits — you can search for unique and interesting webcam covers on Etsy.

Price: from $5 – $10.
Available from: Etsy | Electronic Frontier Foundation

A microphone blocker

Now that you have your webcam cover, what about your microphone? Just as hackers can tap into your webcam, they can also pick up on your audio. Microphone blockers contain a semiconductor that tricks your computer or device into thinking that it’s a working microphone, when in fact it’s not able to pick up any audio. Anyone hacking into your device won’t hear a thing. Some modern Macs already come with Apple’s T2 security chip, which prevents hackers from snooping on your microphone when your laptop’s lid is shut. But a microphone blocker will work all the time, even when the lid is open.

Price: $6.99 – $16.99.
Available from: Nope Blocker | Mic Lock

A USB data blocker

You might have heard about “juice-jacking,” where hackers plant malicious implants in USB outlets, which steal a person’s device data when an unsuspecting victim plugs in. It’s a threat that’s almost unheard of, but proof-of-concepts have shown how easy it is to implant malicious components in legitimate-looking cables. A USB data blocker essentially acts as a data barrier, preventing any information going in or out of your device, while letting power through to charge your battery. They’re cheap but effective.

Price: from $6.99 to $11.49.
Available from: Amazon | SyncStop

A privacy screen for your computer or phone

How often have you caught someone reading your private messages or documents over your shoulder, or from the next aisle over? Privacy screens can protect you from “visual hacking.” These screens make it near-impossible for anyone other than the device user to snoop at what you’re working on. And you can get them for all kinds of devices and displays — including phones. But make sure you get the right size!

Price: from about $17.
Available from: Amazon

A password manager subscription

Password managers are a real lifesaver. One strong, unique password lets you into your entire bank of passwords. They’re great for storing your passwords, but also for encouraging you to use better, stronger, unique passwords. And because many are cross-platform, you can bring your passwords with you. Plenty of password managers exist — from LastPass, Lockbox, and Dashlane, to open-source versions like KeePass. Many are free, but a premium subscription often comes with benefits and better features. And if you’re a journalist, 1Password has a free subscription for you.

Price: Many free, premium offerings start at $35.88 – $44.28 annually
Available from: 1Password | LastPass | Dashlane | KeePass
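Under the hood, the “strong, unique” part is the easy bit. A generator along these lines, sketched here with Python’s `secrets` module rather than any particular manager’s code, is essentially what these apps do:

```python
import secrets
import string

def generate_password(length=20):
    """One independent CSPRNG draw per character, as password generators do."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

At 20 characters drawn from roughly 94 symbols, each password carries on the order of 130 bits of entropy, far beyond practical guessing; the manager’s real value is remembering a different one for every site.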

Anti-surveillance clothing

Whether you’re lawfully protesting or just want to stay in “incognito mode,” there are — believe it or not — fashion lines that can help prevent facial recognition and other surveillance systems from identifying you. This clothing uses a kind of camouflage that confuses surveillance technology by giving them more interesting things to detect, like license plates and other detectable patterns.

Price: $35.99.
Available from: Adversarial Fashion

Pi-hole

Think of a Pi-hole as a “hardware ad-blocker.” A Pi-hole is essentially a Raspberry Pi mini-computer that runs ad-blocking software and sits as a box on your network, so everyone on your home network benefits from ad blocking. Ads may generate revenue for websites, but online ads are notorious for tracking users across the web. Until ads can behave properly, a Pi-hole is a great way to capture and sinkhole bad ad traffic. The hardware may be cheap, but the ad-blocking software is free. Donations to the cause are welcome.

Price: From $35.
Available from: Pi-hole | Raspberry Pi
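The sinkhole idea itself is simple: if a requested DNS name is on a blocklist, answer with an unroutable address so the ad or tracker request goes nowhere; otherwise pass the query to a real resolver. A toy sketch of that logic (the blocklist entries are made up for illustration):

```python
# Hypothetical blocklist entries, for illustration only.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}
SINKHOLE = "0.0.0.0"  # unroutable answer: the ad request dies on your LAN

def resolve(hostname, upstream_lookup):
    """Sinkhole blocklisted names; delegate everything else to the real resolver."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE
    return upstream_lookup(hostname)
```

Because this happens at the DNS level, every phone, TV and laptop on the network is covered without installing a per-device ad blocker.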

And finally, some light reading…

There are two must-read books this year. NSA whistleblower Edward Snowden’s autobiography, “Permanent Record,” covers his journey from the shadowy U.S. intelligence agency to Hong Kong, where he spilled thousands of highly classified government documents to reporters about the scope and scale of its massive global surveillance partnerships and programs. And Andy Greenberg’s “Sandworm” is a beautifully written deep-dive into a group of Russian hackers blamed for the most disruptive cyberattack in history, NotPetya. This incredibly detailed investigative book leaves no stone unturned, unravelling the work of a highly secretive group that caused billions of dollars of damage.

Price: From $14.99.
Available from: Amazon (Permanent Record) | Amazon (Sandworm)
