Policy

US regulators need to catch up with Europe on fintech innovation 

Alastair Mitchell
Contributor

Alastair Mitchell is a partner at multi-stage VC fund EQT Ventures and the fund’s B2B sales, marketing and SaaS expert. Ali also focuses on helping US companies scale into Europe and vice versa.

Fintech companies are fundamentally changing how the financial services ecosystem operates, giving consumers powerful tools to help with savings, budgeting, investing, insurance, electronic payments and many other offerings. This industry is growing rapidly, filling gaps where traditional banks and financial institutions have failed to meet customer needs.

Yet progress has been uneven. Notably, consumer fintech adoption in the United States lags well behind much of Europe, where forward-thinking regulation has sparked an outpouring of innovation in digital banking services — as well as the backend infrastructure onto which products are built and operated.

That might seem counterintuitive, as regulation is often blamed for stifling innovation. Instead, European regulators have focused on reducing barriers to fintech growth rather than protecting the status quo. For example, the U.K.’s Open Banking regulation requires the country’s nine big high-street banks to share customer data with authorized fintech providers.

The EU’s PSD2 (Payment Services Directive 2) obliges banks to create application programming interfaces (APIs) and related tools that let customers share data with third parties. This creates standards that level the playing field and nurture fintech innovation. And the U.K.’s Financial Conduct Authority supports new fintech entrants by running a “sandbox” for software testing that helps speed new products into service.
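To make the PSD2/Open Banking model concrete, here is a minimal sketch of the kind of request a licensed third-party provider sends to a bank's mandated API. The endpoint path, header names and example values are assumptions loosely modeled on the UK Open Banking account-information pattern, not any real bank's API.

```python
# Illustrative sketch of an Open Banking account-information request.
# The base URL, path and header names below are hypothetical.

def build_accounts_request(base_url, access_token, consent_id):
    """Assemble the URL and headers a licensed third party would use to
    read a customer's accounts, given the customer's explicit consent."""
    return {
        "url": f"{base_url}/open-banking/v3.1/aisp/accounts",
        "headers": {
            "Authorization": f"Bearer {access_token}",  # token from the bank's OAuth flow
            "x-consent-id": consent_id,                 # record of the customer's consent
        },
    }

req = build_accounts_request("https://api.examplebank.co.uk", "tok_123", "consent_456")
print(req["url"])
```

The key point PSD2 standardizes is the shape of this exchange: the customer consents once, the bank issues a token, and the fintech reads data through a documented interface rather than screen-scraping credentials.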

Regulation, implemented effectively as Europe has demonstrated, can be a net positive for consumers. Regulation is coming either way; if fintech entrepreneurs engage early and often with regulators, they can help ensure that the rules put in place support innovation and ultimately benefit consumers.

Powered by WPeMatico

TechCrunch’s Top 10 investigative reports from 2019


Facebook spying on teens, Twitter accounts hijacked by terrorists, and sexual abuse imagery found on Bing and Giphy were among the ugly truths revealed by TechCrunch’s investigative reporting in 2019. The tech industry needs more watchdogs than ever as its growing size magnifies the impact of safety failures and abuses of power. Whether through malice, naivety or greed, there was plenty of wrongdoing to sniff out.

Led by our security expert Zack Whittaker, TechCrunch undertook more long-form investigations this year to tackle these growing issues. Our coverage of fundraises, product launches and glamorous exits tells only half the story. As perhaps the biggest and longest-running news outlet dedicated to startups (and the giants they become), we’re responsible for keeping these companies honest and pushing for a more ethical and transparent approach to technology.

If you have a tip potentially worthy of an investigation, contact TechCrunch at tips@techcrunch.com or by using our anonymous tip line’s form.


Here are our top 10 investigations from 2019, and their impact:

Facebook pays teens to spy on their data

Josh Constine’s landmark investigation discovered that Facebook was paying teens and adults $20 in gift cards per month to install a VPN that sent Facebook all their sensitive mobile data for market research purposes. The laundry list of problems with Facebook Research included failing to inform 187,000 users that their data would go to Facebook until they signed up for “Project Atlas”, failing to obtain proper parental consent for over 4,300 minors, and threatening legal action if a user spoke publicly about the program. The program also abused Apple’s enterprise certificate program, which is designed solely for internal distribution of employee-only apps within companies, to avoid the App Store review process.

The fallout was enormous. Lawmakers wrote angry letters to Facebook. TechCrunch soon discovered a similar market research program from Google called Screenwise Meter that the company promptly shut down. Apple punished both Google and Facebook by shutting down all their employee-only apps for a day, causing office disruptions since Facebookers couldn’t access their shuttle schedule or lunch menu. Facebook tried to claim the program was above board, but finally succumbed to the backlash and shut down Facebook Research and all paid data collection programs for users under 18. Most importantly, the investigation led Facebook to shut down its Onavo app, which offered a VPN but in reality sucked in tons of mobile usage data to figure out which competitors to copy. Onavo helped Facebook realize it should acquire messaging rival WhatsApp for $19 billion, and it’s now at the center of antitrust investigations into the company. TechCrunch’s reporting weakened Facebook’s exploitative market surveillance, pitted tech giants against each other, and raised the bar for transparency and ethics in data collection.

Protecting the WannaCry kill switch

Zack Whittaker’s profile of the heroes who helped save the internet from the fast-spreading WannaCry ransomware reveals the precarious nature of cybersecurity. The gripping tale documenting Marcus Hutchins’ benevolent work establishing the WannaCry kill switch may have contributed to a judge’s decision to sentence him to just one year of supervised release instead of 10 years in prison for an unrelated charge of creating malware as a teenager.

The dangers of Elon Musk’s tunnel

TechCrunch contributor Mark Harris’ investigation discovered inadequate emergency exits and more problems with Elon Musk’s plan for his Boring Company to build a Washington D.C.-to-Baltimore tunnel. Consulting fire safety and tunnel engineering experts, Harris built a strong case for why state and local governments should be suspicious of technology disrupters cutting corners in public infrastructure.

Bing image search is full of child abuse

Josh Constine’s investigation exposed how Bing’s image search results not only showed child sexual abuse imagery, but also suggested search terms to innocent users that would surface this illegal material. A tip led Constine to commission a report by anti-abuse startup AntiToxin (now L1ght), forcing Microsoft to commit to UK regulators that it would make significant changes to stop this from happening. However, a follow-up investigation by the New York Times citing TechCrunch’s report revealed Bing had made little progress.

Expelled despite exculpatory data

Zack Whittaker’s investigation surfaced contradictory evidence in a case of alleged grade tampering by Tufts student Tiffany Filler who was questionably expelled. The article casts significant doubt on the accusations, and that could help the student get a fair shot at future academic or professional endeavors.

Burned by an educational laptop

Natasha Lomas chronicled troubles at educational computer hardware startup pi-top, including a device malfunction that injured a U.S. student. An internal email revealed the student had suffered “a very nasty finger burn” from a pi-top 3 laptop designed to be disassembled. Reliability issues swelled and layoffs ensued. The report highlights how startups operating in the physical world, especially around sensitive populations like students, must make safety a top priority.

Giphy fails to block child abuse imagery

Sarah Perez and Zack Whittaker teamed up with child protection startup L1ght to expose Giphy’s negligence in blocking sexual abuse imagery. The report revealed how criminals used the site to share illegal imagery, which was then accidentally indexed by search engines. TechCrunch’s investigation demonstrated that it’s not just public tech giants who need to be more vigilant about their content.

Airbnb’s weakness on anti-discrimination

Megan Rose Dickey explored a botched case of discrimination policy enforcement by Airbnb when a blind and deaf traveler’s reservation was cancelled because they had a guide dog. Airbnb tried to just “educate” the host who was accused of discrimination instead of levying any real punishment, until Dickey’s reporting pushed it to suspend them for a month. The investigation reveals the lengths Airbnb goes to in order to protect its money-generating hosts, and how policy problems could mar its IPO.

Expired emails let terrorists tweet propaganda

Zack Whittaker discovered that Islamic State propaganda was being spread through hijacked Twitter accounts. His investigation revealed that if the email address associated with a Twitter account expired, attackers could re-register it to gain access and then receive password resets sent from Twitter. The article revealed the savvy but not necessarily sophisticated ways terrorist groups are exploiting big tech’s security shortcomings, and identified a dangerous loophole for all sites to close.
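One hedged sketch of the kind of mitigation that closes this loophole: before honoring a password reset, compare when the account's email was verified against when its domain was last registered. A domain registered after verification must have lapsed and been re-registered, possibly by an attacker. The function name and logic here are illustrative, not Twitter's actual implementation.

```python
# Hypothetical safeguard against the lapsed-domain hijack described above.
# If the email's domain was (re-)registered after the account verified that
# email, the domain lapsed in between, so a reset link should not be sent
# until the address is re-verified.
from datetime import date

def reset_is_safe(email_verified_on, domain_registered_on):
    """True only if the domain registration predates email verification,
    i.e. the same registrant plausibly still controls the mailbox."""
    return domain_registered_on <= email_verified_on

# The hijack scenario: email verified in 2015, domain re-registered in 2019.
print(reset_is_safe(date(2015, 6, 1), date(2019, 8, 2)))  # → False
```

In practice a site would pull the registration date from WHOIS/RDAP data; the point is simply that the check is cheap relative to the damage of a hijacked account.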

Porn & gambling apps slip past Apple

Josh Constine found dozens of pornography and real-money gambling apps had broken Apple’s rules but avoided App Store review by abusing its enterprise certificate program — many based in China. The report revealed the weak and easily defrauded requirements to receive an enterprise certificate. Seven months later, Apple revealed a spike in porn and gambling app takedown requests from China. The investigation could push Apple to tighten its enterprise certificate policies, and proved the company has plenty of its own problems to handle despite CEO Tim Cook’s frequent jabs at the policies of other tech giants.

Bonus: HQ Trivia employees fired for trying to remove CEO

This Game Of Thrones-worthy tale was too intriguing to leave out, even if the impact was more of a warning to all startup executives. Josh Constine’s look inside gaming startup HQ Trivia revealed a saga of employee revolt in response to its CEO’s ineptitude and inaction as the company nose-dived. Employees who organized a petition to the board to remove the CEO were fired, leading to further talent departures and stagnation. The investigation served to remind startup executives that they are responsible to their employees, who can exert power through collective action or their exodus.

If you have a tip for Josh Constine, you can reach him via encrypted Signal or text at (585)750-5674, joshc at TechCrunch dot com, or through Twitter DMs


Zuckerberg ditches annual challenges, but needs cynics to fix 2030


Mark Zuckerberg won’t be spending 2020 focused on wearing ties, learning Mandarin or just fixing Facebook. “Rather than having year-to-year challenges, I’ve tried to think about what I hope the world and my life will look like in 2030,” he wrote today on Facebook. As you might have guessed, though, Zuckerberg’s vision for an improved planet involves a lot more of Facebook’s family of apps.

His biggest proclamations in today’s note include:

  • AR – Phones will remain the primary computing platform for most of the decade but augmented reality could get devices out from between us so we can be present together — Facebook is building AR glasses
  • VR – Better virtual reality technology could address the housing crisis by letting people work from anywhere — Facebook is building Oculus
  • Privacy – The internet has created a global community where people find it hard to establish themselves as unique, so smaller online groups could make people feel special again — Facebook is building more private groups and messaging options
  • Regulation – The big questions facing technology are too thorny for private companies to address by themselves, and governments must step in around elections, content moderation, data portability and privacy — Facebook is trying to self-regulate on these and everywhere else to deter overly onerous lawmaking


These are all reasonable predictions and suggestions. However, Zuckerberg’s post does little to address how the broadening of Facebook’s services in the 2010s also contributed to a lot of the problems he presents:

  • Isolation – Constant passive feed scrolling on Facebook and Instagram has created a way to seem like you’re being social without having true back-and-forth interaction with friends
  • Gentrification – Facebook’s shuttled employees have driven up rents in cities around the world, especially the Bay Area
  • Envy – Facebook’s algorithms can make anyone without a glamorous, Instagram-worthy life look less important, while hackers can steal accounts and its moderation systems can accidentally suspend profiles with little recourse for most users
  • Negligence – The growth-first mentality led Facebook’s policies and safety to lag behind its impact, creating the kind of democracy, content, anti-competition and privacy questions it’s now asking the government to answer for it

Noticeably absent from Zuckerberg’s post are explicit mentions of some of Facebook’s more controversial products and initiatives. He writes about “decentralizing opportunity” by giving small businesses commerce tools, but never mentions cryptocurrency, blockchain or Libra directly. Instead he seems to suggest that Instagram storefronts, Messenger customer support and WhatsApp remittances might be sufficient. He also largely leaves out Portal, Facebook’s smart screen that could help distant families stay closer, but that some see as a surveillance and data collection tool.

I’m glad Zuckerberg is taking his role as a public figure and the steward of one of humanity’s fundamental utilities more seriously. His willingness to even think about some of these long-term issues instead of just quarterly profits is important. Optimism is necessary to create what doesn’t exist.

Still, if Zuckerberg wants 2030 to look better for the world, and for the world to look more kindly on Facebook, he may need to hire more skeptics and cynics who see a dystopic future instead — people who understand human impulses toward greed and vanity. Their foresight on where societal problems could arise from Facebook’s products could help temper Zuckerberg’s team of idealists to create a company that balances the potential of the future with the risks to the present.

Every new year of the last decade I set a personal challenge. My goal was to grow in new ways outside my day-to-day work…

Posted by Mark Zuckerberg on Thursday, January 9, 2020



Twitter’s new reply blockers could let Trump hide critics


What if politicians could only display Twitter replies from their supporters while stopping everyone else from adding their analysis to the conversation? That’s the risk of Twitter’s upcoming Conversation Participants tool, which it’s about to start testing and which lets you choose whether you want replies from everyone, from only those you follow or mention, or from no one.

For most, the reply limiter could help repel trolls and harassment. Unfortunately, it still puts the burden of safety on the victims rather than the villains. Instead of rooting out abusers, Twitter wants us to retreat and wall off our tweets from everyone we don’t know. That could reduce the spontaneous yet civil reply chains between strangers that are part of what makes Twitter so powerful.

But in the hands of politicians hoping to avoid scrutiny, the tools could make it appear that their tweets and policies are uniformly supported. By only allowing their sycophants to add replies below their posts, anyone reading along will be exposed to a uniformity of opinion that clashes with Twitter’s position as a marketplace of ideas.

We’ve reached out to Twitter for comment on this issue and on whether anyone, such as politicians, would be prevented from using the new reply-limiting tools. Twitter plans to test the reply-selection tool in Q1, monitor usage, and make modifications if necessary before rolling it out. The company provided this statement:

“We want to help people feel safe participating in the conversation on Twitter by giving them more control over the conversations they start. We’ll be experimenting with different options for who can reply to Tweets in early 2020.”

Here’s how the new Conversation Participants feature works, according to the preview shared by Twitter’s Suzanne Xie at CES today, though it could change during testing. When users go to tweet, they’ll have the option of selecting who can reply, unlike now when everyone can leave replies but authors can hide certain ones that viewers can opt to reveal. Conversation Participants offers four options:

  • Global: Replies from anyone
  • Group: Replies from those you follow or mention in this tweet
  • Panel: Replies from only those you mention in this tweet
  • Statement: No replies allowed
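The four settings boil down to a simple permission check. Here is a minimal sketch of that logic; the names are mine, since Twitter has not published an API for this feature.

```python
# Hypothetical model of the Conversation Participants reply-permission rules.
from enum import Enum

class ReplySetting(Enum):
    GLOBAL = "global"        # replies from anyone
    GROUP = "group"          # replies from those the author follows or mentions
    PANEL = "panel"          # replies only from those mentioned in the tweet
    STATEMENT = "statement"  # no replies allowed

def can_reply(setting, replier, author_follows, mentioned):
    """Decide whether `replier` may reply, given the tweet's setting,
    the set of accounts the author follows, and the set mentioned."""
    if setting is ReplySetting.GLOBAL:
        return True
    if setting is ReplySetting.GROUP:
        return replier in author_follows or replier in mentioned
    if setting is ReplySetting.PANEL:
        return replier in mentioned
    return False  # STATEMENT: nobody can reply

print(can_reply(ReplySetting.GROUP, "critic", {"hannity"}, set()))  # → False
```

Framed this way, the concern in the rest of the article is clear: under the Group setting, anyone outside the author's follow graph is silently excluded from the thread.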

Now imagine President Trump opts to make all of his tweets Group-only. Only those who support him and whom he therefore follows — like his sons, Fox News’ Sean Hannity and his campaign team — could reply. Gone would be the reels of critics fact-checking his statements or arguing against his policies. His tweets would be safeguarded from reproach, establishing an echo chamber for his acolytes.

It’s true that some of these responses from the public might constitute abuse or harassment. But those should be dealt with specifically through strong policy and consistent enforcement of adequate punishments when rules are broken. By instead focusing on stopping replies from huge swaths of the community, the secondary effects have the potential to prop up politicians that consistently lie and undam the flow of misinformation.

There’s also the practical matter that this won’t stop abuse, it will merely move it. Civil discussion will be harder to find for the rest of the public, but harassers will still reach their targets. Users blocked from replying to specific tweets can just tweet directly at the author. They can also continue to mention the author separately or screenshot their tweets and then discuss them.

It’s possible that U.S. law prevents politicians from discriminating against citizens with different viewpoints by restricting their access to the politician’s comments on a public forum. Judges have ruled this makes it illegal for Trump to block people on social media. But with this new tool, because anyone could still see the tweets and reply to the author separately, not being followed by the author likely doesn’t count as discrimination the way blocking does, so use of the Conversation Participants tool could be permissible. Someone could sue to push the issue to the courts, though, and judges might be wise to deem this unconstitutional.

Again, this is why Twitter needs to refocus on cleaning up its community rather than only letting people build tiny, temporary shelters from the abuse. It could consider blocking replies and mentions from brand new accounts without sufficient engagement or a linked phone number, as I suggested in 2017. It could also create a new mid-point punishment of a “time-out” from sending replies for harassment that it (sometimes questionably) deems below the threshold of an account suspension.

The combination of Twitter’s decade of weakness in the face of trolls with a new political landscape of normalized misinformation threatens to overwhelm its attempts to get a handle on safety.



What to expect in digital media in 2020


As we start 2020, the media and entertainment sectors are in flux. New technologies are enabling new types of content, streaming platforms in multiple content categories are spending billions in their fight for market share and the interplay between social platforms and media is a central topic of global political debate (to put it lightly).

As TechCrunch’s media columnist, I spoke to hundreds of entrepreneurs and executives in North America and Europe last year about the shifts underway across everything from vertically-oriented video series to physics engines in games to music royalty payments. Looking toward the year ahead, here are some of the high-level changes I expect we will see in media in 2020, broken into seven categories: film & TV, gaming, visual & audio effects, social media, music, podcasts and publishing.

Film and TV

In film and television, the battle to compete with Netflix continues with more robust competition than last year. In the U.S., Disney is off to a momentous start with 10 million Disney+ subscribers upon its launch in November and some predicting it will hit 25 million by March (including those on free trials or receiving it for free via Disney’s partnership with Verizon). Bundled with its two other streaming properties, Hulu and ESPN+, Disney+ puts Disney alongside Amazon and Netflix as the Big Three.

Consumers will pay for only so many subscriptions: typically one, two or all of the Big Three (since Amazon Prime Video is included with the broader Prime membership), plus a smaller service that best aligns with their personal taste and favorite show of the moment.

AT&T’s HBO Max launches in May with a $14.99/month price tag and is unlikely to break into the echelon of the Big Three, but it could be a formidable second-tier competitor. Alongside it will be Apple TV+. At $4.99/month, Apple’s service includes only a small slate of original productions, echoing HBO’s old quality-over-quantity strategy even as HBO itself gets bundled into a larger library. CBS All Access, Showtime and NBCUniversal’s upcoming Peacock (launching in April) fall into this camp as well.

Across Europe, regional media conglomerates will find success in expanding local SVOD and AVOD competitors to Netflix that launched last year — or are set to launch in the next few weeks — like BritBox in the UK, Joyn in Germany and Salto in France. Netflix’s growth is coming from outside the U.S. now so its priority is buying more international shows that will compel new demographics to subscribe.

The most interesting new development in 2020 though will be the April launch of Quibi, the $4.99/month service offering premium shows shot for mobile-first viewing that has already secured $1 billion in funding commitments and $150 million in advertising revenue. Quibi shows will be bite-size in length (less than 15 minutes) and vertically-oriented. The company has poured hundreds of millions of dollars into commissioning established names to create dozens of them. Steven Spielberg and Guillermo del Toro each have Quibi programs and NBC and CBS are creating news shows. The terms it is offering are enticing.

Quibi, which plans to release 125 pieces of content (i.e. episodes) per week and spend $470 million on marketing this year, is an all-or-nothing bet with little room to iterate if it doesn’t get it right the first time; it needs hit shows that break into mainstream pop culture to survive. Billionaire founders Jeffrey Katzenberg and Meg Whitman have set expectations sky-high for the launch; expect the press to slam it in April for failing to meet those expectations and for the platform to redeem itself as a few of its shows gain traction in the months that follow.

Meanwhile, live sports remains the last hope of broadcast TV networks as all other shows go to streaming. Consumers still value watching sports in real-time. Streaming services are coming for live sports too, however, and will make progress toward that goal in 2020. Three weeks ago, DAZN secured the German rights to the Champions League starting with the 2021/22 season, beating out broadcaster Sky, which has shown the matches for the last 20 years. Amazon and YouTube continue to explore bids for sports rights while Facebook and Twitter are stepping back from their efforts. YouTube’s “YouTube TV” and Disney’s “Hulu with Live TV” will cause more consumers to cancel cable TV subscriptions in 2020 and go streaming-only.

The winners in the film & TV sector right now are top production companies. The war for streaming video dominance is driving several of the world’s wealthiest companies (and individuals) to pour tens of billions of dollars into content. Large corporations own the distribution platforms here; the only “startups” to enter with strength — DAZN and Quibi — have been launched by billionaires and started with billion-dollar spending commitments. The entrepreneurial opportunity is on the content creation side: with producers creating shows, not with software developers creating platforms.

Gaming

The gaming market is predicted to grow nearly 9% year-over-year from $152 billion globally in 2019 to $165 billion in 2020, according to research firm Newzoo, with roughly 2.5 billion people playing games each year. Gaming is now widespread across all demographic groups. Casual mobile games are responsible for the largest portion of this (and 45% of industry revenue) but PC gaming continues to grow (+4% last year) and console gaming was the fastest growing category last year (+13%).

The big things to watch in gaming this year: cross-platform play, greater focus on social interaction in virtual worlds and the expansion of cloud gaming subscriptions.

Fortnite enticed consumers with the benefits of a cross-platform game that allows players to move between PC, mobile and console and it is setting expectations that other games do the same. Last October we saw the Call of Duty franchise come to mobile and reach a record 100 million downloads in its first week. This trend will continue and it will spread the free-to-play business model that is the norm in mobile games to many PC and console franchises in the process.

Gaming is moving to the social forefront. Many people are turning to massively multiplayer online games (MMOs) like Fortnite and PUBG to socialize, with gameplay as a secondary interest. Games are virtual worlds where players socialize, build things, and own assets much like in the real world. That results in an increasingly fluid interplay between socializing in games and in physical life, much as socializing in the virtual realms of social apps like Instagram or Twitter is now viewed as part of “real world” life.

Expect VCs to bet big on the thesis that “games are the new social networks” in 2020. Large investment firms that a year ago wrote off the category of gaming as “content bets” not fit for VC are now actively hunting for deals.

On this point, there are several startups (like Klang Games, Darewise Entertainment, Singularity 6 and Clockwork Labs) that raised millions in VC funding to create open world games that will launch (in beta at least) in 2020. These are virtual worlds where all players exist in the same instance of the world rather than being capped at 100 or so players per instance. Their vision of the future: digital realms where people will contribute to in-game economies, create friendships and ultimately earn income, just as in their “real-world” lives. Think next-gen Second Life. Expect them to take time to seed their worlds with early adopters in 2020 before any of them gain mainstream traction in 2021.

Few are as excited about social interaction in games as Facebook, it seems. Eager to own critical turf in the next paradigm shift of social media, Facebook will accelerate its gaming push this year. In late 2019, it acquired Madrid-based PlayGiga — which was working on cloud gaming and 5G technology — and the studio behind the hit VR game Beat Saber. It also secured exclusive rights to the VR versions of popular games like Ubisoft’s “Assassin’s Creed” and “Splinter Cell” for Oculus. Horizon, its virtual world for social interaction within VR, is expected to launch this year as well.

Facebook is betting on AR/VR as the paradigm shift in consumer computing that will replace mobile; it is pouring billions into its efforts to own the hardware and infrastructure pieces which are several years of R&D away from primetime. In the meantime, the consumer shift to social interaction in virtual worlds is occurring in established formats — mobile, PC, and console — so it will work to build the bridge for consumers from that to the future.

Lastly, cloud gaming was one of last year’s biggest headlines with the launch of Google Stadia and you should expect it to be again this year. By moving games to cloud hosting, consumers can play the highest quality games from lower quality devices, greatly expanding the market of potential players. By bundling many such games into a subscription offering, Google and others hope to entice consumers to try many more games.

As TechCrunch’s Lucas Matney argued, however, cloud gaming is likely a feature for existing subscription gaming platforms — namely PlayStation Now and Xbox Game Pass — more so than the basis for a new platform to differentiate. The minor latency inherent in playing a cloud-hosted game makes it unattractive to hardcore gamers (who would rather download the game). Next to Sony and Microsoft’s offerings, Stadia’s limited game selection fails to stand out. The competition will only heat up this year with the expected entry of Amazon. Google needs to launch the Stadia integration with YouTube and the State Share feature that it promoted in its Stadia announcement to really drive consumer interest.

Visual and audio effects


ByteDance & TikTok have secretly built a deepfakes maker


TikTok parent company ByteDance has built technology to let you insert your face into videos starring someone else. TechCrunch has learned that ByteDance has developed an unreleased feature using life-like deepfakes technology that the app’s code refers to as Face Swap. Code in both TikTok and its Chinese sister app Douyin asks users to take a multi-angle biometric scan of their face, then choose from a selection of videos they want to add their face to and share.

With ByteDance’s new Face Swap feature, users scan themselves, pick a video and have their face overlaid on the body of someone in the clip

The deepfakes feature, if launched in Douyin and TikTok, could create a more controlled environment where face swapping technology plus a limited selection of source videos can be used for fun instead of spreading misinformation. It might also raise awareness of the technology so more people are aware that they shouldn’t believe everything they see online. But it’s also likely to heighten fears about what ByteDance could do with such sensitive biometric data — similar to what’s used to set up Face ID on iPhones.

Several other tech companies have recently tried to consumerize watered-down versions of deepfakes. The app Morphin lets you overlay a computerized rendering of your face on actors in GIFs. Snapchat offered a FaceSwap option for years that would switch the visages of two people in frame, or replace one on camera with one from your camera roll, and there are standalone apps that do that too, like Face Swap Live. Then last month, TechCrunch spotted Snapchat’s new Cameos for inserting a real selfie into video clips it provides, though the results aren’t meant to look confusingly realistic.

Most problematic has been Chinese deepfakes app Zao, which uses artificial intelligence to blend one person’s face into another’s body as they move and synchronize their expressions. Zao went viral in September despite privacy and security concerns about how users’ facial scans might be abused. Zao was previously blocked by China’s WeChat for presenting “security risks.” [Correction: While “Zao” is mentioned in the discovered code, it refers to the general concept rather than a partnership between ByteDance and Zao.]

But ByteDance could bring convincingly life-like deepfakes to TikTok and Douyin, two of the world’s most popular apps with over 1.5 billion downloads.

Zao in the Chinese iOS App Store

Hidden inside TikTok and Douyin

TechCrunch received a tip about the news from Israeli in-app market research startup Watchful.ai. The company had discovered code for the deepfakes feature in the latest version of TikTok and Douyin’s Android apps. Watchful.ai was able to activate the code in Douyin to generate screenshots of the feature, though it’s not currently available to the public.

First, users scan their face into TikTok. This also serves as an identity check to make sure you’re only submitting your own face so you can’t make unconsented deepfakes of anyone else using an existing photo or a single shot of their face. By asking you to blink, nod and open and close your mouth while in focus and proper lighting, Douyin can ensure you’re a live human and create a manipulable scan of your face that it can stretch and move to express different emotions or fill different scenes.

You’ll then be able to pick from videos ByteDance claims to have the rights to use, and it will replace the face of whoever is in the clip with your own. You can then share or download the deepfake video, though it will include an overlaid watermark the company claims will help distinguish the content as not being real. I received confidential access to videos made by Watchful using the feature, and the face swapping is quite seamless. The motion tracking, expressions and color blending all look very convincing.

Watchful also discovered unpublished updates to TikTok and Douyin’s terms of service that cover privacy and usage of the deepfakes feature. Inside the U.S. version of TikTok’s Android app, English text in the code explains the feature and some of its terms of use:

Your facial pattern will be used for this feature. Read the Drama Face Terms of Use and Privacy Policy for more details. Make sure you’ve read and agree to the Terms of Use and Privacy Policy before continuing. 1. To make this feature secure for everyone, real identity verification is required to make sure users themselves are using this feature with their own faces. For this reason, uploaded photos can’t be used; 2. Your facial pattern will only be used to generate face-change videos that are only visible to you before you post it. To better protect your personal information, identity verification is required if you use this feature later. 3. This feature complies with Internet Personal Information Protection Regulations for Minors. Underage users won’t be able to access this feature. 4. All video elements related to this feature provided by Douyin have acquired copyright authorization.

Visitors at the booth of Douyin (TikTok) at the 2019 Smart Expo in Hangzhou, east China’s Zhejiang province, Oct. 18, 2019. Two U.S. senators have sent a letter to the U.S. national intelligence agency saying TikTok could pose a threat to U.S. national security and should be investigated. (Photograph by Costfoto / Barcroft Media via Getty Images)

A longer terms of use and privacy policy was also found in Chinese within Douyin. Translated into English, some highlights from the text include:

  • “The ‘face-changing’ effect presented by this function is a fictional image generated by the superimposition of our photos based on your photos. In order to show that the original work has been modified and the video generated using this function is not a real video, we will mark the video generated using this function. Do not erase the mark in any way.”

  • “The information collected during the aforementioned detection process and using your photos to generate face-changing videos is only used for live detection and matching during face-changing. It will not be used for other purposes . . . And matches are deleted immediately and your facial features are not stored.”

  • “When you use this function, you can only use the materials provided by us, you cannot upload the materials yourself. The materials we provide have been authorized by the copyright owner”.

  • “According to the ‘Children’s Internet Personal Information Protection Regulations’ and the relevant provisions of laws and regulations, in order to protect the personal information of children / youths, this function restricts the use of minors”.

We reached out to TikTok and Douyin for comment regarding the deepfakes feature, when it might launch, how the privacy of biometric scans is protected and the age limit. However, TikTok declined to answer those questions. Instead, a spokesperson insisted that “after checking with the teams I can confirm this is definitely not a function in TikTok, nor do we have any intention of introducing it. I think what you may be looking at is something slated for Douyin – your email includes screenshots that would be from Douyin, and a privacy policy that mentions Douyin. That said, we don’t work on Douyin here at TikTok.” They later told TechCrunch that “The inactive code fragments are being removed to eliminate any confusion,” which implicitly confirms that Face Swap code was found in TikTok.

A Douyin spokesperson tells TechCrunch “Douyin follows the laws and regulations of the jurisdictions in which it operates, which is China.” They denied that the Face Swap terms of service appear in TikTok despite TechCrunch reviewing code from the app showing those terms of service and the feature’s functionality.

This is suspicious, and doesn’t explain why code for the deepfakes feature, and special terms of service in English for the feature, appear in TikTok and not just Douyin, where the feature can already be activated and a longer terms of service was spotted. TikTok’s U.S. entity has previously denied complying with censorship requests from the Chinese government, in contradiction to sources who told The Washington Post that TikTok did censor some political and sexual content at China’s behest.

Consumerizing deepfakes

It’s possible that the deepfakes Face Swap feature never officially launches in China or the U.S. But it’s fully functional, even if unreleased, and demonstrates ByteDance’s willingness to embrace the controversial technology despite its reputation for misinformation and non-consensual pornography. At least it’s restricting the use of the feature by minors, only letting you face-swap yourself, and preventing users from uploading their own source videos. That avoids it being used to create dangerous misinformational videos like the slowed-down one making House Speaker Nancy Pelosi seem drunk, or clips of people saying things as if they were President Trump.

“It’s very rare to see a major social networking app restrict a new, advanced feature to their users 18 and over only,” Watchful.ai co-founder and CEO Itay Kahana tells TechCrunch. “These deepfake apps might seem like fun on the surface, but they should not be allowed to become trojan horses, compromising IP rights and personal data, especially personal data from minors who are overwhelmingly the heaviest users of TikTok to date.”

TikTok has already been banned by the U.S. Navy, and ByteDance’s acquisition and merger of Musical.ly into TikTok is under investigation by the Committee on Foreign Investment in the United States. Deepfake fears could further heighten scrutiny.

With the proper safeguards, though, face-changing technology could usher in a new era of user-generated content where the creator is always at the center of the action. It’s all part of a new trend of personalized media that could be big in 2020. Social media has evolved from selfies to Bitmoji to Animoji to Cameos, and now consumerized deepfakes. When there are infinite apps and videos and notifications to distract us, making us the star could be the best way to hold our attention.


Instagram finally launches 13+ age checkups

Posted by | Apps, coppa, Facebook, instagram, Instagram age check, Instagram age policy, Mobile, Policy, Social, TC | No Comments

Instagram is done playing dumb about users’ ages. After nine years, Instagram is finally embracing more responsibility to protect underage kids from the problems with social media. It will now ask new users to input their birth date and bar users younger than 13 from joining. However, it won’t be asking existing users their age, so Instagram will turn a blind eye to any underage kids already amongst its 1 billion members.

Instagram will later start using age info to offer education about settings and new privacy controls for younger users. It’s also adding the option to only allow people you follow to message you, add you to a group or reply to your Story.

Yesterday we published an opinion piece noting that “Instagram still doesn’t age-check kids. That must change.” after receiving no comment from Instagram when mobile researcher Jane Manchun Wong spotted Instagram prototyping an age-check feature. As the code she found indicated, Instagram will keep your birth date private, and sync it with your Facebook profile if you link your accounts.

Instagram had fallen far behind in protecting underage users. It’s relied on ignorance about users’ ages to avoid a $40,000 fine per violation of the Child Online Privacy Protection Act that bans services from collecting personal info from children younger than 13. “Asking for this information will help prevent underage people from joining Instagram, help us keep young people safer and enable more age-appropriate experiences overall,” Instagram notes.

Facebook, Snapchat and TikTok already require users to enter their birth date as soon as they start the signup process. TikTok built a whole separate section of its app where kids can watch videos but not post or comment, after it was fined $5.7 million by the FTC for violating COPPA.

As for why it took so long, an Instagram spokesperson tells TechCrunch, “Historically, we didn’t require people to tell us their age because we wanted Instagram to be a place where everyone can express themselves fully — irrespective of their identity.” That seems like a pretty thin excuse.

Adding the age check is a good first step for Instagram. But it should consider how it can do more to verify the ages users enter and keep out those who don’t belong, rather than leaving underage kids exposed to strangers across the app. Moving in line with industry standards is attaining minimum viable responsibility. But an app so appealing to younger users, and that deals in such sensitive data, should be leading on safety, not just following the herd.


Instagram still doesn’t age-check kids. That must change.

Posted by | Apps, coppa, Education, Facebook, Facebook age policy, Government, instagram, Instagram age policy, Mobile, Opinion, Policy, privacy, Snapchat, Social, TC, tiktok | No Comments

Instagram dodges child safety laws. By not asking users their age upon signup, it can feign ignorance about how old they are. That way, it can’t be held liable for $40,000 per violation of the Child Online Privacy Protection Act. The law bans online services from collecting personally identifiable information about kids under 13 without parental consent. Yet Instagram is surely stockpiling that sensitive info about underage users, shrouded by the excuse that it doesn’t know who’s who.

But here, ignorance isn’t bliss. It’s dangerous. User growth at all costs is no longer acceptable.

It’s time for Instagram to step up and assume responsibility for protecting children, even if that means excluding them. Instagram needs to ask users’ age at sign up, work to verify they volunteer their accurate birthdate by all practical means, and enforce COPPA by removing users it knows are under 13. If it wants to allow tweens on its app, it needs to build a safe, dedicated experience where the app doesn’t suck in COPPA-restricted personal info.

Minimum Viable Responsibility

Instagram is woefully behind its peers. Both Snapchat and TikTok require you to enter your age as soon as you start the signup process. This should really be the minimum regulatory standard, and lawmakers should close the loophole allowing services to skirt compliance by not asking. If users register for an account, they should be required to enter an age of 13 or older.

Instagram’s parent company Facebook has been asking for birthdate during account registration since its earliest days. Sure, it adds one extra step to signup, and impedes its growth numbers by discouraging kids from getting hooked early on the social network. But it also benefits Facebook’s business by letting it accurately age-target ads.

Most importantly, at least Facebook is making a baseline effort to keep out underage users. Of course, as kids do when they want something, some are going to lie about their age and say they’re old enough. Ideally, Facebook would go further and try to verify the accuracy of a user’s age using other available data, and Instagram should too.

Both Facebook and Instagram currently have moderators lock the accounts of any users they stumble across that they suspect are under 13. Users must upload government-issued proof of age to regain control. That policy only went into effect last year, after the U.K.’s Channel 4 reported a Facebook moderator was told to ignore seemingly underage users unless they explicitly declared they were too young or were reported for being under 13. An extreme approach would be to require this for all signups, though that might be expensive, slow, significantly hurt signup rates, and annoy of-age users.

Instagram is currently on the other end of the spectrum. Doing nothing around age-gating seems recklessly negligent. When asked for comment about why it doesn’t ask users’ ages, how it stops underage users from joining, and whether it’s in violation of COPPA, Instagram declined to comment. The fact that Instagram claims not to know users’ ages seems to be in direct contradiction to it offering marketers custom ad targeting by age, such as reaching just those that are 13.

Instagram Prototypes Age Checks

Luckily, this could all change soon.

Mobile researcher and frequent TechCrunch tipster Jane Manchun Wong has spotted Instagram code inside its Android app that shows it’s prototyping an age-gating feature that rejects users under 13. It’s also tinkering with requiring your Instagram and Facebook birthdates to match. Instagram gave me a “no comment” when I asked whether these features would officially roll out to everyone.

Code in the app explains that “Providing your birthday helps us make sure you get the right Instagram experience. Only you will be able to see your birthday.” Beyond just deciding who to let in, Instagram could use this info to make sure users under 18 aren’t messaging with adult strangers, that users under 21 aren’t seeing ads for alcohol brands, and that potentially explicit content isn’t shown to minors.

Instagram’s inability to do any of this clashes with its and Facebook’s big talk this year about their commitment to safety. Instagram has worked to improve its approach to bullying, drug sales, self-harm and election interference, yet there’s been not a word about age-gating.

Meanwhile, underage users promote themselves on pages for hashtags like #12YearOld where it’s easy to find users who declare they’re that age right in their profile bio. It took me about 5 minutes to find creepy “You’re cute” comments from older men on seemingly underage girls’ photos. Clearly Instagram hasn’t been trying very hard to stop them from playing with the app.

Illegal Growth

I brought up the same unsettling situations on Musical.ly, now known as TikTok, to its CEO Alex Zhu on stage at TechCrunch Disrupt in 2016. I grilled Zhu about letting 10-year-olds flaunt their bodies on his app. He tried to claim parents run all of these kids’ accounts, and got frustrated as we dug deeper into Musical.ly’s failures here.

Thankfully, TikTok was eventually fined $5.7 million this year for violating COPPA and forced to change its ways. As part of its response, TikTok started showing an age gate to both new and existing users, removed all videos of users under 13, and restricted those users to a special TikTok Kids experience where they can’t post videos, comment, or provide any COPPA-restricted personal info.

If even a Chinese social media app that Facebook’s CEO has warned threatens free speech with censorship is doing a better job protecting kids than Instagram, something’s gotta give. Instagram could follow suit, building a special section of its apps just for kids where they’re quarantined from conversing with older users that might prey on them.

Perhaps Facebook and Instagram’s hands-off approach stems from the fact that CEO Mark Zuckerberg doesn’t think the ban on under-13-year-olds should exist. Back in 2011, he said “That will be a fight we take on at some point . . . My philosophy is that for education you need to start at a really, really young age.” He’s put that into practice with Messenger Kids which lets 6 to 12-year-olds chat with their friends if parents approve.

The Facebook family of apps’ ad-driven business model and earnings depend on constant user growth that could be inhibited by stringent age gating. It surely doesn’t want to admit to parents it’s let kids slide into Instagram, that advertisers were paying to reach children too young to buy anything, and to Wall Street that it might not have 2.8 billion legal users across its apps as it claims.

But given Facebook and Instagram’s privacy scandals, addictive qualities, and impact on democracy, it seems like proper age-gating should be a priority as well as the subject of more regulatory scrutiny and public concern. Society has woken up to the harms of social media, yet Instagram erects no guards to keep kids from experiencing those ills for themselves. Until it makes an honest effort to stop kids from joining, the rest of Instagram’s safety initiatives ring hollow.


Facebook staff demand Zuckerberg limit lies in political ads

Posted by | 2020 Election, Advertising Tech, Alexandria Ocasio-Cortez, Apps, Facebook, Facebook ads, Facebook Politics, Government, Mark Zuckerberg, Media, Mobile, Opinion, payments, Personnel, Policy, Social | No Comments

Submit campaign ads to fact checking, limit microtargeting, cap spending, observe silence periods or at least warn users. These are the solutions Facebook employees put forward in an open letter pleading with CEO Mark Zuckerberg and company leadership to address misinformation in political ads.

The letter, obtained by The New York Times’ Mike Isaac, insists that “Free speech and paid speech are not the same thing . . . Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for.” The letter was posted to Facebook’s internal collaboration forum a few weeks ago.

The sentiments echo what I called for in a TechCrunch opinion piece on October 13th calling on Facebook to ban political ads. Unfettered misinformation in political ads on Facebook lets politicians and their supporters spread inflammatory and inaccurate claims about their views and their rivals while racking up donations to buy more of these ads.

The social network can still offer freedom of expression to political campaigns on their own Facebook Pages while limiting the ability of the richest and most dishonest to pay to make their lies the loudest. We suggested that if Facebook won’t drop political ads, they should be fact checked and/or use an array of generic “vote for me” or “donate here” ad units that don’t allow accusations. We also criticized how microtargeting of communities vulnerable to misinformation and instant donation links make Facebook ads more dangerous than equivalent TV or radio spots.

The Facebook CEO, Mark Zuckerberg, testified before the House Financial Services Committee on Wednesday October 23, 2019 in Washington, D.C. (Photo by Aurora Samperio/NurPhoto via Getty Images)

More than 250 employees of Facebook’s 35,000 staffers have signed the letter, which declares, “We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.” It suggests the current policy undermines Facebook’s election integrity work, confuses users about where misinformation is allowed, and signals Facebook is happy to profit from lies.

The solutions suggested include:

  1. Don’t accept political ads unless they’re subject to third-party fact checks
  2. Use visual design to more strongly differentiate between political ads and organic non-ad posts
  3. Restrict microtargeting for political ads, including the use of Custom Audiences, since microtargeting hides ads from the public scrutiny that Facebook claims keeps politicians honest
  4. Observe pre-election silence periods for political ads to limit the impact and scale of misinformation
  5. Limit ad spending per politician or candidate, with spending by them and their supporting political action committees combined
  6. Make it more visually clear to users that political ads aren’t fact-checked

A combination of these approaches could let Facebook stop short of banning political ads without allowing rampant misinformation or having to police individual claims.

Facebook’s response to the letter was “We remain committed to not censoring political speech, and will continue exploring additional steps we can take to bring increased transparency to political ads.” But that straw-mans the letter’s request. Employees aren’t asking for politicians to be kicked off Facebook or have their posts/ads deleted. They’re asking for warning labels and limits on paid reach. That’s not censorship.

Zuckerberg had stood resolute on the policy despite backlash from the press and lawmakers, including Representative Alexandria Ocasio-Cortez (D-NY). She left him tongue-tied during congressional testimony when she asked exactly what kinds of misinfo were allowed in ads.

But then Friday, Facebook blocked an ad designed to test its limits by claiming Republican Lindsey Graham had voted for Ocasio-Cortez’s Green New Deal, which he actually opposes. Facebook told Reuters it will fact-check PAC ads.

One sensible approach for politicians’ ads would be for Facebook to ramp up fact-checking, starting with presidential candidates until it has the resources to scan more. Those fact-checked as false should receive an interstitial warning blocking their content rather than just a “false” label. That could be paired with giving political ads a bigger disclaimer without making them too prominent-looking in general and only allowing targeting by state.

Deciding on potential spending limits and silence periods would be messier. Low limits could level the playing field, and broad silence periods, especially during voting periods, could prevent voter suppression. Perhaps these specifics should be left to Facebook’s upcoming independent Oversight Board, which acts as a supreme court for moderation decisions and policies.

Zuckerberg’s core argument for the policy is that over time, history bends toward more speech, not censorship. But that succumbs to the utopian fallacy that technology evenly advantages the honest and dishonest. In reality, sensational misinformation spreads much further and faster than level-headed truth. Microtargeted ads with thousands of variants undercut and overwhelm the democratic apparatus designed to punish liars, while partisan news outlets counter attempts to call them out.

Zuckerberg wants to avoid Facebook becoming the truth police. But as we and employees have put forward, there is a progressive approach to limiting misinformation if he’s willing to step back from his philosophical orthodoxy.

The full text of the letter from Facebook employees to leadership about political ads can be found below, via The New York Times:

We are proud to work here.

Facebook stands for people expressing their voice. Creating a place where we can debate, share different opinions, and express our views is what makes our app and technologies meaningful for people all over the world.

We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.”

This is our company.

We’re reaching out to you, the leaders of this company, because we’re worried we’re on track to undo the great strides our product teams have made in integrity over the last two years. We work here because we care, because we know that even our smallest choices impact communities at an astounding scale. We want to raise our concerns before it’s too late.

Free speech and paid speech are not the same thing.

Misinformation affects us all. Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for. We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.

Allowing paid civic misinformation to run on the platform in its current state has the potential to:

— Increase distrust in our platform by allowing similar paid and organic content to sit side-by-side — some with third-party fact-checking and some without. Additionally, it communicates that we are OK profiting from deliberate misinformation campaigns by those in or seeking positions of power.

— Undo integrity product work. Currently, integrity teams are working hard to give users more context on the content they see, demote violating content, and more. For the Election 2020 Lockdown, these teams made hard choices on what to support and what not to support, and this policy will undo much of that work by undermining trust in the platform. And after the 2020 Lockdown, this policy has the potential to continue to cause harm in coming elections around the world.

Proposals for improvement

Our goal is to bring awareness to our leadership that a large part of the employee body does not agree with this policy. We want to work with our leadership to develop better solutions that both protect our business and the people who use our products. We know this work is nuanced, but there are many things we can do short of eliminating political ads altogether.

These suggestions are all focused on ad-related content, not organic.

1. Hold political ads to the same standard as other ads.

a. Misinformation shared by political advertisers has an outsized detrimental impact on our community. We should not accept money for political ads without applying the standards that our other ads have to follow.

2. Stronger visual design treatment for political ads.

a. People have trouble distinguishing political ads from organic posts. We should apply a stronger design treatment to political ads that makes it easier for people to establish context.

3. Restrict targeting for political ads.

a. Currently, politicians and political campaigns can use our advanced targeting tools, such as Custom Audiences. It is common for political advertisers to upload voter rolls (which are publicly available in order to reach voters) and then use behavioral tracking tools (such as the FB pixel) and ad engagement to refine ads further. The risk with allowing this is that it’s hard for people in the electorate to participate in the “public scrutiny” that we’re saying comes along with political speech. These ads are often so micro-targeted that the conversations on our platforms are much more siloed than on other platforms. Currently we restrict targeting for housing and education and credit verticals due to a history of discrimination. We should extend similar restrictions to political advertising.

4. Broader observance of the election silence periods

a. Observe election silence in compliance with local laws and regulations. Explore a self-imposed election silence for all elections around the world to act in good faith and as good citizens.

5. Spend caps for individual politicians, regardless of source

a. FB has stated that one of the benefits of running political ads is to help more voices get heard. However, high-profile politicians can out-spend new voices and drown out the competition. To solve for this, if you have a PAC and a politician both running ads, there would be a limit that would apply to both together, rather than to each advertiser individually.

6. Clearer policies for political ads

a. If FB does not change the policies for political ads, we need to update the way they are displayed. For consumers and advertisers, it’s not immediately clear that political ads are exempt from the fact-checking that other ads go through. It should be easily understood by anyone that our advertising policies about misinformation don’t apply to original political content or ads, especially since political misinformation is more destructive than other types of misinformation.

Therefore, the section of the policies should be moved from “prohibited content” (which is not allowed at all) to “restricted content” (which is allowed with restrictions).

We want to have this conversation in an open dialog because we want to see actual change.

We are proud of the work that the integrity teams have done, and we don’t want to see that undermined by policy. Over the coming months, we’ll continue this conversation, and we look forward to working towards solutions together.

This is still our company.


Google’s Play Store is giving an age-rating finger to Fleksy, a Gboard rival 🖕

Posted by | Android, antitrust, Apps, competition, emoji, Europe, european union, fleksy, gboard, Google, Google Play, Marketplaces, online marketplaces, play, play store, Policy, smartphone, spain, Thingthing | No Comments

Platform power is a helluva drug. Do a search on Google’s Play Store in Europe and you’ll find the company’s own Gboard app has an age rating of PEGI 3 — aka the pan-European game information labelling system, which signifies content is suitable for all age groups.

PEGI 3 means it may still contain a little cartoon violence. Say, for example, an emoji fist or middle finger.

Now do a search on Play for the rival Fleksy keyboard app and you’ll find it has a PEGI 12 age rating. This label signifies the rated content can contain slightly more graphic fantasy violence and mild bad language.

The discrepancy in labelling suggests there’s a material difference between Gboard and Fleksy — in terms of the content you might encounter. Yet both are pretty similar keyboard apps, with features like predictive emoji and baked-in GIFs. Gboard also lets you create custom emoji, while Fleksy puts mini apps at your fingertips.

A more major difference is that Gboard is made by Play Store owner and platform controller, Google. Whereas Fleksy is an indie keyboard that since 2017 has been developed by ThingThing, a startup based out of Spain.

Fleksy’s keyboard didn’t use to carry a 12+ age rating — this is a new development. Not because its content changed, but because Google is enforcing its Play Store policies differently.

The Fleksy app, which has been on the Play Store for around eight years at this point — and per Play Store install stats has had more than 5M downloads to date — was rated PEGI 3 until earlier this month. But then Google stepped in and forced the team to up the rating to 12, which means the Play Store description for Fleksy in Europe now rates it PEGI 12 and specifies it contains “Mild Swearing”.

The Play store’s system for age ratings requires developers to fill in a content ratings form, responding to a series of questions about their app’s content, in order to obtain a suggested rating.

Fleksy’s team have done so over the years — and come up with the PEGI 3 rating without issue. But this month they found they were being issued the questionnaire multiple times, and then that their latest app update was blocked without explanation — meaning they had to reach out to Play Developer Support to ask what was going wrong.

After some email back and forth with support staff they were told that the app contained age inappropriate emoji content. Here’s what Google wrote:

During review, we found that the content rating is not accurate for your app… Content ratings are used to inform consumers, especially parents, of potentially objectionable content that exists within an app.

For example, we found that your app contains content (e.g. emoji) that is not appropriate for all ages. Please refer to the attached screenshot.

In the attached screenshot Google’s staff fingered the middle finger emoji as the reason for blocking the update:

Image: Fleksy Play review emoji violation

“We never thought a simple emoji is meant to be 12+,” ThingThing CEO Olivier Plante tells us.

With their update rejected the team was forced to raise the rating of Fleksy to PEGI 12 — just to get their update unblocked so they could push out a round of bug fixes for the app.

That’s not the end of the saga, though. Google’s Play Store team is still not happy with the regional age rating for Fleksy — and wants to push the rating even higher — claiming, in a subsequent email, that “your app contains mature content (e.g. emoji) and should have higher rating”.

Now, to be crystal clear, Google’s own Gboard app also contains the middle finger emoji. We are 100% sure of this because we double-checked…

Emojis available on Google’s Gboard keyboard, including the ‘screw you’ middle finger. Photo credit: Romain Dillet/TechCrunch

This is not surprising. Pretty much any smartphone keyboard — native or add-on — would contain this symbol because it’s a totally standard emoji.
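The middle finger is, in fact, an ordinary Unicode character — U+1F595, added in Unicode 7.0 back in 2014 — which is why it ships with essentially every keyboard and emoji font. A quick Python check (an illustrative sketch, not something from the article) confirms its codepoint and official name:

```python
import unicodedata

# U+1F595 REVERSED HAND WITH MIDDLE FINGER EXTENDED (Unicode 7.0, 2014)
ch = "\U0001F595"

print(hex(ord(ch)))          # 0x1f595
print(unicodedata.name(ch))  # REVERSED HAND WITH MIDDLE FINGER EXTENDED
```

In other words, any app that renders standard emoji — Gboard included — exposes exactly the same character.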

But when Plante pointed out to Google that the middle finger emoji can be found in both Fleksy’s and Gboard’s keyboards — and asked them to drop Fleksy’s rating back to PEGI 3 like Gboard — the Play team did not respond.

The next rating up, PEGI 16, means the depiction of violence (or sexual activity) “reaches a stage that looks the same as would be expected in real life”, per official guidance on the labels, while the use of bad language can be “more extreme”, and content may include the use of tobacco, alcohol or illegal drugs.

And remember Google is objecting to “mature” emoji. So perhaps its app reviewers have been clutching at their pearls after finding other standard emojis which depict stuff like glasses of beer, martinis and wine… 🤦‍♀️

Over on the US Play Store, meanwhile, the Fleksy app is rated “teen”.

While Gboard is — yup, you guessed it! — ‘E for Everyone’… 🤔

Plante says the double standard Google is imposing on its own app vs third party keyboards is infuriating, and he accuses the platform giant of anti-competitive behavior.

“We’re all-in for competition, it’s healthy… but incumbent players like Google playing it unfair, making their keyboard 3+ with identical emojis, is another showcase of abuse of power,” he tells TechCrunch.

A quick search of the Play Store for other third party keyboard apps unearths a mixture of ratings — most rated PEGI 3 (such as Microsoft-owned SwiftKey and Grammarly Keyboard); some PEGI 12 (such as Facemoji Emoji Keyboard which, per the Play Store’s summary, contains “violence”).

Only one that we could find among the top listed keyboard apps has a PEGI 16 rating.

This is an app called Classic Big Keyboard, whose listing specifies it contains “Strong Language” (and what keyboard might not, frankly!?). Though, judging by the Play Store screenshots, it appears to be a fairly bog-standard keyboard that simply offers adjustable key sizes — as well as, yes, standard emoji.

“It came as a surprise,” says Plante describing how the trouble with Play started. “At first, in the past weeks, we started to fill in the rating reviews and I got constant emails the rating form needed to be filled with no details as why we needed to revise it so often (6 times) and then this last week we got rejected for the same reason. This emoji was in our product since day 1 of its existence.”

Asked whether he can think of any trigger for Fleksy to come under scrutiny by Play Store reviewers now, he says: “We don’t know why but for sure we’re progressing nicely in the penetration of our keyboard. We’re growing fast for sure but unsure this is the reason.”

“I suspect someone is doubling down on competitive keyboards over there as they lost quite some grip of their search business via the alternative browsers in Europe…. Perhaps there is a correlation?” he adds, referring to the European Commission’s antitrust decision against Google Android last year, when the tech giant was hit with a $5BN fine for various breaches of EU competition law — a fine which it’s appealing.

“I’ll continue to fight for a fair market and am glad that Europe is leading the way in this,” adds Plante.

Following the EU antitrust ruling against Android, which Google is legally compelled to comply with during any appeals process, it now displays choice screens to Android users in Europe — offering alternative search engines and browsers for download, alongside Google’s own dominant search and browser (Chrome) apps.

However, the company still retains plenty of levers it can pull to influence the presentation of content within its dominant Play Store — shaping how rival apps are perceived by Android users, and so whether or not they choose to download them.

So requiring that a keyboard app rival gets badged with a much higher age rating than Google’s own keyboard app isn’t a good look to say the least.

We reached out to Google for an explanation about the discrepancy in age ratings between Fleksy and Gboard and will update this report with any further response. At first glance a spokesman agreed with us that the situation looks odd.
