
Apple fails to block porn & gambling ‘Enterprise’ apps


Facebook and Google were far from the only developers openly abusing Apple’s Enterprise Certificate program, which is meant for companies offering employee-only apps. A TechCrunch investigation uncovered a dozen hardcore pornography apps and a dozen real-money gambling apps that escaped Apple’s oversight. The developers passed Apple’s weak Enterprise Certificate screening process or piggybacked on a legitimate approval, allowing them to sidestep the App Store and Cupertino’s traditional safeguards designed to keep iOS family-friendly. Without proper oversight, they were able to operate these vice apps that blatantly flout Apple’s content policies.

The situation offers further evidence that Apple has been neglecting its responsibility to police the Enterprise Certificate program, leading to its exploitation to circumvent App Store rules and forbidden categories. For a company whose CEO Tim Cook frequently criticizes competitors for data misuse and policy fiascos like Facebook’s Cambridge Analytica scandal, Apple’s failure to catch and block these porn and gambling apps demonstrates it has work to do itself.

Porn apps PPAV and iPorn (iP) continue to abuse Apple’s Enterprise Certificate program to sidestep the App Store’s ban on pornography. Nudity censored by TechCrunch

TechCrunch broke the news last week that Facebook and Google had broken the rules of Apple’s Enterprise Certificate program to distribute apps that installed VPNs or demanded root network access to collect all of a user’s traffic and phone activity for competitive intelligence. That led Apple to briefly revoke Facebook and Google’s Certificates, thereby disabling the companies’ legitimate employee-only apps, which caused office chaos.

Apple issued a fiery statement that “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” Meanwhile, dozens of prohibited apps were available for download from shady developers’ websites.

Apple offers a lookup tool for finding any business’ D-U-N-S number, allowing shady developers to forge their Enterprise Certificate application

The problem starts with Apple’s lax standards for accepting businesses to the enterprise program. The program is for companies to distribute apps only to their employees, and its policy explicitly states “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers.” Yet Apple doesn’t adequately enforce these policies.

Developers simply have to fill out an online form and pay $299 to Apple, as detailed in this guide from Calvium. The form merely asks developers to pledge that they’re building an Enterprise Certificate app for internal employee-only use, that they have the legal authority to register the business and that they have an up-to-date Mac, and to provide a D-U-N-S business ID number. You can easily Google a business’ address details and look up its D-U-N-S ID number with a tool Apple provides. After setting up an Apple ID and agreeing to its terms of service, businesses wait one to four weeks for a phone call from Apple asking them to reconfirm that they’ll only distribute apps internally and are authorized to represent their business.

With just a few lies on the phone and web plus some Googleable public information, sketchy developers can get approved for an Apple Enterprise Certificate.

Real-money gambling apps openly advertise that they have iOS versions available that abuse the Enterprise Certificate program

Given the number of policy-violating apps being distributed to non-employees using registrations for businesses unrelated to their apps, it’s clear that Apple needs to tighten its oversight of the Enterprise Certificate program. TechCrunch found thousands of sites offering downloads of “sideloaded” Enterprise apps, and investigating just a sample uncovered numerous abuses. Using a standard un-jailbroken iPhone, TechCrunch was able to download and verify 12 pornography and 12 real-money gambling apps over the past week that were abusing Apple’s Enterprise Certificate system to offer apps prohibited from the App Store. These apps either offered streaming or pay-per-view hardcore pornography, or allowed users to deposit, win and withdraw real money — all of which would be prohibited if the apps were distributed through the App Store.

A whole screen of prohibited sideloaded porn and gambling apps TechCrunch was able to download through the Enterprise Certificate system

In an apparent effort to step up enforcement in the wake of TechCrunch’s investigation into Facebook and Google’s Enterprise Certificate violations, Apple appears to have disabled some of these apps in the past few days, but many remain operational. The porn apps we discovered that are still functional include Swag, PPAV, Banana Video, iPorn (iP), Pear, Poshow and AVBobo, while the still-functional gambling apps include RD Poker and RiverPoker.

The Enterprise Certificates for these apps were rarely registered to company names related to their true purpose. The only example was Lucky8 for gambling. Many of the apps used innocuous names like Interprener, Mohajer International Communications, Sungate and AsianLiveTech. Yet others seemed to have forged or stolen credentials to sign up under the names of completely unrelated but legitimate businesses. Dragon Gaming was registered to U.S. gravel supplier CSL-LOMA. As for porn apps, PPAV’s certificate is assigned to the Nanjing Jianye District Information Center, Douyin Didi was licensed under Moscow motorcycle company Akura OOO, Chinese app Pear is registered to Grupo Arcavi Sociedad Anonima in Costa Rica and AVBobo covers its tracks with the name of a Fresno-based company called Chaney Cabinet & Furniture Co.

You can see a full list of the policy-violating apps we found:

Apple refused to explain how these apps slipped into the Enterprise Certificate program. It declined to say if it does any follow-up compliance audits on developers in the program or if it plans to change its admission process. An Apple spokesperson did provide this statement, though, indicating it will work to shut down these apps and potentially ban the developers from building iOS products entirely:

“Developers that abuse our enterprise certificates are in violation of the Apple Developer Enterprise Program Agreement and will have their certificates terminated, and if appropriate, they will be removed from our Developer Program completely. We are continuously evaluating the cases of misuse and are prepared to take immediate action.”

TechCrunch asked Guardian Mobile Firewall’s security expert Will Strafach to look at the apps we found and their Certificates. Strafach’s initial analysis of the apps didn’t find any glaring evidence that the apps misappropriate data, but they all do violate Apple’s Certificate policies and provide content banned from the App Store. “At the moment, I have noticed that action is slower regarding apps available from an independent website and not these easy-to-scrape app directories” that occasionally crop up offering centralized access to a plethora of sideloaded apps.

Porn app AVBobo uses an Enterprise Certificate registered to Fresno’s Chaney Cabinet & Furniture Co

Strafach explained how “A significant number of the Enterprise Certificates used to sign publicly available apps are referred to informally as ‘rogue certificates’ as they are often not associated with the named company. There are no hard facts to confirm the manner in which these certificates originate, but the result of the initial step is that individuals will gain control of an Enterprise Certificate attributable to a corporation, usually China/HK-based. Code services are then sold quietly on Chinese language marketplaces, resulting in sometimes 5 to 10 (or more) distinct apps being signed with the same Enterprise Certificate.” We found Sungate and Mohajer Certificates were farmed out for use by multiple apps in this way.

“In my experience, Enterprise Certificate signed apps available on independent websites have not been harmful to users in a malicious sense, only in the sense that they have broken the rules,” Strafach notes. “Enterprise Certificate signed apps from these Chinese ‘helper’ tools, however, have been a mixed bag. For example, in multiple cases, we have noticed such apps with additional tracking and adware code injected into the original now-repackaged app being offered.”

Porn apps like Swag openly advertise their availability on iOS

Interestingly, none of the off-limits apps we discovered asked users to install a VPN like Google Screenwise, let alone root network access like Facebook Research. TechCrunch reported this month that both apps had been paying users to snoop on their private data. But the iOS versions were banned by Apple after we exposed their policy violations, and Apple also caused chaos at Facebook and Google’s offices by temporarily shutting down their employee-only iOS apps too. The fact that these two U.S. tech giants were more aggressive about collecting user data than shady Chinese porn and gambling apps is telling. “This is a cat-and-mouse game,” Strafach concluded regarding Apple’s struggle to keep out these apps. But given the rampant abuse, it seems Apple could easily add stronger verification processes and more check-ups to the Enterprise Certificate program. Developers should have to do more to prove their apps’ connection with the Certificate holder, and Apple should regularly audit certificates to see what kind of apps they’re powering.

Back when Facebook missed Cambridge Analytica’s abuse of its app platform, Cook was asked what he’d do in Mark Zuckerberg’s shoes. “I wouldn’t be in this situation” Cook frankly replied. But if Apple can’t keep porn and casinos off iOS, perhaps Cook shouldn’t be lecturing anyone else.


Facebook will reveal who uploaded your contact info for ad targeting


Facebook’s crackdown on non-consensual ad targeting last year will finally produce results. In March, TechCrunch discovered Facebook planned to require advertisers to pledge that they had permission to upload someone’s phone number or email address for ad targeting. That tool debuted in June, though there was no verification process and Facebook just took businesses at their word despite the financial incentive to lie. In November, Facebook launched a way for ad agencies and marketing tech developers to specify who they were buying promotions “on behalf of.” Soon that information will finally be revealed to users.

Facebook’s new Custom Audiences transparency feature shows when your contact info was uploaded and by whom, and if it was shared between brands and partners

Facebook previously only revealed what brand was using your contact info for targeting, not who uploaded it or when

Starting February 28th, Facebook’s “Why am I seeing this?” button in the drop-down menu of feed posts will reveal more than the brand that paid for the ad, some biographical details they targeted and if they’d uploaded your contact info. Facebook will start to show when your contact info was uploaded, if it was by the brand or one of their agency/developer partners and when access was shared between partners. A Facebook spokesperson tells me the goal is to keep giving people a better understanding of how advertisers use their information.

This new level of transparency could help users pinpoint what caused a brand to get hold of their contact info. That might help them change their behavior to stay more private. The system could also help Facebook zero in on agencies or partners that are constantly uploading contact info and might not have obtained it legitimately. Apparently seeking not to dredge up old privacy problems, Facebook didn’t publish a blog post about the change but simply announced it in a post to its Facebook Advertiser Hub Page.

The move comes in the wake of Facebook attaching immediately visible “paid for by” labels to more political ads to defend against election interference. With so many users concerned about how Facebook exploits their data, the Custom Audiences transparency feature could provide a small boost of confidence in a time when people have little faith in the social network’s privacy practices.


Facebook now lets everyone unsend messages for 10 minutes


Facebook has finally made good on its promise to let users unsend chats after TechCrunch discovered Mark Zuckerberg had secretly retracted some of his Facebook Messages from recipients. Today Facebook Messenger globally rolls out “Remove for everyone” to help you pull back typos, poor choices, embarrassing thoughts or any other message.

For up to 10 minutes after sending a Facebook Message, the sender can tap on it and they’ll find the delete button has been replaced by “Remove for you,” but there’s now also a “Remove for everyone” option that pulls the message from recipients’ inboxes. They’ll see an alert that you removed a message in its place, and can still flag the message to Facebook, which will retain the content briefly to see if it’s reported. The feature could make people more comfortable having honest conversations or using Messenger for flirting since they can second-guess what they send, but it won’t let people change ancient history.

The company abused its power by altering the history of Zuckerberg’s Facebook messages in a way that email or other communication mediums wouldn’t allow. Yet Facebook refused to say whether it will now resume removing executives’ messages from recipients long after they’re delivered, despite telling TechCrunch in April that “until this feature is ready, we will no longer be deleting any executives’ messages.”

For a quick recap, here’s how Facebook got to Unsend:

- Facebook Messenger never had an Unsend option, except in its encrypted Secret messaging product, where you can set an expiration timer on chats, and in Instagram Direct.

- In April 2018, TechCrunch reported that some of Mark Zuckerberg’s messages had been removed from the inboxes of recipients, including non-employees. There was no trace of the chats in the message thread, leaving his conversation partners looking like they were talking to themselves, but email receipts proved the messages had been sent but later disappeared.

- Facebook claimed this was partly because it was “limiting the retention period for Mark’s messages” for security purposes in the wake of the Sony Pictures hack, yet it never explained why only some messages to some people had been removed.

- The next morning, Facebook changed its tune and announced it’d build an Unsend button for everyone, providing this statement: “We have discussed this feature several times . . . We will now be making a broader delete message feature available. This may take some time. And until this feature is ready, we will no longer be deleting any executives’ messages. We should have done this sooner — and we’re sorry that we did not.”

- Six months later in October 2018, Facebook still hadn’t launched Unsend, but then TechCrunch found Facebook had been prototyping the feature.

- In November, Facebook started to roll out the feature with the current “Remove for everyone” design and 10-minute limit.

- Now every Messenger user globally will get the Unsend feature.

So will Facebook start retracting executives’ messages again? It’d only say that the new feature would be available to both users and employees. But in Zuckerberg’s case, messages from years ago were removed in a way users still aren’t allowed to. Remove for everyone could make messaging on Facebook a little less anxiety-inducing. But it shouldn’t have taken Facebook being caught stealing from the inboxes of its users to get it built.


Senator Warner calls on Zuckerberg to support market research consent rules


In response to TechCrunch’s investigation of Facebook paying teens and adults to install a VPN that lets it analyze all their phone’s traffic, Senator Mark Warner (D-VA) has sent a letter to Mark Zuckerberg. It admonishes Facebook for not spelling out exactly which data the Facebook Research app was collecting or giving users adequate information necessary to determine if they should accept payment in exchange for selling their privacy. Following our report, Apple banned Facebook’s Research app from iOS and shut down its internal employee-only workplace apps too as punishment, causing mayhem in Facebook’s office.

Warner wrote to Zuckerberg, “In both the case of Onavo and the Facebook Research project, I have concerns that users were not appropriately informed about the extent of Facebook’s data-gathering and the commercial purposes of this data collection. Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me.”

Warner is working on writing new laws to govern data collection initiatives like Facebook Research. He asks Zuckerberg, “Will you commit to supporting legislation requiring individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users?”

Senator Blumenthal’s fierce statement

Meanwhile, Senator Richard Blumenthal (D-CT) provided TechCrunch with a fiery statement regarding our investigation. He calls Facebook anti-competitive (a charge that could fuel calls to regulate or break up the company), says the FTC must address the issue and notes that he plans to work with Congress to safeguard teens’ privacy:

“Wiretapping teens is not research, and it should never be permissible. This is yet another astonishing example of Facebook’s complete disregard for data privacy and eagerness to engage in anti-competitive behavior. Instead of learning its lesson when it was caught spying on consumers using the supposedly ‘private’ Onavo VPN app, Facebook rebranded the intrusive app and circumvented Apple’s attempts to protect iPhone users. Facebook continues to demonstrate its eagerness to look over everyone’s shoulder and watch everything they do in order to make money. 

Mark Zuckerberg’s empty promises are not enough. The FTC needs to step up to the plate, and the Onavo app should be part of its investigation. I will also be writing to Apple and Google on Facebook’s egregious behavior, and working in Congress to make sure that teens are protected from Big Tech’s privacy intrusions.”

Senator Markey says stop surveilling teens

And finally, Senator Edward J. Markey (D-MA) requests that Facebook stop recruiting teens for its Research program, and notes he’ll push his “Do Not Track Kids” act in Congress:

“It is inherently manipulative to offer teens money in exchange for their personal information when younger users don’t have a clear understanding how much data they’re handing over and how sensitive it is. I strongly urge Facebook to immediately cease its recruitment of teens for its Research Program and explicitly prohibit minors from participating. Congress also needs to pass legislation that updates children’s online privacy rules for the 21st century. I will be reintroducing my ‘Do Not Track Kids Act’ to update the Children’s Online Privacy Protection Act by instituting key privacy safeguards for teens. 

But my concerns also extend to adult users. I am alarmed by reports that Facebook is not providing participants with complete information about the extent of the information that the company can access through this program. Consumers deserve simple and clear explanations of what data is being collected and how it is being used.”

The senators’ statements do go a bit overboard. Though Facebook Research was aggressively competitive and potentially misleading, Blumenthal calling it “anti-competitive” is a stretch. And Warner’s question of whether “any user reasonably understood that they were giving Facebook root device access through the enterprise certificate,” or understood that Facebook uses the data to track competitors, overreaches a bit. Surely some savvy technologists understood, but the real question is whether the teens and everyone else did.

Facebook isn’t the only one paying users to analyze all their phone data. TechCrunch found that Google had a similar program called Screenwise Meter. Though Google was more upfront about it, it also appears to have violated Apple’s employee-only Enterprise Certificate rules. We may be seeing the start of an industry-wide crackdown on market research surveillance apps that dangle gift cards in front of users to get them to give up a massive amount of privacy.

Warner’s full letter to Zuckerberg can be found below:

Dear Mr. Zuckerberg: 

I write to express concerns about allegations of Facebook’s latest efforts to monitor user activity. On January 29th, TechCrunch revealed that under the auspices of partnerships with beta testing firms, Facebook had begun paying users aged 13 to 35 to install an enterprise certificate, allowing Facebook to intercept all internet traffic to and from user devices. According to subsequent reporting by TechCrunch, Facebook relied on intermediaries that often “did not disclose Facebook’s involvement until users had begun the signup process.” Moreover, the advertisements used to recruit participants and the “Project Disclosure” make no mention of Facebook or the commercial purposes to which this data was allegedly put.

This arrangement comes in the wake of revelations that Facebook had previously engaged in similar efforts through a virtual private network (VPN) app, Onavo, that it owned and operated. According to a series of articles by the Wall Street Journal, Facebook used Onavo to scout emerging competitors by monitoring user activity – acquiring competitors in order to neutralize them as competitive threats, and in cases when that did not work, monitor usage patterns to inform Facebook’s own efforts to copy the features and innovations driving adoption of competitors’ apps. In 2017, my staff contacted Facebook with questions about how Facebook was promoting Onavo through its Facebook app – in particular, framing the app as a VPN that would “protect” users while omitting any reference to the main purpose of the app: allowing Facebook to gather market data on competitors.

Revelations in 2017 and 2018 prompted Apple to remove Onavo from its App Store in 2018 after concluding that the app violated its terms of service prohibitions on monitoring activity of other apps on a user’s device, as well as a requirement to make clear what user data will be collected and how it will be used. In both the case of Onavo and the Facebook Research project, I have concerns that users were not appropriately informed about the extent of Facebook’s data-gathering and the commercial purposes of this data collection.

Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me. As you recall, I wrote the Federal Trade Commission in 2014 in the wake of revelations that Facebook had undertaken a behavioral experiment on hundreds of thousands of users, without obtaining their informed consent. In submitted questions to your Chief Operating Officer, Sheryl Sandberg, I once again raised these concerns, asking if Facebook provided for “individualized, informed consent” in all research projects with human subjects – and whether users had the ability to opt out of such research. In response, we learned that Facebook does not rely on individualized, informed consent (noting that users consent under the terms of the general Data Policy) and that users have no opportunity to opt out of being enrolled in research studies of their activity. In large part for this reason, I am working on legislation to require individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users. 

Fair, robust competition serves as an impetus for innovation, product differentiation, and wider consumer choice. For these reasons, I request that you respond to the following questions: 

1. Do you think any user reasonably understood that they were giving Facebook root device access through the enterprise certificate? What specific steps did you take to ensure that users were properly informed of this access? 

2. Do you think any user reasonably understood that Facebook was using this data for commercial purposes, including to track competitors?

3. Will you release all participants from the confidentiality agreements Facebook made them sign?

4. As you know, I have begun working on legislation that would require large platforms such as Facebook to provide users, on a continual basis, with an estimate of the overall value of their data to the service provider. In this instance, Facebook seems to have developed valuations for at least some uses of the data that was collected (such as market research). This further emphasizes the need for users to understand fully what data is collected by Facebook, the full range of ways in which it is used, and how much it is worth to the company. Will you commit to supporting this legislation and exploring methods for valuing user data holistically?

5. Will you commit to supporting legislation requiring individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users?

I look forward to receiving your responses within the next two weeks. If you should have any questions or concerns, please contact my office at 202-224-2023.


Foxconn pulls back on its $10 billion factory commitment


Well, that didn’t last long.

In 2017, Foxconn announced the largest investment of a foreign company in the United States when it selected Mount Pleasant, Wisconsin for a new manufacturing facility. Buttressed by huge economic development grants from Wisconsin, an endorsement from President Trump, and Foxconn CEO Terry Gou’s vision of a maker America, the plant was designed to turn a small town and its environs into the futuristic “Wisconn Valley.”

Now, those dreams are coming apart faster than you can say “Made in America.”

In an interview with Reuters, a special assistant to Gou says that those plans are being dramatically scaled back. Originally designed to be an advanced LCD factory, the new Foxconn facility will instead be a much more modest (but still needed!) research center for engineers.

It’s a huge loss for Wisconsin, but the greater shock may be just how obvious all of this was. I wrote about the boondoggle just a few weeks ago, as had Bruce Murphy at The Verge a few weeks before that. Sruthi Pinnamaneni produced an excellent podcast on Reply All about how much the economic development of Mount Pleasant tore the small town asunder.

The story in short: the economics of the factory never made sense, and economics was always going to win over the hopes and dreams of politicians like Wisconsin governor Scott Walker, who championed the deal. Despite bells and whistles, televisions are a commodity product (unlike, say, airfoils), and thus their cost structure is much more compatible with efficient Asian supply chains than with expensive American labor.

Yet, that wasn’t the only part of the project that never made any sense. Foxconn was building in what was essentially the middle of nowhere, without the sort of dense ecosystem of suppliers and sub-suppliers required for making a major factory hum. (Plus, as a native of Minnesota, I can also attest that Wisconsin is a pile of garbage).

Those suppliers are everything for manufacturers. Just this past weekend, Jack Nicas at the New York Times observed that Apple’s advanced manufacturing facility in Austin, Texas struggled to find the right parts it needed to assemble its top-of-the-line computer, the Mac Pro:

But when Apple began making the $3,000 computer in Austin, Tex., it struggled to find enough screws, according to three people who worked on the project and spoke on the condition of anonymity because of confidentiality agreements.

In China, Apple relied on factories that can produce vast quantities of custom screws on short notice. In Texas, where they say everything is bigger, it turned out the screw suppliers were not.

There are of course huge manufacturing ecosystems in the United States — everything from cars in Detroit, to planes in Washington, to advanced medical devices in several major bio-hubs. But consumer electronics is one that has for the most part been lost to Singapore, Taiwan, Korea, and of course, China.

Geopolitically, Foxconn’s factory made a modicum of sense. With the increasing protectionism emanating from Western capitals, Foxconn could have used some geographical diversity in the event of a tariff fight. The company is Taiwanese, but manufactures many of its products on the mainland.

And of course, a research center is still an enormous gain for a region of Wisconsin that could absolutely use high-income, professional jobs. Maybe the process of rolling out a next-generation manufacturing ecosystem will take more time than originally anticipated, but nothing is stopping further expansion in the future.

Yet, one can’t help but gaze at the remarkable naïveté of Wisconsin politicians who offered billions only to find that even massive subsidies aren’t enough. It’s a competitive world out there, and the United States has little experience in these fights.

India may put friction on foreign firms to protect domestic startups

Indian Prime Minister Narendra Modi. (MONEY SHARMA/AFP/Getty Images)

One of the major battles for tech supremacy is over the future of the Indian IT market, which is rapidly bringing more than a billion people onto the internet and giving them robust software services. I’ve talked a bit about data sovereignty, which mandates that Indian data be stored in Indian data centers by Indian companies, pushing out foreign companies like Amazon, Google, and Alibaba.

Now, it looks like India is taking a page from the Asian tiger school of development and is going to increasingly favor domestic firms over foreign ones in key industries. Newley Purnell and Rajesh Roy report in the WSJ:

The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd., the person said. She didn’t immediately respond to a request for more details on the program or its timing.

The idea of national champions is simple. Unlike in the innovation world of Silicon Valley, there are obvious sectors in an economy that need to be filled. Food and clothes have to be sold, deliveries made, and all kinds of industrial goods built. Rather than creating a competitive market that requires high levels of duplicate capital investment, the government can designate a few companies to take the lead in each market to ensure that they can invest for growth rather than in, say, marketing costs.

If done well, such policies can rapidly industrialize a country’s economic base. When done poorly, the lack of competition can create lethargy among entrepreneurs, who have already won their markets without even trying.

The linchpin is whether the government pushes companies to excel and sets aggressive growth targets. In Korea and China, the central governments actively monitored corporate growth during their catch-up years, and transferred businesses to new entrepreneurs if business leaders failed to perform. Can India push its companies as hard without market forces?

As the technology industry matures in the West, entrepreneurs will look overseas for their future growth hubs. The challenge is whether they will be let in at all.

Video game geopolitics

Nexon’s MapleStory2 game is one of its most profitable (Screenshot from Nexon).

Korea and Japan are two of the epicenters of the video game industry, and now one of its top companies is on the auction block, raising tough questions about media ownership.

Nexon founder Kim Jung Ju announced a few weeks ago that he intends to sell his entire controlling $9 billion stake in the leading video game company. The company has since executed something of a multi-stage auction process to determine who should buy those shares. One leading candidate, we’ve learned, is Kakao, the leading internet portal and chatting app in Korea.

The other leading candidate is China-based Tencent, which owns exclusive distribution rights in China of some of Nexon’s most important titles.

Tencent has been increasingly under the sway of China’s government, which froze video game licensing last year as it worked to increase content regulation over the industry. Now the question is whether it will be politically palatable to sell a leading star of Korea’s video game industry to its economic rival.

From the Financial Times:

Mr Wi added that Nexon would be an attractive target for Tencent, which pays about Won1tn in annual royalties to the South Korean game developer. But selling the company to Tencent would be “politically burdensome” for Mr Kim, given unfavourable public opinion in South Korea towards such a sale, he cautioned.

“Political risks are high for the deal. Being criticised for selling the company to a foreign rival, especially a Chinese one, would be the last thing that Mr Kim wants,” said Mr Wi.

Such concerns around Chinese media ownership have become acute throughout the world, but we haven’t seen these concerns as much in the video game industry. Clearly, times have changed.

TechCrunch is experimenting with new content forms. This is a rough draft of something new – provide your feedback directly to the author (Danny at danny@techcrunch.com) if you like or hate something here.

Share your feedback on your startup’s attorney

My colleague Eric Eldon and I are reaching out to startup founders and execs about their experiences with their attorneys. Our goal is to identify the leading lights of the industry and help spark discussions around best practices. If you have an attorney you thought did a fantastic job for your startup, let us know using this short Google Forms survey and also spread the word. We will share the results and more in the coming weeks.

What’s Next

  • More work on societal resilience

This newsletter is written with the assistance of Arman Tabatabai from New York

Powered by WPeMatico

Apple bans Facebook’s Research app that paid users for data

Posted by | Apple, Apps, Facebook, facebook research, Mark Zuckerberg, Mobile, Policy, privacy, Social, TC, Teens, Tim Cook, vpn | No Comments

In the wake of TechCrunch’s investigation yesterday, Apple blocked Facebook’s Research VPN app before the social network could voluntarily shut it down. The Research app asked users for root network access to all data passing through their phone in exchange for $20 per month. Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store.

TechCrunch had reported that Facebook was breaking Apple’s policy that the Enterprise system is only for distributing internal corporate apps to employees, not paid external testers. That was before Facebook released a statement last night saying it had shut down the iOS version of the Research program, without mentioning that Apple had forced it to do so.

TechCrunch’s investigation discovered that Facebook has been quietly operating the Research program on iOS and Android since 2016, recently under the name Project Atlas. It recruited users ages 13 to 35, 5 percent of whom were teenagers, with ads on Instagram and Snapchat, and paid them a monthly fee plus referral bonuses to install Facebook’s Research app, which included a VPN that routes traffic to Facebook, and to ‘Trust’ the company with root network access to their phone. That lets Facebook pull in a user’s web browsing activity, what apps are on their phone and how they use them, and even decrypt their encrypted traffic. Facebook went so far as to ask users to screenshot and submit their Amazon order history. Facebook uses all this data to track competitors, assess trends and plan its product roadmap.

Facebook was forced to remove its similar Onavo Protect app in August last year after Apple changed its policies to prohibit the VPN app’s data collection practices. But Facebook never shut down the Research app, which had the same functionality and was running in parallel. In fact, TechCrunch commissioned security expert Will Strafach to dig into the Facebook Research app, and we found that it featured tons of similar code and references to Onavo Protect. That means Facebook was purposefully disobeying the spirit of Apple’s 2018 privacy policy change while also abusing the Enterprise Certificate program.

Sources tell us that Apple revoking Facebook’s Enterprise Certificate has broken all of the company’s legitimate employee-only apps. Those include pre-launch internal-testing versions of Facebook and Instagram, as well as the employee apps for coordinating office collaboration, commutes, seeing the day’s lunch schedule, and more. That’s causing mayhem at Facebook, disrupting the company’s daily workflow and ability to do product development. We predicted yesterday that Apple could take this drastic step to punish Facebook much harder than just removing its Research app. The disruption will translate into a huge loss of productivity for Facebook’s 33,000 employees.

[Update: Facebook later confirmed to TechCrunch that its internal apps were broken by Apple’s punishment and that it’s in talks with Apple to try to resolve the issue and get their employee tools running again.]

For reference, Facebook’s main iOS app still functions normally. Also, you can’t get paid for installing Onavo Protect on Android, only for the Facebook Research app. And Facebook isn’t the only one violating Apple’s Enterprise Certificate policy, as TechCrunch discovered Google’s Screenwise Meter surveillance app breaks the rules too.

This morning, Apple informed us it had banned Facebook’s Research app yesterday before the social network seemingly pulled it voluntarily. Apple provided us with this strongly worded statement condemning the social network’s behavior:

“We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”

That comes in direct contradiction to Facebook’s initial response to our investigation. Facebook claimed it was in alignment with Apple’s Enterprise Certificate policy and that the program was no different than a focus group.

Seven hours later, a Facebook spokesperson said it was pulling its Research program from iOS without mentioning that Apple forced it to do so, and issued this statement disputing the characterization of our story:

“Key facts about this market research program are being ignored. Despite early reports, there was nothing ‘secret’ about this; it was literally called the Facebook Research App. It wasn’t ‘spying’ as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.”

We dispute those characterizations by Facebook. As we wrote yesterday night, Facebook did not publicly promote the Research VPN itself and used intermediaries that often didn’t disclose Facebook’s involvement until users had begun the signup process. While users were given clear instructions and warnings, the program never stressed nor mentioned the full extent of the data Facebook can collect through the VPN. A small fraction of the users paid may have been teens, but we stand by the newsworthiness of Facebook’s choice not to exclude minors from this data collection initiative.

Senator Mark Warner has since called on Facebook CEO Mark Zuckerberg to support legislation requiring individual informed consent for market research initiatives like Facebook Research. Meanwhile, Senator Richard Blumenthal issued a fierce statement that “Wiretapping teens is not research, and it should never be permissible.”

The situation will surely worsen the relationship between Facebook and Apple after years of mounting animosity between the tech giants. Apple’s Tim Cook has repeatedly criticized Facebook’s data collection practices, and Zuckerberg has countered that it offers products for free for everyone rather than making products few can afford like Apple. Flared tensions could see Facebook receive less promotion in the App Store, fewer integrations into iOS, and more jabs from Cook. Meanwhile, the world sees Facebook as having been caught red-handed threatening user privacy and breaking Apple policy.


Facebook pays teens to install VPN that spies on them

Posted by | Apps, Facebook, Facebook Policy, facebook privacy, facebook research, Facebook Teens, Mobile, Onavo, Policy, privacy, Social, vpn | No Comments

Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms.

Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits, and it has no plans to stop.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

Facebook’s Research app requires users to ‘Trust’ it with extensive access to their data

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.” It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.

The strategy shows how far Facebook is willing to go and how much it’s willing to pay to protect its dominance — even at the risk of breaking the rules of Apple’s iOS platform on which it depends. Apple could seek to block Facebook from continuing to distribute its Research app, or even revoke its permission to offer employee-only apps, and the situation could further chill relations between the tech giants. Apple’s Tim Cook has repeatedly criticized Facebook’s data collection practices. Facebook disobeying iOS policies to slurp up more information could become a new talking point. TechCrunch has spoken to Apple and it’s aware of the issue, but the company did not provide a statement before press time.

Facebook’s Research program is referred to as Project Atlas on sign-up sites that don’t mention Facebook’s involvement

“The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”

Facebook’s surveillance app

Facebook first got into the data-sniffing business when it acquired Onavo for around $120 million in 2014. The VPN app helped users track and minimize their mobile data plan usage, but also gave Facebook deep analytics about what other apps they were using. Internal documents acquired by Charlie Warzel and Ryan Mac of BuzzFeed News reveal that Facebook was able to leverage Onavo to learn that WhatsApp was sending more than twice as many messages per day as Facebook Messenger. Onavo allowed Facebook to spot WhatsApp’s meteoric rise and justify paying $19 billion to buy the chat startup in 2014. WhatsApp has since tripled its user base, demonstrating the power of Onavo’s foresight.

Over the years since, Onavo clued Facebook in to what apps to copy, features to build and flops to avoid. By 2018, Facebook was promoting the Onavo app in a Protect bookmark of the main Facebook app in hopes of scoring more users to snoop on. Facebook also launched the Onavo Bolt app that let you lock apps behind a passcode or fingerprint while it surveils you, but Facebook shut down the app the day it was discovered following privacy criticism. Onavo’s main app remains available on Google Play and has been installed more than 10 million times.

The backlash heated up after security expert Strafach detailed in March how Onavo Protect was reporting to Facebook when a user’s screen was on or off, and its Wi-Fi and cellular data usage in bytes even when the VPN was turned off. In June, Apple updated its developer policies to ban collecting data about usage of other apps or data that’s not necessary for an app to function. Apple proceeded to inform Facebook in August that Onavo Protect violated those data collection policies and that the social network needed to remove it from the App Store, which it did, Deepa Seetharaman of the WSJ reported.

But that didn’t stop Facebook’s data collection.

Project Atlas

TechCrunch recently received a tip that despite Onavo Protect being banished by Apple, Facebook was paying users to sideload a similar VPN app under the Facebook Research moniker from outside of the App Store. We investigated, and learned Facebook was working with three app beta testing services to distribute the Facebook Research app: BetaBound, uTest and Applause. Facebook began distributing the Research VPN app in 2016. It has been referred to as Project Atlas since at least mid-2018, around when backlash to Onavo Protect magnified and Apple instituted its new rules that prohibited Onavo. [Update: Previously, a similar program was called Project Kodiak.] Facebook didn’t want to stop collecting data on people’s phone usage and so the Research program continued, in disregard for Apple banning Onavo Protect.

Facebook’s Research App on iOS

Ads (shown below) for the program run by uTest on Instagram and Snapchat sought teens 13-17 years old for a “paid social media research study.” The sign-up page for the Facebook Research program administered by Applause doesn’t mention Facebook, but seeks users “Age: 13-35 (parental consent required for ages 13-17).” If minors try to sign up, they’re asked to get their parents’ permission with a form that reveals Facebook’s involvement and says “There are no known risks associated with the project, however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of apps. You will be compensated by Applause for your child’s participation.” For kids short on cash, the payments could coerce them to sell their privacy to Facebook.

The Applause site explains what data could be collected by the Facebook Research app (emphasis mine):

“By installing the software, you’re giving our client permission to collect data from your phone that will help them understand how you browse the internet, and how you use the features in the apps you’ve installed . . . This means you’re letting our client collect information such as which apps are on your phone, how and when you use them, data about your activities and content within those apps, as well as how other people interact with you or your content within those apps. You are also letting our client collect information about your internet browsing activity (including the websites you visit and data that is exchanged between your device and those websites) and your use of other online services. There are some instances when our client will collect this information even where the app uses encryption, or from within secure browser sessions.”

Meanwhile, the BetaBound sign-up page with a URL ending in “Atlas” explains that “For $20 per month (via e-gift cards), you will install an app on your phone and let it run in the background.” It also offers $20 per friend you refer. That site also doesn’t initially mention Facebook, but the instruction manual for installing Facebook Research reveals the company’s involvement.

Facebook’s intermediary uTest ran ads on Snapchat and Instagram, luring teens to the Research program with the promise of money

Facebook seems to have purposefully avoided TestFlight, Apple’s official beta testing system, which requires apps to be reviewed by Apple and is limited to 10,000 participants. Instead, the instruction manual reveals that users download the app from r.facebook-program.com and are told to install an Enterprise Developer Certificate and VPN and “Trust” Facebook with root access to the data their phone transmits. Apple requires that developers agree to only use this certificate system for distributing internal corporate apps to their own employees. Randomly recruiting testers and paying them a monthly fee appears to violate the spirit of that rule.

Security expert Will Strafach found Facebook’s Research app contains lots of code from Onavo Protect, the Facebook-owned app Apple banned last year

Once installed, users just had to keep the VPN running and sending data to Facebook to get paid. The Applause-administered program requested that users screenshot their Amazon orders page. This data could potentially help Facebook tie browsing habits and usage of other apps with purchase preferences and behavior. That information could be harnessed to pinpoint ad targeting and understand which types of users buy what.

TechCrunch commissioned Strafach to analyze the Facebook Research app and find out where it was sending data. He confirmed that data is routed to “vpn-sjc1.v.facebook-program.com,” which is associated with Onavo’s IP address, and that the facebook-program.com domain is registered to Facebook, according to MarkMonitor. The app can update itself without interacting with the App Store, and is linked to the email address PeopleJourney@fb.com. He also discovered that the Enterprise Certificate indicates Facebook renewed it on June 27th, 2018 — weeks after Apple announced its new rules that prohibited the similar Onavo Protect app.

“It is tricky to know what data Facebook is actually saving (without access to their servers). The only information that is knowable here is what access Facebook is capable of based on the code in the app. And it paints a very worrisome picture,” Strafach explains. “They might respond and claim to only actually retain/save very specific limited data, and that could be true, it really boils down to how much you trust Facebook’s word on it. The most charitable narrative of this situation would be that Facebook did not think too hard about the level of access they were granting to themselves . . . which is a startling level of carelessness in itself if that is the case.”

“Flagrant defiance of Apple’s rules”

In response to TechCrunch’s inquiry, a Facebook spokesperson confirmed it’s running the program to learn how people use their phones and other services. The spokesperson told us “Like many companies, we invite people to participate in research that helps us identify things we can be doing better. Since this research is aimed at helping Facebook understand how people use their mobile devices, we’ve provided extensive information about the type of data we collect and how they can participate. We don’t share this information with others and people can stop participating at any time.”

Facebook’s Research app requires Root Certificate access, which lets Facebook gather almost any piece of data transmitted by your phone

Facebook’s spokesperson claimed that the Facebook Research app was in line with Apple’s Enterprise Certificate program, but didn’t explain how in the face of evidence to the contrary. They said Facebook first launched its Research app program in 2016. They tried to liken the program to a focus group and said Nielsen and comScore run similar programs, yet neither of those asks people to install a VPN or provide root access to the network. The spokesperson confirmed the Facebook Research program does recruit teens but also other age groups from around the world. They claimed that Onavo and Facebook Research are separate programs, but admitted the same team supports both as an explanation for why their code was so similar.

Facebook’s Research program requested users screenshot their Amazon order history to provide it with purchase data

However, Facebook’s claim that it doesn’t violate Apple’s Enterprise Certificate policy is directly contradicted by the terms of that policy. Those include that developers “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing”. The policy also states that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers” unless under direct supervision of employees or on company premises. Given Facebook’s customers are using the Enterprise Certificate-powered app without supervision, it appears Facebook is in violation.

Facebook disobeying Apple so directly could hurt their relationship. “The code in this iOS app strongly indicates that it is simply a poorly re-branded build of the banned Onavo app, now using an Enterprise Certificate owned by Facebook in direct violation of Apple’s rules, allowing Facebook to distribute this app without Apple review to as many users as they want,” Strafach tells us. ONV prefixes and mentions of graph.onavo.com, “onavoApp://” and “onavoProtect://” custom URL schemes litter the app. “This is an egregious violation on many fronts, and I hope that Apple will act expeditiously in revoking the signing certificate to render the app inoperable.”

Facebook is particularly interested in what teens do on their phones as the demographic has increasingly abandoned the social network in favor of Snapchat, YouTube and Facebook’s acquisition Instagram. Insights into the popularity among teens of Chinese video music app TikTok and of meme sharing led Facebook to launch a clone called Lasso and begin developing a meme-browsing feature called LOL, TechCrunch first reported. But Facebook’s desire for data about teens riles critics at a time when the company has been battered in the press. Analysts on tomorrow’s Facebook earnings call should inquire about what other ways the company has to collect competitive intelligence.

Last year when Tim Cook was asked what he’d do in Mark Zuckerberg’s position in the wake of the Cambridge Analytica scandal, he said “I wouldn’t be in this situation . . . The truth is we could make a ton of money if we monetized our customer, if our customer was our product. We’ve elected not to do that.” Zuckerberg told Ezra Klein that he felt Cook’s comment was “extremely glib.”

Now it’s clear that even after Apple’s warnings and the removal of Onavo Protect, Facebook is still aggressively collecting data on its competitors via Apple’s iOS platform. “I have never seen such open and flagrant defiance of Apple’s rules by an App Store developer,” Strafach concluded. If Apple shuts the Research program down, Facebook will either have to invent new ways to surveil our behavior amidst a climate of privacy scrutiny, or be left in the dark.

Additional reporting by Zack Whittaker.


Instagram caught selling ads to follower-buying services it banned

Posted by | Apps, eCommerce, Facebook, instagram, Mobile, Policy, Social, spam, TC | No Comments

Instagram has been earning money from businesses flooding its social network with spam notifications. Instagram hypocritically continues to sell ad space to services that charge clients for fake followers or that automatically follow/unfollow other people to get them to follow the client back. This is despite Instagram reiterating a ban on these businesses in November and threatening the accounts of people who employ them.

A TechCrunch investigation initially found 17 services selling fake followers or automated notification spam for luring in followers that were openly advertising on Instagram despite blatantly violating the network’s policies. This demonstrates Instagram’s failure to adequately police its app and ad platform. That neglect led to users being distracted by notifications for follows and Likes generated by bots or fake accounts. Instagram raked in revenue from these services while they diluted the quality of Instagram notifications and wasted people’s time.

In response to our investigation, Instagram tells me it’s removed all the ads and disabled all the Facebook Pages and Instagram accounts of the services we reported were violating its policies. Pages and accounts that themselves weren’t in violation, but whose ads were, have been banned from advertising on Facebook and Instagram. However, a day later TechCrunch still found ads from two of these services on Instagram, and discovered five more companies paying to promote policy-violating follower-growth services.

This raises a big question about whether Instagram properly protects its community from spammers. Why would it take a journalist’s investigation to remove these ads and businesses that brazenly broke Instagram’s rules when the company is supposed to have technical and human moderation systems in place? The Facebook-owned app’s quest to “move fast” to grow its user base and business seems to have raced beyond what its watchdogs could safeguard.

Hunting spammers

I began this investigation a month ago after being pestered with Instagram Stories ads by a service called GramGorilla. The slicked-back hipster salesman in the ads boasted about how many followers he gained with the service and said I could pay to do the same. The ads linked to the website of a division of Krends Marketing, where for $46 to $126 per month, it promised to score me 1,000 to 2,500 Instagram followers.

Some apps like this sell followers directly, though these are typically fake accounts. They might boost your follower count (unless they’re detected and terminated) but won’t actually engage with your content or help your business, and end up dragging down your metrics so Instagram shows your posts to fewer people. But I discovered that GramGorilla/Krends and the majority of apps selling Instagram audience growth do something even worse.

You give these scammy businesses your Instagram username and password, plus some relevant topics or demographics, and they automatically follow and unfollow, like and comment on strangers’ Instagram profiles. The goal is to generate notifications those strangers will see in hopes that they’ll get curious or want to reciprocate and so therefore follow you back. By triggering enough of this notification spam, they trick enough strangers to follow you to justify the monthly subscription fee.

That pissed me off. Facebook, Instagram and other social networks send enough real notifications as is, growth hacking their way to more engagement, ad views and daily user counts. But at least they have to weigh the risk of annoying you so much that you turn off notifications altogether. Services that sell followers don’t care if they pollute Instagram and ruin your experience as long as they make money. They’re classic villains in the “tragedy of the commons” of our attention.

This led me to start cataloging these spam company ads, and I was startled by how many different ones I saw. Soon, Instagram’s ad targeting and retargeting algorithms were backfiring, purposefully feeding me ads for similar companies that also violated Instagram’s policies.

The 17 services selling followers or spam that I originally indexed were Krends Marketing / GramGorilla, SocialUpgrade, MagicSocial, EZ-Grow, Xplod Social, Macurex, GoGrowthly, Instashop / IG Shops, TrendBee, JW Social Media Marketing, YR Charisma, Instagrocery, Social Sensational, SocialFuse, We Grow Social, IG Wildfire and Gramflare. TrendBee and Gramflare were found to still be running Instagram ads after the platform said they’d been banned from doing so. Upon further investigation after Instagram’s supposed crackdown, I discovered five more services selling prohibited growth services: FireSocial, InstaMason/IWentMissing, NexStore2019, InstaGrow and Servantify.

Knowingly poisoning the well

I wanted to find out if these companies were aware that they violate Instagram’s policies and how they justify generating spam. Most hide their contact info and merely provide a customer support email, but eventually I was able to get on the phone with some of the founders.

“What we’re doing is obviously against their terms of service,” said GoGrowthly’s co-founder who refused to provide their name. “We’re going in and piggybacking off their free platform and not giving them any of the revenue. Instagram doesn’t like us at all. We utilize private proxies depending on clients’ geographic location. That’s sort of our trick to reduce any sort of liability,” so clients’ accounts don’t get shut down, they said. “It’s a careful line that we tread with Instagram. Similar to SEO companies and Google, Google wants the best results for customers and customers want the best results for them. There’s a delicate dance,” said Macurex founder Gun Hudson.

EZ-Grow’s co-founder Elon refused to give his last name on the record, but told me “[Clients] always need something new. At first it was follows and likes. Now we even watch Stories for them. Every new feature that Instagram has we take advantage of it to make more visibility for our clients.” He says EZ-Grow spends $500 per day on Instagram ads, which are its core strategy for finding new customers. SocialFuse founder Aleksandr [last name redacted] says his company spends a couple hundred dollars per day on Instagram and Facebook ads, and was worried when Instagram reiterated its ban on his kind of service in November, but says, “We thought that we were definitely going to get shut down but nothing has changed on our end.”

Several of the founders tried to defend their notification spam services by saying that at least they weren’t selling fake followers. Lacking any self-awareness, Macurex’s Hudson said, “If it’s done the wrong way it can ruin the user experience. There are all sorts of marketers who will market in untasteful or spammy ways. Instagram needs to keep a check on that.” GoGrowthly’s founder actually told me, “We’re actually doing good for the community by generating those targeted interactions.” WeGrowSocial’s co-founder Brandon also refused to give his last name, but was willing to rat out his competitor SocialSensational for selling followers.

Only EZ-Grow’s Elon seemed to have a moment of clarity. “Because the targeting goes to the right people… and it’s something they would like, it’s not spam,” he said before his epiphany. “People can also look at it as spam, maybe.”

Instagram finally shuts down the spammers

In response to our findings, an Instagram spokesperson provided this lengthy statement confirming it’s shut down the ads and accounts of the violators we discovered, claiming that it works hard to fight spam, and admitting it needs to do better:

Nobody likes receiving spammy follows, likes and comments. It’s really important to us that the interactions people have on Instagram are genuine, and we’re working hard to keep the community free from spammy behavior. Services that offer to boost an account’s popularity via inauthentic likes, comments and followers, as well as ads that promote these services, aren’t allowed on Instagram. We’ve taken action on the services raised in this article, including removing violating ads, disabling Pages and accounts, and stopping Pages from placing further ads. We have various systems in place that help us catch and remove these types of ads before anyone sees them, but given the number of ads uploaded to our platform every day, there are times when some still manage to slip through. We know we have more to do in this area and we’re committed to improving.

Instagram tells me it uses machine learning tools to identify accounts that pay third-party apps to boost their popularity and claims to remove inauthentic engagement before it reaches the recipient of the notifications. By nullifying the results of these services, Instagram believes users will have less incentive to use them. It uses automated systems to evaluate the images, captions and landing pages of all its ads before they run, and sends some to human moderators. It claims this lets it catch most policy-violating ads, and that users can report those it misses.

But these ads and their associated accounts were filled with terms like “get followers,” “boost your Instagram followers,” “real followers,” “grow your engagement,” “get verified,” “engagement automation” and other terms tightly linked to policy-violating services. That casts doubt on just how hard Instagram was working on this problem. It may have simply relied on cheap and scalable technical approaches to catching services with spam bots or fake accounts instead of properly screening ads or employing sufficient numbers of human moderators to police the network.
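To illustrate the article’s point that these ads were trivially detectable, here is a minimal sketch of a keyword screen over ad copy. This is not Instagram’s actual review system; the flagged phrases are simply the ones quoted above, and the function name is hypothetical.

```python
# Illustrative sketch only: a naive keyword screen for ad copy.
# The flagged terms are the phrases quoted in this article; a real
# review pipeline would combine this with ML models and human review.
FLAGGED_TERMS = [
    "get followers",
    "boost your instagram followers",
    "real followers",
    "grow your engagement",
    "get verified",
    "engagement automation",
]

def needs_human_review(ad_copy: str) -> bool:
    """Return True if the ad copy contains any policy-linked phrase."""
    text = ad_copy.lower()
    return any(term in text for term in FLAGGED_TERMS)

print(needs_human_review("Grow your engagement with real followers!"))  # True
print(needs_human_review("New winter jackets on sale"))  # False
```

Even a filter this crude would have caught ads whose copy literally promised to “get followers,” which is why relying solely on automated image and landing-page checks seems insufficient here.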

That misplaced dependence on AI and other tech solutions appears to be a trend in the industry. When I recently reported that child sexual abuse imagery was easy to find on WhatsApp and Microsoft Bing, both seemed to be understaffing the human moderation team that could have hunted down this illegal content with common sense where complex algorithms failed. As with Instagram, these products have highly profitable parent companies that can afford to pour more dollars into policy enforcement.

Kicking these services off Instagram is an important step, but the company must be more proactive. Social networks and self-serve ad networks have been treated as efficient cash cows for too long. The profits from these products should be reinvested in policing them. Otherwise, crooks will happily fleece users for our money and attention.

To learn more about the future of Instagram, check out this article’s author Josh Constine’s SXSW 2019 keynote with Instagram co-founders Kevin Systrom and Mike Krieger — their first talk together since leaving the company.

Powered by WPeMatico

Daily Crunch: How the government shutdown is damaging cybersecurity and future IPOs

Posted by | Apps, Enterprise, Finance, Fundings & Exits, Gadgets, Government, hardware, payments, Policy, Startups, Venture Capital | No Comments

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here:

1. How Trump’s government shutdown is harming cyber and national security
The government has been shut down for nearly three weeks, and there’s no end in sight. While most of the core government departments — State, Treasury, Justice and Defense — are still operational, others like Homeland Security, which shoulders the bulk of the government’s cybersecurity responsibilities, are suffering the most.

2. With SEC workers offline, the government shutdown could screw IPO-ready companies
The SEC has been shut down since December 27 and only has 285 of its 4,436 employees on the clock for emergency situations. While tech’s most buzz-worthy unicorns like Uber and Lyft won’t suffer too much from the shutdown, smaller businesses, particularly those in need of an infusion of capital to continue operating, will bear the brunt of any IPO delays.

3. The state of seed 

In 2018, seed activity as a percentage of all deals shrank from 31 percent to 25 percent — a decade low — while the share and size of late-stage deals swelled to record highs.

4. Banking startup N26 raises $300 million at $2.7 billion valuation

N26 is building a retail bank from scratch. The company prides itself on the speed and simplicity of setting up an account and managing assets. In the past year, N26’s valuation has exploded as its user base has tripled, with nearly a third of customers paying for a premium account.

5. E-scooter startup Bird is raising another $300M 

Bird is reportedly nearing a deal to extend its Series C round with a $300 million infusion led by Fidelity. The funding, however, comes at a time when scooter companies are losing steam and struggling to prove that their products are the clear solution to last-mile transportation.

6. AWS gives open source the middle finger 

It’s no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

7. The Galaxy S10 is coming on February 20 

Looks like Samsung is giving Mobile World Congress the cold shoulder and has decided to announce its latest flagship phone a week earlier in San Francisco.


Google & Facebook fed ad dollars to child porn discovery apps

Posted by | admob, Advertising Tech, Apps, child exploitation, Facebook, Facebook Audience Network, Google, Health, Mobile, Policy, privacy, Security, TC, WhatsApp, WhatsApp Child Exploitation | No Comments

Google has scrambled to remove third-party apps that led users to child porn sharing groups on WhatsApp in the wake of TechCrunch’s report about the problem last week. We contacted Google with the name of one of these apps and evidence that it and others offered links to WhatsApp groups for sharing child exploitation imagery. Following publication of our article, Google removed that app and at least five like it from the Google Play store. Several of these apps had more than 100,000 downloads, and they remain functional on devices that already downloaded them.

A screenshot from earlier this month of now-banned child exploitation groups on WhatsApp. Phone numbers and photos redacted

WhatsApp failed to adequately police its platform, confirming to TechCrunch that it’s only moderated by its own 300 employees and not Facebook’s 20,000 dedicated security and moderation staffers. It’s clear that scalable and efficient artificial intelligence systems are not up to the task of protecting the 1.5 billion-user WhatsApp community, and companies like Facebook must invest more in unscalable human investigators.

But now, new research provided exclusively to TechCrunch by anti-harassment algorithm startup AntiToxin shows that these removed apps that hosted links to child porn sharing rings on WhatsApp were supported with ads run by Google and Facebook’s ad networks. AntiToxin found six of these apps ran Google AdMob, one ran Google Firebase, two ran Facebook Audience Network and one ran StartApp. These ad networks earned a cut of brands’ marketing spend while allowing the apps to monetize and sustain their operations by hosting ads for Amazon, Microsoft, Motorola, Sprint, Sprite, Western Union, Dyson, DJI, Gett, Yandex Music, Q Link Wireless, Tik Tok and more.

The situation reveals that tech giants aren’t just failing to spot offensive content in their own apps, but also in third-party apps that host their ads and that earn them money. While these apps like “Group Links For Whats” by Lisa Studio let people discover benign links to WhatsApp groups for sharing legal content and discussing topics like business or sports, TechCrunch found they also hosted links with titles such as “child porn only no adv” and “child porn xvideos” that led to WhatsApp groups with names like “Children 💋👙👙” or “videos cp” — a known abbreviation for “child pornography.”

In a video provided by AntiToxin seen below, the app “Group Links For Whats by Lisa Studio” that ran Google AdMob is shown displaying an interstitial ad for Q Link Wireless before providing WhatsApp group search results for “child.” A group described as “Child nude FBI POLICE” is surfaced, and when the invite link is clicked, it opens within WhatsApp to a group used for sharing child exploitation imagery. (No illegal imagery is shown in this video or article. TechCrunch has omitted the end of the video that showed a URL for an illegal group and the phone numbers of its members.)

Another video shows the app “Group Link For whatsapp by Video Status Zone” that ran Google AdMob and Facebook Audience Network displaying a link to a WhatsApp group described as “only cp video.” When tapped, the app first surfaces an interstitial ad for Amazon Photos before revealing a button for opening the group within WhatsApp. These videos show how alarmingly easy it was for people to find illegal content sharing groups on WhatsApp, even without WhatsApp’s help.

Zero tolerance doesn’t mean zero illegal content

In response, a Google spokesperson tells me that these group discovery apps violated its content policies and it’s continuing to look for more like them to ban. When they’re identified and removed from Google Play, it also suspends their access to its ad networks. However, it refused to disclose how much money these apps earned and whether it would refund the advertisers. The company provided this statement:

Google has a zero tolerance approach to child sexual abuse material and we’ve invested in technology, teams and partnerships with groups like the National Center for Missing and Exploited Children, to tackle this issue for more than two decades. If we identify an app promoting this kind of material that our systems haven’t already blocked, we report it to the relevant authorities and remove it from our platform. These policies apply to apps listed in the Play store as well as apps that use Google’s advertising services.

App | Developer | Ad Network | Estimated Installs | Last Day Ranked
Unlimited Whats Groups Without Limit Group links | Jack Rehan | Google AdMob | 200,000 | 12/18/2018
Unlimited Group Links for Whatsapp | NirmalaAppzTech | Google AdMob | 127,000 | 12/18/2018
Group Invite For Whatsapp | Villainsbrain | Google Firebase | 126,000 | 12/18/2018
Public Group for WhatsApp | Bit-Build | Google AdMob, Facebook Audience Network | 86,000 | 12/18/2018
Group links for Whats – Find Friends for Whats | Lisa Studio | Google AdMob | 54,000 | 12/19/2018
Unlimited Group Links for Whatsapp 2019 | Natalie Pack | Google AdMob | 3,000 | 12/20/2018
Group Link For whatsapp | Video Status Zone | Google AdMob, Facebook Audience Network | 97,000 | 11/13/2018
Group Links For Whatsapp – Free Joining | Developers.pk | StartAppSDK | 29,000 | 12/5/2018

Facebook, meanwhile, blamed Google Play, saying the apps’ eligibility for its Facebook Audience Network ads was tied to their availability on Google Play and that the apps were removed from FAN when booted from the Android app store. The company was more forthcoming, telling TechCrunch it will refund advertisers whose promotions appeared on these abhorrent apps. It’s also pulling Audience Network from all apps that let users discover WhatsApp Groups.

A Facebook spokesperson tells TechCrunch that “Audience Network monetization eligibility is closely tied to app store (in this case Google) review. We removed [Public Group for WhatsApp by Bit-Build] when Google did – it is not currently monetizing on Audience Network. Our policies are on our website and out of abundance of caution we’re ensuring Audience Network does not support any group invite link apps. This app earned very little revenue (less than $500), which we are refunding to all impacted advertisers.” WhatsApp has already banned all the illegal groups TechCrunch reported on last week.

Facebook also provided this statement about WhatsApp’s stance on illegal imagery sharing groups and third-party apps for finding them:

WhatsApp does not provide a search function for people or groups – nor does WhatsApp encourage publication of invite links to private groups. WhatsApp regularly engages with Google and Apple to enforce their terms of service on apps that attempt to encourage abuse on WhatsApp. Following the reports earlier this week, WhatsApp asked Google to remove all known group link sharing apps. When apps are removed from Google Play store, they are also removed from Audience Network.

An app with links for discovering illegal WhatsApp Groups runs an ad for Amazon Photos

Israeli NGOs Netivei Reshet and Screen Savers worked with AntiToxin to provide a report published by TechCrunch about the wide extent of child exploitation imagery they found on WhatsApp. Facebook and WhatsApp are still waiting for the groups to work with Israeli police to provide their full research, so WhatsApp can delete the illegal groups they discovered and terminate the accounts of users who joined them.

AntiToxin develops technologies that protect online networks from harassment, bullying, shaming, predatory behavior and sexually explicit activity. It was co-founded by Zohar Levkovitz, who sold Amobee to SingTel for $400 million, and Ron Porat, who was the CEO of ad-blocker Shine. [Disclosure: The company also employs Roi Carthy, who contributed to TechCrunch from 2007 to 2012.] “Online toxicity is at unprecedented levels, at unprecedented scale, with unprecedented risks for children, which is why completely new thinking has to be applied to technology solutions that help parents keep their children safe,” Levkovitz tells me. The company is pushing Apple to remove WhatsApp from the App Store until the problems are fixed, citing how Apple temporarily suspended Tumblr due to child pornography.

Ad networks must be monitored

Encryption has proven an impediment to WhatsApp’s efforts to prevent the spread of child exploitation imagery. WhatsApp can’t see what is shared inside of group chats. Instead, it has to rely on the few pieces of public and unencrypted data, such as group names, group profile photos and members’ profile photos, scanning them for suspicious names or illegal images. The company matches those images against a PhotoDNA database of known child exploitation photos to administer bans, and has human moderators investigate if seemingly illegal images aren’t already on file. It then reports its findings to law enforcement and the National Center for Missing and Exploited Children. Strong encryption is important for protecting privacy and political dissent, but it also thwarts some detection of illegal content and thereby necessitates more manual moderation.
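The matching step described above — fingerprinting an image and checking it against a database of known hashes — can be sketched roughly as follows. PhotoDNA itself is a proprietary perceptual hash licensed by Microsoft that survives resizing and recompression; the SHA-256 stand-in here only catches byte-identical copies and is purely illustrative, as are the function names and the sample hash set.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images.
# A real system would store PhotoDNA perceptual hashes; SHA-256 is
# used here only as an illustrative stand-in for exact-match lookup.
known_hashes = {
    # sha256 of the stand-in bytes b"test", used for the demo below
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a hex digest fingerprint of raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_abuse_image(image_bytes: bytes) -> bool:
    """Flag an image if its fingerprint matches the known database."""
    return fingerprint(image_bytes) in known_hashes

print(is_known_abuse_image(b"test"))          # True
print(is_known_abuse_image(b"benign photo"))  # False
```

The design point is that only fingerprints, never the images themselves, need to be stored and compared — which is also why this approach works solely on the unencrypted surfaces (profile photos, group photos) that WhatsApp can actually see.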

With just 300 total employees and only a subset working on security or content moderation, WhatsApp seems understaffed to manage such a large user base. It’s tried to depend on AI to safeguard its community. However, that technology can’t yet perform the nuanced investigations necessary to combat exploitation. WhatsApp runs semi-independently of Facebook, but could hire more moderators to investigate group discovery apps that lead to child pornography if Facebook allocated more resources to its acquisition.

WhatsApp group discovery apps featured Adult sections that contained links to child exploitation imagery groups

Google and Facebook, with their vast headcounts and profit margins, are neglecting to properly police the apps that host their ad networks. The companies have sought to earn extra revenue by powering ads on other apps, yet failed to assume the necessary responsibility to ensure those apps aren’t facilitating crimes. Stricter examinations of in-app content should be administered before an app is accepted to app stores or ad networks, and periodically once they’re running. And when automated systems can’t be deployed, as can be the case with policing third-party apps, human staffers should be assigned despite the cost.

It’s becoming increasingly clear that social networks and ad networks that profit off other people’s content can’t be low-maintenance cash cows. Companies should invest ample money and labor into safeguarding any property they run or monetize, even if it makes the opportunities less lucrative. The strip-mining of the internet without regard for consequences must end.
