Facebook Policy

Senator Warner calls on Zuckerberg to support market research consent rules


In response to TechCrunch’s investigation of Facebook paying teens and adults to install a VPN that lets it analyze all their phone’s traffic, Senator Mark Warner (D-VA) has sent a letter to Mark Zuckerberg. It admonishes Facebook for not spelling out exactly which data the Facebook Research app was collecting, or giving users the information necessary to decide whether they should accept payment in exchange for their privacy. Following our report, Apple banned Facebook’s Research app from iOS and, as punishment, shut down its internal employee-only workplace apps too, causing mayhem in Facebook’s offices.

Warner wrote to Zuckerberg, “In both the case of Onavo and the Facebook Research project, I have concerns that users were not appropriately informed about the extent of Facebook’s data-gathering and the commercial purposes of this data collection. Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me.”

Warner is working on writing new laws to govern data collection initiatives like Facebook Research. He asks Zuckerberg, “Will you commit to supporting legislation requiring individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users?”

Senator Blumenthal’s fierce statement

Meanwhile, Senator Richard Blumenthal (D-CT) provided TechCrunch with a fiery statement regarding our investigation. He calls Facebook anti-competitive, which could fuel calls to regulate or break up the company, says the FTC must address the issue and notes that he’s planning to work with Congress to safeguard teens’ privacy:

“Wiretapping teens is not research, and it should never be permissible. This is yet another astonishing example of Facebook’s complete disregard for data privacy and eagerness to engage in anti-competitive behavior. Instead of learning its lesson when it was caught spying on consumers using the supposedly ‘private’ Onavo VPN app, Facebook rebranded the intrusive app and circumvented Apple’s attempts to protect iPhone users. Facebook continues to demonstrate its eagerness to look over everyone’s shoulder and watch everything they do in order to make money. 

Mark Zuckerberg’s empty promises are not enough. The FTC needs to step up to the plate, and the Onavo app should be part of its investigation. I will also be writing to Apple and Google on Facebook’s egregious behavior, and working in Congress to make sure that teens are protected from Big Tech’s privacy intrusions.”

Senator Markey says stop surveilling teens

And finally, Senator Edward J. Markey (D-MA) requests that Facebook stop recruiting teens for its Research program, and notes he’ll push his “Do Not Track Kids” act in Congress:

“It is inherently manipulative to offer teens money in exchange for their personal information when younger users don’t have a clear understanding how much data they’re handing over and how sensitive it is. I strongly urge Facebook to immediately cease its recruitment of teens for its Research Program and explicitly prohibit minors from participating. Congress also needs to pass legislation that updates children’s online privacy rules for the 21st century. I will be reintroducing my ‘Do Not Track Kids Act’ to update the Children’s Online Privacy Protection Act by instituting key privacy safeguards for teens. 

But my concerns also extend to adult users. I am alarmed by reports that Facebook is not providing participants with complete information about the extent of the information that the company can access through this program. Consumers deserve simple and clear explanations of what data is being collected and how it is being used.”

The senators’ statements do go a bit overboard. Though Facebook Research was aggressively competitive and potentially misleading, Blumenthal calling it “anti-competitive” is a stretch. And Warner’s question about whether “any user reasonably understood that they were giving Facebook root device access through the enterprise certificate,” or that it uses the data to track competitors, goes a bit far. Surely some savvy technologists did; the real question is whether the teens and everyone else understood.

Facebook isn’t the only one paying users to analyze all their phone data. TechCrunch found that Google had a similar program called Screenwise Meter. Though it was more upfront about it, Google also appears to have violated Apple’s employee-only Enterprise Certificate rules. We may be seeing the start of an industry-wide crackdown on market research surveillance apps that dangle gift cards in front of users to get them to give up a massive amount of privacy.

Warner’s full letter to Zuckerberg can be found below:

Dear Mr. Zuckerberg: 

I write to express concerns about allegations of Facebook’s latest efforts to monitor user activity. On January 29th, TechCrunch revealed that under the auspices of partnerships with beta testing firms, Facebook had begun paying users aged 13 to 35 to install an enterprise certificate, allowing Facebook to intercept all internet traffic to and from user devices. According to subsequent reporting by TechCrunch, Facebook relied on intermediaries that often “did not disclose Facebook’s involvement until users had begun the signup process.” Moreover, the advertisements used to recruit participants and the “Project Disclosure” make no mention of Facebook or the commercial purposes to which this data was allegedly put.

This arrangement comes in the wake of revelations that Facebook had previously engaged in similar efforts through a virtual private network (VPN) app, Onavo, that it owned and operated. According to a series of articles by the Wall Street Journal, Facebook used Onavo to scout emerging competitors by monitoring user activity – acquiring competitors in order to neutralize them as competitive threats, and in cases when that did not work, monitor usage patterns to inform Facebook’s own efforts to copy the features and innovations driving adoption of competitors’ apps. In 2017, my staff contacted Facebook with questions about how Facebook was promoting Onavo through its Facebook app – in particular, framing the app as a VPN that would “protect” users while omitting any reference to the main purpose of the app: allowing Facebook to gather market data on competitors.

Revelations in 2017 and 2018 prompted Apple to remove Onavo from its App Store in 2018 after concluding that the app violated its terms of service prohibitions on monitoring activity of other apps on a user’s device, as well as a requirement to make clear what user data will be collected and how it will be used. In both the case of Onavo and the Facebook Research project, I have concerns that users were not appropriately informed about the extent of Facebook’s data-gathering and the commercial purposes of this data collection.

Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me. As you recall, I wrote the Federal Trade Commission in 2014 in the wake of revelations that Facebook had undertaken a behavioral experiment on hundreds of thousands of users, without obtaining their informed consent. In submitted questions to your Chief Operating Officer, Sheryl Sandberg, I once again raised these concerns, asking if Facebook provided for “individualized, informed consent” in all research projects with human subjects – and whether users had the ability to opt out of such research. In response, we learned that Facebook does not rely on individualized, informed consent (noting that users consent under the terms of the general Data Policy) and that users have no opportunity to opt out of being enrolled in research studies of their activity. In large part for this reason, I am working on legislation to require individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users. 

Fair, robust competition serves as an impetus for innovation, product differentiation, and wider consumer choice. For these reasons, I request that you respond to the following questions: 

1. Do you think any user reasonably understood that they were giving Facebook root device access through the enterprise certificate? What specific steps did you take to ensure that users were properly informed of this access? 

2. Do you think any user reasonably understood that Facebook was using this data for commercial purposes, including to track competitors?

3. Will you release all participants from the confidentiality agreements Facebook made them sign?

4. As you know, I have begun working on legislation that would require large platforms such as Facebook to provide users, on a continual basis, with an estimate of the overall value of their data to the service provider. In this instance, Facebook seems to have developed valuations for at least some uses of the data that was collected (such as market research). This further emphasizes the need for users to understand fully what data is collected by Facebook, the full range of ways in which it is used, and how much it is worth to the company. Will you commit to supporting this legislation and exploring methods for valuing user data holistically?

5. Will you commit to supporting legislation requiring individualized, informed consent in all instances of behavioral and market research conducted by large platforms on users?

I look forward to receiving your responses within the next two weeks. If you should have any questions or concerns, please contact my office at 202-224-2023.


Facebook pays teens to install VPN that spies on them


Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and pays teenagers and adults to download the Research app and give it root access to network traffic, in what may be a violation of Apple policy, so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms.

Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits, and it has no plans to stop.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

Facebook’s Research app requires users to ‘Trust’ it with extensive access to their data

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.” It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.

The strategy shows how far Facebook is willing to go and how much it’s willing to pay to protect its dominance — even at the risk of breaking the rules of Apple’s iOS platform on which it depends. Apple could seek to block Facebook from continuing to distribute its Research app, or even revoke its permission to offer employee-only apps, and the situation could further chill relations between the tech giants. Apple’s Tim Cook has repeatedly criticized Facebook’s data collection practices. Facebook disobeying iOS policies to slurp up more information could become a new talking point. TechCrunch has spoken to Apple and it’s aware of the issue, but the company did not provide a statement before press time.

Facebook’s Research program is referred to as Project Atlas on sign-up sites that don’t mention Facebook’s involvement

“The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”

Facebook’s surveillance app

Facebook first got into the data-sniffing business when it acquired Onavo for around $120 million in 2014. The VPN app helped users track and minimize their mobile data plan usage, but also gave Facebook deep analytics about what other apps they were using. Internal documents acquired by Charlie Warzel and Ryan Mac of BuzzFeed News reveal that Facebook was able to leverage Onavo to learn that WhatsApp was sending more than twice as many messages per day as Facebook Messenger. Onavo allowed Facebook to spot WhatsApp’s meteoric rise and justify paying $19 billion to buy the chat startup in 2014. WhatsApp has since tripled its user base, demonstrating the power of Onavo’s foresight.

Over the years since, Onavo clued Facebook in to what apps to copy, features to build and flops to avoid. By 2018, Facebook was promoting the Onavo app in a Protect bookmark of the main Facebook app in hopes of scoring more users to snoop on. Facebook also launched the Onavo Bolt app that let you lock apps behind a passcode or fingerprint while it surveils you, but Facebook shut down the app the day it was discovered following privacy criticism. Onavo’s main app remains available on Google Play and has been installed more than 10 million times.

The backlash heated up after security expert Strafach detailed in March how Onavo Protect was reporting to Facebook when a user’s screen was on or off, and its Wi-Fi and cellular data usage in bytes even when the VPN was turned off. In June, Apple updated its developer policies to ban collecting data about usage of other apps or data that’s not necessary for an app to function. Apple proceeded to inform Facebook in August that Onavo Protect violated those data collection policies and that the social network needed to remove it from the App Store, which it did, Deepa Seetharaman of the WSJ reported.

But that didn’t stop Facebook’s data collection.

Project Atlas

TechCrunch recently received a tip that despite Onavo Protect being banished by Apple, Facebook was paying users to sideload a similar VPN app under the Facebook Research moniker from outside of the App Store. We investigated, and learned Facebook was working with three app beta testing services to distribute the Facebook Research app: BetaBound, uTest and Applause. Facebook began distributing the Research VPN app in 2016. It has been referred to as Project Atlas since at least mid-2018, around when backlash to Onavo Protect magnified and Apple instituted its new rules that prohibited Onavo. [Update: Previously, a similar program was called Project Kodiak.] Facebook didn’t want to stop collecting data on people’s phone usage and so the Research program continued, in disregard for Apple banning Onavo Protect.

Facebook’s Research App on iOS

Ads (shown below) for the program run by uTest on Instagram and Snapchat sought teens 13-17 years old for a “paid social media research study.” The sign-up page for the Facebook Research program administered by Applause doesn’t mention Facebook, but seeks users “Age: 13-35 (parental consent required for ages 13-17).” If minors try to sign up, they’re asked to get their parents’ permission with a form that reveals Facebook’s involvement and says “There are no known risks associated with the project, however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of apps. You will be compensated by Applause for your child’s participation.” For kids short on cash, the payments could coerce them to sell their privacy to Facebook.

The Applause site explains what data could be collected by the Facebook Research app (emphasis mine):

“By installing the software, you’re giving our client permission to collect data from your phone that will help them understand how you browse the internet, and how you use the features in the apps you’ve installed . . . This means you’re letting our client collect information such as which apps are on your phone, how and when you use them, data about your activities and content within those apps, as well as how other people interact with you or your content within those apps. You are also letting our client collect information about your internet browsing activity (including the websites you visit and data that is exchanged between your device and those websites) and your use of other online services. There are some instances when our client will collect this information even where the app uses encryption, or from within secure browser sessions.”

Meanwhile, the BetaBound sign-up page with a URL ending in “Atlas” explains that “For $20 per month (via e-gift cards), you will install an app on your phone and let it run in the background.” It also offers $20 per friend you refer. That site also doesn’t initially mention Facebook, but the instruction manual for installing Facebook Research reveals the company’s involvement.

Facebook’s intermediary uTest ran ads on Snapchat and Instagram, luring teens to the Research program with the promise of money


Facebook seems to have purposefully avoided TestFlight, Apple’s official beta testing system, which requires apps to be reviewed by Apple and is limited to 10,000 participants. Instead, the instruction manual reveals that users download the app from r.facebook-program.com and are told to install an Enterprise Developer Certificate and VPN and “Trust” Facebook with root access to the data their phone transmits. Apple requires that developers agree to only use this certificate system for distributing internal corporate apps to their own employees. Randomly recruiting testers and paying them a monthly fee appears to violate the spirit of that rule.
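For the technically curious, here is a rough sketch of how a researcher might check whether an .ipa was signed for in-house enterprise distribution rather than App Store or TestFlight delivery. The filename is a placeholder, and the byte search is a quick triage heuristic rather than a full parse of the CMS-signed provisioning profile.

```python
import zipfile

# Hypothetical filename; an .ipa is just a zip archive containing
# Payload/<App>.app/embedded.mobileprovision, the provisioning profile.
IPA_PATH = "FacebookResearch.ipa"

with zipfile.ZipFile(IPA_PATH) as ipa:
    profiles = [n for n in ipa.namelist()
                if n.endswith("embedded.mobileprovision")]
    for name in profiles:
        raw = ipa.read(name)
        # Enterprise-distributed profiles typically carry the
        # ProvisionsAllDevices flag instead of a fixed device list; the
        # plist is embedded as plaintext inside the signed blob, so a
        # simple byte search is usually enough for a first look.
        if b"ProvisionsAllDevices" in raw:
            print(f"{name}: enterprise (in-house) distribution profile")
        elif b"ProvisionedDevices" in raw:
            print(f"{name}: ad hoc / development profile with a device list")
        else:
            print(f"{name}: App Store-style profile (no device entitlements)")
```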

Security expert Will Strafach found Facebook’s Research app contains lots of code from Onavo Protect, the Facebook-owned app Apple banned last year

Once installed, users just had to keep the VPN running and sending data to Facebook to get paid. The Applause-administered program requested that users screenshot their Amazon orders page. This data could potentially help Facebook tie browsing habits and usage of other apps with purchase preferences and behavior. That information could be harnessed to pinpoint ad targeting and understand which types of users buy what.

TechCrunch commissioned Strafach to analyze the Facebook Research app and find out where it was sending data. He confirmed that data is routed to “vpn-sjc1.v.facebook-program.com”, which is associated with Onavo’s IP address, and that the facebook-program.com domain is registered to Facebook, according to MarkMonitor. The app can update itself without interacting with the App Store, and is linked to the email address PeopleJourney@fb.com. He also discovered that the Enterprise Certificate indicates Facebook renewed it on June 27th, 2018 — weeks after Apple announced its new rules that prohibited the similar Onavo Protect app.
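Anyone can reproduce the basic infrastructure checks Strafach describes. Below is a minimal sketch, assuming a Unix-like machine with the standard whois command installed; the hostnames come from the report above, and the WHOIS field names are examples that vary by registrar.

```python
import socket
import subprocess

# Hostnames cited in the report; swap in whatever endpoints you capture
# from a proxy or from the app's network traffic.
HOSTS = ["vpn-sjc1.v.facebook-program.com", "facebook-program.com"]

for host in HOSTS:
    try:
        # Resolve the hostname to see which IP range it lands in.
        ip = socket.gethostbyname(host)
        print(f"{host} resolves to {ip}")
    except socket.gaierror as err:
        print(f"{host} did not resolve: {err}")

# WHOIS shows the domain's registrar of record (the report cites
# MarkMonitor, Facebook's usual corporate registrar) and registrant.
result = subprocess.run(["whois", "facebook-program.com"],
                        capture_output=True, text=True)
for line in result.stdout.splitlines():
    if any(key in line for key in ("Registrant Organization", "Registrar:")):
        print(line.strip())
```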

“It is tricky to know what data Facebook is actually saving (without access to their servers). The only information that is knowable here is what access Facebook is capable of based on the code in the app. And it paints a very worrisome picture,” Strafach explains. “They might respond and claim to only actually retain/save very specific limited data, and that could be true, it really boils down to how much you trust Facebook’s word on it. The most charitable narrative of this situation would be that Facebook did not think too hard about the level of access they were granting to themselves . . . which is a startling level of carelessness in itself if that is the case.”

“Flagrant defiance of Apple’s rules”

In response to TechCrunch’s inquiry, a Facebook spokesperson confirmed it’s running the program to learn how people use their phones and other services. The spokesperson told us “Like many companies, we invite people to participate in research that helps us identify things we can be doing better. Since this research is aimed at helping Facebook understand how people use their mobile devices, we’ve provided extensive information about the type of data we collect and how they can participate. We don’t share this information with others and people can stop participating at any time.”

Facebook’s Research app requires Root Certificate access, which lets Facebook gather almost any piece of data transmitted by your phone

Facebook’s spokesperson claimed that the Facebook Research app was in line with Apple’s Enterprise Certificate program, but didn’t explain how in the face of evidence to the contrary. They said Facebook first launched its Research app program in 2016. They tried to liken the program to a focus group and said Nielsen and comScore run similar programs, yet neither of those asks people to install a VPN or provide root access to the network. The spokesperson confirmed the Facebook Research program does recruit teens but also other age groups from around the world. They claimed that Onavo and Facebook Research are separate programs, but admitted the same team supports both as an explanation for why their code was so similar.

Facebook’s Research program requested users screenshot their Amazon order history to provide it with purchase data

However, Facebook’s claim that it doesn’t violate Apple’s Enterprise Certificate policy is directly contradicted by the terms of that policy. Those include that developers “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing”. The policy also states that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers” unless under direct supervision of employees or on company premises. Given Facebook’s customers are using the Enterprise Certificate-powered app without supervision, it appears Facebook is in violation.

Facebook disobeying Apple so directly could hurt their relationship. “The code in this iOS app strongly indicates that it is simply a poorly re-branded build of the banned Onavo app, now using an Enterprise Certificate owned by Facebook in direct violation of Apple’s rules, allowing Facebook to distribute this app without Apple review to as many users as they want,” Strafach tells us. ONV prefixes and mentions of graph.onavo.com, “onavoApp://” and “onavoProtect://” custom URL schemes litter the app. “This is an egregious violation on many fronts, and I hope that Apple will act expeditiously in revoking the signing certificate to render the app inoperable.”
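As an illustration of what “litter the app” looks like in practice, here is a hypothetical sketch of scanning an extracted app bundle for those leftover identifiers. The directory name is a placeholder and the marker list simply mirrors the strings Strafach cites; binaries distributed outside the App Store typically aren’t FairPlay-encrypted, so a plain byte search is usually enough.

```python
import os

# Placeholder path to an extracted app bundle (e.g. from an unzipped .ipa).
PAYLOAD_DIR = "FacebookResearch.app"

# Strings cited in the analysis above.
MARKERS = [b"ONV", b"graph.onavo.com", b"onavoApp://", b"onavoProtect://"]

for root, _dirs, files in os.walk(PAYLOAD_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "rb") as f:
                blob = f.read()
        except OSError:
            continue  # skip unreadable files
        hits = [m.decode() for m in MARKERS if m in blob]
        if hits:
            print(f"{path}: {', '.join(hits)}")
```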

Facebook is particularly interested in what teens do on their phones as the demographic has increasingly abandoned the social network in favor of Snapchat, YouTube and Facebook’s acquisition Instagram. Insights into the popularity among teens of Chinese video music app TikTok and of meme sharing led Facebook to launch a clone called Lasso and begin developing a meme-browsing feature called LOL, TechCrunch first reported. But Facebook’s desire for data about teens riles critics at a time when the company has been battered in the press. Analysts on tomorrow’s Facebook earnings call should inquire about what other ways the company has to collect competitive intelligence.

Last year when Tim Cook was asked what he’d do in Mark Zuckerberg’s position in the wake of the Cambridge Analytica scandal, he said “I wouldn’t be in this situation . . . The truth is we could make a ton of money if we monetized our customer, if our customer was our product. We’ve elected not to do that.” Zuckerberg told Ezra Klein that he felt Cook’s comment was “extremely glib.”

Now it’s clear that even after Apple’s warnings and the removal of Onavo Protect, Facebook is still aggressively collecting data on its competitors via Apple’s iOS platform. “I have never seen such open and flagrant defiance of Apple’s rules by an App Store developer,” Strafach concluded. If Apple shuts the Research program down, Facebook will either have to invent new ways to surveil our behavior amidst a climate of privacy scrutiny, or be left in the dark.

Additional reporting by Zack Whittaker.


With trust destroyed, Facebook is haunted by old data deals


As Facebook colonized the rest of the web with its functionality in hopes of fueling user growth, it built aggressive integrations with partners that are coming under newfound scrutiny through a deeply reported New York Times investigation. Some of what Facebook did was sloppy or unsettling, including forgetting to shut down APIs when it cancelled its Instant Personalization feature for other sites in 2014, and how it used contact syncing to power friend recommendations.

But other moves aren’t as bad as they sound. Facebook did provide Spotify and Netflix the ability to access users’ messages, but only so people could send friends songs or movies via Facebook messages without leaving those apps. And Facebook did let Yahoo and BlackBerry access people’s News Feeds, but only to let users browse those feeds within social hub features inside those apps. These partners could only access data when users logged in and connected their Facebook accounts, and were only approved to use this data to provide Facebook-related functionality. That means Spotify at least wasn’t supposed to be rifling through everyone’s messages to find out what bands they talk about so it could build better curation algorithms, and there’s no evidence yet that it did.

Thankfully Facebook has ditched most of these integrations, as the dominance of iOS and Android has allowed it to build fewer, more standardized and better safeguarded access points to its data. And it’s battened down the hatches in some ways, forcing users to shortcut from Spotify into the real Facebook Messenger rather than giving third parties any special access to offer Facebook Messaging themselves.

The most glaring allegation Facebook hasn’t adequately responded to yet is that it used data from Amazon, Yahoo, and Huawei to improve friend suggestions through People You May Know — perhaps its creepiest feature. The company needs to accept the loss of growth hacking trade secrets and become much more transparent about how it makes so uncannily accurate recommendations of who to friend request — as Gizmodo’s Kashmir Hill has documented.

In some cases, Facebook has admitted to missteps, with its Director of Developer Platforms and Programs Konstantinos Papamiltiadis writing “we shouldn’t have left the APIs in place after we shut down instant personalization.”

In others, we’ll have to decide where to draw the line between what was actually dangerous and what gives us the chills at first glance. You don’t ask permission from friends to read an email from them on a certain browser or device, so should you worry if they saw your Facebook status update on a BlackBerry social hub feature instead of the traditional Facebook app? Well, that depends on how the access is monitored and meted out.

The underlying question is whether we trust that Facebook and these other big tech companies actually abided by rules to oversee and not to overuse data. Facebook has done plenty wrong, and after repeatedly failing to be transparent or live up to its apologies, it doesn’t deserve the benefit of the doubt. For that reason, I don’t want it giving any developer — even ones I normally trust like Spotify — access to sensitive data protected merely by their promise of good behavior despite financial incentives for misuse.

Facebook’s former chief security officer Alex Stamos tweeted that “allowing for 3rd party clients is the kind of pro-competition move we want to see from dominant platforms. For ex, making Gmail only accessible to Android and the Gmail app would be horrible. For the NY Times to try to scandalize this kind of integration is wrong.” But he countered that by noting that “integrations that are sneaky or send secret data to servers controlled by others really is wrong.”

Even if Spotify and Netflix didn’t abuse the access Facebook provided, there’s always eventually a Cambridge Analytica. Tech companies have proven their word can’t necessarily be trusted. The best way to protect users is to properly lock down the platforms with ample vetting, limits, and oversight so there won’t be gray areas that require us to put our faith in the kindness of businesses.


Facebook ends platform policy banning apps that copy its features


Facebook will now freely allow developers to build competitors to its features upon its own platform. Today Facebook announced it will drop Platform Policy section 4.1, which stipulates “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

That policy felt pretty disingenuous given how aggressively Facebook has replicated everyone else’s core functionality, from Snapchat to Twitter and beyond. Facebook had previously enforced the policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.
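For context, the Find Friends functionality sits on top of the Graph API’s friends edge. Here is a minimal sketch of what a third-party app’s call looks like, assuming a user access token granted the user_friends permission; the token value is a placeholder, and since Graph API v2.0 the edge only returns friends who also use the requesting app.

```python
import requests

# Placeholder token; a real app would obtain this via Facebook Login
# with the user_friends permission.
ACCESS_TOKEN = "EAAB...placeholder"
GRAPH_URL = "https://graph.facebook.com/v3.2/me/friends"

# Query the friends edge for the logged-in user.
resp = requests.get(GRAPH_URL, params={"access_token": ACCESS_TOKEN})
resp.raise_for_status()
payload = resp.json()

# List the friends who also use this app (all that v2.0+ exposes).
for friend in payload.get("data", []):
    print(friend.get("name"), friend.get("id"))

# The response typically includes a summary with the user's total friend count.
print("total_count:", payload.get("summary", {}).get("total_count"))
```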

The move will significantly reduce the risk of building on the Facebook platform. It could also cast it in a better light in the eyes of regulators. Anyone seeking ways Facebook abuses its dominance will lose a talking point. And by creating a more fair and open platform where developers can build without fear of straying too close to Facebook’s history or road map, it could reinvigorate its developer ecosystem.

A Facebook spokesperson provided this statement to TechCrunch:

We built our developer platform years ago to pave the way for innovation in social apps and services. At that time we made the decision to restrict apps built on top of our platform that replicated our core functionality. These kind of restrictions are common across the tech industry with different platforms having their own variant including YouTube, Twitter, Snap and Apple. We regularly review our policies to ensure they are both protecting people’s data and enabling useful services to be built on our platform for the benefit of the Facebook community. As part of our ongoing review we have decided that we will remove this out of date policy so that our platform remains as open as possible. We think this is the right thing to do as platforms and technology develop and grow.

The change comes after Facebook locked down parts of its platform in April for privacy and security reasons in the wake of the Cambridge Analytica scandal. Diplomatically, Facebook said it didn’t expect the change to impact its standing with regulators but it’s open to answering their questions.

Earlier in April, I wrote a report on how Facebook used Policy 4.1 to attack competitors it saw gaining traction. The article, “Facebook shouldn’t block you from finding friends on competitors,” advocated for Facebook to make its social graph more portable and interoperable so users could decamp to competitors if they felt they weren’t treated right, thereby pressuring Facebook to act better.

The policy change will apply retroactively. Old apps that lost Find Friends or other functionality will be able to submit their app for review and, once approved, will regain access.

Friend lists still can’t be exported in a truly interoperable way. But at least now Facebook has enacted the spirit of that call to action. Developers won’t be in danger of losing access to that Find Friends Facebook API for treading in its path.

Below is an excerpt from our previous reporting on how Facebook enforced Platform Policy 4.1, which before today’s change was used to hamper competitors:

  • Voxer was one of the hottest messaging apps of 2012, climbing the charts and raising a $30 million round with its walkie-talkie-style functionality. In early January 2013, Facebook copied Voxer by adding voice messaging into Messenger. Two weeks later, Facebook cut off Voxer’s Find Friends access. Voxer CEO Tom Katis told me at the time that Facebook stated his app with tens of millions of users was a “competitive social network” and wasn’t sharing content back to Facebook. Katis told us he thought that was hypocritical. By June, Voxer had pivoted toward business communications, tumbling down the app charts and leaving Facebook Messenger to thrive.
  • MessageMe had a well-built chat app that was growing quickly after launching in 2013, posing a threat to Facebook Messenger. Shortly before reaching 1 million users, Facebook cut off MessageMe’s Find Friends access. The app ended up selling for a paltry double-digit millions price tag to Yahoo before disintegrating.
  • Phhhoto and its fate show how Facebook’s data protectionism encompasses Instagram. Phhhoto’s app that let you shoot animated GIFs was growing popular. But soon after it hit 1 million users, it got cut off from Instagram’s social graph in April 2015. Six months later, Instagram launched Boomerang, a blatant clone of Phhhoto. Within two years, Phhhoto shut down its app, blaming Facebook and Instagram. “We watched [Instagram CEO Kevin] Systrom and his product team quietly using PHHHOTO almost a year before Boomerang was released. So it wasn’t a surprise at all . . . I’m not sure Instagram has a creative bone in their entire body.”
  • Vine had a real shot at being the future of short-form video. The day the Twitter-owned app launched, though, Facebook shut off Vine’s Find Friends access. Vine let you share back to Facebook, and its six-second loops you shot in the app were a far cry from Facebook’s heavyweight video file uploader. Still, Facebook cut it off, and by late 2016, Twitter announced it was shutting down Vine.


Facebook must police Today In, its local news digest launching in 400 cities


Facebook has a new area of its app it will have to police for fake news and biased sensationalism. Facebook is launching “Today In”, its local news aggregator it began testing in January, in 400 small to medium-sized US cities. It’s also now testing it in its first overseas spot in Australia. iOS and Android users can open the Today In bookmark or opt in to getting digests of its local news in their feed. The feature includes previews that link out to news sites about top headlines, current discussions, school announcements and more.

“We have a number of misinformation filters in place to ensure that fake news and clickbait does not surface on Today In. We also provide people the ability to report suspicious content on Facebook and within Today In specifically,” a Facebook spokesperson tells me. “The misinformation filters are the same across Facebook that we’ve previously talked about – downranking clickbait, ratings from third-party fact checkers,” they said. However, “the content in the surface is pulled by algorithm,” so there’s always a chance that problematic content slips through. For now, there will be no ads in Today In.
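To make “downranking” concrete, here is a purely illustrative sketch, not Facebook’s actual ranking code, of how signals like a clickbait-classifier score and a third-party fact-check rating could be folded into ordering stories for a digest. Every weight, name and example below is an assumption for demonstration only.

```python
from typing import Optional

def rank_story(base_relevance: float,
               clickbait_probability: float,
               fact_check_rating: Optional[str]) -> float:
    """Return a ranking score for a story; lower scores sink in the digest."""
    score = base_relevance
    # Downrank in proportion to how clickbait-like a classifier judges the headline.
    score *= 1.0 - 0.5 * clickbait_probability
    # Heavily demote anything a third-party fact checker has rated false.
    if fact_check_rating == "false":
        score *= 0.1
    return score

# Toy examples: a straight news story, a clickbait headline, and a debunked hoax.
stories = [
    ("City council approves new bus routes", rank_story(0.8, 0.05, None)),
    ("You won't BELIEVE what the mayor did", rank_story(0.8, 0.90, None)),
    ("Miracle cure discovered at local pond", rank_story(0.8, 0.60, "false")),
]
for title, score in sorted(stories, key=lambda s: s[1], reverse=True):
    print(f"{score:.2f}  {title}")
```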


Facebook is also now testing Local Alerts with 100 local government and first responder Pages. The alerts can be issued to inform citizens about urgent issues or emergencies, such as where to take shelter from a hurricane. Local Alerts are delivered via News Feed and Today In, and Pages can also target users with notifications about them. Again, while Facebook may be vetting which Pages get access to the Local Alerts feature, it must monitor closely to make sure they’re using it to provide vital info to their communities rather than just to grab traffic at sensitive moments.

Facebook is hoping to fill a void after surveys found 50 percent of users wanted more local news through Facebook. It previously tested Today In with New Orleans, La.; Little Rock, Ark.; Billings, Mont.; Peoria, Ill.; Olympia, Wash.; and Binghamton, N.Y. The feature could give local outlets a referral traffic boost that could help offset the fact that Facebook has drained ad dollars from journalism into its own News Feed ads. And to make sure “news deserts” without enough local outlets still have robust Today In sections, Facebook will collect headlines from surrounding areas.

But the launch also opens up a new vector for policy issues, and it’s curious that Facebook would push forward on this given all its policy troubles as of late. It will have to ensure that Today In only aggregates content from reliable and fact-focused local outlets and doesn’t end up peddling fake news. But that in turn could open it to criticism suggesting it’s biased against fringe political outlets that believe their clickbait is the real story.

Users who want to check if they have access to Today In can visit this interactive map. The list includes Facebook’s hometown of Menlo Park and nearby Oakland, but not San Francisco. It’s also skipping big cities like New York and Washington, D.C. in favor of places like Mobile, Alabama; and Provo, Utah.

To find the mobile-only feature in Facebook (there’s no desktop version), users will hit the three-line “More” hamburger button and scroll down looking for “Today In [their city]”. Otherwise, they may stumble across one of its digests showing the headlines, thumbnail images, and publications for three of the biggest local news stories.

After tapping through or opening the Today In bookmark, they’ll be able to horizontally swipe through different sections like In The News that features recent stories and can be toggled to display sports. As per usual, Facebook isn’t above promoting its own content, like user and Page News Feed posts discussing local topics, Groups you could join, or Events you could RSVP to. Once you hit the end of a daily edition, you’ll see a “You’re all caught up” notice, similar to Instagram’s feature designed to keep you from over-scrolling.

Facebook infamously turned away from news in favor of content from friends at the start of 2018, precipitating a significant decline in News Feed reach and referral traffic for links to articles. That left a lot of outlets feeling burned, as many had staffed up thanks to that flow of traffic and the ad dollars it generated. Now some are having to lay off journalists, especially those making video content that Facebook also dialed down.

By resurfacing local news, Facebook could help strengthen ties in local communities as part of its new mission statement to “bring the world closer together”. But if that news contains heavy partisan bias or hypes up nothingburgers, it could lead to more polarization. Facebook already has trouble finding enough third-party fact checkers to verify viral news stories. Now it may expose itself to even more liability as the arbiter of truth, having fragmented the news space into hundreds of distinct digests.

This conundrum will play out again and again. Facebook wants to keep pushing forward with product launches it thinks can help society, but in turn it takes on even greater responsibility to protect us, a responsibility it hasn’t proven it deserves.


Tech giants offer empty apologies because users can’t quit


A true apology consists of a sincere acknowledgement of wrongdoing, a show of empathic remorse for the wrong and the harm it caused, and a promise of restitution by improving one’s actions to make things right. Without the follow-through, saying sorry isn’t an apology, it’s a hollow ploy for forgiveness.

That’s the kind of “sorry” we’re getting from tech giants — an attempt to quell bad PR and placate the afflicted, often without the systemic change necessary to prevent repeated problems. Sometimes it’s delivered in a blog post. Sometimes it’s in an executive apology tour of media interviews. But rarely is it in the form of change to the underlying structures of a business that caused the issue.

Intractable Revenue

Unfortunately, tech company business models often conflict with the way we wish they would act. We want more privacy but they thrive on targeting and personalization data. We want control of our attention but they subsist on stealing as much of it as possible with distraction while showing us ads. We want safe, ethically built devices that don’t spy on us but they make their margins by manufacturing them wherever’s cheap with questionable standards of labor and oversight. We want groundbreaking technologies to be responsibly applied, but juicy government contracts and the allure of China’s enormous population compromise their morals. And we want to stick to what we need and what’s best for us, but they monetize our craving for the latest status symbol or content through planned obsolescence and locking us into their platforms.

The result is that even if their leaders earnestly wanted to impart meaningful change to provide restitution for their wrongs, their hands are tied by entrenched business models and the short-term focus of the quarterly earnings cycle. They apologize and go right back to problematic behavior. The Washington Post recently chronicled a dozen times Facebook CEO Mark Zuckerberg has apologized, yet the social network keeps experiencing fiasco after fiasco. Tech giants won’t improve enough on their own.

Addiction To Utility

The threat of us abandoning ship should theoretically hold the captains in line. But tech giants have evolved into fundamental utilities that many have a hard time imagining living without. How would you connect with friends? Find what you needed? Get work done? Spend your time? What hardware or software would you cuddle up with in the moments you feel lonely? We live our lives through tech, have become addicted to its utility, and fear the withdrawal.

If there were principled alternatives to switch to, perhaps we could hold the giants accountable. But the scalability, network effects and aggregation of supply by distributors have led to near monopolies in these core utilities. The second-place solution is often distant. What’s the next best social network that serves as an identity and login platform that isn’t owned by Facebook? The next best premium mobile and PC maker behind Apple? The next best mobile operating system for the developing world beyond Google’s Android? The next best ecommerce hub that’s not Amazon? The next best search engine? Photo feed? Web hosting service? Global chat app? Spreadsheet?

Facebook is still growing in the US & Canada despite the backlash, proving that tech users aren’t voting with their feet. And if not for a calculation methodology change, it would have added 1 million users in Europe this quarter too.

One of the few tech backlashes that led to real flight was #DeleteUber. Workplace discrimination, shady business protocols, exploitative pricing and more combined to spur the movement to ditch the ride-hailing app. But what was different here is that US Uber users did have a principled alternative to switch to without much hassle: Lyft. The result was that “Lyft benefitted tremendously from Uber’s troubles in 2018,” eMarketer’s forecasting director Shelleen Shum told USA Today in May. Uber missed eMarketer’s projections while Lyft exceeded them, narrowing the gap between the car services. And meanwhile, Uber’s CEO stepped down as it tried to overhaul its internal policies.

This is why we need regulation that promotes competition by preventing massive mergers and giving users the right to interoperable data portability so they can easily switch away from companies that treat them poorly.

But in the absence of viable alternatives to the giants, leaving these mainstays is inconvenient. After all, they’re the ones that made us practically allergic to friction. Even after massive scandals, data breaches, toxic cultures, and unfair practices, we largely stick with them to avoid the uncertainty of life without them. Even Facebook added 1 million monthly users in the US and Canada last quarter despite seemingly every possible source of unrest. Tech users are not voting with their feet. We’ve proven we can harbor ill will towards the giants while begrudgingly buying and using their products. Our leverage to improve their behavior is vastly weakened by our loyalty.

Inadequate Oversight

Regulators have failed to adequately step up either. This year’s congressional hearings about Facebook and social media often devolved into inane and uninformed questioning, like how does Facebook earn money if it doesn’t charge? “Senator, we run ads,” Facebook CEO Mark Zuckerberg said with a smirk. Other times, politicians were so intent on scoring partisan points by grandstanding or advancing conspiracy theories about bias that they were unable to make any real progress. A recent survey commissioned by Axios found that “In the past year, there has been a 15-point spike in the number of people who fear the federal government won’t do enough to regulate big tech companies — with 55% now sharing this concern.”

When regulators do step in, their attempts can backfire. GDPR was supposed to help tamp down on the dominance of Google and Facebook by limiting how they could collect user data and making them more transparent. But the high cost of compliance simply hindered smaller players or drove them out of the market while the giants had ample cash to spend on jumping through government hoops. Google actually gained ad tech market share and Facebook saw the littlest loss while smaller ad tech firms lost 20 or 30 percent of their business.

Europe’s GDPR privacy regulations backfired, reinforcing Google and Facebook’s dominance. Chart via Ghostery, Cliqz, and WhoTracksMe.

Even the Honest Ads Act, which was designed to bring political campaign transparency to internet platforms following election interference in 2016, has yet to be passed despite support from Facebook and Twitter. There hasn’t been meaningful discussion of blocking social networks from acquiring their competitors in the future, let alone actually breaking Instagram and WhatsApp off of Facebook. Governments like the U.K. that just forcibly seized documents related to Facebook’s machinations surrounding the Cambridge Analytica debacle provide some indication of willpower. But clumsy regulation could deepen the moats of the incumbents, and prevent disruptors from gaining a foothold. We can’t depend on regulators to sufficiently protect us from tech giants right now.

Our Hope On The Inside

The best bet for change will come from the rank and file of these monolithic companies. With the war for talent raging, rock star employees able to have huge impact on products, and compensation costs to keep them around rising, tech giants are vulnerable to the opinions of their own staff. It’s simply too expensive and disjointing to have to recruit new high-skilled workers to replace those that flee.

Google declined to renew a contract with the government after 4000 employees petitioned and a few resigned over Project Maven’s artificial intelligence being used to target lethal drone strikes. Change can even flow across company lines. Many tech giants including Facebook and Airbnb have removed their forced arbitration rules for harassment disputes after Google did the same in response to 20,000 of its employees walking out in protest.

Thousands of Google employees protested the company’s handling of sexual harassment and misconduct allegations on Nov. 1.

Facebook is desperately pushing an internal communications campaign to reassure staffers it’s improving in the wake of damning press reports from the New York Times and others. TechCrunch published an internal memo from Facebook’s outgoing VP of communications Elliot Schrage in which he took the blame for recent issues and encouraged employees to avoid finger-pointing, while COO Sheryl Sandberg tried to reassure employees that “I know this has been a distraction at a time when you’re all working hard to close out the year — and I am sorry.” These internal apologies could come with much more contrition and real change than those paraded for the public.

And so after years of relying on these tech workers to build the products we use every day, we must now rely on those workers to save us from the companies they work for. It’s a weighty responsibility to move their talents where the impact is positive, or to commit to standing up against the business imperatives of their employers. We as the public and media must in turn celebrate when they do what’s right for society, even when it reduces value for shareholders. If apps abuse us or unduly rob us of our attention, we need to stay off of them.

And we must accept that shaping the future for the collective good may be inconvenient for the individual. There’s an opportunity here not just to complain or wish, but to build a social movement that holds tech giants accountable for delivering the change they’ve promised over and over.



10 critical points from Zuckerberg’s epic security manifesto


Mark Zuckerberg wants you to know he’s trying his damnedest to fix Facebook before it breaks democracy. Tonight he posted a 3,260-word battle plan for fighting election interference. Amidst drilling through Facebook’s strategy and progress, he slips in several notable passages revealing his own philosophy.

Zuckerberg has cast off his premature skepticism and is ready to command the troops. He sees Facebook’s real identity policy as a powerful weapon for truth that other social networks lack, but one that would be weakened if Instagram and WhatsApp were split off by regulators. He’s done with the finger-pointing and wants everyone to work together on solutions. And he’s adopted a touch of cynicism that could open his eyes and help him predict how people will misuse his creation.

Here are the most important parts of Zuckerberg’s security manifesto:

Zuckerberg embraces his war-time tactician role

“While we want to move quickly when we identify a threat, it’s also important to wait until we uncover as much of the network as we can before we take accounts down to avoid tipping off our adversaries, who would otherwise take extra steps to cover their remaining tracks. And ideally, we time these takedowns to cause the maximum disruption to their operations.”

The fury he unleashed on Google+, Snapchat, and Facebook’s IPO-killer is now aimed at election attackers

“These are incredibly complex and important problems, and this has been an intense year. I am bringing the same focus and rigor to addressing these issues that I’ve brought to previous product challenges like shifting our services to mobile.”

Balancing free speech and security is complicated and expensive

“These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much in security.”

Putting Twitter and YouTube on blast for allowing pseudonymity…

“One advantage Facebook has is that we have a principle that you must use your real identity. This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity.”

…While making an argument for why the Internet is more secure if Facebook isn’t broken up

“Fortunately, our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.”‘

Political ads aren’t a business, they’re supposedly a moral duty

“When deciding on this policy, we also discussed whether it would be better to ban political ads altogether. Initially, this seemed simple and attractive. But we decided against it — not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process.”

Zuckerberg overruled staff to allow academic research on Facebook

“As a result of these controversies [like Cambridge Analytica], there was considerable concern amongst Facebook employees about allowing researchers to access data. Ultimately, I decided that the benefits of enabling this kind of academic research outweigh the risks. But we are dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards.”

Calling on law enforcement to step up

“There are certain critical signals that only law enforcement has access to, like money flows. For example, our systems make it significantly harder to set up fake accounts or buy political ads from outside the country. But it would still be very difficult without additional intelligence for Facebook or others to figure out if a foreign adversary had set up a company in the US, wired money to it, and then registered an authentic account on our services and bought ads from the US.”

Instead of minimizing their own blame, the major players must unite forces

“Preventing election interference is bigger than any single organization. It’s now clear that everyone — governments, tech companies, and independent experts such as the Atlantic Council — need to do a better job sharing the signals and information they have to prevent abuse . . . The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.”

The end of Zuckerberg’s utopic idealism

“One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.”


Facebook demands ID verification for big Pages, ‘issue’ ad buyers


Facebook is looking to self-police by implementing parts of the proposed Honest Ads Act before the government tries to regulate it. To fight fake news and election interference, Facebook will require the admins of popular Facebook Pages and advertisers buying political or “issue” ads on “debated topics of national legislative importance” like education or abortion to verify their identity and location. Those that refuse, are found to be fraudulent, or are trying to influence foreign elections will have their Pages prevented from posting to the News Feed or their ads blocked.

Meanwhile, Facebook plans to use this information to append a “Political Ad” label and “Paid for by” information to all election, politics and issue ads. Users can report any ads they think are missing the label, and Facebook will show if a Page has changed its name to thwart deception. Facebook started the verification process this week; users in the U.S. will start seeing the labels and buyer info later this spring, and Facebook will expand the effort to ads around the world in the coming months.

This verification and name change disclosure process could prevent hugely popular Facebook Pages from being built up around benign content, then sold to cheats or trolls who switch to sharing scams or misinformation.

Overall, it’s a smart start that comes way too late. As soon as Facebook started heavily promoting its ability to run influential election ads, it should have voluntarily adopted similar verification and labeling rules as traditional media. Instead, it was so focused on connecting people to politics, it disregarded how the connection could be perverted to power mass disinformation and destabilization campaigns.

“These steps by themselves won’t stop all people trying to game the system. But they will make it a lot harder for anyone to do what the Russians did during the 2016 election and use fake accounts and pages to run ads,” CEO Mark Zuckerberg wrote on Facebook. “Election interference is a problem that’s bigger than any one platform, and that’s why we support the Honest Ads Act. This will help raise the bar for all political advertising online.” You can see his full post below.

The move follows Twitter’s November announcement that it too would label political ads and show who purchased them.

Twitter’s mockup for its “Political” ad labels and “paid for by” information

Facebook also gave a timeline for releasing both its tool for viewing all ads run by Pages and its Political Ad Archive. A searchable index of all ads with the “political” label, including their images, text, target demographics and how much was spent on them, will launch in June and keep ads visible for four years after they run. Meanwhile, the View Ads tool that’s been testing in Canada will roll out globally in June so users can see any ad run by a Page, not just those targeted to them.

Facebook announced in October it would require documentation from election advertisers and label their ads, but now is applying those requirements to a much wider swath of ads that deal with big issues impacted by politics. That could protect users from disinformation and divisive content not just during elections, but any time bad actors are trying to drive wedges into society. Facebook wouldn’t reveal the threshold of followers that will trigger Pages needing verification, but confirmed it will not apply to small to medium-size businesses.

By self-regulating, Facebook may be able to take the wind out of calls for new laws that would apply to online ads the buyer disclosure rules already imposed on TV and other traditional media ads. Zuckerberg will testify before the U.S. Senate Judiciary and Commerce committees on April 10, as well as the House Energy and Commerce Committee on April 11. Having today’s announcement to point to could give him more protection against criticism during the hearings, though Congress will surely want to know why these safeguards weren’t in place already.

With important elections coming up in the US, Mexico, Brazil, India, Pakistan and more countries in the next year, one…

Posted by Mark Zuckerberg on Friday, April 6, 2018

