Mark Zuckerberg

Seized cache of Facebook docs raises competition and consent questions


A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained via a legal discovery process by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of the key issues as the committee sees them after reviewing the documents.

Here is his summary of the six issues he draws attention to:

  • White Lists Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.

Facebook responded

  • Value of friends data It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.

In their response Facebook contends that this was essentially another “cherrypicked” topic and that the company “ultimately settled on a model where developers did not need to purchase advertising to access APIs and we continued to provide the developer platform for free.”

  • Reciprocity Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  • Android Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  • Onavo Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  • Targeting competitor Apps The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

Update: 11:40am

Facebook has posted a lengthy response (read it here) positing that the “set of documents, by design, tells only one side of the story and omits important context.” They give a blow-by-blow response to Collins’ points, though they are ultimately pretty selective in what they actually address.

Generally they suggest that some of the issues being framed as anti-competitive were in fact designed to prevent “sketchy apps” from operating on the platform. Furthermore, Facebook details that they delete some old call logs on Android, that using “market research” data from Onavo is essentially standard practice, and that users had the choice whether data was shared reciprocally between FB and developers. In regard to specific competitors’ apps, Facebook appears to have tried to get ahead of this release with their announcement yesterday that it was ending its platform policy of banning apps that “replicate core functionality.”

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Though the timing of Facebook’s policy shift announcement hardly looks incidental — given Collins said last week that the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

Onavo charts are quite an insight into facebook’s commanding view of the app-based attention marketplace pic.twitter.com/Ezdaxk6ffC

— David Carroll 🦅 (@profcarroll) December 5, 2018

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent’, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

In one email dated November 15, 2013, Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection, writing: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook, which the documents could stir up afresh, relates to its repeated claim — including under questioning from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we run ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.


Tech giants offer empty apologies because users can’t quit


A true apology consists of a sincere acknowledgement of wrongdoing, a show of empathic remorse for the wrong you did and the harm it caused, and a promise of restitution by improving one’s actions to make things right. Without the follow-through, saying sorry isn’t an apology, it’s a hollow ploy for forgiveness.

That’s the kind of “sorry” we’re getting from tech giants — an attempt to quell bad PR and placate the afflicted, often without the systemic change necessary to prevent repeated problems. Sometimes it’s delivered in a blog post. Sometimes it’s in an executive apology tour of media interviews. But rarely is it in the form of change to the underlying structures of a business that caused the issue.

Intractable Revenue

Unfortunately, tech company business models often conflict with the way we wish they would act. We want more privacy but they thrive on targeting and personalization data. We want control of our attention but they subsist on stealing as much of it as possible with distraction while showing us ads. We want safe, ethically built devices that don’t spy on us but they make their margins by manufacturing them wherever’s cheap with questionable standards of labor and oversight. We want groundbreaking technologies to be responsibly applied, but juicy government contracts and the allure of China’s enormous population compromise their morals. And we want to stick to what we need and what’s best for us, but they monetize our craving for the latest status symbol or content through planned obsolescence and locking us into their platforms.

The result is that even if their leaders earnestly wanted to impart meaningful change to provide restitution for their wrongs, their hands are tied by entrenched business models and the short-term focus of the quarterly earnings cycle. They apologize and go right back to problematic behavior. The Washington Post recently chronicled a dozen times Facebook CEO Mark Zuckerberg has apologized, yet the social network keeps experiencing fiasco after fiasco. Tech giants won’t improve enough on their own.

Addiction To Utility

The threat of us abandoning ship should theoretically hold the captains in line. But tech giants have evolved into fundamental utilities that many have a hard time imagining living without. How would you connect with friends? Find what you needed? Get work done? Spend your time? What hardware or software would you cuddle up with in the moments you feel lonely? We live our lives through tech, have become addicted to its utility, and fear the withdrawal.

If there were principled alternatives to switch to, perhaps we could hold the giants accountable. But the scalability, network effects, and aggregation of supply by distributors have led to near monopolies in these core utilities. The second-place solution is often distant. What’s the next best social network that serves as an identity and login platform that isn’t owned by Facebook? The next best premium mobile and PC maker behind Apple? The next best mobile operating system for the developing world beyond Google’s Android? The next best ecommerce hub that’s not Amazon? The next best search engine? Photo feed? Web hosting service? Global chat app? Spreadsheet?

Facebook is still growing in the US & Canada despite the backlash, proving that tech users aren’t voting with their feet. And if not for a calculation methodology change, it would have added 1 million users in Europe this quarter too.

One of the few tech backlashes that led to real flight was #DeleteUber. Workplace discrimination, shady business protocols, exploitative pricing and more combined to spur the movement to ditch the ridehailing app. But what was different here is that US Uber users did have a principled alternative to switch to without much hassle: Lyft. The result was that “Lyft benefitted tremendously from Uber’s troubles in 2018,” eMarketer’s forecasting director Shelleen Shum told USA Today in May. Uber missed eMarketer’s projections while Lyft exceeded them, narrowing the gap between the car services. And meanwhile, Uber’s CEO stepped down as it tried to overhaul its internal policies.

This is why we need regulation that promotes competition by preventing massive mergers and giving users the right to interoperable data portability so they can easily switch away from companies that treat them poorly.

But in the absence of viable alternatives to the giants, leaving these mainstays is inconvenient. After all, they’re the ones that made us practically allergic to friction. Even after massive scandals, data breaches, toxic cultures, and unfair practices, we largely stick with them to avoid the uncertainty of life without them. Even Facebook added 1 million monthly users in the US and Canada last quarter despite seemingly every possible source of unrest. Tech users are not voting with their feet. We’ve proven we can harbor ill will towards the giants while begrudgingly buying and using their products. Our leverage to improve their behavior is vastly weakened by our loyalty.

Inadequate Oversight

Regulators have failed to adequately step up either. This year’s congressional hearings about Facebook and social media often devolved into inane and uninformed questioning, like how does Facebook earn money if it doesn’t charge? “Senator, we run ads,” Facebook CEO Mark Zuckerberg said with a smirk. Other times, politicians were so intent on scoring partisan points by grandstanding or advancing conspiracy theories about bias that they were unable to make any real progress. A recent survey commissioned by Axios found that “In the past year, there has been a 15-point spike in the number of people who fear the federal government won’t do enough to regulate big tech companies — with 55% now sharing this concern.”

When regulators do step in, their attempts can backfire. GDPR was supposed to help tamp down on the dominance of Google and Facebook by limiting how they could collect user data and making them more transparent. But the high cost of compliance simply hindered smaller players or drove them out of the market, while the giants had ample cash to spend on jumping through government hoops. Google actually gained ad tech market share and Facebook saw only a slight loss, while smaller ad tech firms lost 20 or 30 percent of their business.

Europe’s GDPR privacy regulations backfired, reinforcing Google and Facebook’s dominance. Chart via Ghostery, Cliqz, and WhoTracksMe.

Even the Honest Ads Act, which was designed to bring political campaign transparency to internet platforms following election interference in 2016, has yet to be passed despite support from Facebook and Twitter. There hasn’t been meaningful discussion of blocking social networks from acquiring their competitors in the future, let alone actually breaking Instagram and WhatsApp off of Facebook. Governments like the U.K.’s, which just forcibly seized documents related to Facebook’s machinations surrounding the Cambridge Analytica debacle, provide some indication of willpower. But clumsy regulation could deepen the moats of the incumbents and prevent disruptors from gaining a foothold. We can’t depend on regulators to sufficiently protect us from tech giants right now.

Our Hope On The Inside

The best bet for change will come from the rank and file of these monolithic companies. With the war for talent raging, rock star employees able to have a huge impact on products, and the compensation costs of keeping them around rising, tech giants are vulnerable to the opinions of their own staff. It’s simply too expensive and disruptive to have to recruit new high-skilled workers to replace those who flee.

Google declined to renew a contract with the government after 4000 employees petitioned and a few resigned over Project Maven’s artificial intelligence being used to target lethal drone strikes. Change can even flow across company lines. Many tech giants including Facebook and Airbnb have removed their forced arbitration rules for harassment disputes after Google did the same in response to 20,000 of its employees walking out in protest.

Thousands of Google employees protested the company’s handling of sexual harassment and misconduct allegations on Nov. 1.

Facebook is desperately pushing an internal communications campaign to reassure staffers it’s improving in the wake of damning press reports from the New York Times and others. TechCrunch published an internal memo from Facebook’s outgoing VP of communications Elliot Schrage in which he took the blame for recent issues and encouraged employees to avoid finger-pointing, and in which COO Sheryl Sandberg tried to reassure employees that “I know this has been a distraction at a time when you’re all working hard to close out the year — and I am sorry.” These internal apologies could come with much more contrition and real change than those paraded for the public.

And so after years of us relying on these tech workers to build the products we use every day, we must now rely on them to save us from the companies they work for. It’s a weighty responsibility for them to move their talents where the impact is positive, or to commit to standing up against the business imperatives of their employers. We as the public and media must in turn celebrate when they do what’s right for society, even when it reduces value for shareholders. If apps abuse us or unduly rob us of our attention, we need to stay off of them.

And we must accept that shaping the future for the collective good may be inconvenient for the individual. There’s an opportunity here not just to complain or wish, but to build a social movement that holds tech giants accountable for delivering the change they’ve promised over and over.


Facebook Messenger starts rolling out Unsend; here’s how it works


Facebook secretly retracted messages sent by CEO Mark Zuckerberg, TechCrunch reported seven months ago. Now for the first time, Facebook Messenger users will get the power to unsend too so they can remove their sent messages from the recipient’s inbox. Messages can only be unsent for the first 10 minutes after they’re delivered so that you can correct a mistake or remove something you accidentally pushed, but you won’t be able to edit ancient history. Formally known as “Remove for Everyone,” the button also leaves a “tombstone” indicating a message was retracted. And to prevent bullies from using the feature to cover their tracks, Facebook will retain unsent messages for a short period of time so if they’re reported, it can review them for policy violations.

The Remove feature rolls out in Poland, Bolivia, Colombia and Lithuania today on Messenger for iOS and Android. A Facebook spokesperson tells me the plan is to roll it out globally as soon as possible, though that may be influenced by the holiday App Store update cut-off. In the meantime, it’s also working on more unsend features, potentially including the ability to preemptively set an expiration date for specific messages or entire threads.

“The pros are that users want to be in control . . . and if you make a mistake you can correct it. There are a lot of legitimate use cases out there that we wanted to enable,” Facebook’s head of Messenger Stan Chudnovsky tells me in an exclusive interview. But conversely, he says, “We need to make sure we don’t open up any new venues for bullying. We need to make sure people aren’t sending you bad messages and then removing them because if you report them and the messages aren’t there we can’t do anything.”

Zuckerberg did it; soon you can, too

Facebook first informed TechCrunch it would build an unsend feature back in April, after I reported that six sources told me some of Mark Zuckerberg’s Facebook messages had been silently removed from the inboxes of recipients, including non-employees, with no tombstone left in their place. We saw that as a violation of user trust and an abuse of the company’s power, given the public had no way to unsend their own messages.

Facebook claimed this was to protect the privacy of its executives and the company’s trade secrets, telling me that “After Sony Pictures’ emails were hacked in 2014 we made a number of changes to protect our executives’ communications. These included limiting the retention period for Mark’s messages in Messenger.” But it seems likely that Facebook also wanted to avoid another embarrassing situation like when Zuckerberg’s old instant messages from 2004 leaked. One damning exchange saw Zuckerberg tell a friend “if you ever need info about anyone at harvard . . . just ask . . . i have over 4000 emails, pictures, addresses, sns.” “what!? how’d you manage that one?”  the friend replied. “People just submitted it . .  i don’t know why . . . they ‘trust me’ . . . dumb fucks” Zuckerberg replied.

The company told me it was actually already working on an Unsend button for everyone, and wouldn’t delete any more executives’ messages until it launched. Chudnovsky tells me he felt like “I wish we launched this sooner” when the news broke. But then six months went by without progress or comment from Facebook before TechCrunch broke the news that tipster Jane Manchun Wong had spotted Facebook prototyping the Remove feature. Then a week ago, Facebook Messenger’s App Store release notes accidentally mentioned that a 10-minute Unsend button was coming soon.

So why the seven-month wait? Especially given Instagram already allows users to unsend messages no matter how old? “The reason why it took so long is because on the server side, it’s actually much harder. All the messages are stored on the server, and that goes into the core transportation layer of how our messaging system was built,” Chudnovsky explains. “It was hard to do given how we were architected, but we were always worried about the integrity concerns it would open up.” Now the company is confident it’s surmounted the engineering challenge to ensure an unsent message reliably disappears from the recipient’s inbox.

“The question becomes ‘who owns that message?’ Before that message is delivered to your Messenger app, it belongs to me. But when it actually arrives, it probably belongs to both of us,” Chudnovsky pontificates.

How Facebook Messenger’s “Remove for Everyone” button works

Facebook settled on the ability to let you remove any kind of message — including text, group chats, photos, videos, links and more — within 10 minutes of sending. You can still delete any message on just your side of the conversation, but only messages you sent can be removed from their recipients. You can’t delete from someone else what they sent you, the feature’s PR manager Kat Chui tells me. And Facebook will keep a private copy of the message for a short while after it’s deleted to make sure it can review if it’s reported for harassment.

To use the unsend feature, tap and hold on a message you sent, then select “Remove.” You’ll get options to “Remove for Everyone” which will retract the message, or “Remove for you,” which replaces the old delete option and leaves the message in the recipient’s inbox. You’ll get a warning that explains “You’ll permanently remove this message for all chat members. They can see that you removed a message and still report it.” If you confirm the removal, a line of text noting “you [or the sender’s name] removed a message” (known as a tombstone) will appear in the thread where the message was. If you want to report a removed message for abuse or another issue, you’ll tap the person’s name, scroll to “Something’s Wrong” and select the proper category such as harassment or that they were pretending to be someone else.

Why the 10-minute limit specifically? “We looked at how the existing delete functionality works. It turns out that when people are deleting messages because it’s a mistake or they sent something they didn’t want to send, it’s under a minute. We decided to extend it to 10, but decided we didn’t need to do more,” Chudnovsky reveals.
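
To make those mechanics concrete, here is a minimal sketch of how a retraction flow with a 10-minute window, a tombstone, and a temporarily retained server-side copy for abuse reports could be modeled. It is illustrative only: the Message class and the remove_for_everyone helper are assumptions made for the example, not Facebook’s actual implementation.

```python
import time
from dataclasses import dataclass
from typing import Optional

REMOVAL_WINDOW_SECS = 10 * 60  # the 10-minute "Remove for Everyone" window described above

@dataclass
class Message:
    sender_id: str
    text: str
    sent_at: float                       # epoch seconds when the message was delivered
    removed: bool = False                # True once retracted for everyone
    retained_copy: Optional[str] = None  # kept privately for a while so reports can be reviewed

def remove_for_everyone(msg: Message, requester_id: str, now: Optional[float] = None) -> str:
    """Retract a sent message for all chat members, leaving a tombstone string."""
    now = time.time() if now is None else now
    if requester_id != msg.sender_id:
        raise PermissionError("only the sender can remove a message for everyone")
    if now - msg.sent_at > REMOVAL_WINDOW_SECS:
        raise ValueError("the 10-minute removal window has passed")
    msg.retained_copy = msg.text  # server keeps a copy briefly in case the message is reported
    msg.text = ""                 # recipients no longer see the content...
    msg.removed = True
    return "You removed a message"  # ...only a tombstone like this in the thread
```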

He says he’s not sure if Facebook’s security team will now resume removing executive messages. However, he stresses that the Unsend button Facebook is launching “is definitely not the same feature” as what was used on Zuckerberg’s messages. If Facebook wanted to truly respect its users, it would at least insert the tombstone when it erases old messages from executives.

Messenger is also building more unsend functionality. Taking a cue from encrypted messaging app Signal’s customizable per thread expiration date feature, Chudnovsky tells me “hypothetically, if I want all the messages to be deleted after six months, they get purged. This is something that can be set up on a per thread level,” though Facebook is still tinkering with the details. Another option would be for Facebook to extend to all chats the per message expiration date option from its encrypted Secret messages feature.

“It’s one of those things that feels very simple on the surface. And it would be very easy if the servers were built one way or another from the very beginning,” Chudnovsky concludes. “But it’s one of those things philosophically and technologically that once you get to the scale of 1.3 billion people using it, changing from one model to another is way more complicated.” Hopefully in the future, Facebook won’t give its executives extrajudicial ways to manipulate communications… or at least not until it’s sorted out the consequences of giving the public the same power.


10 critical points from Zuckerberg’s epic security manifesto


Mark Zuckerberg wants you to know he’s trying his damnedest to fix Facebook before it breaks democracy. Tonight he posted a 3,260-word battle plan for fighting election interference. Amidst drilling through Facebook’s strategy and progress, he slips in several notable passages revealing his own philosophy.

Zuckerberg has cast off his premature skepticism and is ready to command the troops. He sees Facebook’s real identity policy as a powerful weapon for truth other social networks lack, but that would be weakened if Instagram and WhatsApp were split off by regulators. He’s done with the finger-pointing and wants everyone to work together on solutions. And he’s adopted a touch of cynicism that could open his eyes and help him predict how people will misuse his creation.

Here are the most important parts of Zuckerberg’s security manifesto:

Zuckerberg embraces his war-time tactician role

“While we want to move quickly when we identify a threat, it’s also important to wait until we uncover as much of the network as we can before we take accounts down to avoid tipping off our adversaries, who would otherwise take extra steps to cover their remaining tracks. And ideally, we time these takedowns to cause the maximum disruption to their operations.”

The fury he unleashed on Google+, Snapchat, and Facebook’s IPO-killer is now aimed at election attackers

“These are incredibly complex and important problems, and this has been an intense year. I am bringing the same focus and rigor to addressing these issues that I’ve brought to previous product challenges like shifting our services to mobile.”

Balancing free speech and security is complicated and expensive

“These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much in security.”

Putting Twitter and YouTube on blast for allowing pseudonymity…

“One advantage Facebook has is that we have a principle that you must use your real identity. This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity.”

…While making an argument for why the Internet is more secure if Facebook isn’t broken up

“Fortunately, our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.”

Political ads aren’t a business, they’re supposedly a moral duty

“When deciding on this policy, we also discussed whether it would be better to ban political ads altogether. Initially, this seemed simple and attractive. But we decided against it — not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process.”

Zuckerberg overruled staff to allow academic research on Facebook

“As a result of these controversies [like Cambridge Analytica], there was considerable concern amongst Facebook employees about allowing researchers to access data. Ultimately, I decided that the benefits of enabling this kind of academic research outweigh the risks. But we are dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards.”

Calling on law enforcement to step up

“There are certain critical signals that only law enforcement has access to, like money flows. For example, our systems make it significantly harder to set up fake accounts or buy political ads from outside the country. But it would still be very difficult without additional intelligence for Facebook or others to figure out if a foreign adversary had set up a company in the US, wired money to it, and then registered an authentic account on our services and bought ads from the US.”

Instead of minimizing their own blame, the major players must unite forces

“Preventing election interference is bigger than any single organization. It’s now clear that everyone — governments, tech companies, and independent experts such as the Atlantic Council — need to do a better job sharing the signals and information they have to prevent abuse . . . The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.”

The end of Zuckerberg’s utopic idealism

“One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.”


Facebook, Google face first GDPR complaints over ‘forced consent’


After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and long-time Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response. Update: Facebook has now sent the following statement, attributed to its chief privacy officer, Erin Egan: “We have prepared for the past 18 months to ensure we meet the requirements of the GDPR. We have made our policies clearer, our privacy settings easier to find and introduced better tools for people to access, download, and delete their information. Our work to improve people’s privacy doesn’t stop on May 25th. For example, we’re building Clear History: a way for everyone to see the websites and apps that send us information when you use them, clear this information from your account, and turn off our ability to store it associated with your account going forward.”

Schrems most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the power imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record of defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation as a data protection rights champion.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of GDPR, major violations of the law can attract fines as large as 4% of a company’s global annual revenue, which, in the case of Facebook or Google, implies they could be on the hook for more than a billion euros apiece — if they are deemed to have violated the law, as the complaints argue.
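
As a rough illustration of why that 4% ceiling translates into ‘more than a billion euros apiece’ for companies of this size, here is a back-of-envelope calculation that assumes Facebook’s roughly $40 billion of 2017 revenue (a figure cited elsewhere on this page) as its only input:

```python
# Back-of-envelope: GDPR's headline penalty is up to 4% of global annual turnover.
facebook_2017_revenue_usd = 40e9  # approximate figure, used purely for illustration
max_fine_usd = 0.04 * facebook_2017_revenue_usd
print(f"Theoretical maximum fine: ${max_fine_usd / 1e9:.1f}B")  # -> $1.6B
```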

That said, given how freshly fixed in place the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to make amends to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an onstage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community a lot of philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less likely to be able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech is able to unilaterally apply and extract ‘consent’ as a consequence of the reach and power of their platforms — arguing there’s an underlying competition concern that GDPR could also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”


Instagram says ‘you’re all caught up’ in first time-well-spent feature


Without a chronological feed, it can be tough to tell if you’ve seen all the posts Instagram will show you. That can lead to more of the compulsive, passive, zombie browsing that research suggests is unhealthy as users endlessly scroll through stale content hoping for a hit of dopamine-inducing novelty.

But with Instagram’s newest feature, at least users know when they’ve seen everything and can stop scrolling without FOMO. Instagram is showing some users a mid-feed alert after a bunch of browsing that says “You’re All Caught Up – You’ve seen all new posts from the past 48 hours.” When asked about it, Instagram confirmed to TechCrunch that it’s testing this feature. It declined to give details about how it works, including whether the announcement means you’ve seen literally every post from people you follow from the last two days, or just the best ones that the algorithm has decided are worth showing you.

The feature could help out Instagram completists who want to be sure they never miss a selfie, sunset or supper pic. Before Instagram rolled out its algorithm in the summer of 2016, they could just scroll down until they hit the last post they’d seen, or the point at which they knew they’d last visited. Warning them that they’ve seen everything could quiet some of the backlash to the algorithm, which has centered around people missing content they wanted to see because the algorithm mixed up the chronology.

But perhaps more importantly, it’s one of the app’s first publicly tested features that’s clearly designed with the “time well spent” movement in mind. Facebook CEO Mark Zuckerberg has been vocal about prioritizing well-being over profits, to the point that the network reduced the prevalence of viral videos in the feed so much that the app lost 1 million users in the U.S. and Canada in Q4 2017. “I expect the time people spend on Facebook and some measures of engagement will go down . . . If we do the right thing, I believe that will be good for our community and our business over the long term too,” he wrote.

But Instagram’s leadership had been quiet on the issue until last week, when TechCrunch broke news that buried inside Instagram was an unlaunched “Usage Insights” feature that would show users their “time spent.” That prompted Instagram CEO Kevin Systrom to tweet our article, noting “It’s true . . . We’re building tools that will help the IG community know more about the time they spend on Instagram – any time should be positive and intentional . . . Understanding how time online impacts people is important, and it’s the responsibility of all companies to be honest about this. We want to be part of the solution. I take that responsibility seriously.”

Instagram is preparing a “Usage Insights” feature that will show how long you spend in the app. Image via Jane Manchun Wong

It’s reassuring to hear that one of the world’s most popular, but also overused, social media apps is going to put user health over engagement and revenue. Usage Insights has yet to launch. But the “You’re All Caught Up” alerts show Instagram is being earnest about its commitment. Those warnings almost surely prompt people to close the app and therefore see fewer ads, hurting Instagram’s bottom line.

Perhaps it’s a product of Facebook and Instagram’s dominance that they can afford to trade short-term engagement for long-term sustainability of the product. Some companies like Twitter have been criticized for not doing more to kick abusers off their platforms because it could hurt their user count.

But with Android now offering time management tools and many urging Apple to do the same, the time-well-spent reckoning may be dawning upon the mobile app ecosystem. Apps that continue to exploit users by doing whatever it takes to maximize total time spent may find themselves labeled the enemy, plus may actually be burning out their most loyal users. Urging them to scroll responsibly could not only win their favor, but keep them browsing in shorter, healthier sessions for years to come.


The psychological impact of an $11 Facebook subscription


Would being asked to pay Facebook to remove ads make you appreciate their value or resent them even more? As Facebook considers offering an ad-free subscription option, there are deeper questions than how much money it could earn. Facebook has the opportunity to let us decide how we compensate it for social networking. But choice doesn’t always make people happy.

In February I explored the idea of how Facebook could disarm data privacy backlash and boost well-being by letting us pay a monthly subscription fee instead of selling our attention to advertisers. The big takeaways were:

  • Mark Zuckerberg insists that Facebook will remain free to everyone, including those who can’t afford a monthly fee, so subscriptions would be an opt-in alternative to ads rather than a replacement that forces everyone to pay
  • Partially decoupling the business model from maximizing your total time spent on Facebook could let it actually prioritize time well spent because it wouldn’t have to sacrifice ad revenue
  • The monthly subscription price would need to offset Facebook’s ad earnings. In the US & Canada, Facebook earned $19.9 billion in 2017 from 239 million users. That means the average user there would have to pay roughly $7 per month (see the back-of-envelope calculation just after this list)
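
Here is the back-of-envelope arithmetic behind that $7 figure, using only the numbers quoted in the list above:

```python
# Average revenue per user (ARPU) from Facebook's reported 2017 US & Canada figures.
annual_revenue_usd = 19.9e9  # US & Canada ad revenue in 2017
users = 239e6                # US & Canada users
arpu_per_month = annual_revenue_usd / users / 12
print(f"Average revenue per user: ${arpu_per_month:.2f}/month")  # -> $6.94, i.e. roughly $7
```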

However, my analysis neglected some of the psychological fallout of telling people they only get to ditch ads if they can afford it, the loss of ubiquitous reach for advertisers, and the reality of which users would cough up the cash. Though on the other hand, I also neglected the epiphany a price tag could produce for users angry about targeted advertising.

What’s Best For Everyone

This conversation is relevant because Zuckerberg was asked twice by congress about Facebook potentially offering subscriptions. Zuckerberg endorsed the merits of ad-supported apps, but never ruled out letting users buy a premium version. “We don’t offer an option today for people to pay to not show ads,” Zuckerberg said, later elaborating that “Overall, I think that the ads experience is going to be the best one. I think in general, people like not having to pay for a service. A lot of people can’t afford to pay for a service around the world, and this aligns with our mission the best.”

But that word ‘today’ gave a glimmer of hope that we might be able to pay in the future.

Facebook CEO and founder Mark Zuckerberg testifies during a US House Committee on Energy and Commerce hearing about Facebook on Capitol Hill in Washington, DC, April 11, 2018. (Photo: SAUL LOEB/AFP/Getty Images)

What would we be paying for beyond removing ads, though? Facebook already lets users concerned about their privacy opt out of some ad targeting, just not out of seeing ads as a whole. Zuckerberg’s stumping for free Internet services makes it seem unlikely that Facebook would build valuable features and reserve them for subscribers.

Spotify only lets paid users play any song they want on-demand, while ad-supported users are stuck on shuffle. LinkedIn only lets paid users message anyone they want and appear as a ‘featured applicant’ to hirers, while ad-supported users can only message their connections. Netflix only lets paid users…use it at all.

But Facebook views social networking as a human right, and would likely want to give all users any extra features it developed like News Feed filters to weed out politics or baby pics. Facebook also probably wouldn’t sell features that break privacy like how LinkedIn subscribers can see who visited their profiles. In fact, I wouldn’t bet on Facebook offering any significant premium-only features beyond removing ads. That could make it a tough sell.

Meanwhile, advertisers trying to reach every member of a demographic might not want a way for people to pay to opt-out of ads. If they’re trying to promote a new movie, a restaurant chain, or an election campaign, they’d want as strong of penetration amongst their target audience as they can get. A subscription model punches holes in the ubiquity of Facebook ads that drive businesses to the app.

Resentment Vs Appreciation

But the biggest issue is that Facebook is just really good at monetizing with ads. For a company that never charges its users, it earns a ton of money: $40 billion in 2017. Convincing people to pay more with their wallets than their eyeballs may be difficult. And the ones who want to pay are probably worth much more than the average.

Let’s look at the US & Canada market, where Facebook earns the most per user because people there are wealthier, have more disposable income than people in other parts of the world, and therefore command higher ad rates. On average, US and Canada users earn Facebook $7 per month from ads. But those willing and able to pay are probably richer than the average user, so luxury businesses pay more to advertise to them; they also probably spend more time browsing Facebook than the average user, so they see more of those ads.

Brace for sticker shock, because for Facebook to offset the ad revenue of these rich hardcore users, it might have to charge more like $11 to $14 per month.
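
To see roughly where an $11 to $14 range could come from, here is one hedged reading of the article’s own numbers; the value multiplier is an assumption introduced purely for illustration, not a figure Facebook has reported:

```python
# If would-be subscribers see more ads and command higher ad rates than the average
# user, the break-even subscription price scales up from the $7 average accordingly.
average_arpu = 7.0                # $/month, US & Canada average from above
assumed_multipliers = (1.6, 2.0)  # hypothetical premium for wealthier, heavier users
print([round(average_arpu * m, 2) for m in assumed_multipliers])  # -> [11.2, 14.0]
```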

With no bonus features, that price for something they can get for free could seem way too high. Many who could afford it still wouldn’t justify it, regardless of how much time they spend on Facebook compared to other media subscriptions they shell out for. Those who truly can’t afford it might suddenly feel more resentment towards the Facebook ads they’ve been scrolling past unperturbed for years. Each one would be a reminder that they don’t have the cash to escape Facebook’s data mines.

But perhaps it’s just as likely that people would feel the exact opposite — that having to see those ads really isn’t so bad when faced with the alternative of a steep subscription price.

People often don’t see worth in what they get for free. Being confronted with a price tag could make them more cognizant of the value exchange they’re voluntarily entering. Social networking costs money to operate, and they have to pay somehow. Seeing ads keeps Facebook’s lights on, its labs full of future products, and its investors happy.

That’s why it might not matter if Facebook can only get 4 percent, or 1 percent, or 0.1 percent of users to pay. It could be worth it for Facebook to build out a subscription option to empower users with a sense of choice and provide perspective on the value they already receive for free.


Zuckerberg owns or clones most of the “8 social apps” he cites as competition


Mark Zuckerberg’s flimsy defense when congress asked about a lack of competition to Facebook has been to cite that the average American uses eight social apps. But that conveniently glosses over the fact that Facebook owns three of the top 10 U.S. iOS apps: #4 Instagram, #6 Messenger, and #8 Facebook according to App Annie. The top 3 apps are games. Facebook is building its Watch video hub to challenge #5 YouTube, and has relentlessly cloned Stories to beat #7 Snapchat. And Facebook also owns #19 WhatsApp. Zoom in to just “social networking apps”, and Facebook owns the entire top 3.

“The average American I think uses eight different communication and social apps. So there’s a lot of different choice and a lot of innovation and activity going on in this space,” Zuckerberg said when asked by Senator Graham during yesterday’s Senate hearing whether Facebook is a monopoly, and he’s trotted out that same talking point, which was on his note sheet, during today’s House testimony.

But Facebook has relentlessly sought to acquire or co-opt the features of its competitors. That’s why any valuable regulation will require congress to prioritize competition. That means either breaking up Facebook, Instagram, and WhatsApp; avoiding rules that are easy for Facebook to comply with but prohibitively expensive for potential rivals to manage; or ensuring data portability that allows users to choose where to take their content and personal information.

Breaking up Facebook, or at least preventing it from acquiring established social networks in the future, would be the most powerful way to promote competition in the space. Facebook’s multi-app structure creates economies of scale in data that allow it to share ad targeting and sales teams, backend engineering, and relevancy-sorting algorithms. That makes it tough for smaller competitors without as much money or data to provide the public with more choice.

Regulation done wrong could create a moat for Facebook, locking in its lead. Complex transparency laws might be just a paperwork speed bump for Facebook and its army of lawyers, but could be too onerous for upstart companies to follow. Meanwhile, data collection regulation could prevent competitors from ever building as large of a data war chest as Facebook has already generated.

Data portability gives users the option to choose the best social network for them, rather than being stuck where they already are. Facebook provides a Download Your Information tool for exporting your content. But photos come back compressed, and you don’t get the contact info of friends unless they opt in. The list of friends’ names you receive doesn’t allow you to find them on other apps the way contact info would. Facebook should at least offer a method for exporting a hashed version of that contact info that other apps could use to help you find your friends there without violating the privacy of those friends. Meanwhile, Instagram entirely lacks a Download Your Information tool.
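
Purely for illustration, here is one way the hashed contact matching idea could work in principle; the helper names and the choice of SHA-256 over normalized phone numbers are assumptions, and a real system would need further protections since phone number hashes can be brute-forced:

```python
import hashlib

def normalize_phone(number: str) -> str:
    """Keep digits only, so the same contact hashes identically on both services."""
    return "".join(ch for ch in number if ch.isdigit())

def hash_contact(number: str) -> str:
    """One-way hash of a contact identifier; the raw number itself is never exported."""
    return hashlib.sha256(normalize_phone(number).encode()).hexdigest()

# The exporting service would hand over only hashes of your friends' numbers...
exported_hashes = {hash_contact(n) for n in ["+1 (555) 010-0001", "+1 555 010 0002"]}

# ...and a rival app could hash your local address book and look for overlaps,
# surfacing mutual contacts without either side exchanging raw phone numbers.
address_book = ["1 555 010 0001", "1 555 010 9999"]
matches = [n for n in address_book if hash_contact(n) in exported_hashes]
print(matches)  # -> ['1 555 010 0001']
```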

Congress should push Zuckerberg to explain what apps compete with Facebook as a core identity provider, an omni-purpose social graph, or a cross-platform messaging app. Without choice, users are at the mercy of Facebook’s policy and product decisions. All of the congressional questions about data privacy and security don’t mean much to the public if they have no viable alternative to Facebook. The fact that Facebook owns or clones the majority of the eight social apps used by the average American is nothing for Zuckerberg to boast about.



Zuckerberg’s boring testimony is a big win for Facebook

Posted by | Apps, Cambridge Analytica, Developer, Facebook Data privacy, Government, Mark Zuckerberg, Mobile, Policy, privacy, Social, TC, zuckerberg testimony | No Comments

Mark Zuckerberg ran his apology scripts, trotted out his lists of policy fixes and generally dulled the Senate into submission. And that constitutes success for Facebook.

Zuckerberg testified before the joint Senate judiciary and commerce committee today, capitalizing on the lack of knowledge of the politicians and their surface-level questions. Half the time, Zuckerberg got to simply paraphrase blog posts and statements he’d already released. Much of the other half, he merely explained how basic Facebook functionality works.

The senators hadn’t done their homework, but he had. All that training with D.C. image consultants paid off.

Facebook CEO Mark Zuckerberg arrives to testify before a joint hearing of the US Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee on Capitol Hill, April 10, 2018 in Washington, DC. (Photo: JIM WATSON/AFP/Getty Images)

Sidestepping any gotcha questions or meme-worthy sound bites, Zuckerberg’s repetitive answers gave the impression that there’s little left to uncover, whether or not that’s true. He made a convincing argument that Facebook is atoning for its sins, is cognizant of its responsibility and has a concrete plan in place to improve data privacy.

With just five minutes per senator, each with a queue of questions to get through, few focused on the tougher queries, and even fewer had time for follow-ups to dig for real answers.

Did Facebook cover up the Cambridge Analytica scandal or decide against adding privacy protections earlier to protect its developer platform? Is it a breach of trust for Zuckerberg and other executives to have deleted their Facebook messages out of recipients’ inboxes? How has Facebook used a lack of data portability to inhibit the rise of competitors? Why doesn’t Instagram let users export their data the way they can from Facebook?

The public didn’t get answers to any of those questions today. Just Mark’s steady voice regurgitating Facebook’s talking points. Investors rewarded Facebook for its monotony with a 4.5 percent share price boost.

That’s not to say today’s hearing wasn’t effective. It’s just that the impact was felt before Zuckerberg waded through a hundred photographers to take his seat in the Senate office.

Facebook knew this day was coming, and worked to build Zuckerberg a fortress of facts he could point to no matter what he got asked:

  • Was Facebook asleep at the wheel during the 2016 election? Yesterday it revealed it had deleted the accounts of Russian GRU intelligence operatives in June 2016.
  • How will Facebook prevent this from happening again? Last week it announced plans to require identity and location verification for any political advertiser or popular Facebook Page, and significantly restricted its developer platform.
  • Is Facebook taking this seriously? Zuckerberg wrote in his prepared testimony for today that Facebook is doubling its security and content moderation team from 10,000 to 20,000, and that “protecting our community is more important than maximizing our profits.”
  • Is Facebook sorry? “We didn’t take a broad enough view of what our responsibility is and that was a huge mistake. That was my mistake,” Zuckerberg has said, over and over.

Facebook may never have made such sweeping changes and apologies had it not had today and tomorrow’s testimony on the horizon. But this defensive strategy also led to few meaningful disclosures, to the detriment of the understanding of the public and the Senate — and to the benefit of Facebook.

WASHINGTON, DC – APRIL 10: Facebook co-founder, Chairman and CEO Mark Zuckerberg testifies before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC. Zuckerberg, 33, was called to testify after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

We did learn that Facebook is working with Special Counsel Robert Mueller on his investigation into election interference. We learned that Zuckerberg thinks it was a mistake not to suspend the advertising account of Cambridge Analytica when Facebook learned it had bought user data from Dr. Aleksandr Kogan. And we learned that the Senate will “haul in” Cambridge Analytica for a future hearing about data privacy.

None of those are earth-shaking.

Perhaps the only fireworks of the testimony came when Senator Ted Cruz laid into Zuckerberg over the Gizmodo report alleging that Facebook’s trending topics curators suppressed conservative news. Cruz badgered Zuckerberg about whether he believes Facebook is politically neutral, whether Facebook has ever taken down Pages from liberal groups like Planned Parenthood or MoveOn.org, whether he knows the political leanings of Facebook’s content moderators, and whether Facebook fired Oculus co-founder Palmer Luckey over his [radical conservative] political views.

Zuckerberg maintained that he and Facebook are neutral, but that last question was the only one of the day that seemed to visibly perturb him. “That is a specific personnel matter that seems like it would be inappropriate…” Zuckerberg said before Cruz interrupted, pushing the CEO to exasperatedly respond, “Well then I can confirm that it was not because of a political view.” It should be noted that Cruz has received numerous campaign donations from Luckey.

Full exchange between Senator @tedcruz and Mark Zuckerberg where Senator Ted Cruz questions the Facebook CEO about the censorship of Conservatives on his platform. pic.twitter.com/c6d7jwDbnJ

— The Columbia Bugle 🇺🇸 (@ColumbiaBugle) April 10, 2018

This was the only time Zuckerberg seemed flapped, because he knows the stakes of the public perception of Facebook’s political leanings. Zuckerberg, many Facebook employees and Facebook’s home state of California are all known to lean left. But if the company itself is seen that way, conservative users could flee, shattering Facebook’s network effect. Yet again, Zuckerberg nimbly avoided getting cornered here, and was aided by the bell signaling the end of Cruz’s time. He never noticeably raised his voice, lashed back at the senators or got off message.

By the conclusion of the five hours of questioning, the senators themselves were admitting they hadn’t watched the day’s full testimony. Viewers at home had likely returned to their lives. Even the press corps’ eyes were glazing over. But Zuckerberg was prepared for the marathon. He maintained pace through the finish line. And he made it clear why marathons aren’t TV spectator sports.

The question is no longer what revelations will come from Mr. Zuckerberg going to Washington; tomorrow’s testimony is likely to go much like today’s. It’s whether Facebook can coherently execute on the data privacy promises it made leading up to today. This will be a “never-ending battle,” as Zuckerberg said, dragging out over many years. And again, that’s in Facebook’s interest. Because in the meantime, everyone’s going back to scrolling their feeds.


Zuckerberg tells Congress Facebook is not listening to you through your phone

Posted by | Facebook, Government, Mark Zuckerberg, Mobile, Social, zuckerberg testimony | No Comments

Facebook CEO Mark Zuckerberg officially shot down the conspiracy theory that the social network has some way of keeping tabs on its users by tapping into the mics on people’s smartphones. During Zuckerberg’s testimony before the Senate this afternoon, Senator Gary Peters had asked the CEO if the social network is mining audio from mobile devices — something his constituents have been asking him about, he said.

Zuckerberg denied this sort of audio data collection was taking place.

The fact that so many people believe that Facebook is “listening” to their private conversations is representative of how mistrustful users have grown of the company and its data privacy practices, the Senator noted.

“I think it’s safe to say very simply that Facebook is losing the trust of an awful lot of Americans as a result of this incident,” said Peters, tying his constituents’ questions about mobile data mining to their outrage over the Cambridge Analytica scandal.

Questions about Facebook’s mobile data collection practices aren’t anything new, however.

In fact, Facebook went on record back in 2016 to state — full stop — that it does not use your phone’s microphone to inform ads or News Feed stories.

Despite this, it’s something that keeps coming up, time and again. The Wall Street Journal even ran an explainer video about the conspiracy last month. And yet none of the reporting seems to quash the rumor.

People simply refuse to believe it’s not happening. They’ll tell you of very specific times when something they swear they only uttered aloud quickly appeared in their Facebook News Feed.

Perhaps their inability to believe Facebook on the matter is more of an indication of how precise — and downright creepy — Facebook’s ad targeting capabilities have become over the years.

Peters took the opportunity to ask Zuckerberg this question straight on during today’s testimony.

“Something that I’ve been hearing a lot from folks who have been coming up to me and talking about a kind of experience they’ve had where they’re having a conversation with friends — not on the phone, just talking. And then they see ads popping up fairly quickly on their Facebook,” Peters explained. “So I’ve heard constituents fear that Facebook is mining audio from their mobile devices for the purposes of ad targeting — which I think speaks to the lack of trust that we’re seeing here.”

He then asked Zuckerberg to state if this is something Facebook did.

“Yes or no: Does Facebook use audio obtained from mobile devices to enrich personal information about its users?” Peters asked.

Zuckerberg responded simply: “No.”

The CEO then added that his answer meant “no” in terms of the conspiracy theory that keeps getting passed around, but noted that the social network does allow users to record videos, which have an audio component. That was a bit of an unnecessary clarification, though, given that the question was about surreptitious recording, not media that users explicitly choose to record and share.

“Hopefully that will dispel a lot of what I’ve been hearing,” Peters said, after hearing Zuckerberg’s response.

We wouldn’t be too sure.

There have been a number of lengthy explanations of the technical limitations regarding a project of this scale, which have also pointed out how easy it would be to detect this practice, if it were true. But there are still those people out there who believe things to be true because they feel true.
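Those explanations usually come down to arithmetic. Here is a rough back-of-envelope sketch of what always-on audio capture would cost at Facebook’s scale; every figure in it (user count, listening hours, bitrate) is an assumed number chosen for illustration, not anything reported:

```python
# Rough back-of-envelope estimate of continuous audio capture at scale.
# All figures below are assumptions for illustration, not reported numbers.

users = 2_000_000_000      # assumed active mobile users
hours_per_day = 8          # assumed hours per day the app could plausibly listen
bitrate_kbps = 12          # assumed heavily compressed speech codec

bytes_per_user_per_day = bitrate_kbps * 1000 / 8 * hours_per_day * 3600
total_tb_per_day = users * bytes_per_user_per_day / 1e12

print(f"~{bytes_per_user_per_day / 1e6:.0f} MB uploaded per user per day")
print(f"~{total_tb_per_day / 1e3:.0f} PB ingested per day across all users")
# -> roughly 43 MB per user per day, on the order of 86 PB per day in total
```

Uploads of that size would show up quickly in battery drain, mobile data usage, and network traffic analysis, which is the point those technical explanations keep making.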

And at the end of the day, the fact that this conspiracy refuses to die says something about how Facebook users view the company: as a stalker that creeps on their privacy, and then can’t be believed when it tells you, “no, trust me, we don’t do that.”
