Policy

Apple shares progress report on supplier usage of clean energy

Apple announced that 44 of its suppliers have now committed to using clean energy for Apple production. That doesn’t mean all suppliers are using renewable energy; it also doesn’t mean that they use 100 percent clean energy for all their clients. But it’s still good news.

All Apple facilities, including offices, retail stores and data centers, already run on clean energy. But Apple is well aware that it manufactures a ton of devices and works with a ton of suppliers. That’s why the company has created a fund to help finance renewable energy projects in China. Apple is also allocating $2.5 billion in green bonds.

Thanks to these initiatives, Apple has financed solar rooftops in Japan, a custom alloy made of recycled aluminum that you can find in the MacBook Air and Mac mini, and more.

Overall, Apple expects to reach its goal of injecting 4 gigawatts of renewable energy into its supply chain well before its 2020 deadline. In fact, the company now says that it will indirectly generate around 5 gigawatts of clean energy.

Suppliers in the program include Foxconn, Wistron, TSMC, Corning, STMicroelectronics and dozens of names that are mostly unknown to end customers.

Instagram now demotes vaguely ‘inappropriate’ content

Instagram is home to plenty of scantily clad models and edgy memes that may start getting fewer views as of today. Instagram now says, “We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines.” That means if a post is sexually suggestive, but doesn’t depict a sex act or nudity, it could still get demoted. Similarly, if a meme doesn’t constitute hate speech or harassment, but is considered in bad taste, lewd, violent or hurtful, it could get fewer views.

Specifically, Instagram says, “this type of content may not appear for the broader community in Explore or hashtag pages,” which could severely hurt the ability of creators to gain new followers. The news came amidst a flood of “Integrity” announcements from Facebook to safeguard its family of apps revealed today at a press event at the company’s Menlo Park headquarters.

“We’ve started to use machine learning to determine if the actual media posted is eligible to be recommended to our community,” Instagram’s product lead for Discovery, Will Ruben, said. Instagram is now training its content moderators to label borderline content when they’re hunting down policy violations, and Instagram then uses those labels to train an algorithm to identify borderline content on its own.
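
Instagram hasn’t published how this classifier works, but the pipeline Ruben describes is recognizably standard supervised learning: human moderators produce labels, the labels train a model, and the model scores new posts for recommendation surfaces. Below is a minimal, purely illustrative sketch of that loop in Python, using text captions and scikit-learn as stand-ins; the real system presumably analyzes the images and videos themselves, at vastly larger scale.

```python
# Illustrative sketch only -- not Instagram's actual system. Assumes simple
# text features; the real models presumably analyze the media itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical moderator-labeled examples: 1 = borderline ("non-recommendable"),
# 0 = fine to recommend on Explore / hashtag pages.
captions = [
    "check out my new swimwear line",                    # suggestive but allowed
    "sunset over the bay tonight",
    "graphic crash footage, not for the faint of heart",
    "my dog learned a new trick",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(captions, labels)

# A post scoring above some threshold would simply be excluded from
# recommendation surfaces, not removed from the author's profile or feed.
score = model.predict_proba(["beach photoshoot behind the scenes"])[0][1]
print("borderline probability:", round(score, 2))
```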

These posts won’t be fully removed from the feed, and Instagram tells me for now the new policy won’t impact Instagram’s feed or Stories bar. But Facebook CEO Mark Zuckerberg’s November manifesto described the need to broadly reduce the reach of this “borderline content,” which on Facebook would mean being shown lower in News Feed. That policy could easily be expanded to Instagram in the future. That would likely reduce the ability of creators to reach their existing fans, which can impact their ability to monetize through sponsored posts or direct traffic to ways they make money like Patreon.

Facebook’s Henry Silverman explained that, “As content gets closer and closer to the line of our Community Standards at which point we’d remove it, it actually gets more and more engagement. It’s not something unique to Facebook but inherent in human nature.” The borderline content policy aims to counteract this incentive to toe the policy line. “Just because something is allowed on one of our apps doesn’t mean it should show up at the top of News Feed or that it should be recommended or that it should be able to be advertised,” said Facebook’s head of News Feed Integrity, Tessa Lyons.

This all makes sense when it comes to clickbait, false news and harassment, which no one wants on Facebook or Instagram. But when it comes to sexualized but not explicit content that has long been uninhibited and in fact popular on Instagram, or memes or jokes that might offend some people despite not being abusive, this is a significant step up of censorship by Facebook and Instagram.

Creators currently have no guidelines about what constitutes borderline content — there’s nothing in Instagram’s rules or terms of service that even mentions non-recommendable content or what qualifies. The only information Instagram has provided was what it shared at today’s event. The company specified that violent, graphic/shocking, sexually suggestive, misinformation and spam content can be deemed “non-recommendable” and therefore won’t appear on Explore or hashtag pages.

[Update: After we published, Instagram posted to its Help Center a brief note about its borderline content policy, but with no visual examples, mentions of impacted categories other than sexually suggestive content, or indications of what qualifies content as “inappropriate.” So officially, it’s still leaving users in the dark.]

Instagram denied an account from a creator who claimed the app reduced their feed and Stories reach after one of their posts that actually violated the content policy was taken down.

One female creator with around a half-million followers likened receiving a two-week demotion that massively reduced their content’s reach to Instagram defecating on them. “It just makes it like, ‘Hey, how about we just show your photo to like 3 of your followers? Is that good for you? . . . I know this sounds kind of tin-foil hatty but . . . when you get a post taken down or a story, you can set a timer on your phone for two weeks to the godd*mn f*cking minute and when that timer goes off you’ll see an immediate change in your engagement. They put you back on the Explore page and you start getting followers.”

As you can see, creators are pretty passionate about Instagram demoting their reach. Instagram’s Will Ruben said regarding the feed/Stories reach reduction: “No, that’s not happening. We distinguish between feed and surfaces where you’ve taken the choice to follow somebody, and Explore and hashtag pages where Instagram is recommending content to people.”

The questions now are whether borderline content demotions are ever extended to Instagram’s feed and Stories, and how content is classified as recommendable, non-recommendable or violating. With artificial intelligence involved, this could turn into another situation where Facebook is seen as shirking its responsibilities in favor of algorithmic efficiency — but this time in removing or demoting too much content rather than too little.

Given the lack of clear policies to point to, the subjective nature of deciding what’s offensive but not abusive, Instagram’s 1 billion user scale and its nine years of allowing this content, there are sure to be complaints and debates about fair and consistent enforcement.

New privacy assistant Jumbo fixes your Facebook & Twitter settings

Jumbo could be a nightmare for the tech giants, but a savior for the victims of their shady privacy practices.

Jumbo saves you hours as well as embarrassment by automatically adjusting 30 Facebook privacy settings to give you more protection, and by deleting your old tweets after saving them to your phone. It can even erase your Google Search and Amazon Alexa history, with clean-up features for Instagram and Tinder in the works.

The startup emerges from stealth today to launch its Jumbo privacy assistant app on iPhone (Android coming soon). What could take a ton of time and research to do manually can be properly handled by Jumbo with a few taps.

The question is whether tech’s biggest companies will allow Jumbo to operate, or squash its access. Facebook, Twitter and the rest really should have built features like Jumbo’s themselves or made them easier to use, since boosting people’s confidence in their privacy might increase usage of their apps. But since their business models often rely on gathering and exploiting as much of your data as possible, and squeezing engagement from more widely visible content, the giants are incentivized to find excuses to block Jumbo.

“Privacy is something that people want, but at the same time it just takes too much time for you and me to act on it,” explains Jumbo founder Pierre Valade, who formerly built beloved high-design calendar app Sunrise that he sold to Microsoft in 2015. “So you’re left with two options: you can leave Facebook, or do nothing.”

Jumbo makes it easy enough for even the lazy to protect themselves. “I’ve used Jumbo to clean my full Twitter, and my personal feeling is: I feel lighter. On Facebook, Jumbo changed my privacy settings, and I feel safer.” Inspired by the Cambridge Analytica scandal, he believes the platforms have lost the right to steward so much of our data.

Valade’s Sunrise pedigree and his plan to follow Dropbox’s bottom-up freemium strategy by launching premium subscription and enterprise features have already attracted investors to Jumbo. It’s raised a $3.5 million seed round led by Thrive Capital’s Josh Miller and Nextview Ventures’ Rob Go, who “both believe that privacy is a fundamental human right,” Valade notes. Miller sold his link-sharing app Branch to Facebook in 2014, so his investment shows those with inside knowledge see a need for Jumbo. Valade’s six-person team in New York will use the money to develop new features and try to start a privacy movement.

How Jumbo works

First let’s look at Jumbo’s Facebook settings fixes. The app asks that you punch in your username and password through a mini-browser open to Facebook instead of using the traditional Facebook Connect feature. That immediately might get Jumbo blocked, and we’ve asked Facebook if it will be allowed. Then Jumbo can adjust your privacy settings to Weak, Medium, or Strong controls, though it never makes any privacy settings looser if you’ve already tightened them.

Valade details that since there are no APIs for changing Facebook settings, Jumbo will “act as ‘you’ on Facebook’s website and tap on the buttons, as a script, to make the changes you asked Jumbo to do for you.” He says he hopes Facebook makes an API for this, though Facebook is more likely to view the script as a violation of its policies.
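
Jumbo’s code isn’t public, but the approach Valade describes, a script driving Facebook’s own settings pages the way a user would, is ordinary browser automation. Here is a rough sketch of that idea using Playwright; the URL, selectors and setting names are hypothetical placeholders rather than Facebook’s real markup or Jumbo’s implementation.

```python
# Minimal sketch of the "act as you and tap the buttons" approach Valade
# describes. Selectors and URLs are hypothetical placeholders, not Facebook's
# real markup; this is illustrative, not Jumbo's actual implementation.
from playwright.sync_api import sync_playwright

DESIRED = {"friend_list_visibility": "Friends"}  # say, the "Strong" profile

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()

    # The user logs in themselves inside the embedded browser; the tool never
    # sends credentials to its own servers.
    page.goto("https://www.facebook.com/settings?tab=privacy")
    page.wait_for_selector("text=Privacy Settings")  # hypothetical anchor

    for setting, value in DESIRED.items():
        # Hypothetical hooks standing in for whatever the script actually
        # targets on the live page.
        page.click(f'[data-testid="{setting}-edit"]')
        page.click(f'text="{value}"')

    browser.close()
```

The fragility of this approach is exactly why Valade would prefer an official API: any change to Facebook’s page structure breaks the script.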

For example, Jumbo can change who can look you up using your phone number to Strong – Friends only, Medium – Friends of friends, or Weak – Jumbo doesn’t change the setting. Sometimes it takes a stronger stance. For the ability to show you ads based on contact info that advertisers have uploaded, both the Strong and Medium settings hide all ads of this type, while Weak keeps the setting as is.

The full list of what Jumbo can adjust includes: Who can see your future posts?; Who can see the people, Pages and lists you follow?; Who can see your friends list?; Who can see your sexual preference?; Do you want Facebook to be able to recognize you in photos and videos?; Who can post on your timeline?; and Review tags people add to your posts before the tags appear on Facebook? The full list can be found here.

For Twitter, you can choose whether to remove all your tweets, or only those older than a day, week, month (recommended) or three months. Jumbo never sees the data, as everything is processed locally on your phone. Before deleting the tweets, it archives them to a Memories tab of its app. Unfortunately, there’s currently no way to export the tweets from there, but Jumbo is building Dropbox and iCloud connectivity soon, which will work retroactively to download your tweets. Twitter’s API limits mean it can only erase 3,200 tweets of yours every few days, so prolific tweeters may require several rounds.
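
That archive-then-delete behavior maps closely onto what Twitter’s classic REST API allows: page back through the most recent tweets (the API reaches back roughly 3,200), save each one locally, then delete it. A rough sketch with tweepy’s v3-era interface follows; the keys, 30-day cutoff and file layout are illustrative, not Jumbo’s actual code.

```python
# Rough sketch of archive-then-delete, in the spirit of what Jumbo describes.
# Uses the classic tweepy v3 API; keys, cutoff and file layout are illustrative.
import json
from datetime import datetime, timedelta

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

cutoff = datetime.utcnow() - timedelta(days=30)  # the "older than a month" option
archived = []

# user_timeline only reaches back ~3,200 tweets, the limit mentioned above,
# so prolific accounts need repeat passes over several days.
for tweet in tweepy.Cursor(api.user_timeline, count=200, tweet_mode="extended").items():
    if tweet.created_at < cutoff:
        archived.append({"id": tweet.id, "text": tweet.full_text,
                         "created_at": tweet.created_at.isoformat()})
        api.destroy_status(tweet.id)          # delete only after it's archived

with open("tweet_archive.json", "w") as f:    # local "Memories"-style copy
    json.dump(archived, f, indent=2)
```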

Its other integrations are more straightforward. On Google, it deletes your search history. For Alexa, it deletes the voice recordings stored by Amazon. Next it wants to build a way to clean out your old Instagram photos and videos, and your old Tinder matches and chat threads.

Across the board, Jumbo is designed to never see any of your data. “There isn’t a server-side component that we own that processes your data in the cloud,” Valade says. Instead, everything is processed locally on your phone. That means, in theory, you don’t have to trust Jumbo with your data, just to properly alter what’s out there. The startup plans to open source some of its stack to prove it isn’t spying on you.

While there are other apps that can clean your tweets, nothing else is designed to be a full-fledged privacy assistant. Perhaps it’s a bit of idealism to think these tech giants will permit Jumbo to run as intended. Valade hopes that with enough user support, the privacy backlash would be too big for the tech giants to risk blocking Jumbo. “If the social network blocks us, we will disable the integration in Jumbo until we can find a solution to make them work again.”

But even if it does get nixed by the platforms, Jumbo will have started a crucial conversation about how privacy should be handled online. We’ve left control over privacy defaults to companies that earn money when we’re less protected. Now it’s time for that control to shift to the hands of the user.

WhatsApp’s Brian Acton to talk Signal Foundation and leaving Facebook at Disrupt SF

“We give them the power. That’s the bad part. We buy their products. We sign up for these websites. Delete Facebook, right?”

That’s WhatsApp founder Brian Acton’s most recent quote about his former employer, Facebook. Acton has seemingly been fueled by his experience running WhatsApp from within Facebook, which has been scrutinized for profiting from collecting data on users.

Which explains why now, two years after leaving Facebook, Acton has found a new groove as founder and executive chairman of the Signal Technology Foundation, a 501(c)(3) nonprofit organization dedicated to doing the foundational work around making private communication accessible, secure and ubiquitous. Acton invested $50 million of his own money to start Signal Foundation in February of 2018.

At TechCrunch Disrupt SF in October, we’ll hear more from Acton about Signal Foundation and his predictions for the future of communication and privacy. And, of course, we’ll try to learn more about what Facebook was up to with WhatsApp, why he left and how it felt leaving $850 million on the table.

Though he was rejected for positions at Facebook and Twitter in 2009, Acton is actually a Silicon Valley veteran who worked in the industry (mostly as a software builder) for more than 25 years at places like Apple, Yahoo and Adobe before founding WhatsApp.

The chat app he built with co-founder Jan Koum grew to 1.5 billion users and, eventually, saw a $19 billion buyout from Mark Zuckerberg in 2014. But when Facebook wanted to lay the basis for targeted ads and commercial messaging within the encrypted chat app he’d spent years building, he walked away.

The Signal Foundation is all about ensuring people have access to private communication that doesn’t cost them their personal data.

“We believe there is an opportunity to act in the public interest and make a meaningful contribution to society by building sustainable technology that respects users and does not rely on the commoditization of personal data,” Acton wrote when it was first announced. In many ways, the Signal Foundation is a symbol and a continuation of Acton’s most expensive moral stand.

We’re thrilled to hear from Acton about what’s next at Signal Foundation. We’ll also try to learn more about his exit at Facebook and his feelings about the products he spent so much time building there.

After all, unsavvy regulators, legions of competitors and user backlash have all failed to compel Facebook to treat people better. But the real power lies with the talent that tech giants fight over. When people like Acton speak up or walk out, employers are forced to listen.

“No filter” is Acton’s style, so get ready for some fireworks when we sit down with him onstage at Disrupt SF.

Disrupt SF runs October 2 to October 4 at the Moscone Center. Tickets are available here.

FTC tells ISPs to disclose exactly what information they collect on users and what it’s for

The Federal Trade Commission, in what could be considered a prelude to new regulatory action, has issued an order to several major internet service providers requiring them to share every detail of their data collection practices. The information could expose patterns of abuse or otherwise troubling data use against which the FTC — or states — may want to take action.

The letters requesting info (detailed below) went to Comcast, Google, T-Mobile and both the fixed and wireless sub-companies of Verizon and AT&T. These “represent a range of large and small ISPs, as well as fixed and mobile Internet providers,” an FTC spokesperson said. I’m not sure which is meant to be the small one, but I welcome any information the agency can extract from any of them.

Since the Federal Communications Commission abdicated its role in enforcing consumer privacy at these ISPs when it and Congress allowed the Broadband Privacy Rule to be overturned, others have taken up the torch, notably California and even individual cities like Seattle. But for enterprises spanning the nation, national-level oversight is preferable to a patchwork approach, and so it may be that the FTC is preparing to take a stronger stance.

To be clear, the FTC already has consumer protection rules in place and could already go after an internet provider if it were found to be abusing the privacy of its users — you know, selling their location to anyone who asks or the like. (Still no action there, by the way.)

But the evolving media and telecom landscape, in which we see enormous companies devouring one another to best provide as many complementary services as possible, requires constant reevaluation. As the agency writes in a press release:

The FTC is initiating this study to better understand Internet service providers’ privacy practices in light of the evolution of telecommunications companies into vertically integrated platforms that also provide advertising-supported content.

Although the FTC is always extremely careful with its words, this statement gives a good idea of what they’re concerned about. If Verizon (our parent company’s parent company) wants to offer not just the connection you get on your phone, but the media you request, the ads you are served and the tracking you never heard of, it needs to show that these businesses are not somehow shirking rules behind the scenes.

For instance, if Verizon Wireless says it doesn’t collect or share information about what sites you visit, but the mysterious VZ Snooping Co (fictitious, I should add) scoops all that up and then sells it for peanuts to its sister company, that could amount to a deceptive practice. Of course it’s rarely that simple (though don’t rule it out), but the only way to be sure is to comprehensively question everyone involved and carefully compare the answers with real-world practices.

How else would we catch shady zero-rating practices, zombie cookies, backdoor deals or lip service to existing privacy laws? It takes a lot of poring over data and complaints by the detail-oriented folks at these regulatory bodies to find things out.

To that end, the letters to ISPs ask for a whole boatload of information on companies’ data practices. Here’s a summary:

  • Categories of personal information collected about consumers or devices, including purposes, methods and sources of collection
  • how the data has been or is being used
  • third parties that provide or are provided this data and what limitations are imposed thereupon
  • how such data is combined with other types of information and how long it is retained
  • internal policies and practices limiting access to this information by employees or service providers
  • any privacy assessments done to evaluate associated risks and policies
  • how data is aggregated, anonymized or deidentified (and how those terms are defined)
  • how aggregated data is used, shared, etc.
  • “any data maps, inventories, or other charts, schematics, or graphic depictions” of information collection and storage
  • total number of consumers who have “visited or otherwise viewed or interacted with” the privacy policy
  • whether consumers are given any choice in collection and retention of data, and what the default choices are
  • total number and percentage of users that have exercised such a choice, and what choices they made
  • whether consumers are incentivized to opt into data collection (or threatened into doing so) and how those programs work
  • any process for allowing consumers to “access, correct, or delete” their personal information
  • data deletion and retention policies for such information

Substantial, right?

Needless to say, some of this information may not be particularly flattering to ISPs. If only 1 percent of consumers have ever chosen to share their information, for instance, that reflects badly on sharing it by default. And if data is capable of being combined across categories or services to de-anonymize users, even potentially, that’s another major concern.

The FTC representative declined to comment on whether there would be any collaboration with the FCC on this endeavor, whether it was preliminary to any other action and whether it can or will independently verify the information provided by the ISPs contacted. That’s an important point, considering how poorly these same companies represented their coverage data to the FCC for its yearly broadband deployment report. A reality check would be welcome.

You can read the rest of the letter here (PDF).

Ahead of third antitrust ruling, Google announces fresh tweaks to Android in Europe

Google is widely expected to be handed a third antitrust fine in Europe this week, with reports suggesting the European Commission’s decision in its long-running investigation of AdSense could land later today.

Right on cue the search giant has PRed another Android product tweak — which it bills as “supporting choice and competition in Europe”.

In the coming months Google says it will start prompting users of existing and new Android devices in Europe to ask which browser and search apps they would like to use.

This follows licensing changes for Android in Europe which Google announced last fall, following the Commission’s $5BN antitrust fine for anti-competitive behavior related to how it operates the dominant smartphone OS.

tl;dr competition regulation can shift policy and product.

Albeit, the devil will be in the detail of Google’s self-imposed ‘remedy’ for Android browser and search apps.

Which means how exactly the user is prompted will be key — given tech giants are well-versed in the manipulative arts of dark pattern design, enabling them to create ‘consent’ flows that deliver their desired outcome.

A ‘choice’ designed in such a way — based on wording, button/text size and color, timing of prompt and so on — to promote Google’s preferred browser and search app choice by subtly encouraging Android users to stick with its default apps may not actually end up being much of a ‘choice’.

According to Reuters the prompt will surface to Android users via the Play Store. (Though the version of Google’s blog post we read did not include that detail.)

Using the Play Store for the prompt would require an Android device to have Google’s app store pre-loaded — and licensing tweaks made to the OS in Europe last year were supposedly intended to enable OEMs to choose to unbundle Google apps from Android forks. Ergo making only the Play Store the route for enabling choice would be rather contradictory. (As well as spotlighting Google’s continued grip on Android.)

Add to that Google has the advantage of massive brand dominance here, thanks to its kingpin position in search, browsers and smartphone platforms.

So again the consumer decision is weighted in its favor. Or, to put it another way: ‘This is Google; it can afford to offer a ‘choice’.’

In its blog post getting out ahead of the Commission’s looming AdSense ruling, Google’s SVP of global affairs, Kent Walker, writes that the company has been “listening carefully to the feedback we’re getting” vis-a-vis competition.

Though the search giant is actually appealing both antitrust decisions. (The other being a $2.7BN fine it got slapped with two years ago for promoting its own shopping comparison service and demoting rivals’.)

“After the Commission’s July 2018 decision, we changed the licensing model for the Google apps we build for use on Android phones, creating new, separate licenses for Google Play, the Google Chrome browser, and for Google Search,” Walker continues. “In doing so, we maintained the freedom for phone makers to install any alternative app alongside a Google app.”

Other opinions are available on those changes too.

Such as French pro-privacy Google search rival Qwant, which last year told us how those licensing changes still make it essentially impossible for smartphone makers to profit off of devices that don’t bake in Google apps by default. (More recently Qwant’s founder condensed the situation to “it’s a joke“.)

Qwant and another European startup Jolla, which leads development of an Android alternative smartphone platform called Sailfish — and is also a competition complainant against Google in Europe — want regulators to step in and do more.

The Commission has said it is closely monitoring changes made by Google to determine whether or not the company has complied with its orders to stop anti-competitive behavior.

So the jury is still out on whether any of its tweaks sum to compliance. (Google says so but that’s as you’d expect — and certainly doesn’t mean the Commission will agree.)

In its Android decision last summer the Commission judged that Google’s practices harmed competition and “further innovation” in the wider mobile space, i.e. beyond Internet search — because it prevented other mobile browsers from competing effectively with its pre-installed Chrome browser.

So browser choice is a key component here. And ‘effective competition’ is the bar Google’s homebrew ‘remedies’ will have to meet.

Still, the company will be hoping its latest Android tweaks head off further Commission antitrust action. Or at least generate more fuzz and fuel for its long-game legal appeal.

Current EU competition commissioner, Margrethe Vestager, has flagged for years that the division is also fielding complaints about other Google products, including travel search, image search and maps. Which suggests Google could face fresh antitrust investigations in future, even as the last of the first batch is about to wrap up.

The FT reports that Android users in the European economic area last week started seeing links to rival websites appearing above Google’s answer box for searches for products, jobs or businesses — with the rival links appearing above paid results links to Google’s own services.

The newspaper points out that tweak is similar to a change promoted by Google in 2013, when it was trying to resolve EU antitrust concerns under the prior commissioner, Joaquín Almunia.

However rivals at the time complained the tweak was insufficient. The Commission subsequently agreed — and under Vestager’s tenure went on to hit Google with antitrust fines.

Walker doesn’t mention any of the additional antitrust complaints swirling around Google’s business in Europe, choosing to focus on highlighting changes it’s made in response to the two extant Commission antitrust rulings.

Nor does he make mention of a recent change Google quietly made to the lists of default search engine choices in its Chrome browser — which expanded the “choice” he claims the company offers by surfacing more rivals. (The biggest beneficiary of that tweak is privacy search rival DuckDuckGo, which suddenly got added to the Chrome search engine lists in around 60 markets. Qwant also got added as a default choice in France.)

Talking about Android specifically, Walker instead takes a subtle, indirect swipe at iOS maker Apple — which now finds itself the target of competition complaints in Europe, via music streaming rival Spotify, and is potentially facing a Commission probe of its own (albeit, iOS’ marketshare in Europe is tiny vs Android). Top deflection, Google.

“On Android phones, you’ve always been able to install any search engine or browser you want, irrespective of what came pre-installed on the phone when you bought it. In fact, a typical Android phone user will usually install around 50 additional apps on their phone,” Walker writes, drawing attention to the fact that Apple does not offer iOS users as much of a literal choice as Google does.

“Now we’ll also do more to ensure that Android phone owners know about the wide choice of browsers and search engines available to download to their phones,” he adds, saying: “This will involve asking users of existing and new Android devices in Europe which browser and search apps they would like to use.”

We’ve reached out to the Commission for comment, and to Google with questions about the design of its incoming browser and search app prompts for Android users in Europe, and will update this report with any response.

Instagram founders say losing autonomy at Facebook meant “winning”

Rather than be sore about losing independence within Facebook, Instagram co-founder Kevin Systrom told me it was an inevitable sign of his app’s triumph. Today at South by Southwest, Systrom and fellow co-founder Mike Krieger sat down for their first on-stage talk together since leaving Facebook in September. They discussed their superhero origin stories, authenticity on social media, looming regulation for big tech, and how they’re exploring what they’ll do next.

Krieger grew up hitting “view source” on websites while Systrom hacked on AOL booter programs that would kick people off instant messenger, teaching both how code could impact real people. As Instagram grew popular, Krieger described the “incredi-bad” feeling of fighting server fires and trying to keep the widely loved app online even if that meant programming in the middle of a sushi restaurant or camping retreat. He once even revived Instagram while drunk in the middle of the night, and woke up with no memory of the feat, confused about who’d fixed the problem. The former Instagram CTO implored founders not to fall into the “recruiting death spiral” where you’re too busy to recruit which makes you busier which makes you too busy to recruit…

But thankfully, the founders were also willing to dig into some tougher topics than their scrappy startup days.

Kevin Systrom and Mike Krieger (from left) drive to Palo Alto to raise their Series A, circa January 2011

Independence vs Importance

“In some ways, there being less autonomy is a function of Instagram winning. If Instagram had just been this niche photo app for photographers, we probably would be working on that app for 20 years. Instead what happened was it got better and better and better, and it improved, and it got to a size where it was meaningfully important to this company,” Systrom explained. “If this thing gets to that scale that we want it to get to which is why we’re doing this deal, the autonomy will eventually not be there as much because it’s so important. So in some ways it’s just an unavoidable thing if you’re successful. So you can choose, do you want to be unsuccessful and small and have all the autonomy in the world, or no?”

Mike Krieger speaks onstage during the Instagram founders’ keynote at the 2019 SXSW Conference at the Austin Convention Center in Austin, Texas. (Photo by Chris Saucedo/Getty Images for SXSW)

Krieger followed up that “I think if you study . . . all the current companies, the ones that succeed internally eventually have become so important to the acquiring company that it’s almost irresponsible to not be thinking about what are the right models for integration. The advice I generally give is, ‘are you okay with that if you succeed?’ And if you’re not then you shouldn’t do the deal.” If the loss of autonomy can’t be avoided, they suggest selling to a rocket ship that will invest in and care for your baby rather than shift priorities.

Asked if seeing his net worth ever feels surreal, Systrom said money doesn’t make you happy and “I don’t really wake up in the morning and look at my bank account.” I noted that’s the convenient privilege of having a big one.

The pair threw cold water on the idea that being forced to earn more money drove them out of the company. “I remember having this series of conversations with Mark and other folks at Facebook and they’re like ‘You guys just joined, do not worry about monetization, we’ll figure this out down the road.’ And it actually came a lot more from us saying “1. It’s important for us to be contributing to the overall Fb Inc . . . and 2. Each person who joins before you have ads is a person you’re going to have to introduce ads to.” Systrom added that “to be clear, we were the ones pushing monetization, not the other way around, because we believed Instagram has to make money somehow. It costs a lot to run . . . We pushed hard on it so that we would be a successful unit within Facebook and I think we got to that point, which is really good.”

But from 2015 to 2016, Instagram’s remaining independence fueled a reinvention of its app with non-square photos, the shift to the algorithm, and the launch of Stories. On having to challenge the fundamental assumptions of a business, “You’ve got maybe a couple years of relevance when you build a product. If you don’t reinvent it every quarter or every year, then you fall out of relevance and you go away.”

That last launch was inspired by wanting to offer prismatic identity where people could share non-highlights that wouldn’t haunt them. But also, Systrom admits that “Honestly a big reason why was that for a long time, people’s profiles were filled with Snapchat links and it was clear that people were trying to bridge the two products. So by bringing the two products [Feed and Stories] into one place, we gave consumers what they wanted.” Though when I asked anyone in the crowd who was still mad about the algorithm to hiss, SXSW turned into a snake pit.

Regulating Big Tech

With Systrom and Krieger gone, Facebook is moving forward with plans to more tightly integrate Instagram with Facebook and WhatsApp. That includes unifying their messaging system, which some say is designed to make Facebook’s apps harder to break up with anti-trust regulation. What does Systrom think of the integration? “The more people that are available to talk with, the more useful the platform becomes. And I buy that thesis . . . Whether or not they will in fact want to talk to people on different platforms, I can’t tell the future, so I don’t know” Systrom said.

Josh Constine, Mike Krieger and Kevin Systrom speak onstage during the Instagram founders’ keynote at the 2019 SXSW Conference at the Austin Convention Center in Austin, Texas. (Photo by Chris Saucedo/Getty Images for SXSW)

Krieger recommended Facebook try to prove users want that cross-app messaging before embarking on a giant engineering challenge of merging their backends. When I asked if Systrom ever had a burning desire to Instagram Direct message a WhatsApp user, he admitted “Personally, no.” But in a show of respect and solid media training, he told his former employer “Bravo for making a big bet and going for it.”

Then it was time for the hardest hitting question: their thoughts on Presidential candidate Senator Elizabeth Warren’s proposal to regulate big tech and roll back Facebook’s acquisition of Instagram. “Do we get our job back?” Systrom joked, trying to defuse the tension. Krieger urged more consideration of downstream externalities, and specificity on what problem a breakup fixes. He wants differentiation between regulating Facebook’s acquisitions, Amazon white-labeling and selling products, and Apple’s right to run the only iOS App Store.

Acquisition vs Competition

“We live in a time where I think the anger against big tech has increased ten-fold — whether that’s because the property prices in your neighborhood have gone up, whether it’s because you don’t like Russian meddling in elections — there are a long list of reasons people are angry at tech right now and some of them I think are well-founded” Systrom confirmed. “That doesn’t mean that the answer is to break all the companies up. Breaking companies up is a very specific prescription for a very specific problem. If you want to fix economic issues there are ways of doing that. If you want to fix Russian meddling there are ways of doing that. Breaking up a company doesn’t fix those problems. That doesn’t mean that companies shouldn’t be broken up if they get too big and they’re monopolies and they cause problems, but being big in and of itself is not a crime.”

Systrom then took a jab at Warren’s tech literacy, saying “part of what’s surprised me is that generally the policy is all tech should be broken up, and that feels to me again not nuanced enough and it shows me that the understanding of the problem isn’t there. I think it’s going to take a more nuanced proposal, but my fear is that something like a proposal to break up all tech is playing on everyone’s current feeling of anti-tech rather than doing what I think politicians should do which is address real problems and give real solutions.”

The two founders then gave some pretty spurious logic for why Instagram’s acquisition helped consumers. “As someone who ran the company for how many years inside of Facebook? Six? There was a lot of competition internally even and I think better ideas came out because of it. We grew both companies not just one company. It’s a really hard question. What consumer was damaged because it grew to the size that it did? I think that’s a strong argument that in fact the acquisition worked out for consumers.” That ignores the fact that if Instagram and Facebook were rivals, they’d have to compete on privacy and treating their users well. Even if they inspired each other to build more engaging products, that doesn’t address where harm to consumers has been done.

Krieger suggested that the acquisition actually spurred competition by making Instagram a role model. “There was a gold rush of companies being like ‘I’m going to be the Instagram of X . . . the Instagram of Audio, the Instagram of video, the Instagram of dog photos.’ You saw people start new companies and try to build them out in order to try to achieve what we’ve gotten to.” Yet no startup besides Snapchat, which had already launched, has actually grown to rival Instagram. And seeing Instagram hold its own against the Facebook empire would have likely inspired many more startups — some of which can’t find funding since investors doubt their odds against a combined Facebook and Instagram.

As for what’s next for the college buddies, “we’re giving ourselves the time to get curious about things again” Krieger says. They’re still exploring so there was no big reveal about their follow-up venture. But Systrom says they built Instagram by finding the mega-trend of cameras on phones and asking what they’d want to use, “and the question is, what’s the next wave?”

Zuckerberg wants messages to auto-expire to make Facebook a ‘living room’

On feed-based “broader social networks, where people can accumulate friends or followers until the services feel more public . . . it feels more like a town square than a more intimate space like a living room,” Facebook CEO Mark Zuckerberg explained in a blog post today. With messaging, groups, and ephemeral stories as the fastest growing social features, Zuckerberg laid out why he’s rethinking Facebook as a private living room where people can be comfortable being themselves without fear of hackers, government spying, and embarrassment from old content — all without letting encryption become a shield for bad actors’ crimes.

Perhaps this will just be more lip service in a time of PR crisis for Facebook. But with the business imperative fueled by social networking’s shift away from permanent feed broadcasting, Facebook can espouse the philosophy of privacy while in reality servicing its shareholders and bottom line. It’s this alignment that actually spurs product change. We saw Facebook’s agility with last year’s realization that a misinformation- and hate-plagued platform wouldn’t survive long-term so it had to triple its security and moderation staff. And in 2017, recognizing the threat of Stories, it implemented them across its apps. Now Facebook might finally see the dollar signs within privacy.

The New York Times’ Mike Isaac recently reported that Facebook planned to unify its Facebook, WhatsApp, and Instagram messaging infrastructure to allow cross-app messaging and end-to-end encryption. And Zuckerberg discussed this and the value of ephemerality on the recent earnings call. But now Zuckerberg has roadmapped a clearer slate of changes and policies to turn Facebook into a living room:

-Facebook will let users opt in to the ability to send or receive messages across Facebook, WhatsApp, and Instagram

-Facebook wants to expand that interoperability to SMS on Android

-Zuckerberg wants to make ephemerality automatic on messaging threads, so chats disappear by default after a month or year, with users able to control that or put timers on individual messages (a toy sketch of how such defaults might work follows this list).

-Facebook plans to limit how long it retains metadata on messages once it’s no longer needed for spam or safety protections

-Facebook will extend end-to-end encryption across its messaging apps but use metadata and other non-content signals to weed out criminals using privacy to hide their misdeeds.

-Facebook won’t store data in countries with a bad track record of privacy abuse such as Russia, even if that means having to shut down or postpone operations in a country
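
Facebook hasn’t said how default expiry would be implemented, but the idea sketched in the roadmap above is easy to picture as a retention rule attached to each thread, with per-message overrides. Here is a toy model, purely illustrative and not Facebook’s design:

```python
# Toy model of default message expiry with per-message overrides, as the
# roadmap above describes. Purely illustrative -- not Facebook's design.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

THREAD_DEFAULT_TTL = timedelta(days=30)   # e.g. "disappear after a month"

@dataclass
class Message:
    text: str
    sent_at: datetime
    ttl: Optional[timedelta] = None        # per-message timer, if the sender set one

    def expired(self, now: datetime, thread_ttl: timedelta) -> bool:
        return now - self.sent_at > (self.ttl or thread_ttl)

@dataclass
class Thread:
    ttl: timedelta = THREAD_DEFAULT_TTL    # user can change or disable per thread
    messages: List[Message] = field(default_factory=list)

    def sweep(self, now: datetime) -> None:
        # A periodic cleanup pass deletes anything past its deadline.
        self.messages = [m for m in self.messages if not m.expired(now, self.ttl)]

t = Thread()
t.messages.append(Message("old news", datetime.utcnow() - timedelta(days=45)))
t.messages.append(Message("see you tonight", datetime.utcnow()))
t.sweep(datetime.utcnow())
print(len(t.messages))  # 1 -- only the recent message survives
```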

You can read the full blog post from Zuckerberg below:

A Privacy-Focused Vision for Social Networking

My focus for the last couple of years has been understanding and addressing the biggest challenges facing Facebook. This means taking positions on important issues concerning the future of the internet. In this note, I’ll outline our vision and principles around building a privacy-focused messaging and social networking platform. There’s a lot to do here, and we’re committed to working openly and consulting with experts across society as we develop this.

Over the last 15 years, Facebook and Instagram have helped people connect with friends, communities, and interests in the digital equivalent of a town square. But people increasingly also want to connect privately in the digital equivalent of the living room. As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.

Today we already see that private messaging, ephemeral stories, and small groups are by far the fastest growing areas of online communication. There are a number of reasons for this. Many people prefer the intimacy of communicating one-on-one or with just a few friends. People are more cautious of having a permanent record of what they’ve shared. And we all expect to be able to do things like payments privately and securely.

Public social networks will continue to be very important in people’s lives — for connecting with everyone you know, discovering new people, ideas and content, and giving people a voice more broadly. People find these valuable every day, and there are still a lot of useful services to build on top of them. But now, with all the ways people also want to interact privately, there’s also an opportunity to build a simpler platform that’s focused on privacy first.

I understand that many people don’t think Facebook can or would even want to build this kind of privacy-focused platform — because frankly we don’t currently have a strong reputation for building privacy protective services, and we’ve historically focused on tools for more open sharing. But we’ve repeatedly shown that we can evolve to build the services that people really want, including in private messaging and stories.

I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about.

We plan to build this the way we’ve developed WhatsApp: focus on the most fundamental and private use case — messaging — make it as secure as possible, and then build more ways for people to interact on top of that, including calls, video chats, groups, stories, businesses, payments, commerce, and ultimately a platform for many other kinds of private services.

This privacy-focused platform will be built around several principles:

Private interactions. People should have simple, intimate places where they have clear control over who can communicate with them and confidence that no one else can access what they share.

Encryption. People’s private communications should be secure. End-to-end encryption prevents anyone — including us — from seeing what people share on our services.

Reducing Permanence. People should be comfortable being themselves, and should not have to worry about what they share coming back to hurt them later. So we won’t keep messages or stories around for longer than necessary to deliver the service or longer than people want them.

Safety. People should expect that we will do everything we can to keep them safe on our services within the limits of what’s possible in an encrypted service.

Interoperability. People should be able to use any of our apps to reach their friends, and they should be able to communicate across networks easily and securely.

Secure data storage. People should expect that we won’t store sensitive data in countries with weak records on human rights like privacy and freedom of expression in order to protect data from being improperly accessed.

Over the next few years, we plan to rebuild more of our services around these ideas. The decisions we’ll face along the way will mean taking positions on important issues concerning the future of the internet. We understand there are a lot of tradeoffs to get right, and we’re committed to consulting with experts and discussing the best way forward. This will take some time, but we’re not going to develop this major change in our direction behind closed doors. We’re going to do this as openly and collaboratively as we can because many of these issues affect different parts of society.

Private Interactions as a Foundation

For a service to feel private, there must never be any doubt about who you are communicating with. We’ve worked hard to build privacy into all our products, including those for public sharing. But one great property of messaging services is that even as your contacts list grows, your individual threads and groups remain private. As your friends evolve over time, messaging services evolve gracefully and remain intimate.

This is different from broader social networks, where people can accumulate friends or followers until the services feel more public. This is well-suited to many important uses — telling all your friends about something, using your voice on important topics, finding communities of people with similar interests, following creators and media, buying and selling things, organizing fundraisers, growing businesses, or many other things that benefit from having everyone you know in one place. Still, when you see all these experiences together, it feels more like a town square than a more intimate space like a living room.

There is an opportunity to build a platform that focuses on all of the ways people want to interact privately. This sense of privacy and intimacy is not just about technical features — it is designed deeply into the feel of the service overall. In WhatsApp, for example, our team is obsessed with creating an intimate environment in every aspect of the product. Even where we’ve built features that allow for broader sharing, it’s still a less public experience. When the team built groups, they put in a size limit to make sure every interaction felt private. When we shipped stories on WhatsApp, we limited public content because we worried it might erode the feeling of privacy to see lots of public content — even if it didn’t actually change who you’re sharing with.

In a few years, I expect future versions of Messenger and WhatsApp to become the main ways people communicate on the Facebook network. We’re focused on making both of these apps faster, simpler, more private and more secure, including with end-to-end encryption. We then plan to add more ways to interact privately with your friends, groups, and businesses. If this evolution is successful, interacting with your friends and family across the Facebook network will become a fundamentally more private experience.

Encryption and Safety

People expect their private communications to be secure and to only be seen by the people they’ve sent them to — not hackers, criminals, over-reaching governments, or even the people operating the services they’re using.

There is a growing awareness that the more entities that have access to your data, the more vulnerabilities there are for someone to misuse it or for a cyber attack to expose it. There is also a growing concern among some that technology may be centralizing power in the hands of governments and companies like ours. And some people worry that our services could access their messages and use them for advertising or in other ways they don’t expect.

End-to-end encryption is an important tool in developing a privacy-focused social network. Encryption is decentralizing — it limits services like ours from seeing the content flowing through them and makes it much harder for anyone else to access your information. This is why encryption is an increasingly important part of our online lives, from banking to healthcare services. It’s also why we built end-to-end encryption into WhatsApp after we acquired it.

In the last year, I’ve spoken with dissidents who’ve told me encryption is the reason they are free, or even alive. Governments often make unlawful demands for data, and while we push back and fight these requests in court, there’s always a risk we’ll lose a case — and if the information isn’t encrypted we’d either have to turn over the data or risk our employees being arrested if we failed to comply. This may seem extreme, but we’ve had a case where one of our employees was actually jailed for not providing access to someone’s private information even though we couldn’t access it since it was encrypted.

At the same time, there are real safety concerns to address before we can implement end-to-end encryption across all of our messaging services. Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion. We have a responsibility to work with law enforcement and to help prevent these wherever we can. We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can’t see the content of the messages, and we will continue to invest in this work. But we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves.

Finding the right ways to protect both privacy and safety is something societies have historically grappled with. There are still many open questions here and we’ll consult with safety experts, law enforcement and governments on the best ways to implement safety measures. We’ll also need to work together with other platforms to make sure that as an industry we get this right. The more we can create a common approach, the better.

On balance, I believe working towards implementing end-to-end encryption for all private communications is the right thing to do. Messages and calls are some of the most sensitive private conversations people have, and in a world of increasing cyber security threats and heavy-handed government intervention in many countries, people want us to take the extra step to secure their most private data. That seems right to me, as long as we take the time to build the appropriate safety systems that stop bad actors as much as we possibly can within the limits of an encrypted service. We’ve started working on these safety systems building on the work we’ve done in WhatsApp, and we’ll discuss them with experts through 2019 and beyond before fully implementing end-to-end encryption. As we learn more from those experts, we’ll finalize how to roll out these systems.

Reducing Permanence

We increasingly believe it’s important to keep information around for shorter periods of time. People want to know that what they share won’t come back to hurt them later, and reducing the length of time their information is stored and accessible will help.

One challenge in building social tools is the “permanence problem”. As we build up large collections of messages and photos over time, they can become a liability as well as an asset. For example, many people who have been on Facebook for a long time have photos from when they were younger that could be embarrassing. But people also really love keeping a record of their lives. And if all posts on Facebook and Instagram disappeared, people would lose access to a lot of valuable knowledge and experiences others have shared.

I believe there’s an opportunity to set a new standard for private communication platforms — where content automatically expires or is archived over time. Stories already expire after 24 hours unless you archive them, and that gives people the comfort to share more naturally. This philosophy could be extended to all private content.

For example, messages could be deleted after a month or a year by default. This would reduce the risk of your messages resurfacing and embarrassing you later. Of course you’d have the ability to change the timeframe or turn off auto-deletion for your threads if you wanted. And we could also provide an option for you to set individual messages to expire after a few seconds or minutes if you wanted.

It also makes sense to limit the amount of time we store messaging metadata. We use this data to run our spam and safety systems, but we don’t always need to keep it around for a long time. An important part of the solution is to collect less personal data in the first place, which is the way WhatsApp was built from the outset.

Interoperability

People want to be able to choose which service they use to communicate with people. However, today if you want to message people on Facebook you have to use Messenger, on Instagram you have to use Direct, and on WhatsApp you have to use WhatsApp. We want to give people a choice so they can reach their friends across these networks from whichever app they prefer.

We plan to start by making it possible for you to send messages to your contacts using any of our services, and then to extend that interoperability to SMS too. Of course, this would be opt-in, and you’d be able to keep your accounts separate if you’d like.

There are privacy and security advantages to interoperability. For example, many people use Messenger on Android to send and receive SMS texts. Those texts can’t be end-to-end encrypted because the SMS protocol is not encrypted. With the ability to message across our services, however, you’d be able to send an encrypted message to someone’s phone number in WhatsApp from Messenger.

This could also improve convenience in many experiences where people use Facebook or Instagram as their social network and WhatsApp as their preferred messaging service. For example, lots of people selling items on Marketplace list their phone number so people can message them about buying it. That’s not ideal, because you’re giving strangers your phone number. With interoperability, you’d be able to use WhatsApp to receive messages sent to your Facebook account without sharing your phone number — and the buyer wouldn’t have to worry about whether you prefer to be messaged on one network or the other.
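
As a rough illustration of that Marketplace scenario, consider a routing layer that maps an account to its owner's preferred messaging endpoint, so the sender never learns a phone number. This is a hypothetical sketch, not Facebook's architecture; all names, fields, and the registry below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Recipient:
    account_id: str     # e.g. a Facebook or Instagram account name
    preferred_app: str  # "whatsapp", "messenger", or "direct"
    endpoint: str       # opaque delivery address; never shown to the sender

# Hypothetical opt-in registry: only accounts that enabled interoperability appear here.
registry: Dict[str, Recipient] = {
    "marketplace-seller-123": Recipient("marketplace-seller-123", "whatsapp", "wa:device-key-abc"),
}

def route_message(sender_app: str, recipient_account: str, ciphertext: bytes) -> str:
    """Hand an already end-to-end-encrypted payload to the recipient's preferred app.

    The router sees only the ciphertext and an opaque endpoint, not the
    recipient's phone number or the message contents.
    """
    recipient = registry[recipient_account]
    # In a real system this would enqueue the payload for the preferred app's delivery pipeline.
    return f"{sender_app} -> {recipient.preferred_app}: {len(ciphertext)} encrypted bytes to {recipient.endpoint}"

# Example: a buyer on Messenger reaches a seller who prefers WhatsApp, without seeing a phone number.
print(route_message("messenger", "marketplace-seller-123", b"\x8f\x02..."))
```

The point is simply that the buyer addresses an account ID while the seller's preferred app and delivery address stay hidden behind the routing layer, which is the privacy benefit the Marketplace example describes.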

You can imagine many simple experiences — a person discovers a business on Instagram and easily transitions to their preferred messaging app for secure payments and customer support; another person wants to catch up with a friend and can send them a message that goes to their preferred app without having to think about where that person prefers to be reached; or you simply post a story from your day across both Facebook and Instagram and can get all the replies from your friends in one place.

You can already send and receive SMS texts through Messenger on Android today, and we’d like to extend this further in the future, perhaps including the new telecom RCS standard. However, there are several issues we’ll need to work through before this will be possible. First, Apple doesn’t allow apps to interoperate with SMS on their devices, so we’d only be able to do this on Android. Second, we’d need to make sure interoperability doesn’t compromise the expectation of encryption that people already have using WhatsApp. Finally, it would create safety and spam vulnerabilities in an encrypted system to let people send messages from unknown apps where our safety and security systems couldn’t see the patterns of activity.

These are significant challenges and there are many questions here that require further consultation and discussion. But if we can implement this, we can give people more choice to use their preferred service to securely reach the people they want.

Secure Data Storage

People want to know their data is stored securely in places they trust. Looking at the future of the internet and privacy, I believe one of the most important decisions we’ll make is where we’ll build data centers and store people’s sensitive data.

There’s an important difference between providing a service in a country and storing people’s data there. As we build our infrastructure around the world, we’ve chosen not to build data centers in countries that have a track record of violating human rights like privacy or freedom of expression. If we build data centers and store sensitive data in these countries, rather than just caching non-sensitive data, it could make it easier for those governments to take people’s information.

Upholding this principle may mean that our services will get blocked in some countries, or that we won’t be able to enter others anytime soon. That’s a tradeoff we’re willing to make. We do not believe storing people’s data in some countries is a secure enough foundation to build such important internet infrastructure on.

Of course, the best way to protect the most sensitive data is not to store it at all, which is why WhatsApp doesn’t store any encryption keys and we plan to do the same with our other services going forward.

But storing data in more countries also establishes a precedent that emboldens other governments to seek greater access to their citizens’ data and therefore weakens privacy and security protections for people around the world. I think it’s important for the future of the internet and privacy that our industry continues to hold firm against storing people’s data in places where it won’t be secure.

Next Steps

Over the next year and beyond, there are a lot more details and trade-offs to work through related to each of these principles. A lot of this work is in the early stages, and we are committed to consulting with experts, advocates, industry partners, and governments — including law enforcement and regulators — around the world to get these decisions right.

At the same time, working through these principles is only the first step in building out a privacy-focused social platform. Beyond that, significant thought needs to go into all of the services we build on top of that foundation — from how people do payments and financial transactions, to the role of businesses and advertising, to how we can offer a platform for other private services.

But these initial questions are critical to get right. If we do this well, we can create platforms for private sharing that could be even more important to people than the platforms we’ve already built to help people share and connect more openly.

Doing this means taking positions on some of the most important issues facing the future of the internet. As a society, we have an opportunity to set out where we stand, to decide how we value private communications, and who gets to decide how long and where data should be stored.

I believe we should be working towards a world where people can speak privately and live freely knowing that their information will only be seen by who they want to see it and won’t all stick around forever. If we can help move the world in this direction, I will be proud of the difference we’ve made.

Powered by WPeMatico

Facebook will shut down its spyware VPN app Onavo

Posted by | Android, Apps, Facebook, facebook research, Facebook Researchgate, Google, Mobile, Onavo, Policy, privacy, Social, TC, vpn | No Comments

Facebook will end its unpaid market research programs and proactively take its Onavo VPN app off the Google Play store in the wake of backlash following TechCrunch’s investigation into Onavo code being used in a Facebook Research app that sucked up data about teens. The Onavo Protect app will eventually shut down, and will immediately cease pulling in data from users for market research, though it will continue operating as a Virtual Private Network in the short term to allow users to find a replacement.

Facebook has also ceased to recruit new users for the Facebook Research app that still runs on Android but was forced off of iOS by Apple after we reported on how it violated Apple’s Enterprise Certificate program for employee-only apps. Existing Facebook Research app studies will continue to run, though.

With suspicion of big tech giants and looming regulation leading to more intense scrutiny of privacy practices, Facebook has decided that giving users a utility like a VPN in exchange for quietly examining their usage of other apps and mobile browsing data isn’t a wise strategy. Instead, it will focus on paid programs where users explicitly understand what privacy they’re giving up for direct financial compensation.

Onavo billed itself as a way to “limit apps from using background data” and “use a secure VPN network for your personal info,” but also noted it would collect the “Time you spend using apps, mobile and Wi-Fi data you use per app, the websites you visit, and your country, device and network type.” A Facebook spokesperson confirmed the change and provided this statement: “Market research helps companies build better products for people. We are shifting our focus to reward-based market research which means we’re going to end the Onavo program.”

Facebook acquired Onavo in 2013 for a reported $200 million to use its VPN app to gather data about what people were doing on their phones. That data revealed WhatsApp was sending far more messages per day than Messenger, convincing Facebook to pay a steep sum of $19 billion to buy WhatsApp. Facebook went on to frame Onavo as a way for users to reduce their data usage, block dangerous websites and keep their traffic safe from snooping — while Facebook itself was analyzing that traffic. The insights helped it discover new trends in mobile usage, keep an eye on competitors, and figure out what features or apps to copy. Cloning became core to Facebook’s product strategy in recent years, with Instagram’s versions of Snapchat Stories growing larger than the original.

But last year, privacy concerns led Apple to push Facebook to remove the Onavo VPN app from the App Store, though it continued running on Google Play. Facebook then quietly repurposed Onavo code for use in its Facebook Research app, which TechCrunch found was paying users aged 13 to 35 in the U.S. and India up to $20 in gift cards per month for VPN and root network access so it could spy on all their mobile data.

Facebook ran the program in secret, obscured by intermediary beta testing services like Betabound and Applause. Users recruited with ads on Instagram, Snapchat and elsewhere were only told they were joining a Facebook Research program after they’d begun signup and signed non-disclosure agreements. Facebook claimed in a statement that “there was nothing ‘secret’ about this,” but it had threatened legal action if users publicly discussed the Research program.

But the biggest problem for Facebook was that its Research app abused Apple’s Enterprise Certificate program, meant for employee-only apps, to distribute the app outside the company. That led Apple to ban the Research app from iOS and invalidate Facebook’s certificate, which broke Facebook’s internal iOS collaboration tools, pre-launch test versions of its popular apps, and even its lunch menu and shuttle schedule for 30 hours, causing chaos at the company’s offices.

To preempt any more scandals around Onavo and the Facebook Research app, and to avoid Google stepping in to forcibly block the apps, Facebook is now taking Onavo off the Play Store and stopping recruitment of Research testers. That’s a somewhat surprising voluntary move that perhaps shows Facebook is finally getting in tune with the public perception of its shady practices. The company has repeatedly misread how users would react to product launches and privacy invasions, leading to near-constant gaffes and an unending news cycle chronicling its blunders.

Without Onavo, Facebook loses a powerful method of market research, and its future initiatives here will come at a higher price. Facebook has run tons of focus groups, surveys, and other user feedback programs over the past decade to learn where it could improve or what innovations it could co-opt. But given how cloning plus acquisitions like WhatsApp and Instagram have been vital to Facebook’s success, it’s likely worth paying out more gift cards and more tightly monitoring its research. Otherwise Facebook could miss the next big thing that might disrupt it.

Hopefully Facebook will be less clandestine with its future market research programs. It should be upfront about its involvement, make certain that users understand what data they’re giving up, stop researching teens or at the very least verify the consent of their parents, and avoid slurping up sensitive information or data about a user’s unwitting friends.

Powered by WPeMatico

Trump calls for 6G cellular technology, because why the heck not?

Posted by | 5g, donald trump, Government, Mobile, Policy, wireless | No Comments

We’ve been covering the battle for 5G between the U.S. and China for some time. The White House has made 5G technology a national security priority, and industry leaders have followed up that charge with additional investment in the fledgling technology.

What exactly 5G is, though, remains mostly a mystery. Is it new bandwidth? Edge computing? Decentralized cloud processing technology? Autonomous vehicles? Something else? I get pitched a dozen stories a day about the “5G revolution,” and no one can tell me exactly what’s in it for me other than long presentations in hotel ballrooms about bandwidth (ironically, often without any cell reception).

So imagine my surprise this morning when Trump tweeted that U.S. companies need to work harder and faster on building out the tech behind 5G, but also in the process called for … 6G technology.

I want 5G, and even 6G, technology in the United States as soon as possible. It is far more powerful, faster, and smarter than the current standard. American companies must step up their efforts, or get left behind. There is no reason that we should be lagging behind on………

Donald J. Trump (@realDonaldTrump) February 21, 2019

I want to just say that no, 6G isn’t a thing. I have only received one PR pitch for 6G in the last few months, which said: “Waveguide over copper runs at millimeter frequencies (about 30 GHz to 1 THz) and is synergistic with 5G/6G wireless. A type of vectoring is applied to effectively separate the many modes that can propagate within a telephone cable.” No, not a thing.

But it could be a thing. Maybe the government is secretly pioneering the next generation of the next generation of telecom technology. Or maybe, just maybe, our president, branding expert that he is, realized that if you are going to sell 5G, you might as well inflate the number to 6G and really get people’s taste buds salivating.

No comment from cleaning supplies company Seventh Generation, but if I were them, I’d be getting worried.

Powered by WPeMatico