Twitter bug leaks phone number country codes


Twitter accidentally exposed the ability to pull an account’s phone number country code and whether the account had been locked by Twitter. The concern here is that malicious actors could have used the security flaw to figure out in which countries accounts were based, which could have ramifications for whistleblowers or political dissidents.

The issue came through one of Twitter’s support forms for contacting the company, and the company found that a large number of inquiries through the form came from IP addresses located in China and Saudi Arabia. Twitter writes, “While we cannot confirm intent or attribution for certain, it is possible that some of these IP addresses may have ties to state-sponsored actors.” We’ve requested more info on why it’s suggesting that. Attribution in these situations can be murky, and naming specific countries or suggesting state actors could be involved carries heavy implications.

Twitter began working on the issue on November 15th and fixed it on November 16th. Twitter tells TechCrunch that it has notified the European Union’s Data Protection Commissioner, as EU citizens may have been impacted. However, as country codes aren’t necessarily considered sensitive personal information, the leak may not trigger any GDPR enforcement or fines. Twitter tells us it has also updated the FTC and other regulatory organizations about the issue, though we’ve asked when it informed these different regulators.

Twitter has directly contacted users impacted by the issue, and says full phone numbers were not leaked and users don’t have to do anything in response. Users can contact Twitter here for more info. We’ve asked how many accounts were impacted, but Twitter told us that it doesn’t have more data to share as its investigation continues.

A Twitter spokesperson pointed us to a previous statement:

It is clear that information operations and coordinated inauthentic behavior will not cease. These types of tactics have been around for far longer than Twitter has existed — they will adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge. For our part, we are committed to understanding how bad-faith actors use our services. We will continue to proactively combat nefarious attempts to undermine the integrity of Twitter, while partnering with civil society, government, our industry peers, and researchers to improve our collective understanding of coordinated attempts to interfere in the public conversation.

Sloppy security on the part of tech companies can make it dangerous for political dissidents or others at odds with their governments. Twitter explains that it locks accounts if it suspects they’ve been compromised by hackers or violate “Twitter’s Rules,” which include “unlawful use,” a category that depends greatly on what national governments deem illegal. What’s worrisome is that attackers with IP addresses in China or Saudi Arabia might have been able to use the exploit to confirm that certain accounts belonged to users in their countries and whether those accounts had been locked. That information could be used to hunt down the people who own them.

The company apologized, writing that “We recognize and appreciate the trust you place in us, and are committed to earning that trust every day. We are sorry this happened.” But that echoes other apologies from big tech companies that consistently ring hollow. Here, in particular, it fails to acknowledge how the leak could harm people or to explain how the company will prevent this kind of thing from happening again. With these companies judged quarterly on user growth and business results, they’re incentivized to cut corners on security, privacy and societal impact as they chase the favor of Wall Street.


3D-printed heads let hackers – and cops – unlock your phone


There’s a lot you can make with a 3D printer: prosthetics, corneas and firearms, even an Olympic-standard luge.

You can even 3D print a life-size replica of a human head — and not just for Hollywood. Forbes reporter Thomas Brewster commissioned a 3D-printed model of his own head to test the face unlocking systems on a range of phones — four Android models and an iPhone X.

Bad news if you’re an Android user: only the iPhone X defended against the attack.

Gone, it seems, are the days of the trusty passcode, which many still find cumbersome, fiddly and inconvenient — especially when you unlock your phone dozens of times a day. Phone makers are embracing more convenient unlock methods. Even though Google’s latest Pixel 3 shunned facial recognition, many Android models — including popular Samsung devices — are relying more on your facial biometrics. In its latest models, Apple effectively killed its fingerprint-reading Touch ID in favor of its newer Face ID.

But that poses a problem for your data if a mere 3D-printed model can trick your phone into giving up your secrets. That makes life much easier for hackers, who have no rulebook to follow. But what about the police or the feds, who do?

It’s no secret that biometrics — your fingerprints and your face — aren’t protected under the Fifth Amendment. That means police can’t compel you to give up your passcode, but they can forcibly depress your fingerprint to unlock your phone, or hold it to your face while you’re looking at it. And the police know it — it happens more often than you might realize.

But there’s also little stopping police from 3D printing or replicating a set of biometrics to break into a phone.

“Legally, it’s no different from using fingerprints to unlock a device,” said Orin Kerr, professor at USC Gould School of Law, in an email. “The government needs to get the biometric unlocking information somehow,” by either the finger pattern shape or the head shape, he said.

Although a warrant “wouldn’t necessarily be a requirement” to get the biometric data, one would be needed to use the data to unlock a device, he said.

Jake Laperruque, senior counsel at the Project On Government Oversight, said it was doable but isn’t the most practical or cost-effective way for cops to get access to phone data.

“A situation where you couldn’t get the actual person but could use a 3D print model may exist,” he said. “I think the big threat is that a system where anyone — cops or criminals — can get into your phone by holding your face up to it is a system with serious security limits.”

The FBI alone has thousands of devices in its custody — even after admitting the number of encrypted devices is far lower than first reported. With the ubiquitous nature of surveillance, now even more powerful with high-resolution cameras and facial recognition software, it’s easier than ever for police to obtain our biometric data as we go about our everyday lives.

Those cheering on the “death of the password” might want to think again. Passcodes are still the only thing keeping your data safe from the law.


Facebook Portal adds games and web browser amidst mediocre Amazon reviews


After receiving a flogging from privacy critics, Facebook is scrambling to make Portal, its smart display video chat screen, more attractive to buyers. Today Facebook is announcing the addition of a web browser, plus some of Messenger’s Instant Games like Battleship, Draw Something, Sudoku and Words With Friends. ABC News and CNN are adding content to Portal, which now also has a manual zoom mode for its auto-zooming smart camera so you can zero in on a particular thing in view. Facebook has also added new augmented reality Story Time tales, seasonal AR masks, in-call music sharing through iHeartRadio (joining Spotify and Pandora, which already offer it) and nickname calling, so you can say “Hey Portal, call Mom.”

But the question remains: who’s buying? Facebook is already discounting the 10-inch-screen Portal and 15-inch Portal+. After initially offering $100 off if you bought two, Facebook is still offering $50 off a single unit until Christmas Eve as part of a suspiciously long Black Friday sale. That doesn’t signal this thing is flying off the shelves. We don’t have sales figures, but Portal has a 3.4 rating on Amazon, while Portal+ has a 3.6 — both trailing the 4.2 rating of Amazon’s own Echo Show 2. Users are griping about the lack of support for Amazon Video and Ring doorbells, about not receiving calls and, of course, about the privacy implications.

Personally, I’ve found Portal+ to be competent in the five weeks since launch. The big screen works well as a smart photo frame, and video calls look great. But Alexa and Facebook’s own voice assistant have a tough time dividing up functionality, and sometimes I can’t get either one to play a specific song on Spotify, pause playback or change the volume, all tasks my Google Home has no trouble with. Facebook said it was hoping to add Google Assistant to Portal, but there’s no progress on that front yet.

The browser will be a welcome addition that allows Facebook to sidestep some of the issues around its thin app platform. Facebook recently added a smart TV version of YouTube, and now users can access lots of other services without those developers having to commit to building something for Portal, given its uncertain future.

The hope seems to be that mainstream users who aren’t glued to the tech press, where Facebook is constantly skewered, might be drawn in by these devices’ flashy screens and the admittedly impressive auto-zooming camera. But to overcome the brand tax levied by all of Facebook’s privacy scandals, Portal must be near perfect. Without native apps for popular video providers like Netflix and Hulu, consistent voice recognition or unique features missing from competing smart displays, the fear of Facebook’s surveillance may outweigh people’s love for shiny new gadgets.


This early GDPR adtech strike puts the spotlight on consent


What does consent as a valid legal basis for processing personal data look like under Europe’s updated privacy rules? It may sound like an abstract concern, but for online services that monetize free-to-access content by doing things with user data, this is a key question now that the region’s General Data Protection Regulation is firmly fixed in place.

The GDPR is actually clear about consent. But if you haven’t bothered to read the text of the regulation, and instead just go and look at some of the self-styled consent management platforms (CMPs) floating around the web since May 25, you’d probably have trouble guessing it.

Confusing and/or incomplete consent flows aren’t yet extinct, sadly. But it’s fair to say those that don’t offer full opt-in choice are on borrowed time.

Because if your service or app relies on obtaining consent to process EU users’ personal data — as many free at the point-of-use, ad-supported apps do — then the GDPR states consent must be freely given, specific, informed and unambiguous.

That means you can’t bundle multiple uses for personal data under a single opt-in.

Nor can you obfuscate consent behind opaque wording that doesn’t actually specify the thing you’re going to do with the data.

You also have to offer users the choice not to consent. So you cannot pre-tick all the consent boxes that you really wish your users would freely choose — because you have to actually let them do that.

It’s not rocket science, but the pushback from certain quarters of the adtech industry has been as awfully predictable as it is horribly frustrating.

This has not gone unnoticed by consumers either. Europe’s Internet users have been filing consent-based complaints thick and fast this year. And a lot of what is being claimed as ‘GDPR compliant’ right now likely is not.

So, some six months in, we’re essentially in a holding pattern waiting for the regulatory hammers to come down.

But if you look closely there are some early enforcement actions that show some consent fog is starting to shift.

Yes, we’re still waiting on the outcomes of major consent-related complaints against tech giants. (And stockpile popcorn to watch that space for sure.)

But late last month the French data protection watchdog, the CNIL, announced the closure of a formal warning it issued this summer against drive-to-store adtech firm Fidzup — saying it was satisfied the company was now GDPR compliant.

Such a regulatory stamp of approval is obviously rare this early in the new legal regime.

So while Fidzup is no adtech giant its experience still makes an interesting case study — showing how the consent line was being crossed; how, working with CNIL, it was able to fix that; and what being on the right side of the law means for a (relatively) small-scale adtech business that relies on consent to enable a location-based mobile marketing business.

From zero to GDPR hero?

Fidzup’s service works like this: It installs kit inside (or on) partner retailers’ physical stores to detect the presence of user-specific smartphones. At the same time it provides an SDK to mobile developers to track app users’ locations, collecting and sharing the advertising ID and wi-fi ID of users’ smartphones (which, along with location, are judged personal data under GDPR).

Those two elements — detectors in physical stores; and a personal data-gathering SDK in mobile apps — come together to power Fidzup’s retail-focused, location-based ad service which pushes ads to mobile users when they’re near a partner store. The system also enables it to track ad-to-store conversions for its retail partners.
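To make the data flow concrete, here is a minimal sketch in Python of how an SDK like this can gate each signal behind a separate consent flag, so nothing is transmitted without an explicit opt-in. All names and both purposes shown are hypothetical; this is not Fidzup’s actual SDK code.

```python
# Hypothetical sketch of a consent-gated SDK payload builder (not Fidzup's code).
from dataclasses import dataclass

@dataclass
class ConsentState:
    # GDPR consent must be opt-in and purpose-specific, so every flag
    # defaults to False and each purpose is tracked separately.
    geolocation_for_ads: bool = False
    ad_to_store_measurement: bool = False

@dataclass
class DeviceSignals:
    advertising_id: str
    wifi_id: str
    latitude: float
    longitude: float

def build_payload(signals: DeviceSignals, consent: ConsentState) -> dict:
    """Include only the fields the user has actively consented to share."""
    payload = {}
    if consent.geolocation_for_ads:
        payload["ad_id"] = signals.advertising_id
        payload["location"] = (signals.latitude, signals.longitude)
    if consent.ad_to_store_measurement:
        payload["wifi_id"] = signals.wifi_id
    return payload  # an empty dict means nothing gets transmitted
```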

The problem Fidzup had, back in July, was that after an audit of its business the CNIL deemed it did not have proper consent to process users’ geolocation data to target them with ads.

Fidzup says it had thought its business was GDPR compliant because it took the view that app publishers were the data processors gathering consent on its behalf; the CNIL warning was a wake-up call that this interpretation was incorrect — and that Fidzup was responsible for the data processing, and so also for collecting consents.

The regulator found that when a smartphone user installed an app containing Fidzup’s SDK they were not informed that their location and mobile device ID data would be used for ad targeting, nor the partners Fidzup was sharing their data with.

The CNIL also said users should have been clearly informed before data was collected — so they could choose to consent — instead of being told after the fact via general app conditions (or in-store posters), as was the case.

It also found users had no way to download the apps without also getting Fidzup’s SDK, with use of such an app automatically resulting in data transmission to partners.

Fidzup had also only been asking users to consent to the processing of their geolocation data for the specific app they had downloaded — not for the targeted advertising with retail partners that is the substance of the firm’s business.

So there was a string of issues. And when Fidzup was hit with the warning the stakes were high, even with no monetary penalty attached. Because unless it could fix the core consent problem, the 2014-founded startup might have faced going out of business. Or having to change its line of business entirely.

Instead it decided to try and fix the consent problem by building a GDPR-compliant CMP — spending around five months liaising with the regulator, and finally getting a green light late last month.

A core piece of the challenge, as co-founder and CEO Olivier Magnan-Saurin tells it, was how to handle multiple partners in this CMP because its business entails passing data along the chain of partners — each new use and partner requiring opt-in consent.

“The first challenge was to design a window and a banner for multiple data buyers,” he tells TechCrunch. “So that’s what we did. The challenge was to have something okay for the CNIL and GDPR in terms of wording, UX etc. And, at the same time, some things that the publisher will allow to and will accept to implement in his source code to display to his users because he doesn’t want to scare them or to lose too much.

“Because they get money from the data that we buy from them. So they wanted to get the maximum money that they can, because it’s very difficult for them to live without the data revenue. So the challenge was to reconcile the need from the CNIL and the GDPR and from the publishers to get something acceptable for everyone.”

As a quick related aside, it’s worth noting that Fidzup does not work with the thousands of partners that an ad exchange or demand-side platform most likely would.

Magnan-Saurin tells us its CMP lists 460 partners. So while that’s still a lengthy list to have to put in front of consumers — it’s not, for example, the 32,000 partners of another French adtech firm, Vectaury, which has also recently been on the receiving end of an invalid consent ruling from the CNIL.

In turn, that suggests the ‘Fidzup fix’, if we can call it that, only scales so far; adtech firms that are routinely passing millions of people’s data around thousands of partners look to have much more existential problems under GDPR — as we’ve reported previously re: the Vectaury decision.

No consent without choice

Returning to Fidzup, its fix essentially boils down to actually offering people a choice over each and every data processing purpose, unless it’s strictly necessary for delivering the core app service the consumer was intending to use.

Which also means giving app users the ability to opt out of ads entirely — and not be penalized by losing access to the app’s features.

In short, you can’t bundle consent. So Fidzup’s CMP unbundles all the data purposes and partners to offer users the option to consent or not.
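As a rough illustration of what unbundling looks like in data terms, here is a toy consent record, written under our own assumptions rather than taken from Fidzup’s CMP: one flag per purpose and one per partner, nothing pre-ticked, and data flowing only when both relevant flags are set.

```python
# Toy unbundled-consent record (hypothetical; not Fidzup's CMP implementation).
from typing import Dict

PURPOSES = ["personalized_ads", "store_visit_analytics"]
PARTNERS = ["buyer_a", "buyer_b", "buyer_c"]  # Fidzup's real list runs to 460

def new_consent_record() -> Dict[str, bool]:
    # No pre-ticked boxes: every purpose and every partner starts unchecked.
    record = {f"purpose:{p}": False for p in PURPOSES}
    record.update({f"partner:{name}": False for name in PARTNERS})
    return record

def may_share(record: Dict[str, bool], purpose: str, partner: str) -> bool:
    # Data may flow only if the user ticked BOTH this purpose AND this partner.
    return record.get(f"purpose:{purpose}", False) and \
           record.get(f"partner:{partner}", False)

consents = new_consent_record()
consents["purpose:personalized_ads"] = True   # user opts in to one purpose...
consents["partner:buyer_a"] = True            # ...and one buyer only
print(may_share(consents, "personalized_ads", "buyer_a"))  # True
print(may_share(consents, "personalized_ads", "buyer_b"))  # False
```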

“You can unselect or select each purpose,” says Magnan-Saurin of the now compliant CMP. “And if you want only to send data for, I don’t know, personalized ads but you don’t want to send the data to analyze if you go to a store or not, you can. You can unselect or select each consent. You can also see all the buyers who buy the data. So you can say okay I’m okay to send the data to every buyer but I can also select only a few or none of them.”

“What the CNIL ask is very complicated to read, I think, for the final user,” he continues. “Yes it’s very precise and you can choose everything etc. But it’s very complete and you have to spend some time to read everything. So we were [hoping] for something much shorter… but now okay we have something between the initial asking for the CNIL — which was like a big book — and our consent collection before the warning which was too short with not the right information. But still it’s quite long to read.”

Fidzup’s CNIL approved GDPR-compliant consent management platform

“Of course, as a user, I can refuse everything. Say no, I don’t want my data to be collected, I don’t want to send my data. And I have to be able, as a user, to use the app in the same way as if I accept or refuse the data collection,” he adds.

He says the CNIL was very clear on the latter point — telling it they could not require collection of geolocation data for ad targeting for usage of the app.

“You have to provide the same service to the user if he accepts or not to share his data,” he emphasizes. “So now the app and the geolocation features [of the app] works also if you refuse to send the data to advertisers.”

This is especially interesting in light of the ‘forced consent’ complaints filed against tech giants Facebook and Google earlier this year.

These complaints argue the companies should (but currently do not) offer an opt-out of targeted advertising, because behavioural ads are not strictly necessary for their core services (i.e. social networking, messaging, a smartphone platform etc).

Indeed, data gathering for such non-core service purposes should require an affirmative opt-in under GDPR. (An additional GDPR complaint against Android has also since attacked how consent is gathered, arguing it’s manipulative and deceptive.)

Asked whether, based on his experience working with the CNIL to achieve GDPR compliance, it seems fair that a small adtech firm like Fidzup has had to offer an opt-out when a tech giant like Facebook seemingly doesn’t, Magnan-Saurin tells TechCrunch: “I’m not a lawyer but based on what the CNIL asked us to be in compliance with the GDPR law I’m not sure that what I see on Facebook as a user is 100% GDPR compliant.”

“It’s better than one year ago but [I’m still not sure],” he adds. “Again it’s only my feeling as a user, based on the experience I have with the French CNIL and the GDPR law.”

Facebook of course maintains its approach is 100% GDPR compliant.

Even as data privacy experts aren’t so sure.

One thing is clear: If the tech giant was forced to offer an opt out for data processing for ads it would clearly take a big chunk out of its business — as a sub-set of users would undoubtedly say no to Zuckerberg’s “ads”. (And if European Facebook users got an ads opt out you can bet Americans would very soon and very loudly demand the same, so…)

Bridging the privacy gap

In Fidzup’s case, complying with GDPR has had a major impact on its business because offering a genuine choice means it’s not always able to obtain consent. Magnan-Saurin says there is essentially now a limit on the number of device users advertisers can reach because not everyone opts in for ads.

Although, since it’s been using the new CMP, he says a majority are still opting in (or, at least, this is the case so far) — showing one consent chart report with a ~70:30 opt-in rate, for example.

He expresses the change like this: “No one in the world can say okay I have 100% of the smartphones in my data base because the consent collection is more complete. No one in the world, even Facebook or Google, could say okay, 100% of the smartphones are okay to collect from them geolocation data. That’s a huge change.”

“Before that there was a race to the higher reach. The biggest number of smartphones in your database,” he continues. “Today that’s not the point.”

Now he says the point for adtech businesses with EU users is figuring out how to extrapolate from the percentage of user data they can (legally) collect to the 100% they can’t.

And that’s what Fidzup has been working on this year, developing machine learning algorithms to try to bridge the data gap so it can still offer its retail partners accurate predictions for tracking ad to store conversions.

“We have algorithms based on the few thousand stores that we equip, based on the few hundred mobile advertising campaigns that we have run, and we can understand for a store in London in… sports, fashion, for example, how many visits we can expect from the campaign based on what we can measure with the right consent,” he says. “That’s the first and main change in our market; the quantity of data that we can get in our database.”

“Now the challenge is to be as accurate as we can be without having 100% of real data — with the consent, and the real picture,” he adds. “The accuracy is less… but not that much. We have a very, very high standard of quality on that… So now we can assure the retailers that with our machine learning system they have nearly the same quality as they had before.

“Of course it’s not exactly the same… but it’s very close.”
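The arithmetic at the bottom of that extrapolation is simple to sketch, even though Fidzup’s production system layers machine learning on top. The naive estimator below is our own simplification, not the company’s model, and it assumes opted-in users behave like everyone else — exactly the bias a real system has to correct for.

```python
# Naive illustration of extrapolating from consented users to the full population.
def estimate_total_visits(measured_visits: int, opt_in_rate: float) -> float:
    """Scale visits measured among opted-in users up by the opt-in rate."""
    if not 0.0 < opt_in_rate <= 1.0:
        raise ValueError("opt-in rate must be in (0, 1]")
    return measured_visits / opt_in_rate

# With the ~70:30 opt-in split reported above, 700 measured store visits
# would imply roughly 1,000 visits overall.
print(estimate_total_visits(700, 0.70))  # 1000.0
```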

Having a CMP that’s had regulatory ‘sign-off’, as it were, is something Fidzup is now hoping to turn into a new line of business.

“The second change is more like an opportunity,” he suggests. “All the work that we have done with CNIL and our publishers we have transferred it to a new product, a CMP, and we offer today to all the publishers who ask to use our consent management platform. So for us it’s a new product — we didn’t have it before. And today we are the only — to my knowledge — the only company and the only CMP validated by the CNIL and GDPR compliant so that’s useful for all the publishers in the world.”

It’s not currently charging publishers to use the CMP but will be seeing whether it can turn it into a paid product early next year.

How then, after months of compliance work, does Fidzup feel about GDPR? Does Magnan-Saurin believe the regulation is making life harder for startups versus tech giants, as certain lobby groups have suggested in claiming the law risks entrenching the dominance of better-resourced players? Or does he see opportunities?

In Magnan-Saurin’s view, six months into GDPR, European startups are at an R&D disadvantage versus tech giants because U.S. companies like Facebook and Google are not (yet) subject to a similarly comprehensive privacy regulation at home — so it’s easier for them to bag up user data for whatever purpose they like.

Though it’s also true that U.S. lawmakers are now paying earnest attention to the privacy policy area at a federal level. (And Google’s CEO faced a number of tough questions from Congress on that front just this week.)

“The fact is Facebook-Google they own like 90% of the revenue in mobile advertising in the world. And they are American. So basically they can do all their research and development on, for example, American users without any GDPR regulation,” he says. “And then apply a pattern of GDPR compliance and apply the new product, the new algorithm, everywhere in the world.

“As a European startup I can’t do that. Because I’m a European. So once I begin the research and development I have to be GDPR compliant so it’s going to be longer for Fidzup to develop the same thing as an American… But now we can see that GDPR might be beginning a ‘world thing’ — and maybe Facebook and Google will apply the GDPR compliance everywhere in the world. Could be. But it’s their own choice. Which means, for the example of the R&D, they could do their own research without applying the law because for now U.S. doesn’t care about the GDPR law, so you’re not outlawed if you do R&D without applying GDPR in the U.S. That’s the main difference.”

He suggests some European startups might relocate R&D efforts outside the region to try to workaround the legal complexity around privacy.

“If the law is meant to bring the big players to better compliance with privacy I think — yes, maybe it goes in this way. But the first to suffer is the European companies, and it becomes an asset for the U.S. and maybe the Chinese… companies because they can be quicker in their innovation cycles,” he suggests. “That’s a fact. So what could happen is maybe investors will not invest that much money in Europe than in U.S. or in China on the marketing, advertising data subject topics. Maybe even the French companies will put all the R&D in the U.S. and destroy some jobs in Europe because it’s too complicated to do research on that topics. Could be impacts. We don’t know yet.”

But the fact that GDPR enforcement has (perhaps inevitably) started small, with so far a small bundle of warnings against relative data minnows rather than any swift action against the adtech giants that dominate the industry, is being felt as yet another inequality at the startup coalface.

“What’s sure is that the CNIL started to send warnings not to Google or Facebook but to startups. That’s what I can see,” he says. “Because maybe it’s easier to see I’m working on GDPR and everything but the fact is the law is not as complicated for Facebook and Google as it is for the small and European companies.”


Popular avatar app Boomoji exposed millions of users’ contact lists and location data


Popular animated avatar creator app Boomoji, with more than five million users across the world, exposed the personal data of its entire user base after it failed to put passwords on two of its internet-facing databases.

The China-based app developer left the ElasticSearch databases online without passwords — a U.S.-based database for its international customers, and a Hong Kong-based database containing mostly Chinese users’ data in an effort to comply with China’s data security laws, which require Chinese citizens’ data to be stored on servers inside the country.

Anyone who knew where to look could access, edit or delete the database using their web browser. And, because the database was listed on Shodan, a search engine for exposed devices and databases, they were easily found with a few keywords.
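For context on why such exposures are trivially readable, an Elasticsearch instance left open answers ordinary HTTP requests on its default port, 9200, with no login step at all. The sketch below uses a placeholder host; the endpoints shown are standard Elasticsearch REST API calls, and probing servers you aren’t authorized to test is illegal in most jurisdictions.

```python
# What "no password" means in practice for an exposed Elasticsearch instance.
import requests

HOST = "http://db.example.com:9200"  # placeholder for a hypothetical exposed server

# List every index (roughly, every table) along with its document count.
print(requests.get(f"{HOST}/_cat/indices?v", timeout=5).text)

# Pull back records from an index with a single unauthenticated query.
resp = requests.get(f"{HOST}/users/_search", params={"size": 10}, timeout=5)
print(resp.json())
```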

After TechCrunch reached out, Boomoji pulled the two databases offline. “These two accounts were made by us for testing purposes,” said an unnamed Boomoji spokesperson in an email.

But that isn’t true.

The database contained records on all of the company’s iOS and Android users — some 5.3 million users as of this week. Each record contained their username, gender, country and phone type.

Each record also included a user’s unique Boomoji ID, which was linked to other tables in the database. Those other tables included whether, and which, school a user attends — a feature Boomoji touts as a way for users to get in touch with their fellow students. That unique ID was also linked to the precise geolocation of more than 375,000 users who had allowed the app to know their location at any given time.

Worse, the database contained every phone book entry of every user who had allowed the app access to their contacts.

One table had more than 125 million contacts, including their names (as written in a user’s phone book) and their phone numbers. Each record was linked to a Boomoji user’s unique ID, making it relatively easy to know whose contact list belonged to whom.

Even if you didn’t use the app, anyone who has your phone number stored on their device and used the app more than likely uploaded your number to Boomoji’s database. To our knowledge, there’s no way to opt out or have your information deleted.

Given Boomoji’s response, we verified the contents of the database by downloading the app on a dedicated iPhone, set up with a throwaway phone number and a few dummy but easy-to-search contact list entries. To find friends, the app matches your contacts with those registered with the app in its database. When we were prompted to allow the app access to our contacts list, the entire dummy contact list was uploaded instantly — and was viewable in the database.

So long as the app was installed and had access to the contacts, new phone numbers would be automatically uploaded.

None of the data was encrypted; all of it was stored in plaintext.

Although Boomoji is based in China, it claims to follow California state law, where data protection and privacy rules are some of the strongest in the U.S. We asked Boomoji if it has or plans to inform California’s attorney general of the exposure as required by state law, but the company did not answer.

Given the vast amount of European users’ information in the database, the company may also face penalties under the EU’s General Data Protection Regulation, which can impose fines of up to four percent of the company’s global annual revenue for serious breaches.

But given its China-based presence, it’s not clear what actionable repercussions the company could face.

This is the latest in a series of exposures involving ElasticSearch instances, a popular open-source search and database software. In recent weeks, several high-profile data exposures have been reported as a result of companies’ failure to practice basic data security measures — including Urban Massage exposing its own customer database, Mindbody-owned FitMetrix forgetting to put a password on its servers, and Voxox, a communications company, leaking the phone numbers and two-factor codes of millions of unsuspecting users.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.


Keepsafe launches My Number Lookup so you can see the public data tied to your mobile number


Ever wonder how much of your personal information is accessible to marketers? Well, there’s a new service called My Number Lookup that makes it easy (and free) for you to check the data that’s publicly available and tied to your mobile phone number.

The service was created by Keepsafe, maker of privacy-centric products. While there is a My Number Lookup website, the service actually operates over SMS — you just text HELLO to (855) 228-4539 and it will start sending you a report.

Keepsafe co-founder and CEO Zouhair Belkoura said that while marketers are able to access this information with relative ease, it’s difficult for consumers to check.

“We said, ‘Why don’t we make it super easy?’” he said. “Here’s a number you can text that tells you what information is publicly available.”

My Number Lookup

Specifically, My Number Lookup will tell you whether it was able to find a name, home address, age, gender, mobile carrier and associated people tied to your mobile number. It will even show you the data (several of the data points about me were missing, out-of-date or flat-out wrong), then point you toward Keepsafe Unlisted, a service for creating “burner” phone numbers (so you don’t have to share your real number widely), and also toward a Keepsafe blog post that outlines how someone can try to remove their personal information from various data brokers.

Belkoura admitted that even though you’ve got the report, you won’t necessarily be able to scrub the data from the internet. Instead, he sees it as more of “a wake-up call” that people need to be more careful about giving out their phone numbers. And if it leads them to use Keepsafe Unlisted, even better.

“Once information is out there, it’s very difficult to delete,” he said. “The internet is a place that just doesn’t forget.”

As for why the service operates over SMS, Belkoura said My Number Lookup will only provide data about the number you’re texting from. Hopefully that means users will only check on their own data, not someone else’s: “We don’t actually want to create a service where people who don’t have a legitimate interest can pay to look up information.”


Apple Pay finally launches in Germany


Apple’s mobile payment technology has finally launched in Germany, some four years after it debuted in the U.S.

On its newly launched Apple Pay website for Germany, Apple lists partner banks and credit card companies at launch, with customers of the likes of Deutsche Bank, O2 Banking, N26, Comdirect, HypoVereinsbank, Bunq and Boon able to use the payment method directly.

Some fifteen banks and services are supported at launch. A further nine banks are slated to add support in 2019, including DKB, ING and Revolut.

iOS users in the country can now add supported debit or credit cards to Apple Pay to make contactless payments with their device, rather than having to carry cash. Apple’s Face ID and Touch ID biometrics are used to add a security layer to the payment system.

The local Apple Pay site also lists a selection of retailers, with Apple writing: “Apple Pay works in supermarkets, boutiques, restaurants, hotels and many other places. You can also use Apple Pay in many apps — and on participating websites with Safari on your Mac, iPhone or iPad.”

Aside from convenience, the other consumer advantage Apple touts for the system is privacy, with Apple Pay using a device-specific number and unique transaction code — and the user’s actual card numbers never stored on their device or on Apple’s servers — which means trackable card numbers aren’t shared with merchants, so purchases can’t be tied back to the individual.
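For a rough sense of how that protects card numbers, the sketch below is a toy model of payment tokenization in general terms, not Apple’s actual implementation: the real card number stays locked inside the network’s token vault, while the merchant only ever handles a device-specific token plus a one-time cryptogram.

```python
# Toy model of network tokenization (illustrative only; not Apple Pay's design).
import hashlib
import hmac
import os
import secrets

class TokenVault:
    """Stands in for the card network's token service."""
    def __init__(self):
        self._vault = {}  # token -> (real card number, per-device key)

    def provision(self, pan: str) -> tuple:
        token = "tok_" + secrets.token_hex(8)  # device-specific account number
        key = os.urandom(32)                   # key used to verify cryptograms
        self._vault[token] = (pan, key)
        return token, key

    def authorize(self, token: str, nonce: str, cryptogram: str) -> bool:
        _pan, key = self._vault[token]  # the real card number never leaves the vault
        expected = hmac.new(key, nonce.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, cryptogram)

vault = TokenVault()
token, device_key = vault.provision("4111111111111111")

# At checkout the device signs a fresh one-time value; the merchant forwards
# only the token and cryptogram, never seeing the underlying card number.
nonce = secrets.token_hex(8)
cryptogram = hmac.new(device_key, nonce.encode(), hashlib.sha256).hexdigest()
print(vault.authorize(token, nonce, cryptogram))  # True
```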

While that might sound like an abstract concern, a Bloomberg report this summer revealed details of a multimillion-dollar deal in which Google pays for transaction data from Mastercard in order to try to link online ad views with offline purchases in the US.

Facebook has also long been known to buy offline data to supplement the interest signals it collects on users from inside (and outside) its social network — further fleshing out ad-targeting profiles.

So escaping the surveillance net of one flavor of big tech can require buying into another. Or else going low tech and paying in cash.

Apple does not say what took it so long to add Germany to its now pretty long list of Apple Pay countries, but Apple Insider suggests the relatively late arrival was down to pushback from local banks over fees, noting that the launch comes four months after the official announcement of a German rollout.

It’s also true that paying by plastic isn’t always an option in Germany, as cash remains the dominant payment method — also, seemingly, for privacy reasons. So Apple Pay is at least aligned with those concerns.


Seized cache of Facebook docs raises competition and consent questions


A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained through a legal discovery process by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it has now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of key issues, as the committee sees them after reviewing the documents, in which he draws attention to six issues.

Here is his summary of the key issues:

  • White Lists Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.


  • Value of friends data It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.

In their response Facebook contends that this was essentially another “cherrypicked” topic and that the company “ultimately settled on a model where developers did not need to purchase advertising to access APIs and we continued to provide the developer platform for free.”

  • Reciprocity Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  • Android Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  • Onavo Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  • Targeting competitor Apps The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

Update: 11:40am

Facebook has posted a lengthy response (read it here) positing that the “set of documents, by design, tells only one side of the story and omits important context.” They give a blow-by-blow response to Collins’ points, though they are ultimately pretty selective in what they actually address.

Generally they suggest that some of the issues being framed as anti-competitive were in fact designed to prevent “sketchy apps” from operating on the platform. Furthermore, Facebook details that it deletes some old call logs on Android, that using “market research” data from Onavo is essentially standard practice, and that users had the choice whether data was shared reciprocally between FB and developers. In regard to specific competitors’ apps, Facebook appears to have tried to get ahead of this release with its announcement yesterday that it was ending its platform policy of banning apps that “replicate core functionality.”

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Though the timing of Facebook’s policy shift announcement hardly looks incidental, given Collins said last week that the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

Onavo charts are quite an insight into facebook’s commanding view of the app-based attention marketplace pic.twitter.com/Ezdaxk6ffC

— David Carroll 🦅 (@profcarroll) December 5, 2018

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent‘, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

One email dated November 15, 2013, from Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection, writing: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook, which the documents could stir up afresh, relates to its repeated claim — including under questioning from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.


Facebook exempts news outlets from political ads transparency labels


Facebook pissed off journalists earlier this year when it announced that ads run by news publishers to promote their articles involving elected officials, candidates and national issues would have to sport “paid for by…” labels and be included alongside political campaign ads in its ads transparency archive that launched in June, albeit in a separate section. The News Media Alliance — representing 2,000 newspapers, including The New York Times and NewsCorp, plus other news organizations — sent a letter to Mark Zuckerberg in June protesting their inclusion. They claimed it would blur the lines between propaganda and journalism, and asked Facebook to exempt news publishers.

Now Facebook has granted that exemption. Next year, once Facebook has figured out more ways to verify legitimate news organizations (those that publish with bylines and dates, cite sources and don’t have a history of having stories flagged as false by third-party fact checkers), U.S. news publishers will no longer have their ads appear in the Ads Archive. They also won’t have to carry a “Paid for by…” label when they appear in the News Feed or Instagram. News organizations will still have to verify their identity, but not through the political ads process. This exemption rolls out today in the U.K.

The change will also allow news outlets to run “dark post” ads that target specific users but don’t appear on their Pages. This will allow them to secretly test different ad variants without being exposed to potential criticism or competitors looking to copy their ad strategies.

Facebook’s political ads archive of campaign ads will no longer include publications promoting articles about politics or issues

Facebook will be using its recently built news publisher index to which outlets can apply to decide which ad buyers are exempt. That index is up and running in the U.S. and will expand to other countries, but Facebook still wants to build more safeguards against fake news outlets before starting the exemption in the U.S. For now, Facebook is using a third-party list of legitimate U.K. news outlets that’ll be exempted starting today. Jason Kint of publishers association Digital Content Next tells TechCrunch, “We are pleased that Facebook understands and values the important role of news organizations. We have worked cooperatively with Twitter who understood this from the beginning. We look forward to working in a similar fashion with Facebook.”

Facebook’s “Paid for by…” labels will no longer appear on news publishers’ ads on Facebook or Instagram

The change comes as Facebook rolls out enforcement of its political ads transparency rules in the U.K. today. “Now political advertisers must confirm their identity and location, as well as say who paid for the ad, before they can be approved to run political ads on Facebook and/or Instagram,” Facebook tells TechCrunch. These ads will also feature the “Paid for by…” label. Facebook hoped that by self-regulating ads transparency, it might avoid more heavy-handed government regulation, such as through the U.S.’s proposed Honest Ads Act that would bring internet political advertising to parity with transparency rules for television commercials.

The hope is that by determining who is paying for these ads, properly labeling them and exempting journalists, Facebook will be able to better track foreign misinformation campaigns and election interference. Meanwhile, users will have a better understanding of who’s funding the political and issue ads they see on Facebook.

[Update: This story has been updated to reflect that the news publisher exemption won’t roll out for U.S. outlets until next year.]


Facebook staff discussed selling API access to apps in 2012-2014


Following a flopped IPO in 2012, Facebook desperately brainstormed new ways to earn money. An employee of unknown rank sent an internal email suggesting Facebook charge developers $250,000 per year for access to its platform APIs, which let apps ask users for access to their data. Employees also discussed offering Tinder extended access to users’ friends’ data that was being removed from the platform, in exchange for Tinder’s trademark on “Moments”, which Facebook wanted to use for a photo sharing app it later launched. Facebook decided against selling access to the API, and did not strike a deal with Tinder or the other companies mentioned in employee emails, including Amazon and Royal Bank of Canada.

The discussions were reported by the Wall Street Journal as being part of a sealed court document its reporters had reviewed, from a lawsuit by bikini photo-finding app developer Six4Three against Facebook alleging anti-competitive practices in how it changed the platform in 2014 to restrict access to friends’ data.

The biggest question remaining is how high in rank the employees who discussed these ideas were. If the ideas were seriously considered by high-ranking executives, especially CEO Mark Zuckerberg, the revelation could contradict the company’s long-running philosophy of not selling data access. Zuckerberg told Congress in April that “I can’t be clearer on this topic: We don’t sell data.” If the discussion was between low-level employees, it may have been little more than an off-hand suggestion as Facebook was throwing ideas against the wall, one that may have been rejected or ignored by higher-ups. But either way, now that the discussion has leaked, it could validate the public’s biggest fears about Facebook and whether it’s a worthy steward of our personal data.

An employee emailed others about the possibility of removing platform API access “in one-go to all apps that don’t spend… at least $250k a year to maintain access to the data”, the document shows. Facebook clarified to TechCrunch that these discussions concerned API access, not selling data directly to businesses. The fact that the discussions were specifically about API access, which Facebook continues to give away for free to developers, had not been previously reported.

Facebook provided this full statement to TechCrunch:

“As we’ve said many times, the documents Six4Three gathered for this baseless case are only part of the story and are presented in a way that is very misleading without additional context. Evidence has been sealed by a California court so we are not able to disprove every false accusation. That said, we stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Any short-term extensions granted during this platform transition were to prevent the changes from breaking user experience. To be clear, Facebook has never sold anyone’s data. Our APIs have always been free of charge and we have never required developers to pay for using them, either directly or by buying advertising.”

A half-decade later, with the world turned against Facebook, the discussions of selling data access couldn’t have surfaced at a worse time for the company. Even if quickly aborted, the idea could now stoke concerns that Facebook has too much power and too much of our personal information. While the company eventually found other money-makers and became highly profitable, the discussions illuminate how Facebook could potentially exploit people’s data more aggressively if it deemed it necessary.
