Advertising Tech

Corey Weiner is taking over as CEO of mobile ad company Jun Group

After 18 years at the helm, Mitchell Reichgut is stepping down as CEO of Jun Group, with COO and president Corey Weiner taking over as chief executive.

The news comes just about a year after Jun Group was acquired by Advantage Solutions, but Reichgut said the acquisition was a “non-factor” in his decision.

“I think it is the right time for the company to have a leadership change,” he said. “I have been stepping back more and more, so it’s a natural progression, with a bunch of managers here taking on larger roles as I move on.”

In addition to Weiner (who’s been at Jun Group since 2003), other Jun Group executives taking on new roles include Mishel Alon becoming COO, Leslie Bargmann becoming vice president of client services and Jeremy Ellison becoming vice president of technology.

Reichgut, meanwhile, said he’s “stepping back entirely to focus on artwork and writing and community service after a long, long career.”

Looking ahead, Weiner plans to double down on Jun Group’s approach to advertising, where it builds custom audience segments by polling users in its network, then shows video ads and branded content to interested viewers.

“Our primary motivation is to evangelize that format,” he said. “As you know, most advertising is interruptive and consumers don’t like that kind of advertising very much — in some cases, they’re annoyed by it. This value exchange flips the advertising paradigm on its head. By choosing to engage with advertising, they are getting something amazing in return.”

Powered by WPeMatico

Most EU cookie ‘consent’ notices are meaningless or manipulative, study finds

New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.

As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.

But many don’t — even now, more than a year later.

The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked-in tricks, who can blame them?)

The researchers conclude that if consent to drop cookies were being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.

The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.

The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites and compiling a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms currently in play.

They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs, which the researchers tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.

Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.

A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.

And while they found that nearly all cookie notices (92%) contained a link to the site’s privacy policy, only around a third (39%) mention the specific purpose of the data collection, and fewer still (21%) say who can access the data.

The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.

Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.

“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.

“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”

The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.

This is where consent gets manipulated — to flip visitors’ preference for privacy.

They found that the more choices offered in a cookie notice, the more likely visitors were to decline the use of cookies. (Which is an interesting finding in light of the vendor laundry lists frequently baked into the so-called “transparency and consent framework” which the industry association, the Interactive Advertising Bureau (IAB), has pushed as the standard for its members to use to gather GDPR consents.)

“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”

Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:

Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.
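
The effect size the paper reports (the CV=0.50 above) is Cramér’s V, a standard measure of association between two categorical variables. As a quick illustration of how it is computed, here is a self-contained sketch; the counts are hypothetical, converting the paper’s mobile accept rates into two conditions of 1,000 visitors each:

```python
import math

def cramers_v(table):
    """Cramér's V effect size for a contingency table (a list of rows)."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(table), len(table[0]))  # smaller table dimension
    return math.sqrt(chi2 / (n * (k - 1)))

# Hypothetical counts: the paper's mobile accept rates applied to
# 1,000 visitors per condition (accepted vs. did not accept)
nudged = [508, 492]  # "Accept" highlighted as a button
plain = [392, 608]   # "Accept" shown as a plain text link
print(cramers_v([nudged, plain]))
```

For this toy two-condition table the association comes out around 0.12; the paper’s overall CV of 0.50 across all the designs tested indicates a far stronger pull from nudging.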

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).

That fraction rises only slightly, to between 1 and 4%, for users who would enable some cookie categories in the same privacy-by-default scenario.

“Our results… indicate that the privacy-by-default and purposed-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1 % of active consent for the use of third parties,” they write in conclusion.

They do flag some limitations with the study, pointing out that the dataset behind the 0.1% figure is biased: the nationality of its visitors is not generally representative of public Internet users, and the data comes from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS; its data indicated a higher ‘accept all’ click rate, though still only marginally higher: just 5.6%.

Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.

This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.

Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.

It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.

You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)

Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.

For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)

While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.

(Those of you reading TechCrunch back in January 2018 may also remember this sage plain English advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)

Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.

Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android immediately after GDPR started being applied in May last year.

While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.

So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.

But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)

The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.

However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.

Here’s what the ICO says on Recital 25 of the ePrivacy Directive:

  • ‘specific website content’ means that you should not make ‘general access’ subject to conditions requiring users to accept non-essential cookies – you can only limit certain content if the user does not consent;
  • the term ‘legitimate purpose’ refers to facilitating the provision of an information society service – ie, a service the user explicitly requests. This does not include third parties such as analytics services or online advertising;

So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.
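
The privacy-by-default rule the regulators describe boils down to simple opt-in logic. Here is a minimal sketch of that logic; the category names are illustrative, not drawn from the guidance:

```python
# A sketch of privacy-by-default consent handling: non-essential
# cookies are set only after an explicit opt-in, never assumed from
# mere use of the site. Category names here are illustrative.
ESSENTIAL = {"session"}               # needed to deliver the requested service
NON_ESSENTIAL = {"analytics", "ads"}  # require prior, specific consent

def cookies_to_set(consented_categories):
    """Return the cookie categories a site may drop for this visitor.

    `consented_categories` holds the categories the visitor actively
    opted in to; no choice (None or empty) defaults to essentials only.
    """
    consented = set(consented_categories or ())
    return ESSENTIAL | (NON_ESSENTIAL & consented)

print(cookies_to_set(None))           # visitor made no choice yet
print(cookies_to_set({"analytics"}))  # opted in to analytics only
```

The point is the default: a visitor who makes no choice gets essential cookies only, and nothing else is dropped until they actively opt in.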

It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor the costs of data protection compliance into their calculations, as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)

Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.

“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame; it was not intended to fix this specific problem. The uncertainty of the current situation is caused by the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subjected to a lot of ‘consent theatre’ (ie noisy yet non-compliant cookie notices), which in turn is causing knock-on problems of consumer mistrust and consent fatigue, working against the core aims of the EU’s data protection framework.

“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”

“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies. 

Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.

“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless  something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.

“And the conflict of interest for Google with Chrome is obvious.”

The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.

At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.

They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.

Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).

And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.

Much has been made at the highest levels in Europe of being able to point to US counterparts, caught on the hop by ongoing tech privacy and security scandals, while EU policymakers savor the schadenfreude of seeing their US counterparts being forced to ask publicly whether it’s time for America to have its own GDPR.

With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.

However they are also aware the world is watching closely and critically — which makes enforcement a very key piece. It must slot in too. They need the GDPR to work on paper and be seen to be working in practice.

So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.

A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”

“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added. 

“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”

All of which suggests that the movement, when it comes, must come from a reforming adtech industry.

With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.

GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.

The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.

Luna Labs creates playable ads, directly from Unity

It seems obvious that the best way to advertise a game is to let people play the game itself — and we’ve covered other startups tackling this problem, such as AppOnboard and mNectar.

But Luna Labs co-founder and CEO Steven Chard said that for most developers, the creation of these ads involves outsourcing: “It might take weeks to make an ad, and the quality of the content at the end could be limited.”

The problem, Chard said, is that most games are built on the Unity engine, while the ads need to be in HTML5, which means that developers often have to build playable ads from scratch — hence the outsourcing.

“There’s this huge demand for playables, but the tech hasn’t caught up with it,” he said. “Our view — and I think why it’s really resonating with developers — we’re saying to developers: Use that same [Unity] editor to create a playable ad. You’re going to give the user a playable ad which genuinely feels like the game.”

In fact, while Luna is officially launching its service to developers this week, it’s already been working with a few partners like Kwalee and Voodoo. Luna says that in Kwalee’s case, the results were good enough that the company spent 60% more than it did on other playable ads, and the Luna playables drove more than 250,000 installs per day.

“Luna is solving a real pain point for our studio, and the initial results have been tremendous,” said Kwalee COO Jason Falcus in a statement. “Integrating the Luna service has allowed us to significantly scale our campaigns by a comfortable margin, to the best results so far.”

[Image: Jetpack Jump]

Luna’s investors include Ben Holmes (formerly of Index Ventures, backer of King and Playfish) and Chris Lee (who also invested in Space Ape and Hello Games).

Chard said the startup is currently focused on providing tools to developers, rather than getting involved in the ad-buying process. More generally, he said the company has been focused on the technology rather than the business model.

“We’re an early company with a very, very complex piece of technology — it’s taken a lot of time to get where we are,” he said. “We’re not doing it for free, but the focus isn’t on short-term profitability. It is, in the longer term, on creating a scalable product which can be used by developers.”

Chard added that eventually, he’s hoping Luna can become more involved “at the content creation level.” For example, he suggested that developers could use the technology to test out playable concepts and see what resonates, before building a full game.

You can test it out for yourself on the Luna Labs website.

Sex tech companies and advocates protest unfair ad standards outside Facebook’s NY HQ

A group of sex tech startup founders, employees and supporters gathered outside of Facebook’s NY office in Manhattan to protest its advertising policies with respect to what it classifies as sexual content. The protest, along with a companion website detailing their position (which we reported on Tuesday), is the work of “Approved, Not Approved,” a coalition of sexual health companies co-founded by Dame Products and Unbound Babes.

These policies as applied have fallen out of step with “the average person’s views of what should or shouldn’t be approved of ads,” according to Janet Lieberman, co-founder and CTO of Dame Products.

“If you look at the history of the sex toy industry, for example, vibrators were sexual health products until advertising restrictions were put on them in the 1920s and 1930s — and then they became dirty, and that’s how the industry got shady, and that’s why we have negative thoughts towards them,” she told me in an interview at the protest. “They’re moving back towards wellness in people’s minds, but not in advertising policies. There’s a double standard for what is seen as obscene, talking about men’s sexual health versus women’s sexual health and talking about products that aren’t sexual, and using sex to sell them, versus taking sexual products and having completely non-sexual ads for them.”

[Image: Facebook ad protest in NYC. Credit: TechCrunch]

It’s a problem that extends beyond just Facebook and Instagram, Lieberman says. In fact, her company is also suing NYC’s MTA for discrimination over its ad standards, after the agency refused to run ads for women’s sex toys in its out-of-home advertising inventory. But the issue also has ramifications beyond advertising, because in many ways what we see in ads helps define what we see as acceptable in our everyday lives and conversations.

“Some of this stems from society’s inability to separate sexual products from feeling sexual, and that’s a real problem that we see that hurts women more than men, but hurts both genders, in not knowing how to help our sexual health,” Lieberman said. “We can’t talk about it without being sexual, and that we can’t bring things up, without it seeming like we’re bringing up something that is dirty.”

[Image credit: Unbound / Dame Products]

“A lot of the people you see here today have Instagrams that have been shut down, or ads that have been not approved on Facebook,” said Bryony Cole, CEO at Future of Sex, in an interview. “Myself, I run Future of Sex, which is a sex tech hackathon, and a podcast focused on sex tech, and my Instagram’s been shut down twice with no warning. It’s often for things that Facebook will say they consider phallic imagery, but they’re not […] and yet if you look at images for something like HIMS [an erectile dysfunction medication startup, examples of their ads here], you’ll see those phallic practice images. So there’s this gross discrepancy, and it’s very frustrating, especially for these companies where a lot of the revenue in their business is around community that are online, which is true for sex toys.”

Online ads aren’t just a luxury for many of these startup brands and companies — they’re a necessary ingredient to continued success. Google and Facebook together account for the majority of digital advertising spend in the U.S., according to eMarketer, and it’s hard to grow a business that caters to primarily online customers without fair access to their platforms, Cole argues.

“You see a lot of sex tech or sexual wellness brands having to move off Instagram and find other ways to reach their communities,” she said. “But the majority of people, that’s where they are. And if they’re buying these products, they’re still overcoming a stigma about buying the product, so it’s great to be able to purchase these online. A lot of these companies started either crowdfunding, like Dame Products, or just through e-commerce sites. So the majority of their business is online. It’s not in a store.”

[Image credit: Unbound / Dame Products]

Earlier this year, sex tech company Lora DiCarlo netted a win in getting the Consumer Technology Association to restore its CES award after community outcry. Double standards in advertising are a far more systemic and distributed problem, but these protests will hopefully help open up the conversation and prompt more change.

AppLovin acquires SafeDK to improve brand safety

Mobile marketing company AppLovin is announcing that it has acquired SafeDK.

While AppLovin started out as a mobile ad business, it now bills itself as “a comprehensive mobile gaming platform,” offering tools for game developers around user acquisition, monetization, analytics and (through Lion Studios, launched last year) publishing. SafeDK, meanwhile, allows developers to manage all the different SDKs on which their apps rely.

Palo Alto-headquartered AppLovin says that by incorporating SafeDK technology, it will help its publishers ensure GDPR compliance and brand safety.

It also says SafeDK will continue to support existing customers, while its headquarters in Herzliya, Israel will become AppLovin’s first office in Israel. Co-founders Orly Shoavi and Ronnie Sternberg will remain on-board as the heads of SafeDK and general managers of AppLovin Israel.

The companies are not disclosing the financial terms of the deal, except to say that it was all-cash. According to Crunchbase, SafeDK has raised a total of $5.8 million from investors, including Samsung Next Tel Aviv, Marius Nacht, StageOne Ventures and Kaedan Capital.

“We are delighted to be working with the AppLovin team to help mobile game publishers grow their businesses,” Shoavi said in a statement. “AppLovin has been a trusted partner for the biggest mobile game studios around the world and SafeDK’s technology will strengthen that trust.”

File-storage app 4shared caught serving invisible ads and making purchases without consent

With more than 100 million installs, file-sharing service 4shared is one of the most popular apps in the Android app store.

But security researchers say the app secretly displays invisible ads and subscribes users to paid services, racking up charges without the user’s knowledge — or their permission — collectively costing millions of dollars.

“It all happens in the background… nothing appears on the screen,” said Guy Krief, chief executive of London-based Upstream, which shared its research exclusively with TechCrunch.

The researchers say the app contains suspicious third-party code that allows it to automate clicks and make fraudulent purchases. They said the component, built by Hong Kong-based Elephant Data, downloads code which is “directly responsible” for generating the automated clicks without the user’s knowledge. The code also sets a cookie to determine if a device has previously been used to make a purchase, likely as a way to hide the activity.

Upstream also said the code deliberately obfuscates the web addresses it accesses and uses redirection chains to hide the suspicious activity.
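
Redirection chains of this kind are typically unwound hop by hop. Here is a minimal sketch of how an analyst might flag a suspiciously long or looping chain; the addresses are invented, and this is not Upstream’s actual tooling:

```python
def resolve_chain(start, redirects, max_hops=10):
    """Follow a redirect mapping from `start`, recording every hop.

    `redirects` maps each address to the address it forwards to.
    Returns (chain, suspicious): a chain that loops or exceeds
    `max_hops` is flagged as suspicious.
    """
    chain = [start]
    seen = {start}
    url = start
    while url in redirects:
        url = redirects[url]
        if url in seen or len(chain) >= max_hops:
            return chain + [url], True  # loop or excessive chain length
        chain.append(url)
        seen.add(url)
    return chain, False

# Invented addresses standing in for an obfuscated click path
hops = {
    "ad.example/click": "t1.example/r",
    "t1.example/r": "t2.example/r",
    "t2.example/r": "billing.example/subscribe",
}
chain, suspicious = resolve_chain("ad.example/click", hops)
print(chain, suspicious)
```

The point of the threshold and loop check is that legitimate redirects usually resolve in a hop or two, while chains engineered to hide a billing endpoint tend to be much longer.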

Over the past few weeks, Upstream said, it has blocked more than 114 million suspicious transactions originating from two million unique devices, according to data from its proprietary security platform; transactions that would have cost consumers money had they not been blocked. Upstream only has visibility in certain parts of the world (Brazil, Indonesia and Malaysia, to name a few), suggesting the number of observed suspicious transactions is likely a fraction of the total.

Then in mid-April, 4shared’s app suddenly disappeared from Google Play and was replaced with a near-identical app with the suspicious components removed.

At the time of writing, 4shared’s new app has more than 10 million users.

Irin Len, a spokesperson for 4shared, told TechCrunch that the company was “unaware” of the fraudulent ad activity in its app until we reached out, but confirmed the company no longer works with Elephant Data.

Len said Google removed the old app “without reason,” but the company’s suspicions quickly fell on the third-party components, which it removed before resubmitting the app for approval. And because the old app was pulled from Android’s app store, 4shared said it wasn’t allowed to push an update to existing users to remove the suspicious components from their devices.

Google did not respond to TechCrunch’s request for comment.

We sent Elephant Data several questions and follow-up emails prior to publication but we did not hear back.

4shared, owned by New IT Solutions based in the British Virgin Islands, makes a brief reference to Elephant Data in its privacy policy but doesn’t explicitly say what the service does. 4shared said since it’s unable to control or disable Elephant Data’s components in its old app, “we’re bound to keep the detailed overview of which data may be processed and how it may be shared” in its privacy policy.

Little else is known about Elephant Data, except that it bills itself as a “market intelligence” solution designed to “maximize ad revenue.”

The ad firm has drawn criticism in several threads on Reddit, one accusing the company of operating a “scam” and another calling the offering “dodgy.” One developer said he removed the components from his app after it began to suffer from battery-life issues, but that Elephant Data was “still collecting data” from users who hadn’t updated their apps.

The developer said Google also banned his app, forcing him to resubmit an entirely new version to the store.

It’s the latest app in recent months to be accused of using invisible ads to generate fraudulent revenue. In May, BuzzFeed News reported similar suspicious behavior and fraudulent purchases in Chinese video app VidMate.


Facebook squeezes money from Instagram with new ads in Explore


Half of Instagram’s billion-plus users open its Explore tab each month to find fresh content and creators. Now the Facebook-owned app will do more to carry its weight by injecting ads into Instagram Explore for the first time. But rather than bombard users with marketing right on the Explore grid, Instagram will instead only show ads after users tap into a post and then start scrolling through similar imagery.

The move feels like a respectful way to monetize Explore without annoying users too much or breaking the high visual quality of the space. Instagram’s director of business product marketing Susan Buckner Rose tells me she believes the ads will feel natural because users already come to Explore “in the mindset of discovery. They want to be exposed to new accounts, people, and brands.”


Instagram will test the ad slots itself at first to promote its ailing IGTV feature before they “launch to a handful of brands over the coming weeks,” Rose says. That includes both big-name corporations and smaller advertisers looking to drive conversions, video views or reach. Instagram hopes to roll the ad format out broadly in the next few months.

Advertisers will buy the slots through the same Facebook ads manager and API they use to buy Instagram feed and Stories space. At first advertisers will have to opt in to placing their ads in Instagram Explore too, but eventually that will be the default with an opportunity to opt out.

Here’s how ads work in Instagram Explore. When you open the tab it will look the same as always with a scrollable grid of posts with high engagement that are personalized based on your interests. When you tap into a photo or video, you’ll first see that full-screen. But if you keep scrolling down, Instagram will show you a contextual feed of content similar to the original post where it will insert photo and video ads. And if you tap into one of the themed video channels and then keep scrolling after watching the clip to check out more videos in the same vein, you may see Instagram video ads.

Instagram describes the introduction as “slowly and thoughtfully” — which makes it sound like the volume of ads will ramp up over time.

Explore first launched in 2012, some two years after Instagram itself, as a merger of the app’s search and “popular” tabs. The aim was to use algorithms informed by your existing interests to help you discover new people and themes to follow beyond those you might pick up from your own social circles. It has had a few revamps since, such as the addition of topical channels and hashtags, plus Stories — the format that has proven such a hit on Instagram itself. There won’t, however, be any ads in the Stories that recently started appearing in Explore.

But interestingly, through all of that, Instagram stayed hands-off when it came to advertising in Explore. The content each person sees there is individualized, with algorithms detecting the kinds of things you like in order to surface the photos, videos and subjects you’re most likely to want to see. Apparently Instagram didn’t want to deter browsing of this content.

On the other side of the coin, this has meant that up to now, individuals and brands have not been able to proactively request or pay to be in anyone’s specific Explore tab — although that doesn’t mean that people don’t game this situation (just Google “how to get on Instagram Explore” and you will find many how-to’s to show you the way).


The move to bring ads into the Explore experience has some logic to it. Even before monetization made its way to Instagram in the form of feed advertising, shoppable links and sponsored content posted by influencers, brands and businesses had started using the platform to promote products and to connect with customers. Instagram says that today, 80 percent of its users follow at least one business on Instagram. Now instead of trying desperately to game the Explore algorithm, Instagram can just sell businesses space instead.

With Facebook’s News Feed usage in danger as attention shifts to Stories that it’s still learning to monetize, the company is leaning more on Instagram to keep revenue growing. But Instagram must be sure not to suffocate the golden goose with too many ads.


Target Circle and TapHeaven team up in a mobile marketing merger


Target Circle and TapHeaven announced they’re merging into a single company under the Target Circle brand.

TapHeaven co-founder and CEO Chris Hoyt, who is becoming chief growth officer at the combined organization, said the two companies have been “trying to solve the same problem” — namely, eliminating many of the inefficiencies in the mobile advertising business.

Hoyt said that for Target Circle, that meant trying to “unify this fragmented ecosystem into a single dashboard for contracts, invoices and offers.” And for TapHeaven, that meant a focus on automation, resulting in the launch of what the company calls a “command center” for user acquisition, where advertisers can optimize their ad campaigns “at the source level, by country” while getting high-quality traffic without fraud.

The companies also complement each other geographically — Target Circle is headquartered in Oslo, Norway, while TapHeaven is headquartered in San Francisco.

According to Hoyt, they first came across each other because they were talking to the same mobile studio about supporting the launch of a new game, and it became clear they “both had the same vision for our businesses, the same future with a unified dashboard wrapped in automation and machine learning to simplify and help the ecosystem perform for these advertisers.”

Target Circle founder and CEO Heiko Hildebrandt will continue to serve as chief executive for the combined companies — in the announcement, he said TapHeaven allows the company to “strengthen and expand its technology in the automation of advertising and fraud prevention and resolution.” Meanwhile, TapHeaven executives Brian Krebs and Jeremy Jones will become CIO and chief of user experience, respectively.

The financial terms of the deal were not disclosed. Moving forward, Hoyt said Target Circle will continue to support its existing products while focusing on the new UA Command Center as “the future of our business.” He also suggested that the platform could help advertisers move away from Facebook and Google, allowing them to get the performance they need from other ad networks.

“What impact this is going to have on the market is really lifting up the rest of the ecosystem,” he said. “I feel like Facebook and Google have had their day, a little bit … With the serious things that are going on with these companies, advertisers are desperate for the answers to where [else] can they spend their money and diversify their portfolio.”


Alexa, does the Echo Dot Kids protect children’s privacy?


A coalition of child protection and privacy groups has filed a complaint with the Federal Trade Commission (FTC) urging it to investigate a kid-focused edition of Amazon’s Echo smart speaker.

The complaint against Amazon Echo Dot Kids, which has been lodged with the FTC by groups including the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy and the Consumer Federation of America, argues that the e-commerce giant is violating the Children’s Online Privacy Protection Act (COPPA) — including by failing to obtain proper consents for the use of kids’ data.

As with Amazon’s other Echo smart speakers, the Echo Dot Kids continually listens for a wake word and then responds to voice commands by recording and processing users’ speech. The difference with this Echo is that it’s designed for children to use — which makes it subject to U.S. privacy regulation intended to protect kids from commercial exploitation online.

The complaint, which can be read in full via the group’s complaint website, argues that Amazon fails to provide adequate information to parents about what personal data will be collected from their children when they use the Echo Dot Kids; how their information will be used; and which third parties it will be shared with — meaning parents do not have enough information to make an informed decision about whether to give consent for their child’s data to be processed.

They also accuse Amazon of providing at best “unclear and confusing” information per its obligation under COPPA to also provide notice to parents to obtain consent for children’s information to be collected by third parties via the online service — such as those providing Alexa “skills” (aka apps the AI can interact with to expand its utility).

A number of other concerns about Amazon’s device are also being raised with the FTC.

Amazon released the Echo Dot Kids a year ago — and, as we noted at the time, it’s essentially a brightly bumpered iteration of the company’s standard Echo Dot hardware.

There are differences in the software, though. In parallel, Amazon updated its Alexa smart assistant — adding parental controls, aka its FreeTime software, to the child-focused smart speaker.

Amazon said the free version of FreeTime that comes bundled with the Echo Dot Kids provides parents with controls to manage their kids’ use of the product, including device time limits; parental controls over skills and services; and the ability to view kids’ activity via a parental dashboard in the app. The software also removes the ability for Alexa to be used to make phone calls outside the home (while keeping an intercom functionality).

A paid premium tier of FreeTime (called FreeTime Unlimited) also bundles additional kid-friendly content, including Audible books, ad-free radio stations from iHeartRadio Family and premium skills and stories from the likes of Disney, National Geographic and Nickelodeon.

At the time it announced the Echo Dot Kids, Amazon said it had tweaked its voice assistant to support kid-focused interactions — saying it had trained the AI to understand children’s questions and speech patterns, and incorporated new answers targeted specifically at kids (such as jokes).

But while the company was ploughing resources into adding a parental control layer to Echo and making Alexa’s speech recognition kid-friendly, the COPPA complaint argues it failed to pay enough attention to the data protection and privacy obligations that apply to products targeted at children — as the Echo Dot Kids clearly is.

Or, to put it another way, Amazon offers parents some controls over how their children can interact with the product — but not enough controls over how Amazon (and others) can interact with their children’s data via the same always-on microphone.

More specifically, the group argues that Amazon is failing to meet its obligation as the operator of a child-directed service to provide notice and obtain consent for third parties operating on the Alexa platform to use children’s data — noting that its Children’s Privacy Disclosure policy states it does not apply to third-party services and skills.

Instead, the complaint says Amazon tells parents they should review the skill’s policies concerning data collection and use. “Our investigation found that only about 15% of kid skills provide a link to a privacy policy. Thus, Amazon’s notice to parents regarding data collection by third parties appears designed to discourage parental engagement and avoid Amazon’s responsibilities under Coppa,” the group writes in a summary of their complaint.

They are also objecting to how Amazon obtains parental consent — arguing its system for doing so is inadequate because it merely asks that a credit or debit/gift card number be entered.

“It does not verify that the person ‘consenting’ is the child’s parent as required by Coppa,” they argue. “Nor does Amazon verify that the person consenting is even an adult because it allows the use of debit gift cards and does not require a financial transaction for verification.”

Another objection is that Amazon is retaining audio recordings of children’s voices far longer than necessary — keeping them indefinitely unless a parent actively goes in and deletes the recordings, despite COPPA requiring that children’s data be held for no longer than is reasonably necessary.

They found that additional data, such as transcripts of audio recordings, was still retained even after the audio itself had been deleted. To remove that residue, a parent must contact Amazon customer service and explicitly request deletion of their child’s entire profile — which also means losing access to parental controls and to the content provided via FreeTime. The complaint therefore argues that Amazon’s process for parents to delete children’s information is “unduly burdensome” too.

Their investigation also found the company’s process for letting parents review children’s information to be similarly arduous, with no ability for parents to search the collected data — meaning they have to listen/read every recording of their child to understand what has been stored.

They further highlight that audio recordings made by the Echo Dot Kids can of course include sensitive personal details — such as when a child uses Alexa’s “remember” feature to ask the AI to store personal data like their address and contact details, or personal health information like a food allergy.

The group’s complaint also flags the risk of other children having their data collected and processed by Amazon without their parents’ consent — such as when a child has a friend or family member visiting on a play date and they end up playing with the Echo together.

Responding to the complaint, Amazon has denied it is in breach of COPPA. In a statement, a company spokesperson said: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA). Customers can find more information on Alexa and overall privacy practices here: https://www.amazon.com/alexa/voice.”

An Amazon spokesperson also told us it only allows kid skills to collect personal information from children outside of FreeTime Unlimited (i.e. the paid tier) — and then only if the skill has a privacy policy and the developer separately obtains verified consent from the parent, adding that most kid skills do not have a privacy policy because they do not collect any personal information.

At the time of writing, the FTC had not responded to a request for comment on the complaint.

In Europe, there has been growing concern over the use of children’s data by online services. A report by England’s children’s commissioner late last year warned kids are being “datafied,” and suggested profiling at such an early age could lead to a data-disadvantaged generation.

Responding to rising concerns, the U.K. privacy regulator last month launched a consultation on a draft Code of Practice for age-appropriate design, asking for feedback on 16 proposed standards online services must meet to protect children’s privacy — including requirements that product makers put the best interests of the child first, deliver transparent T&Cs, minimize data use and set high privacy defaults.

The U.K. government has also recently published a whitepaper setting out a policy plan to regulate internet content that has a heavy focus on child safety.


Takeaways from F8 and Facebook’s next phase


Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not the newsfeed, that’s not pages, that’s not profiles. That’s marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”


Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 
