This early GDPR adtech strike puts the spotlight on consent

What does consent as a valid legal basis for processing personal data look like under Europe’s updated privacy rules? It may sound like an abstract concern, but for online services that rely on doing things with user data in order to monetize free-to-access content, this is a key question now that the region’s General Data Protection Regulation is firmly in place.

The GDPR is actually clear about consent. But if you haven’t bothered to read the text of the regulation, and instead just go and look at some of the self-styled consent management platforms (CMPs) floating around the web since May 25, you’d probably have trouble guessing it.

Confusing and/or incomplete consent flows aren’t yet extinct, sadly. But it’s fair to say those that don’t offer full opt-in choice are on borrowed time.

Because if your service or app relies on obtaining consent to process EU users’ personal data — as many free at the point-of-use, ad-supported apps do — then the GDPR states consent must be freely given, specific, informed and unambiguous.

That means you can’t bundle multiple uses for personal data under a single opt-in.

Nor can you obfuscate consent behind opaque wording that doesn’t actually specify the thing you’re going to do with the data.

You also have to offer users the choice not to consent. So you cannot pre-tick all the consent boxes that you really wish your users would freely choose — because you have to actually let them do that.

It’s not rocket science but the pushback from certain quarters of the adtech industry has been as awfully predictable as it’s horribly frustrating.

This has not gone unnoticed by consumers either. Europe’s Internet users have been filing consent-based complaints thick and fast this year. And a lot of what is being claimed as ‘GDPR compliant’ right now likely is not.

So, some six months in, we’re essentially in a holding pattern waiting for the regulatory hammers to come down.

But if you look closely there are some early enforcement actions that show some consent fog is starting to shift.

Yes, we’re still waiting on the outcomes of major consent-related complaints against tech giants. (And stockpile popcorn to watch that space for sure.)

But late last month French data protection watchdog, the CNIL, announced the closure of a formal warning it issued this summer against drive-to-store adtech firm, Fidzup — saying it was satisfied it was now GDPR compliant.

Such a regulatory stamp of approval is obviously rare this early in the new legal regime.

So while Fidzup is no adtech giant its experience still makes an interesting case study — showing how the consent line was being crossed; how, working with CNIL, it was able to fix that; and what being on the right side of the law means for a (relatively) small-scale adtech business that relies on consent to enable a location-based mobile marketing business.

From zero to GDPR hero?

Fidzup’s service works like this: it installs kit inside (or on) partner retailers’ physical stores to detect the presence of specific smartphones. At the same time, it provides an SDK that lets mobile developers track app users’ locations, collecting and sharing the advertising ID and wi-fi ID of users’ smartphones (which, along with location, are judged personal data under the GDPR).

Those two elements — detectors in physical stores; and a personal data-gathering SDK in mobile apps — come together to power Fidzup’s retail-focused, location-based ad service which pushes ads to mobile users when they’re near a partner store. The system also enables it to track ad-to-store conversions for its retail partners.

The problem Fidzup had, back in July, was that after an audit of its business the CNIL deemed it did not have proper consent to process users’ geolocation data to target them with ads.

Fidzup says it had thought its business was GDPR compliant because it took the view that app publishers were the data processors gathering consent on its behalf. The CNIL warning was a wake-up call that this interpretation was incorrect: Fidzup was responsible for the data processing, and so also for collecting consents.

The regulator found that when a smartphone user installed an app containing Fidzup’s SDK they were not informed that their location and mobile device ID data would be used for ad targeting, nor the partners Fidzup was sharing their data with.

The CNIL also said users should have been clearly informed before any data was collected, so they could choose whether to consent, rather than the information being given via general app conditions (or in-store posters) after the processing had already happened, as was the case.

It also found users had no choice to download the apps without also getting Fidzup’s SDK, with use of such an app automatically resulting in data transmission to partners.

Fidzup had also only been asking users to consent to the processing of their geolocation data for the specific app they had downloaded, not for the targeted ad purposes with retail partners that are the substance of the firm’s business.

So there was a string of issues. And when Fidzup was hit with the warning the stakes were high, even with no monetary penalty attached. Because unless it could fix the core consent problem, the 2014-founded startup might have faced going out of business. Or having to change its line of business entirely.

Instead it decided to try and fix the consent problem by building a GDPR-compliant CMP — spending around five months liaising with the regulator, and finally getting a green light late last month.

A core piece of the challenge, as co-founder and CEO Olivier Magnan-Saurin tells it, was how to handle multiple partners in this CMP because its business entails passing data along the chain of partners — each new use and partner requiring opt-in consent.

“The first challenge was to design a window and a banner for multiple data buyers,” he tells TechCrunch. “So that’s what we did. The challenge was to have something okay for the CNIL and GDPR in terms of wording, UX etc. And, at the same time, some things that the publisher will allow to and will accept to implement in his source code to display to his users because he doesn’t want to scare them or to lose too much.

“Because they get money from the data that we buy from them. So they wanted to get the maximum money that they can, because it’s very difficult for them to live without the data revenue. So the challenge was to reconcile the need from the CNIL and the GDPR and from the publishers to get something acceptable for everyone.”

As a quick related aside, it’s worth noting that Fidzup does not work with the thousands of partners an ad exchange or demand-side platform most likely would.

Magnan-Saurin tells us its CMP lists 460 partners. So while that’s still a lengthy list to have to put in front of consumers — it’s not, for example, the 32,000 partners of another French adtech firm, Vectaury, which has also recently been on the receiving end of an invalid consent ruling from the CNIL.

In turn, that suggests the ‘Fidzup fix’, if we can call it that, only scales so far; adtech firms that are routinely passing millions of people’s data around thousands of partners look to have much more existential problems under GDPR — as we’ve reported previously re: the Vectaury decision.

No consent without choice

Returning to Fidzup, its fix essentially boils down to actually offering people a choice over each and every data processing purpose, unless it’s strictly necessary for delivering the core app service the consumer was intending to use.

Which also means giving app users the ability to opt out of ads entirely, without being penalized by losing access to the app’s features.

In short, you can’t bundle consent. So Fidzup’s CMP unbundles all the data purposes and partners to offer users the option to consent or not.
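In data terms, an unbundled consent model of the kind described can be pictured with a small sketch. This is purely illustrative (the purpose and partner names are hypothetical, not Fidzup’s actual schema); the key properties are that every purpose and every partner defaults to off, and data may flow only when both are opted in:

```python
from dataclasses import dataclass, field

# Illustrative sketch of per-purpose, per-partner opt-in consent.
# Names are hypothetical, not Fidzup's real implementation.
@dataclass
class ConsentRecord:
    purposes: dict = field(default_factory=dict)  # purpose -> bool; absent means False (no pre-ticked boxes)
    partners: dict = field(default_factory=dict)  # partner -> bool

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def allows(self, purpose: str, partner: str) -> bool:
        # Data may flow only if BOTH the purpose and the recipient are opted in.
        return self.purposes.get(purpose, False) and self.partners.get(partner, False)

record = ConsentRecord()
record.grant("personalised_ads")
record.partners["retailer_a"] = True

assert record.allows("personalised_ads", "retailer_a")
assert not record.allows("store_visit_analysis", "retailer_a")  # never asked, so never allowed
```

The point of the default-False lookups is exactly the rule the regulator enforced: a box the user never ticked can never be treated as consent.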

“You can unselect or select each purpose,” says Magnan-Saurin of the now compliant CMP. “And if you want only to send data for, I don’t know, personalized ads but you don’t want to send the data to analyze if you go to a store or not, you can. You can unselect or select each consent. You can also see all the buyers who buy the data. So you can say okay I’m okay to send the data to every buyer but I can also select only a few or none of them.”

“What the CNIL ask is very complicated to read, I think, for the final user,” he continues. “Yes it’s very precise and you can choose everything etc. But it’s very complete and you have to spend some time to read everything. So we were [hoping] for something much shorter… but now okay we have something between the initial asking for the CNIL — which was like a big book — and our consent collection before the warning which was too short with not the right information. But still it’s quite long to read.”

Fidzup’s CNIL approved GDPR-compliant consent management platform

“Of course, as a user, I can refuse everything. Say no, I don’t want my data to be collected, I don’t want to send my data. And I have to be able, as a user, to use the app in the same way as if I accept or refuse the data collection,” he adds.

He says the CNIL was very clear on the latter point — telling it they could not require collection of geolocation data for ad targeting for usage of the app.

“You have to provide the same service to the user if he accepts or not to share his data,” he emphasizes. “So now the app and the geolocation features [of the app] works also if you refuse to send the data to advertisers.”

This is especially interesting in light of the ‘forced consent’ complaints filed against tech giants Facebook and Google earlier this year.

These complaints argue the companies should (but currently do not) offer an opt-out of targeted advertising, because behavioural ads are not strictly necessary for their core services (i.e. social networking, messaging, a smartphone platform etc).

Indeed, data gathering for such non-core service purposes should require an affirmative opt-in under GDPR. (An additional GDPR complaint against Android has also since attacked how consent is gathered, arguing it’s manipulative and deceptive.)

Asked whether, based on his experience working with the CNIL to achieve GDPR compliance, it seems fair that a small adtech firm like Fidzup has had to offer an opt-out when a tech giant like Facebook seemingly doesn’t, Magnan-Saurin tells TechCrunch: “I’m not a lawyer but based on what the CNIL asked us to be in compliance with the GDPR law I’m not sure that what I see on Facebook as a user is 100% GDPR compliant.”

“It’s better than one year ago but [I’m still not sure],” he adds. “Again it’s only my feeling as a user, based on the experience I have with the French CNIL and the GDPR law.”

Facebook of course maintains its approach is 100% GDPR compliant.

Even as data privacy experts aren’t so sure.

One thing is clear: If the tech giant was forced to offer an opt out for data processing for ads it would clearly take a big chunk out of its business — as a sub-set of users would undoubtedly say no to Zuckerberg’s “ads”. (And if European Facebook users got an ads opt out you can bet Americans would very soon and very loudly demand the same, so…)

Bridging the privacy gap

In Fidzup’s case, complying with GDPR has had a major impact on its business because offering a genuine choice means it’s not always able to obtain consent. Magnan-Saurin says there is essentially now a limit on the number of device users advertisers can reach because not everyone opts in for ads.

Although, since it’s been using the new CMP, he says a majority are still opting in (or, at least, this is the case so far) — showing one consent chart report with a ~70:30 opt-in rate, for example.

He expresses the change like this: “No one in the world can say okay I have 100% of the smartphones in my data base because the consent collection is more complete. No one in the world, even Facebook or Google, could say okay, 100% of the smartphones are okay to collect from them geolocation data. That’s a huge change.”

“Before that there was a race to the higher reach. The biggest number of smartphones in your database,” he continues. “Today that’s not the point.”

Now he says the point for adtech businesses with EU users is figuring out how to extrapolate from the percentage of user data they can (legally) collect to the 100% they can’t.

And that’s what Fidzup has been working on this year, developing machine learning algorithms to try to bridge the data gap so it can still offer its retail partners accurate predictions for tracking ad to store conversions.

“We have algorithms based on the few thousand stores that we equip, based on the few hundred mobile advertising campaigns that we have run, and we can understand for a store in London in… sports, fashion, for example, how many visits we can expect from the campaign based on what we can measure with the right consent,” he says. “That’s the first and main change in our market; the quantity of data that we can get in our database.”

“Now the challenge is to be as accurate as we can be without having 100% of real data — with the consent, and the real picture,” he adds. “The accuracy is less… but not that much. We have a very, very high standard of quality on that… So now we can assure the retailers that with our machine learning system they have nearly the same quality as they had before.

“Of course it’s not exactly the same… but it’s very close.”
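The kind of extrapolation described, estimating total store visits from the consented subset, can be illustrated with a toy calculation. The linear scaling and the numbers below are assumptions for illustration only; Fidzup says it uses machine learning models trained on its own store and campaign data, which would also have to correct for bias in who opts in:

```python
# Toy illustration: estimate total ad-to-store visits from the
# consented (measurable) subset. Naive linear scaling is an
# assumption here; real systems would model opt-in bias too.
def estimate_total_visits(measured_visits: int, opt_in_rate: float) -> float:
    if not 0 < opt_in_rate <= 1:
        raise ValueError("opt-in rate must be in (0, 1]")
    return measured_visits / opt_in_rate

# With a ~70% opt-in rate (the ballpark mentioned above) and 350
# measured visits, the naive estimate is ~500 total visits.
print(round(estimate_total_visits(350, 0.7)))
```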

Having a CMP that’s had regulatory ‘sign-off’, as it were, is something Fidzup is also now hoping to turn into a new bit of additional business.

“The second change is more like an opportunity,” he suggests. “All the work that we have done with CNIL and our publishers we have transferred it to a new product, a CMP, and we offer today to all the publishers who ask to use our consent management platform. So for us it’s a new product — we didn’t have it before. And today we are the only — to my knowledge — the only company and the only CMP validated by the CNIL and GDPR compliant so that’s useful for all the publishers in the world.”

It’s not currently charging publishers to use the CMP but will be seeing whether it can turn it into a paid product early next year.

How then, after months of compliance work, does Fidzup feel about GDPR? Does it believe the regulation is making life harder for startups vs tech giants, as certain lobby groups have suggested in claiming the law risks entrenching the dominance of better-resourced tech giants? Or does Magnan-Saurin see any opportunities?

In Magnan-Saurin’s view, six months into GDPR, European startups are at an R&D disadvantage vs tech giants because U.S. companies like Facebook and Google are not (yet) subject to a similarly comprehensive privacy regulation at home, so it’s easier for them to bag up user data for whatever purpose they like.

Though it’s also true that U.S. lawmakers are now paying earnest attention to the privacy policy area at a federal level. (And Google’s CEO faced a number of tough questions from Congress on that front just this week.)

“The fact is Facebook-Google they own like 90% of the revenue in mobile advertising in the world. And they are American. So basically they can do all their research and development on, for example, American users without any GDPR regulation,” he says. “And then apply a pattern of GDPR compliance and apply the new product, the new algorithm, everywhere in the world.

“As a European startup I can’t do that. Because I’m a European. So once I begin the research and development I have to be GDPR compliant so it’s going to be longer for Fidzup to develop the same thing as an American… But now we can see that GDPR might be beginning a ‘world thing’ — and maybe Facebook and Google will apply the GDPR compliance everywhere in the world. Could be. But it’s their own choice. Which means, for the example of the R&D, they could do their own research without applying the law because for now U.S. doesn’t care about the GDPR law, so you’re not outlawed if you do R&D without applying GDPR in the U.S. That’s the main difference.”

He suggests some European startups might relocate R&D efforts outside the region to try to workaround the legal complexity around privacy.

“If the law is meant to bring the big players to better compliance with privacy I think — yes, maybe it goes in this way. But the first to suffer is the European companies, and it becomes an asset for the U.S. and maybe the Chinese… companies because they can be quicker in their innovation cycles,” he suggests. “That’s a fact. So what could happen is maybe investors will not invest that much money in Europe than in U.S. or in China on the marketing, advertising data subject topics. Maybe even the French companies will put all the R&D in the U.S. and destroy some jobs in Europe because it’s too complicated to do research on that topics. Could be impacts. We don’t know yet.”

But GDPR enforcement has, perhaps inevitably, started small: so far a handful of warnings against relative data minnows rather than any swift action against the industry-dominating adtech giants. That is being felt as yet another inequality at the startup coalface.

“What’s sure is that the CNIL started to send warnings not to Google or Facebook but to startups. That’s what I can see,” he says. “Because maybe it’s easier to see I’m working on GDPR and everything but the fact is the law is not as complicated for Facebook and Google as it is for the small and European companies.”

What China searched for in 2018: World Cup, trade war, Apple

Soon after Google unveiled the top trends in what people searched for in 2018, Baidu published what captivated the Chinese in a parallel online universe, where most of the West’s mainstream tech services, including Google and Facebook, are inaccessible.

China’s top search engine put together the report “based on trillions of trending queries” to present a “social collective memory” of internet users, said Baidu; 802 million people have come online in China as of August, and many of them use Baidu to look things up daily.

Overall, Chinese internet users were transfixed by a mix of sports events, natural disasters, politics and entertainment, a pattern that also prevails in Google’s year-in-search. On Baidu, the most popular queries of the year are:

  1. World Cup: China shares its top search with the rest of the world. Despite China’s lackluster performance in the tournament, World Cup managed to capture a massive Chinese fan base who supported an array of foreign teams. People filled bars in big cities at night to watch the heart-thumping matches, and many even trekked north to Russia to show their support.
  2. U.S.-China trade war: The runner-up comes as no surprise, given the escalating conflict between the world’s two largest economies. A series of events have stoked more fears of the stand-off, including the arrest of Huawei’s financial chief.
  3. Typhoon Mangkhut: The massive tropical cyclone swept across the Pacific Ocean in September, leaving the Philippines and South China in shambles. Shenzhen, the Chinese city dubbed the Silicon Valley for hardware, reportedly submitted more than $20.4 million in damage claims after the storm.
  4. Apple launch: The American smartphone giant is still getting a lot of attention in China even as local Android competitors like Huawei and Oppo chip away at its market share. Apple is also fighting a legal battle with chipmaker Qualcomm, which wanted the former to stop selling certain smartphone models in China.
  5. The story of Yanxi Palace: The historical drama of backstabbing concubines drew record-breaking views for its streamer and producer iQiyi, China’s answer to Netflix that floated in the U.S. in February. The 70-episode show was watched not only in China but also across more than 70 countries around the world.
  6. Produce 101: The talent show in which 101 young women race to be the best performer is one of Tencent Video’s biggest hits of the year, but its reach has gone beyond its targeted young audience as it popularized a meme, which made it to No. 9 on this list.
  7. Skr: A buzzword courtesy of pop idol Kris Wu, who extensively used it on a whim during iQiyi’s rap competition “Rap of China,” prompting his fans and internet users to bestow it with myriad interpretations.
  8. Li Yong passed away: The sudden death of the much-loved television host after he fought a 17-month battle with cancer stirred an outpouring of grief on social media.
  9. Koi: A colored variety of carp, the fish is associated with good luck in Chinese culture. Yang Chaoyue, a Produce 101 contestant the audience believed to be below average, surprisingly rose to fame and has since been compared to a koi.
  10. Esports: Professional gaming has emerged from the underground to become a source of national pride after a Chinese team won the League of Legends finals, an event regarded as the Olympics of esports.

In addition to the overall ranking, Baidu also listed popular terms by category, with staple areas like domestic affairs alongside those with a local flavor, such as events that inspire national pride or are tear-jerking.

This was also the first year that Baidu added a category dedicated to AI-related keywords. The search giant, which itself has pivoted to go all in on AI and has invested heavily in autonomous driving, said the technology “has not only become a nationwide buzzword but also a key engine in transforming lives across the globe.” In 2018, Chinese people were keen to learn about these AI terms: robots, chips, internet of things, smart speakers, autonomous driving, face recognition, quantum computing, unmanned vehicles, World Artificial Intelligence Conference and quantum mechanics.

Prisma’s new AI-powered app, Lensa, helps the selfie camera lie

Prisma Labs, the startup behind the style transfer craze of a couple of years ago, has a new AI-powered iOS app for retouching selfies. An Android version of the app — which is called Lensa — is slated as coming in January.

Prisma bills Lensa as a “one-button Photoshop”, offering a curated suite of photo-editing features intended to enhance portrait photos — including teeth whitening; eyebrow tinting; ‘face retouch’ which smooths skin tone and texture (but claims to do so naturally); and ‘eye contrast’ which is supposed to make your eye color pop a bit more (but doesn’t seem to do too much if, like me, you’re naturally dark eyed).

There’s also a background blur option for adding a little bokeh to make your selfie stand out from whatever unattractive clutter you’re surrounded by — much like the portrait mode that Apple added to iOS two years ago.

Lensa can also correct for lens distortion, such as if a selfie has been snapped too close. “Our algorithm reconstructs face in 3D and fixes those disproportions,” is how it explains that.

The last slider on the app’s face menu offers this feature, letting you play around with making micro-adjustments to the 3D mesh underpinning your face. (Which feels as weird to see as it sounds to type.)

Of course there’s no shortage of other smartphone apps out there on stores — and/or baked right into smartphones’ native camera apps — offering to ‘beautify’ selfies.

But the push-button pull here is that Lensa automatically — and, it claims, professionally — performs AI-powered retouching of your selfie. So you don’t have to do any manual tweaking yourself (though you also can if you like).

If you just snap a selfie you’ll see an already enhanced version of you. Who said the camera never lies? Thanks AI…

Prisma Labs’ new app, Lensa, uses machine learning to automagically edit selfies

Lensa also lets you tweak visual parameters across the entire photo, as per a standard photo-editing app, via an ‘adjust’ menu — which (at launch) offers sliders for: Exposure, contrast, saturation, plus fade, sharpen; temperature, tint; highlights, shadows.

While Lensa is free to download, an in-app subscription (costing $4.99 per month) lets you get a bit more serious about editing your AI-enhanced selfies, unlocking the ability to adjust all those parameters across just the face, or just the background.

Prisma Labs says that might be useful if, for example, you want to fix an underexposed selfie shot against a brighter background.

“Lensa utilizes a bunch of Machine Learning algorithms to precisely extract face skin from the image and then retouching portraits like a professional artist,” is how it describes the app, adding: “The process is fully automated, but the user can set up an intensity level of the effect.”

The startup says it’s drawn on its eponymous style transfer app for Lensa’s machine learning as the majority of photos snapped and processed in Prisma are selfies — giving it a relevant heap of face data to train the photo-editing algorithms.

Having played around with Lensa I can say its natural looking instant edits are pretty seductive — in that it’s not immediately clear algorithmic fingers have gone in and done any polishing. At a glance you might just think oh, that’s a nice photo.

On closer inspection you can of course see the airbrushing that’s gone on, but the polish is applied with enough subtlety that it can pass as naturally pleasing.

And natural edits is one of the USPs Prisma Labs is claiming for Lensa. “Our mission is to allow people to edit a portrait but keep it looking natural,” it tells us. (The other key feature it touts is automation, so it’s selling the time you’ll save not having to manually tweak your selfies.)

Anyone who suffers from a chronic skin condition might view Lensa as a welcome tool/alternative to make-up in an age of the unrelenting selfies (when cameras that don’t lie can feel, well, exhausting).

But for those who object to AI stripping even skin-deep layers off of the onion of reality, Lensa’s subtle algorithmic fiddling might still come over as an affront.

This report was updated with a correction after Prisma told us it had decided to remove watermarks and ads from the free version of the app, so it is not necessary to pay for a subscription to remove them.

Watch Google CEO Sundar Pichai testify in Congress — on bias, China and more

Google CEO Sundar Pichai has managed to avoid the public political grillings that have come for tech leaders at Facebook and Twitter this year. But not today.

Today he will be in front of the House Judiciary committee for a hearing entitled: Transparency & Accountability: Examining Google and its Data Collection, Use and Filtering Practices.

The hearing kicks off at 10:00 ET — and will be streamed live via our YouTube channel (with the feed also embedded above in this post).

Announcing the hearing last month, committee chairman Bob Goodlatte said it would “examine potential bias and the need for greater transparency regarding the filtering practices of tech giant Google”.

Republicans have been pressuring the Silicon Valley giant over what they claim is ‘liberal bias’ embedded at the algorithmic level.

This summer President Trump publicly lashed out at Google, expressing displeasure about news search results for his name in a series of tweets in which he claimed: “Google & others are suppressing voices of Conservatives and hiding information and news that is good.”

Google rejected the allegation, responding then that: “Search is not used to set a political agenda and we don’t bias our results toward any political ideology.”

In his prepared remarks ahead of the hearing, Pichai reiterates this point.

“I lead this company without political bias and work to ensure that our products continue to operate that way. To do otherwise would go against our core principles and our business interests,” he writes. “We are a company that provides platforms for diverse perspectives and opinions—and we have no shortage of them among our own employees.”

He also seeks to paint a picture of Google as a proudly patriotic “American company” — playing up its role as a creator of local jobs and a bolster for the wider US economy, likely in the hopes of defusing some of the expected criticism from conservatives on the committee.

However his statement makes no mention of a separate controversy that’s been dogging Google this year — after news leaked this summer that it had developed a censored version of its search service for a potential relaunch in China.

The committee looks certain to question Google closely on its intentions vis-a-vis China.

In statements ahead of the hearing last month, House majority leader, Kevin McCarthy, flagged up reports he said suggested Google is “compromising its core principles by complying with repressive censorship mandates from China”.

Trust in general is a key theme, with lawmakers expressing frustration at both the opacity of Google’s blackbox algorithms, which ultimately shape content hierarchies on its platforms, and the difficulty they’ve had in getting facetime with its CEO to voice questions and concerns.

At a Senate Intelligence committee hearing three months ago, which was attended by Twitter CEO Jack Dorsey and Facebook COO Sheryl Sandberg, senators did not hide their anger that Pichai had turned down their invitation — openly ripping into company leaders for not bothering to show up. (Google offered to send its chief legal officer instead.)

“For months, House Republicans have called for greater transparency and openness from Google. Company CEO Sundar Pichai met with House Republicans in September to answer some of our questions. Mr. Pichai’s scheduled appearance in front of the House Judiciary Committee is another important step to restoring public trust in Google and all the companies that shape the Internet,” McCarthy wrote last month.

Other recent news that could inform additional questions for Pichai from the committee include the revelation of yet another massive security breach at Google+; and a New York Times investigation of how mobile apps are location-tracking users — with far more Android apps found to contain location-sharing code than iOS apps.

Krisp reduces noise on calls using machine learning, and it’s coming to Windows soon

If your luck is anything like mine, as soon as you jump on an important call, someone decides it’s a great time to blow some leaves off the sidewalk outside your window. 2Hz’s Krisp is a new desktop app that uses machine learning to subtract background noise like that, or crowds, or even crying kids — while keeping your voice intact. It’s already out for Macs and it’s coming to Windows soon.

I met the creators of Krisp, including 2Hz co-founder Davit Baghdasaryan, earlier this year at UC Berkeley’s Skydeck accelerator, where they demonstrated their then-prototype tech.

The tech involved is complex, but the idea is simple: If you create a machine learning system that understands what the human voice sounds like, on average, then it can listen to an audio signal and select only that part of it, cutting out a great deal of background noise.
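A crude classical cousin of that idea is spectral gating: estimate the noise spectrum from a stretch of audio that contains no speech, then attenuate the frequency bins that don't rise above it. Krisp's DNN is proprietary and far more capable, but a numpy sketch of the simpler technique shows the shape of the problem:

```python
import numpy as np

def spectral_gate(signal, frame=512, hop=256, noise_frames=10, floor=0.1):
    """Suppress stationary background noise by masking STFT bins.

    A hand-tuned stand-in for the learned voice model described above:
    estimate the noise spectrum from the first few frames, then attenuate
    bins that don't rise well above it. (Illustrative only; Krisp's
    actual DNN is not public.)
    """
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame, hop)]
    spec = np.array([np.fft.rfft(f) for f in frames])
    noise_mag = np.abs(spec[:noise_frames]).mean(axis=0)  # noise estimate
    mag, phase = np.abs(spec), np.angle(spec)
    # Bins near the noise floor get scaled down to `floor`; strong
    # (presumably voiced) bins pass through nearly untouched.
    mask = np.clip((mag - 2.0 * noise_mag) / (mag + 1e-9), floor, 1.0)
    cleaned = mask * mag * np.exp(1j * phase)
    out = np.zeros(len(signal))
    for idx, f in enumerate(cleaned):  # overlap-add resynthesis
        start = idx * hop
        out[start:start + frame] += np.fft.irfft(f, n=frame) * window
    return out
```

A learned model replaces the hand-tuned threshold with a per-bin mask predicted from what human voices, on average, sound like, which is what lets it cut non-stationary noise like leaf blowers and crying kids rather than just a steady hiss.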

Baghdasaryan, formerly of Twilio, originally wanted to create something that would run on mobile networks, so T-Mobile or whoever could tout built-in noise cancellation. This platform approach proved too slow, however, so they decided to go straight to consumers.

“Traction with customers was slow, and this was a problem for a young startup,” Baghdasaryan said in an email later. “However, people were loving the idea of ‘muting noise,’ so we decided to switch all our focus and build a user-facing product.”

That was around the time I talked with them in person, incidentally, and just six months later they had released on Mac.

It’s simple: You run the app, and it modifies both the outgoing and incoming audio signals, with the normal noisy signal going in one end and a clean, voice-focused one coming out the other. Everything happens on-device and with very short latency (around 15 milliseconds), so there’s no cloud involved and nothing is ever sent to any server or even stored locally. The team is working on having the software adapt and learn on the fly, but it’s not implemented yet.

Another benefit of this approach is it doesn’t need any special tweaking to work with, say, Skype instead of Webex. Because it works at the level of the OS’s sound processing, whatever app you use just hears the Krisp-modified signal as if it were clean out of your mic.

They launched on Mac because they felt the early-adopter type was more likely to be on Apple’s platform, and the bet seems to have paid off. But a Windows version is coming soon — the exact date isn’t set, but expect it either late this month or early January. (We’ll let you know when it’s live.)

It should be more or less identical to the Mac version, but there will be a special gaming-focused one. Gamers, Baghdasaryan pointed out, are much more likely to have GPUs to run Krisp on, and also have a real need for clear communication (as a PUBG player I can speak to the annoyance of an open mic and clacky keys). So there will likely be a few power-user features specific to gamers, but it’s not set in stone yet.

You may wonder, as I did, why they weren’t going after chip manufacturers, perhaps to include Krisp as a tech built into a phone or computer’s audio processor.

In person, they suggested that this ultimately was also too slow and restrictive. Meanwhile, they saw that there was no real competition in the software space, which is massively easier to enter.

“All current noise cancellation solutions require multiple microphones and a special form factor where the mouth must be close to one of the mics. We have no such requirement,” Baghdasaryan explained. “We can do it with single-mic or operate on an audio stream coming from the network. This makes it possible to run the software in any environment you want (edge or network) and any direction (inbound or outbound).”

If you’re curious about the technical side of things — how it was done with one mic, or at low latency, and so on — there’s a nice explanation Baghdasaryan wrote for the Nvidia blog a little while back.

Furthermore, the proliferation of AI-focused chips on which Krisp can run means easy entry to the mobile and embedded space. “We have already successfully ported our DNN to NVIDIA GPUs, Intel CPU/GNA, and ARM. Qualcomm is in the pipeline,” noted Baghdasaryan.

To pursue this work the company has raised a total of $2 million so far: $500K from Skydeck as well as friends and family in a pre-seed round, then a $1.5M round led by Sierra Ventures and Shanda Group.

Expect the Windows release later this winter, and if you’re already a user, expect a few new features to come your way in the same time scale. You can download Krisp for free here.


Qualcomm announces the Snapdragon 855 and its new under-display fingerprint sensor

Posted by | 5g, artificial intelligence, Gadgets, gigabit, hardware, Mobile, Qualcomm, snapdragon, system on a chip | No Comments

This week, Qualcomm is hosting press and analysts on Maui for its annual Snapdragon Summit. Sadly, we’re not there, but a couple of weeks ago, Qualcomm gave us a preview of the news. There’ll be three days of news and the company decided to start with a focus on 5G, as well as a preview of its new Snapdragon 855 mobile platform. In addition, the company announced its new ultrasonic fingerprint solution for sensors that can sit under the display.

It’ll probably still be a while before there’ll be a 5G tower in your neighborhood, but after years of buzz, it’s fair to say that we’re now getting to the point where 5G is becoming real. Indeed, AT&T and Verizon are showing off live 5G networks on Maui this week. Qualcomm described its event as the “coming out party for 5G,” though I’m sure we’ll hear from plenty of other players who will claim the same in the coming months.

In the short term, what’s maybe more interesting is that Qualcomm also announced its new flagship 855 mobile platform today. While the company didn’t release all of the details yet, it stressed that the 855 is “the world’s first commercial mobile platform supporting multi-gigabit 5G.”

The 855 also features a new multi-core AI engine that promises up to 3x better AI performance compared to its previous mobile platform, as well as specialized computer vision silicon for enhanced computational photography (think something akin to Google’s Night Sight) and video capture.

The company also briefly noted that the new platform has been optimized for gaming. The product name for this is “Snapdragon Elite Gaming,” but details remain sparse. Qualcomm also continues to bet on AR (or “extended reality” as the company brands it).

The last piece of news is likely the most interesting here. Fingerprint sensors are now standard, even on mid-market phones. With its new 3D Sonic Sensors, Qualcomm promises an enhanced ultrasonic fingerprint solution that can sit under the display. In part, this is a rebranding of Qualcomm’s existing under-display sensor, but there’s some new technology here, too. The promise here is that the scanner will work, even if the display is very dirty or if the user installs a screen protector. Chances are, we’ll see quite a few new flagship phones in the next few months (Mobile World Congress is coming up quickly, after all) that will feature these new fingerprint scanners.


Researchers use AI and 3D printing to recreate paintings from photographs

Posted by | 3d printing, artificial intelligence, Gadgets, TC | No Comments

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have created a system that can reproduce paintings from a single photo, allowing museums and art lovers to snap their favorite pictures and print new copies, complete with paint textures.

Called RePaint, the project uses machine learning to recreate the exact colors of each painting, then reproduces them with a high-end 3D printer. Conventional printers, limited to a small set of ink colors, struggle to capture the full spectrum of a Degas or a Dalí.

The researchers found a better way. They used a special technique they developed called “color-contoning,” which involves a 3D printer and 10 different transparent inks stacked in very thin layers, much like the wafers and chocolate in a Kit-Kat bar. They combined their method with a decades-old technique called “halftoning,” in which an image is built from tons of little ink dots rather than continuous tones. Combining these, the team says, better captured the nuances of the colors.
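Halftoning itself is well documented; the classic Floyd–Steinberg error-diffusion algorithm, sketched below, builds an image from binary dots while keeping the average tone intact. (RePaint's actual pipeline layers this kind of technique with its own color-contoning; this is just the textbook one-bit version for illustration.)

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic error-diffusion halftoning.

    Quantize each pixel of a grayscale image (values in [0, 1]) to a
    binary dot, then push the quantization error onto not-yet-visited
    neighbors so that local average tone is preserved.
    """
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # place (or skip) a dot
            out[y, x] = new
            err = old - new
            # Standard Floyd-Steinberg error weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out
```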

“If you just reproduce the color of a painting as it looks in the gallery, it might look different in your home,” said researcher Changil Kim. “Our system works under any lighting condition, which shows a far greater color reproduction capability than almost any other previous work.”

Sadly the prints are only about as big as a business card. The system also can’t yet support matte finishes and detailed surface textures, but the team is working on improving the algorithms and the 3D printing tech so you’ll finally be able to recreate that picture of dogs playing poker in 3D plastic.


Loro’s mounted wheelchair assistant puts high tech to work for people with disabilities

Posted by | accessibility, artificial intelligence, Battlefield, disrupt berlin 2018, events, Gadgets, hardware, Health, loro co, Startup Battlefield, Startup Battlefield Disrupt Berlin 2018, Startups, TC, TechCrunch Disrupt Berlin 2018 | No Comments

A person with physical disabilities can’t interact with the world the same way an able-bodied person can, but there’s no reason we can’t use tech to close that gap. Loro is a device that mounts to a wheelchair and offers its occupant the ability to see and interact with the people and things around them in powerful ways.

Loro’s camera and app work together to let the user see farther, read or translate writing, identify people, gesture with a laser pointer and more. They demonstrated their tech onstage today during Startup Battlefield at TechCrunch Disrupt Berlin.

Invented by a team of mostly students who gathered at Harvard’s Innovation Lab, Loro began as a simple camera for disabled people to more easily view their surroundings.

“We started this project for our friend Steve,” said Loro co-founder and creative director Johae Song. A designer like Song and others in their friend group, Steve was diagnosed with amyotrophic lateral sclerosis, or ALS, a degenerative neural disease that paralyzes the muscles of the afflicted. “So we decided to come up with ideas of how to help people with mobility challenges.”

“We started with just the idea of a camera attached to the wheelchair, to give people a panoramic view so they can navigate easily,” explained co-founder David Hojah. “We developed from that idea after talking with mentors and experts; we did a lot of iterations, and came up with the idea to be smarter, and now it’s this platform that can do all these things.”

It’s not simple to design responsibly for a population like ALS sufferers and others with motor problems. The problems they may have in everyday life aren’t necessarily what one would think, nor are the solutions always obvious. So the Loro team determined to consult many sources and expend a great deal of time in simple observation.

“Very basic observation — just sit and watch,” Hojah said. “From that you can get ideas of what people need without even asking them specific questions.”

Others would voice specific concerns without suggesting solutions; one such concern led to the flashlight the user can direct through the camera interface.

“People didn’t say, ‘I want a flashlight,’ they said ‘I can’t get around in the dark.’ So we brainstormed and came up with the flashlight,” he said. An obvious solution in some ways, but only through observation and understanding can it be implemented well.

The focus is always on communication and independence, Song said, and users are the ones who determine what gets included.

“We brainstorm together and then go out and user test. We realize some features work, others don’t. We try to just let them play with it and see what features people use the most.”

There are assistive devices for motor-impaired people out there already, Song and Hojah acknowledged, but they’re generally expensive, unwieldy and poorly designed. Hojah’s background is in medical device design, so he knows of what he speaks.

Consequently, Loro has been designed to be as accessible as possible, with a tablet interface that can be navigated using gaze tracking (via a Tobii camera setup) or other inputs like joysticks and sip-and-puff tubes.

The camera can be directed to, for example, look behind the wheelchair so the user can safely back up. Or it can zoom in on a menu that’s difficult to see from the user’s perspective and read the items off. The laser pointer allows a user with no ability to point or gesture to signal in ways we take for granted, such as choosing a pastry from a case. Text to speech is built right in, so users don’t have to use a separate app to speak out loud.

The camera also tracks faces and can recognize them from a personal (though for now, cloud-hosted) database for people who need help tracking those with whom they interact. The best of us can lose a name or fail to place a face — honestly, I wouldn’t mind having a Loro on my shoulder during some of our events.


Right now the team is focused on finalizing the hardware; the app and its capabilities are mostly complete, but the enclosure and so on need to be made production-ready. The company itself is very early-stage — it incorporated just a few months ago and worked with $100,000 in pre-seed funding to create the prototype. Next up is a seed round to get ready to manufacture.

“The whole team, we’re really passionate about empowering these people to be really independent, not just waiting for help from others,” Hojah said. Their driving force, he made clear, is compassion.

 


That night, a forest flew: DroneSeed is planting trees from the air

Posted by | artificial intelligence, Computer Vision, drones, Gadgets, GreenTech, hardware, robotics, science, Startups, TC, UAVs | No Comments

Wildfires are consuming our forests and grasslands faster than we can replace them. It’s a vicious cycle of destruction and inadequate restoration rooted, so to speak, in decades of neglect of the institutions and technologies needed to keep these environments healthy.

DroneSeed is a Seattle-based startup that aims to combat this growing problem with a modern toolkit that scales: drones, artificial intelligence and biological engineering. And it’s even more complicated than it sounds.

Trees in decline

A bit of background first. The problem of disappearing forests is a complex one, but it boils down to a few major factors: climate change, outdated methods and shrinking budgets (and as you can imagine, all three are related).

Forest fires are a natural occurrence, of course. And they’re necessary, as you’ve likely read, to sort of clear the deck for new growth to take hold. But climate change, monoculture growth, population increases, lack of control burns and other factors have led to these events taking place not just more often, but more extensively and to more permanent effect.

On average, the U.S. is losing 7 million acres a year. That’s not easy to replace to begin with — and as budgets for the likes of national and state forest upkeep have shrunk continually over the last half century, there have been fewer and fewer resources with which to combat this trend.

The most effective and common reforestation technique for a recently burned woodland is human planters carrying sacks of seedlings and manually selecting and placing them across miles of landscapes. This back-breaking work is rarely done by anyone for more than a year or two, so labor is scarce and turnover is intense.

Even if the labor was available on tap, the trees might not be. Seedlings take time to grow in nurseries and a major wildfire might necessitate the purchase and planting of millions of new trees. It’s impossible for nurseries to anticipate this demand, and the risk associated with growing such numbers on speculation is more than many can afford. One missed guess could put the whole operation underwater.

Meanwhile, if nothing gets planted, invasive weeds move in with a vengeance, claiming huge areas that were once old growth forests. Lacking the labor and tree inventory to stem this possibility, forest keepers resort to a stopgap measure: use helicopters to drench the area in herbicides to kill weeds, then saturate it with fast-growing cheatgrass or the like. (The alternative to spraying is, again, the manual approach: machetes.)

At least then, in a year, instead of a weedy wasteland, you have a grassy monoculture — not a forest, but it’ll do until the forest gets here.

One final complication: helicopter spraying is a horrendously dangerous profession. These pilots fly at sub-100-foot elevations, performing high-speed maneuvers so that their sprays reach the very edge of burn zones without crashing head-on into the trees. In the U.S. alone, 80 to 100 such crashes occur every year.

In short, there are more and worse fires and we have fewer resources — and dated ones at that — with which to restore forests after them.

These are facts anyone in forest ecology or logging is familiar with, but perhaps not as well known among technologists. We do tend to stay in areas with cell coverage. But it turns out that a boost from the cloistered knowledge workers of the tech world — specifically those in the Emerald City — may be exactly what the industry and ecosystem require.

Simple idea, complex solution

So what’s the solution to all this? Automation, right?

Automation, especially via robotics, is proverbially suited for jobs that are “dull, dirty, and dangerous.” Restoring a forest is dirty and dangerous to be sure. But dull isn’t quite right. It turns out that the process requires far more intelligence than anyone was willing, it seems, to apply to the problem — with the exception of those planters. That’s changing.

Earlier this year, DroneSeed was awarded the first multi-craft, over-55-pounds unmanned aerial vehicle license ever issued by the FAA. Its custom UAV platforms, equipped with multispectral camera arrays, high-end lidar, six-gallon tanks of herbicide and proprietary seed dispersal mechanisms have been hired by several major forest management companies, with government entities eyeing the service as well.

These drones scout a burned area, mapping it to as fine as centimeter accuracy, including objects and plant species; fumigate it efficiently and autonomously; identify where trees would grow best; then deploy painstakingly designed seed-nutrient packages to those locations. It’s cheaper than people, less wasteful and dangerous than helicopters and smart enough to scale to national forests currently at risk of permanent damage.

I met with the company’s team at their headquarters near Ballard, where complete and half-finished drones sat on top of their cases and the air was thick with capsaicin (we’ll get to that).

The idea for the company began when founder and CEO Grant Canary burned through a few sustainable startup ideas after his last company was acquired, and was told, in his despondency, that he might have to just go plant trees. Canary took his friend’s suggestion literally.

“I started looking into how it’s done today,” he told me. “It’s incredibly outdated. Even at the most sophisticated companies in the world, planters are superheroes that use bags and a shovel to plant trees. They’re being paid to move material over mountainous terrain and be a simple AI and determine where to plant trees where they will grow — microsites. We are now able to do both these functions with drones. This allows those same workers to address much larger areas faster without the caloric wear and tear.”

It may not surprise you to hear that investors are not especially hot on forest restoration (I joked that it was a “growth industry” but really because of the reasons above it’s in dire straits).

But investors are interested in automation, machine learning, drones and especially government contracts. So the pitch took that form. With the money DroneSeed secured, it has built its modestly sized but highly accomplished team and produced the prototype drones with which it has captured several significant contracts before even announcing that it exists.

“We definitely don’t fit the mold or metrics most startups are judged on. The nice thing about not fitting the mold is people double take and then get curious,” Canary said. “Once they see we can actually execute and have been with 3 of the 5 largest timber companies in the U.S. for years, they get excited and really start advocating hard for us.”

The company went through Techstars, and Social Capital helped them get on their feet, with Spero Ventures joining up after the company got some groundwork done.

If things go as DroneSeed hopes, these drones could be deployed all over the world by trained teams, allowing spraying and planting efforts in nurseries and natural forests to take place exponentially faster and more efficiently than they are today. It’s genuine change-the-world-from-your-garage stuff, which is why this article is so long.

Hunter (weed) killers

The job at hand isn’t simple or even straightforward. Every landscape differs from every other, not just in the shape and size of the area to be treated but the ecology, native species, soil type and acidity, type of fire or logging that cleared it and so on. So the first and most important task is to gather information.

For this, DroneSeed has a special craft equipped with a sophisticated imaging stack. This first pass is done using waypoints set on satellite imagery.

The information collected at this point is far more detailed than what’s strictly necessary. The lidar, for instance, captures spatial information at a resolution well beyond what’s required to understand the shape of the terrain and major obstacles. It produces a 3D map of the vegetation as well as the terrain, allowing the system to identify stumps, roots, bushes, new trees, erosion and other important features.

This works hand in hand with the multispectral camera, which collects imagery not just in the visible bands — useful for identifying things — but also in those outside the human range, which allows for in-depth analysis of the soil and plant life.
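One standard example of what those extra bands buy you is the Normalized Difference Vegetation Index (NDVI), which contrasts near-infrared and red reflectance to flag healthy plant life. DroneSeed hasn't disclosed its exact analysis, so treat this as a generic illustration of multispectral processing, not the company's method:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index.

    Healthy vegetation reflects strongly in near-infrared and absorbs
    red light, so NDVI values near +1 indicate dense plant life while
    values near 0 suggest bare soil or rock. A textbook use of bands
    outside the human visual range.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero
```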

The resulting map of the area is not just useful for drone navigation, but for the surgical strikes that are necessary to make this kind of drone-based operation worth doing in the first place. No doubt there are researchers who would love to have this data as well.

Now, spraying and planting are very different tasks. The first tends to be done indiscriminately using helicopters, and the second by laborers who burn out after a couple of years — as mentioned above, it’s incredibly difficult work. The challenge in the first case is to improve efficiency and efficacy; in the second, it is to automate something that requires considerable intelligence.

Spraying is in many ways simpler. Identifying invasive plants isn’t easy, exactly, but it can be done with imagery like that the drones are collecting. Having identified patches of a plant to be eliminated, the drones can calculate a path and expend only as much herbicide as is necessary to kill them, instead of dumping hundreds of gallons indiscriminately on the entire area. It’s cheaper and more environmentally friendly. Naturally, the opposite approach could be used for distributing fertilizer or some other agent.
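As a rough illustration of the budgeting step (with made-up dose numbers, not DroneSeed's), one could label contiguous weed patches in a classified grid with a flood fill and compute the herbicide each patch needs:

```python
import numpy as np

def herbicide_per_patch(weed_mask, cell_area_m2, dose_l_per_m2):
    """Return liters of herbicide needed for each contiguous weed patch.

    `weed_mask` is a boolean grid from image classification; patches are
    found with a 4-connected flood fill. Dose and cell size are
    hypothetical parameters for illustration.
    """
    mask = weed_mask.copy()  # don't mutate the caller's grid
    h, w = mask.shape
    patches = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx]:
                continue
            # Flood-fill one patch, counting its cells.
            stack, cells = [(sy, sx)], 0
            mask[sy, sx] = False
            while stack:
                y, x = stack.pop()
                cells += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                        mask[ny, nx] = False
                        stack.append((ny, nx))
            patches.append(cells * cell_area_m2 * dose_l_per_m2)
    return patches
```

Summing the list gives the total payload to load for a sortie, which is how targeted spraying beats dumping hundreds of gallons over the whole site.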

I’m making it sound easy again. This isn’t a plug and play situation — you can’t buy a DJI drone and hit the “weedkiller” option in its control software. A big part of this operation was the creation not only of the drones themselves, but the infrastructure with which to deploy them.

Conservation convoy

The drones themselves are unique, but not alarmingly so. They’re heavy-duty craft, capable of lifting well over the 57 pounds of payload they carry (the FAA limits them to 115 pounds).

“We buy and gut aircraft, then retrofit them,” Canary explained simply. Their head of hardware would probably like to think there’s a bit more to it than that, but really the problem they’re solving isn’t “make a drone” but “make drones plant trees.” To that end, Canary explained, “the most unique engineering challenge was building a planting module for the drone that functions with the software.” We’ll get to that later.

DroneSeed deploys drones in swarms, which means as many as five drones in the air at once — which in turn means they need two trucks and trailers with their boxes, power supplies, ground stations and so on. The company’s VP of operations comes from a military background where managing multiple aircraft onsite was part of the job, and she’s brought her rigorous command of multi-aircraft environments to the company.

The drones take off and fly autonomously, but always under direct observation by the crew. If anything goes wrong, they’re there to take over, though of course there are plenty of autonomous behaviors for what to do in case of, say, a lost positioning signal or bird strike.

They fly in patterns calculated ahead of time to be the most efficient, spraying at problem areas when they’re over them, and returning to the ground stations to have power supplies swapped out before returning to the pattern. It’s key to get this process down pat, since efficiency is a major selling point. If a helicopter does it in a day, why shouldn’t a drone swarm? It would be sad if they had to truck the craft back to a hangar and recharge them every hour or two. It also increases logistics costs like gas and lodging if it takes more time and driving.

This means the team involves several people, as well as several drones. Qualified pilots and observers are needed, as well as people familiar with the hardware and software that can maintain and troubleshoot on site — usually with no cell signal or other support. Like many other forms of automation, this one brings its own new job opportunities to the table.

AI plays Mother Nature

The actual planting process is deceptively complex.

The idea of loading up a drone with seeds and setting it free on a blasted landscape is easy enough to picture. Hell, it’s been done. There are efforts going back decades to essentially load seeds or seedlings into guns and fire them out into the landscape at speeds high enough to bury them in the dirt: in theory this combines the benefits of manual planting with the scale of carpeting the place with seeds.

But whether it was slapdash placement or the shock of being fired out of a seed gun, this approach never seemed to work.

Forestry researchers have shown the effectiveness of finding the right “microsite” for a seed or seedling; in fact, it’s why manual planting works as well as it does. Trained humans find perfect spots to put seedlings: in the lee of a log; near but not too near the edge of a stream; on the flattest part of a slope, and so on. If you really want a forest to grow, you need optimal placement, perfect conditions and preventative surgical strikes with pesticides.

Although it’s difficult, it’s also the kind of thing that a machine learning model can become good at. Sorting through messy, complex imagery and finding local minima and maxima is a specialty of today’s ML systems, and the aerial imagery from the drones is rich in relevant data.

The company’s CTO led the creation of an ML model that determines the best locations to put trees at a site — though this task can be highly variable depending on the needs of the forest. A logging company might want a tree every couple of feet, even if that means putting them in sub-optimal conditions — but a few inches to the left or right may make all the difference. On the other hand, national forests may want more sparse deployments or specific species in certain locations to curb erosion or establish sustainable firebreaks.
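The company's model is undisclosed, but the selection step it feeds can be pictured as a greedy peak-picking pass over a suitability grid: take the best-scoring cell, suppress its neighborhood to enforce spacing, repeat. A hypothetical sketch:

```python
import numpy as np

def pick_microsites(score, n_sites, min_dist):
    """Greedy microsite selection over a per-cell suitability map.

    `score` is a 2D grid (imagine it derived from the lidar and
    multispectral maps); repeatedly take the best remaining cell, then
    suppress everything within `min_dist` cells so plantings keep their
    spacing. A dense logging layout uses a small `min_dist`; a sparse
    national-forest deployment, a large one. Purely illustrative.
    """
    s = score.astype(float).copy()
    h, w = s.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sites = []
    for _ in range(n_sites):
        flat = np.argmax(s)
        y, x = divmod(flat, w)
        if s[y, x] == -np.inf:
            break  # no eligible cells left
        sites.append((y, x))
        # Knock out the neighborhood to enforce minimum spacing.
        s[(yy - y) ** 2 + (xx - x) ** 2 < min_dist ** 2] = -np.inf
    return sites
```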

Once the data has been crunched, the map is loaded into the drones’ hive mind and the convoy goes to the location, where the craft are loaded with seeds instead of herbicides.

But not just any old seeds! You see, that’s one more wrinkle. If you just throw a sagebrush seed on the ground, even if it’s in the best spot in the world, it could easily be snatched up by an animal, roll or wash down to a nearby crevasse, or simply fail to find the right nutrients in time despite the planter’s best efforts.

That’s why DroneSeed’s head of Planting and his team have been working on a proprietary seed packet that they were unbelievably reticent to detail.

From what I could gather, they’ve put a ton of work into packaging the seeds into nutrient-packed little pucks held together with a biodegradable fiber. The outside is dusted with capsaicin, the chemical that makes spicy food spicy (and also what makes bear spray do what it does). If they hadn’t told me, I might have guessed, since the workshop area was hazy with it, leading us all to cough and tear up a little. If I were a marmot, I’d learn to avoid these things real fast.

The pucks, or “seed vessels,” can and must be customized for the location and purpose — you have to match the content and acidity of the soil, things like that. DroneSeed will have to make millions of these things, but it doesn’t plan to be the manufacturer.

Finally these pucks are loaded in a special puck-dispenser which, closely coordinating with the drone, spits one out at the exact moment and speed needed to put it within a few centimeters of the microsite.

All these factors should improve the survival rate of seedlings substantially. That means that the company’s methods will not only be more efficient, but more effective. Reforestation is a numbers game played at scale, and even slight improvements — and DroneSeed is promising more than that — are measured in square miles and millions of tons of biomass.

Proof of life

DroneSeed has already signed several big contracts for spraying, and planting is next. Unfortunately, the timing on their side meant they missed this year’s planting season, though by doing a few small sites and showing off the results, they’ll be in pole position for next year.

After demonstrating the effectiveness of the planting technique, the company expects to expand its business substantially. That’s the scaling part — again, not easy, but easier than hiring another couple thousand planters every year.

Ideally the hardware can be assigned to local teams that do the on-site work, producing loci of activity around major forests from which jobs can be deployed at large or small scales. A set of five or six drones does the work of one helicopter, roughly speaking, so depending on the volume requested by a company or forestry organization, you may need dozens on demand.

That’s all yet to be explored, but DroneSeed is confident that the industry will see the writing on the wall when it comes to the old methods, and identify them as a solution that fits the future.

If it sounds like I’m cheerleading for this company, that’s because I am. It’s not often in the world of tech startups that you find a group of people not just attempting to solve a serious problem — it’s common enough to find companies hitting this or that issue — but who have spent the time, gathered the expertise and really done the dirty, boots-on-the-ground work that needs to happen so it goes from great idea to real company.

That’s what I felt was the case with DroneSeed, and here’s hoping their work pays off — for their sake, sure, but mainly for ours.


Banuba raises $7M to supercharge any app or device with the ability to really see you

Posted by | artificial intelligence, augmented reality, Banuba, belarus, Europe, Mobile, neural network, Startups, TC | No Comments

Walking into the office of Viktor Prokopenya — which overlooks a central London park — you would perhaps be forgiven for missing the significance of this unassuming location, just south of Victoria Station in London. While giant firms battle globally to make augmented reality a “real industry,” this jovial businessman from Belarus is poised to launch a revolutionary new technology for just this space. This is the kind of technology some of the biggest companies in the world are snapping up right now, and yet, scuttling off to make me a coffee in the kitchen is someone who could be sitting on just such a company.

Whether or not its immediate future is obvious, AR clearly has one, if the amount of investment pouring into the space is anything to go by.

In 2016, AR and VR attracted $2.3 billion in investment (a 300 percent jump from 2015), and the combined market is expected to reach $108 billion by 2021, with 25 percent of that aimed at the AR sector. According to numerous forecasts, AR will overtake VR within five to 10 years.

Apple is clearly making headway in AR, having recently acquired AR lens company Akonia Holographics; with this month’s release of iOS 12, developers can fully utilize ARKit 2, no doubt prompting a new wave of camera-centric apps. This year Sequoia Capital China and SoftBank invested $50 million in AR camera app Snow. Samsung recently introduced its own version of the AR cloud, along with a partnership with Wacom that turns Samsung’s S-Pen into an augmented reality magic wand.

The IBM/Unity partnership, meanwhile, allows developers to integrate Watson cloud services such as visual recognition and speech-to-text into their Unity applications.

So there is no question that AR is becoming increasingly important, given the sheer amount of funding and M&A activity.

Joining the field is Prokopenya’s “Banuba” project. Although you can download a Snapchat-like app called “Banuba” from the App Store right now, underlying it is a whole suite of tools, of which Prokopenya is the founding investor; he is working closely with the founding team of AI/AR experts behind it to realize a very big vision.

The key to Banuba’s pitch is the idea that its technology could equip not only apps but even hardware devices with “vision.” This is a perfect marriage of both AI and AR. What if, for instance, Amazon’s Alexa couldn’t just hear you? What if it could see you and interpret your facial expressions or perhaps even your mood? That’s the tantalizing strategy at the heart of this growing company.

Better known for its consumer apps, which have effectively been testing its concepts in the consumer field for the last year, Banuba is about to move heavily into the world of developer tools with the release of its new Banuba 3.0 mobile SDK. (Available to download now from the App Store for iOS devices and the Google Play Store for Android.) The company has also now secured a further $7 million in funding from Larnabel Ventures, the fund of Russian entrepreneur Said Gutseriev, and Prokopenya’s VP Capital.

This move will take its total funding to $12 million. In the world of AR, this is like a Romulan warbird de-cloaking in a scene from Star Trek.

Banuba hopes its SDK will enable brands and apps to utilize 3D Face AR inside their own apps, meaning users can benefit from cutting-edge face motion tracking, facial analysis, skin smoothing and tone adjustment. Banuba’s SDK also enables app developers to offer background subtraction, which is similar to the “green screen” technology regularly used in movies and TV shows, enabling end users to create a range of AR scenarios. Thus, like magic, you can remove that unsightly office surrounding and place yourself on a beach in the Bahamas…
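
Under the hood, the final step of that green-screen-without-a-green-screen effect amounts to alpha compositing: once a segmentation model has produced a per-pixel “person” mask, the subject is blended onto the new background. Banuba hasn’t published its implementation, so the following is only a minimal NumPy sketch of the compositing step, with the mask assumed to come from some upstream segmentation model:

```python
import numpy as np

def composite(frame, mask, background):
    """Alpha-blend a subject onto a new background.

    frame, background: HxWx3 uint8 images; mask: HxW floats in [0, 1],
    where 1.0 marks the person and 0.0 the original background (the kind
    of soft mask a segmentation network would output).
    """
    alpha = mask[..., None]                        # broadcast over color channels
    blended = alpha * frame + (1.0 - alpha) * background
    return blended.astype(frame.dtype)

# Tiny demo: a uniform "person" composited onto a black background.
person = np.full((2, 2, 3), 200, dtype=np.uint8)
beach = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
result = composite(person, mask, beach)
print(result[..., 0])  # person pixels keep 200, background pixels become 0
```

A real pipeline runs this per frame, with the mask recomputed (or tracked) as the person moves; the soft mask edges are what keep hair and shoulders from looking cut out with scissors.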

Because Banuba’s technology equips devices with “vision,” meaning they can “see” human faces in 3D and, via neural networks, extract meaningful analysis such as age and gender, it can do things that other apps simply cannot. It can even monitor your heart rate via spectral analysis of the time-varying color tones in your face.
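
That heart-rate trick is a known technique called remote photoplethysmography (rPPG): blood flow causes tiny periodic color changes in facial skin, and the dominant frequency of that signal is the pulse. Banuba hasn’t disclosed its method, so here is only a minimal sketch of the spectral-analysis idea, fed a synthetic green-channel signal in place of real face pixels:

```python
import numpy as np

def estimate_heart_rate(green_means, fps, lo=0.7, hi=4.0):
    """Estimate heart rate (BPM) from per-frame mean green-channel values.

    Detrend the signal, take its FFT, and pick the dominant frequency in
    the plausible human heart-rate band (lo..hi Hz, i.e. 42..240 BPM).
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)       # restrict to heart-rate band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                      # Hz -> beats per minute

# Synthetic check: a 1.2 Hz (72 BPM) pulse sampled at 30 fps for 10 s.
np.random.seed(0)
fps, seconds, pulse_hz = 30, 10, 1.2
t = np.arange(fps * seconds) / fps
fake_green = 128 + 0.5 * np.sin(2 * np.pi * pulse_hz * t) \
             + 0.1 * np.random.randn(t.size)
print(round(estimate_heart_rate(fake_green, fps)))  # 72
```

In practice the per-frame green means would come from a tracked face region, and production systems add filtering and motion compensation, but the core “spectral analysis of time-varying color tones” is exactly this frequency-peak search.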

It has already been incorporated into an app called Facemetrix, which can track a child’s eyes to ascertain whether they are reading something on a phone or tablet. Thanks to this technology, it is possible not just to “track” a person’s gaze, but also to control a smartphone’s functions with a gaze. To that end, the SDK can detect micro-movements of the eye with subpixel accuracy in real time, and also detects certain points of the eye. The idea behind this is to “gamify” education, rewarding a child with games and entertainment apps once the Facemetrix app has duly checked that they really did read the e-book they told their parents they’d read.

If that makes you think of a parallel with a certain Black Mirror episode where a young girl is prevented from seeing certain things via a brain implant, then you wouldn’t be a million miles away. At least this is a more benign version…

Banuba’s SDK also includes “Avatar AR,” empowering developers to get creative with digital communication by giving users the ability to interact with — and create personalized — avatars using any iOS or Android device. Prokopenya says: “We are in the midst of a critical transformation between our existing smartphones and the future of AR devices, such as advanced glasses and lenses. Camera-centric apps have never been more important because of this.” He says that while developers using ARKit and ARCore are able to build experiences primarily for top-of-the-range smartphones, Banuba’s SDK can work on even low-range smartphones.

Why, after all, should users of Apple’s iPhone X be the only people to enjoy Animoji?

Banuba is also well positioned to take advantage of Facebook’s recent announcement that it is testing AR ads in the news feed, following trials that let businesses show off products within Messenger.

Banuba’s technology won’t simply be for fun apps, however. Inside two years, the company has filed 25 patent applications with the U.S. patent office, and six of those were processed in record time compared with the average. Its R&D center, staffed by 50 people and based in Minsk, is focused on developing a portfolio of technologies.

Interestingly, Belarus has become famous for AI and facial recognition technologies.

For instance, cast your mind back to early 2016, when Facebook bought Masquerade, a Minsk-based developer of a video filter app, MSQRD, which at one point was one of the most popular apps in the App Store. And in 2017, another Belarusian company, AIMatter, was acquired by Google, only months after raising $2 million. It too took an SDK approach, releasing a platform for real-time photo and video editing on mobile, dubbed Fabby. This was built upon a neural network-based AI platform. But Prokopenya has much bolder plans for Banuba.

In early 2017, he and Banuba launched a “technology-for-equity” program to enroll app developers and publishers across the world. This signed up Inventain, another startup from Belarus, to develop AR-based mobile games.

Prokopenya says the technologies associated with AR will be “leveraged by virtually every kind of app. Any app can recognize its user through the camera: male or female, age, ethnicity, level of stress, etc.” He says the app could then respond to the user in any number of ways. Literally, your apps could be watching you.

So, for instance, a fitness app could see how much weight you’d lost just by using the Banuba SDK to look at your face. Games apps could personalize the game based on what it knows about your face, such as reading your facial cues.

Back in his London office, overlooking a small park, Prokopenya waxes lyrical about the “incredible concentration of diversity, energy and opportunity” of London. “Living in London is fantastic,” he says. “The only thing I am upset about, however, is the uncertainty surrounding Brexit and what it might mean for business in the U.K. in the future.”

London may be great (and will always be), but sitting on his desk is a laptop with direct links back to Minsk, a place where the facial recognition technologies of the future are only now just emerging.
