machine learning

OpenAI Five crushes Dota2 world champs, and soon you can lose to it too

Posted by | artificial intelligence, Gaming, machine learning, OpenAI, science | No Comments

Dota2 is one of the most popular, and complex, online games in the world, but an AI has once again shown itself to supersede human skill. In matches over the weekend, OpenAI’s “Five” system defeated two pro teams soundly, and soon you’ll be able to test your own mettle against — or alongside — the ruthless agent.

In a blog post, OpenAI detailed how its game-playing agent has progressed from its younger self — it seems wrong to say previous version, since it really is the same extensive neural network as many months ago, but with much more training.

The version that played at Dota2’s premier tournament, The International, gets schooled by the new version 99 percent of the time. And it’s all down to more practice:

In total, the current version of OpenAI Five has consumed 800 petaflop/s-days and experienced about 45,000 years of Dota self-play over 10 realtime months (up from about 10,000 years over 1.5 realtime months as of The International), for an average of 250 years of simulated experience per day.

To the best of our knowledge, this is the first time an RL [reinforcement learning] agent has been trained using such a long-lived training run.

One is tempted to cry foul at a data center-spanning intelligence being allowed to train for 600 human lifespans. But really it’s more of a compliment to human cognition that we can accomplish the same thing with a handful of months or years, while still finding time to eat, sleep, socialize (well, some of us) and so on.

Dota2 is an intense and complex game with some rigid rules but a huge amount of fluidity, and representing it in a way that makes sense to a computer isn’t easy (which likely accounts partly for the volume of training required). Controlling five “heroes” at once on a large map with so much going on at any given time is enough to tax a team of five human brains. But teams work best when they’re acting as a single unit, which is more or less what Five was doing from the start. Rather than five heroes, it was more like five fingers of a hand to the AI.

Interestingly, OpenAI also recently discovered that Five is capable of playing cooperatively with humans as well as in competition. This was far from a sure thing — the whole system might have frozen up or misbehaved if it had a person in there gumming up the gears. But in fact it works pretty well.

You can watch the replays or get the pro commentary on the games if you want to hear exactly how the AI won (I’ve played but I’m far from good. I’m not even bad yet). I understand they had some interesting buy-back tactics and were very aggressive. Or, if you’re feeling masochistic, you can take on the AI yourself in a limited-time event later this week.

We’re launching OpenAI Five Arena, a public experiment where we’ll let anyone play OpenAI Five in both competitive and cooperative modes. We’d known that our 1v1 bot would be exploitable through clever strategies; we don’t know to what extent the same is true of OpenAI Five, but we’re excited to invite the community to help us find out!

Although a match against pros would mean all-out war using traditional tactics, low-stakes matches against curious players might reveal interesting patterns or exploits that the AI’s creators aren’t aware of. Results will be posted publicly, so be ready for that.

You’ll need to sign up ahead of time, though: The system will only be available to play from Thursday night at 6 PM to the very end of Sunday, Pacific time. They need to reserve the requisite amount of computing resources to run the thing, so sign up now if you want to be sure to get a spot.

OpenAI’s team writes that this is the last we’ll hear of this particular iteration of the system; it’s done competing (at least in tournaments) and will be described more thoroughly in a paper soon. They’ll continue to work in the Dota2 environment because it’s interesting, but what exactly the goals, means or limitations will be are yet to be announced.


This little translator gadget could be a traveling reporter’s best friend

Posted by | Crowdfunding, Gadgets, hardware, Kickstarter, machine learning, TC, Translation | No Comments

If you’re lucky enough to get to travel abroad, you know it’s getting easier and easier to use our phones and other gadgets to translate for us. So why not do so in a way that makes sense to you? This little gadget seeking funds on Kickstarter looks right up my alley, offering quick transcription and recording — plus music playback, like an iPod Shuffle with superpowers.

The ONE Mini is really not that complex of a device — a couple of microphones and a wireless board in tasteful packaging — but that combination allows for a lot of useful stuff to happen both offline and with its companion app.

You activate the device, and it starts recording and both translating and transcribing the audio via a cloud service as it goes (or later, if you choose). That right there is already super useful for a reporter like me — although you can always put your phone on the table during an interview, this is more discreet, and of course a short-turnaround translation is useful, as well.

Recordings are kept on the phone (no on-board memory, alas) and there’s an option for a cloud service, but that probably won’t be necessary, considering the compact size of these audio files. If you’re paranoid about security, this probably isn’t your jam, but for everyday stuff it should be just fine.

If you want to translate a conversation with someone whose language you don’t speak, you pick two of the 12 built-in languages in the app and then either pass the gadget back and forth or let it sit between you while you talk. The transcript will show on the phone and the ONE Mini can bleat out the translation in its little robotic voice.
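The company hasn’t published any API for this, so the following is only a sketch of the back-and-forth flow the description implies, written in Python with placeholder functions standing in for whatever cloud speech-to-text and translation services the device actually calls.

```python
# Hypothetical sketch of the ONE Mini's two-language conversation mode.
# `transcribe` and `translate` are placeholders, NOT the device's real API.

def transcribe(audio_chunk: bytes, language: str) -> str:
    # Placeholder: the real device sends audio to a cloud speech-to-text service.
    return "<transcript>"

def translate(text: str, source: str, target: str) -> str:
    # Placeholder: the real device calls a cloud translation service.
    return f"<{text} translated from {source} to {target}>"

def conversation_turn(audio_chunk: bytes, speaker_lang: str, listener_lang: str):
    """One pass of the loop: transcribe the speaker, translate for the listener."""
    original = transcribe(audio_chunk, speaker_lang)
    translated = translate(original, source=speaker_lang, target=listener_lang)
    return original, translated  # app shows the transcript, device speaks the translation

# As the gadget passes back and forth, the two selected languages simply swap roles:
# conversation_turn(chunk, "en", "zh"), then conversation_turn(chunk, "zh", "en"), and so on.
```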

Right now translation only works online, but I asked and offline support is in the plans for certain language pairs that have reliable two-way edge models, probably Mandarin-English and Korean-Japanese.

It has a headphone jack, too, which lets it act as a wireless playback device for the recordings or for your music, or to take calls using the nice onboard mics. It’s lightweight and has a little clip, so it’s probably better than connecting directly to your phone in many cases.

There’s also a 24/7 interpreter line that charges two bucks a minute, which I probably wouldn’t use. I think I would feel weird about it. But in an emergency it could be pretty helpful to have a panic button that sends you directly to a person who speaks both the languages you’ve selected.

I have to say, normally I wouldn’t highlight a random crowdfunded gadget, but I happen to have met the creator of this one, Wells Tu, at one of our events, and trust him and his team to actually deliver. The previous product he worked on was a pair of translating wireless earbuds that worked surprisingly well, so this isn’t their first time shipping a product in this category — that makes a lot of difference for a hardware startup. You can see it in action here:

He pointed out in an email to me that obviously wireless headphones are hot right now, but the translation functions aren’t good and battery life is short. This adds a lot of utility in a small package.

Right now you can score a ONE Mini for $79, which seems reasonable to me. They’ve already passed their goal and are planning on shipping in June, so it shouldn’t be a long wait.


MIT’s ‘cyber-agriculture’ optimizes basil flavors

Posted by | agriculture, artificial intelligence, food, Gadgets, GreenTech, hardware, hydroponics, machine learning, MIT, science | No Comments

The days when you could simply grow a basil plant from a seed by placing it on your windowsill and watering it regularly are gone — there’s no point now that machine learning-optimized hydroponic “cyber-agriculture” has produced a superior plant with more robust flavors. The future of pesto is here.

This research didn’t come out of a desire to improve sauces, however. It’s a study from MIT’s Media Lab and the University of Texas at Austin aimed at understanding how to both improve and automate farming.

In the study, published today in PLOS ONE, the question being asked was whether a growing environment could find and execute a growing strategy that resulted in a given goal — in this case, basil with stronger flavors.

Such a task is one with numerous variables to modify — soil type, plant characteristics, watering frequency and volume, lighting and so on — and a measurable outcome: concentration of flavor-producing molecules. That means it’s a natural fit for a machine learning model, which from that variety of inputs can make a prediction as to which will produce the best output.

“We’re really interested in building networked tools that can take a plant’s experience, its phenotype, the set of stresses it encounters, and its genetics, and digitize that to allow us to understand the plant-environment interaction,” explained MIT’s Caleb Harper in a news release. The better you understand those interactions, the better you can design the plant’s lifecycle, perhaps increasing yield, improving flavor or reducing waste.

In this case the team limited the machine learning model to analyzing and switching up the type and duration of light experienced by the plants, with the goal of increasing flavor concentration.

A first round of nine plants had light regimens designed by hand based on prior knowledge of what basil generally likes. The plants were harvested and analyzed. Then a simple model was used to make similar but slightly tweaked regimens that took the results of the first round into account. Then a third, more sophisticated model was created from the data and given significantly more leeway in its ability to recommend changes to the environment.
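The study’s actual models and parameters are described in the paper; purely to make that round-by-round shape concrete, here is a minimal sketch of a surrogate-model loop, with invented variable names, that is not the researchers’ code.

```python
# Minimal sketch of the surrogate-model loop described above (not the study's code).
# Each "regimen" is a vector of light parameters; "flavor" is the measured
# concentration of flavor-producing molecules after harvest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_round(candidate_regimens, tried_regimens, measured_flavors, n_pick=9):
    """Fit a model on past rounds and pick the most promising regimens to grow next."""
    model = GaussianProcessRegressor().fit(np.array(tried_regimens), np.array(measured_flavors))
    predicted = model.predict(np.array(candidate_regimens))
    best = np.argsort(predicted)[-n_pick:]        # highest predicted flavor
    return [candidate_regimens[i] for i in best]  # grow these, measure, repeat
```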

To the researchers’ surprise, the model recommended a highly extreme measure: Keep the plant’s UV lights on 24/7.

Naturally this isn’t how basil grows in the wild, since, as you may know, there are few places where the sun shines all day long and all night strong. And the Arctic and Antarctic, while fascinating ecosystems, aren’t known for their flavorful herbs and spices.

Nevertheless, the “recipe” of keeping the lights on was followed (it was an experiment, after all), and incredibly, this produced a massive increase in flavor molecules, doubling the amount found in control plants.

“You couldn’t have discovered this any other way,” said co-author John de la Parra. “Unless you’re in Antarctica, there isn’t a 24-hour photoperiod to test in the real world. You had to have artificial circumstances in order to discover that.”

But while a more flavorful basil is a welcome result, it’s not really the point. The team is happier that the method yielded good data, validating the platform and software they used.

“You can see this paper as the opening shot for many different things that can be applied, and it’s an exhibition of the power of the tools that we’ve built so far,” said de la Parra. “With systems like ours, we can vastly increase the amount of knowledge that can be gained much more quickly.”

If we’re going to feed the world, it’s not going to be done with amber waves of grain, i.e. with traditional farming methods. Vertical, hydroponic, computer-optimized — we’ll need all these advances and more to bring food production into the 21st century.


Blind users can now explore photos by touch with Microsoft’s Seeing AI

Posted by | accessibility, Apps, artificial intelligence, augmented reality, Blindness, Computer Vision, Disabilities, machine learning, Microsoft, Mobile | No Comments

Microsoft’s Seeing AI is an app that lets blind and limited-vision folks convert visual data into audio feedback, and it just got a useful new feature. Users can now use touch to explore the objects and people in photos.

It’s powered by machine learning, of course, specifically object and scene recognition. All you need to do is take a photo or open one up in the viewer and tap anywhere on it.

“This new feature enables users to tap their finger to an image on a touch-screen to hear a description of objects within an image and the spatial relationship between them,” wrote Seeing AI lead Saqib Shaikh in a blog post. “The app can even describe the physical appearance of people and predict their mood.”

Because there’s facial recognition built in as well, you could very well take a picture of your friends and hear who’s doing what and where, and whether there’s a dog in the picture (important) and so on. This was possible on an image-wide scale already, as you can see in this image:

But the app now lets users tap around to find where objects are — obviously important to understanding the picture or recognizing it from before. Other details that may not have made it into the overall description may also appear on closer inspection, such as flowers in the foreground or a movie poster in the background.
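Microsoft hasn’t published the app’s internals, but the tap-to-explore idea is easy to picture: given detection results as labeled bounding boxes, map the tapped point to the object under it and describe roughly where it sits in the frame. A rough sketch, with invented box data:

```python
# Hypothetical sketch of tap-to-explore; not Microsoft's code.
# Detections would come from an object-recognition model; here they're hard-coded.
detections = [
    {"label": "dog",    "box": (0.05, 0.55, 0.35, 0.95)},  # (left, top, right, bottom), normalized
    {"label": "person", "box": (0.40, 0.10, 0.70, 0.90)},
]

def describe_tap(x, y, detections):
    """Return a spoken-style description for a tap at normalized (x, y)."""
    for d in detections:
        left, top, right, bottom = d["box"]
        if left <= x <= right and top <= y <= bottom:
            horiz = "left" if (left + right) / 2 < 0.5 else "right"
            return f"{d['label']}, toward the {horiz} of the photo"
    return "nothing recognized here"

print(describe_tap(0.2, 0.8, detections))  # "dog, toward the left of the photo"
```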

In addition to this, the app now natively supports the iPad, which is certainly going to be nice for the many people who use Apple’s tablets as their primary interface for media and interactions. Lastly, there are a few improvements to the interface so users can order things in the app to their preference.

Seeing AI is free — you can download it for iOS devices here.


Sam’s Club to test new Scan & Go system that uses computer vision instead of barcodes

Posted by | Apps, barcode, Computer Vision, e-commerce, eCommerce, machine learning, Mobile, mobile app, retail, retailers, sams club, shopping, TC, Walmart | No Comments

In October, Walmart-owned Sam’s Club opened a test store in Dallas where it planned to trial new technology, including mobile checkout, an Amazon Go-like camera system, in-store navigation, electronic shelf labels and more. This morning, the retailer announced it will now begin testing a revamped Scan & Go service as well, which leverages computer vision and machine learning to make mobile scanning easier and faster.

The current Scan & Go system, launched two years ago, requires Sam’s Club shoppers to locate the barcode on the item they’re buying and scan it using the Sam’s Club mobile app. The app allows shoppers to account for items they’re buying as they place them in their shopping cart, then pay in the app instead of standing in line at checkout.

Convenient as that is, the system can still be frustrating at times because you need to actually find the barcode on the item — often turning it over from one side to the other to locate the sticker or tag. That can be difficult with heavier items, and frustrating when the barcoded label or tag has fallen off.

It also can end up taking several seconds to complete — which adds up when you’re filling a cart with groceries during a big stocking-up trip.

The new scanning technology will instead use computer vision and ML (machine learning) to recognize products without scanning the barcode, cutting the time it takes for the app to identify the product in question, the retailer explains.
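Sam’s Club hasn’t described its model, but the usual pattern for barcode-free recognition is to embed the camera frame and match it against a catalog of reference embeddings. A toy sketch, with a color histogram standing in for the real CNN embedding a production system would use:

```python
# Toy sketch of barcode-free product recognition (not Sam's Club's system).
# A real implementation would use a CNN embedding; a color histogram stands in here.
import numpy as np

def embed(image_rgb: np.ndarray) -> np.ndarray:
    """Stand-in embedding: a normalized per-channel histogram of the image."""
    hist = np.concatenate([np.histogram(image_rgb[..., c], bins=16, range=(0, 255))[0]
                           for c in range(3)]).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-9)

# Catalog of known items, each with a precomputed reference embedding (random stand-ins here).
catalog = {"water-24pk":   embed(np.random.randint(0, 255, (64, 64, 3))),
           "paper-towels": embed(np.random.randint(0, 255, (64, 64, 3)))}

def identify(frame_rgb: np.ndarray) -> str:
    """Return the catalog item whose embedding is most similar to the camera frame."""
    query = embed(frame_rgb)
    return max(catalog, key=lambda sku: float(np.dot(query, catalog[sku])))
```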

In a video demo, Sam’s Club showed how it might take a typical shopper 9.3 seconds to scan a pack of water using the old system, versus 3.4 seconds using the newer technology.

Of course, the times will vary based on the shopper’s skill, the item being scanned and how well the technology performs, among other factors. A large package of water is a more extreme example, but one that demonstrates well the potential of the system… if it works.

The idea with the newly opened Dallas test store is to put new technology into practice quickly in a real-world environment, to see what performs well and what doesn’t, while also gathering customer feedback. Dallas was chosen as the location for the store because of the tech talent and recruiting potential in the area, and because it’s a short trip from Walmart’s Bentonville, Arkansas headquarters, the company said earlier.

Sam’s Club says it has filed a patent related to the new scanning technology, and will begin testing it this spring at the Dallas area “Sam’s Club Now” store. It will later expand the technology to the tools used by employees, too.


Koala-sensing drone helps keep tabs on drop bear numbers

Posted by | artificial intelligence, Australia, Computer Vision, conservation, drones, Gadgets, hardware, machine learning, science, TC, UAVs | No Comments

It’s obviously important to Australians to make sure their koala population is closely tracked — but how can you do so when the suckers live in forests and climb trees all the time? With drones and AI, of course.

A new project from Queensland University of Technology combines some well-known techniques in a new way to help keep an eye on wild populations of the famous and soft marsupials. They used a drone equipped with a heat-sensing camera, then ran the footage through a deep learning model trained to look for koala-like heat signatures.

It’s similar in some ways to an earlier project from QUT in which dugongs — endangered sea cows — were counted along the shore via aerial imagery and machine learning. But this is considerably harder.

A koala

“A seal on a beach is a very different thing to a koala in a tree,” said study co-author Grant Hamilton in a news release, perhaps choosing not to use dugongs as an example because comparatively few know what one is.

“The complexity is part of the science here, which is really exciting,” he continued. “This is not just somebody counting animals with a drone, we’ve managed to do it in a very complex environment.”

The team sent their drone out in the early morning, when they expected to see the greatest contrast between the temperature of the air (cool) and tree-bound koalas (warm and furry). It traveled as if it were a lawnmower trimming the tops of the trees, collecting data from a large area.

Infrared image, left, and output of the neural network highlighting areas of interest

This footage was then put through a deep learning system trained to recognize the size and intensity of the heat put out by a koala, while ignoring other objects and animals like cars and kangaroos.

For these initial tests, the accuracy of the system was checked by comparing the inferred koala locations with ground truth measurements provided by GPS units on some animals and radio tags on others. Turns out the system found about 86 percent of the koalas in a given area, considerably better than an “expert koala spotter,” who rates about 70 percent. Not only that, but it’s a whole lot quicker.
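That 86 percent figure is essentially recall against the tagged ground truth. As an illustration of how such a check works (not the team’s actual evaluation code, and the matching radius here is an arbitrary assumption), you can match each tagged animal to the nearest detection within some distance and count how many get found:

```python
# Illustrative recall check against tagged ground truth (not the QUT team's code).
import math

def recall(ground_truth, detections, max_dist_m=25.0):
    """Fraction of tagged koala locations with a detection within max_dist_m metres."""
    found = 0
    for gx, gy in ground_truth:
        if any(math.hypot(gx - dx, gy - dy) <= max_dist_m for dx, dy in detections):
            found += 1
    return found / len(ground_truth)

# e.g. 6 of 7 tagged koalas detected -> recall of about 0.86, in line with the reported figure
```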

“We cover in a couple of hours what it would take a human all day to do,” Hamilton said. But it won’t replace human spotters or ground teams. “There are places that people can’t go and there are places that drones can’t go. There are advantages and downsides to each one of these techniques, and we need to figure out the best way to put them all together. Koalas are facing extinction in large areas, and so are many other species, and there is no silver bullet.”

Having tested the system in one area of Queensland, the team is now going to head out and try it in other areas of the coast. They also plan to add other classifiers, so that other endangered or invasive species can be identified with similar ease.

Their paper was published today in Scientific Reports, a Nature Research journal.


Google will bring its Assistant to Android Messages

Posted by | allo, Android, Apps, artificial intelligence, Assistant, computing, Google, Google Allo, machine learning, messaging apps, Mobile, mobile software, mwc 2018, operating system, Software, technology | No Comments

It’s only been a few weeks since Google brought the Assistant to Google Maps to help you reply to messages, play music and more. This feature first launched in English and will soon start rolling out to all Assistant phone languages. In addition, Google also today announced that the Assistant will come to Android Messages, the standard text messaging app on Google’s mobile operating system, in the coming months.

If you remember Allo, Google’s last failed messaging app, then a lot of this will sound familiar. For Allo, after all, Assistant support was one of the marquee features. The difference, though, is that for the time being, Google is mostly using the Assistant as an additional layer of smarts in Messages, while in Allo, you could have full conversations with a special Assistant bot.

In Messages, the Assistant will automatically pop up suggestion chips when you are having conversations with somebody about movies, restaurants and the weather. That’s a pretty limited feature set for now, though Google tells us that it plans to expand it over time.

What’s important here is that the suggestions are generated on your phone (and that may be why the machine learning model is limited, too, since it has to run locally). Google is clearly aware that people don’t want the company to get any information about their private text chats. Once you tap on one of the Assistant suggestions, though, Google obviously knows that you were talking about a specific topic, even though the content of the conversation itself is never sent to Google’s servers. The person you are chatting with will only see the additional information when you push it to them.
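Google hasn’t said much about the on-device model itself, but the privacy trade-off described above is easy to picture even with a deliberately crude local classifier: topic detection runs entirely on the phone, and nothing about the conversation leaves the device until you tap a chip. A toy sketch, which is not Google’s model:

```python
# Deliberately crude sketch of on-device suggestion chips (not Google's model).
# Topic detection happens locally; a network request would be made only after a tap.
import string

TOPIC_KEYWORDS = {
    "movies":      {"movie", "film", "cinema", "showtimes"},
    "restaurants": {"dinner", "restaurant", "hungry", "lunch"},
    "weather":     {"weather", "rain", "sunny", "forecast"},
}

def suggest_chips(message: str):
    """Return Assistant topics whose keywords appear in the message (runs on the phone)."""
    words = {w.strip(string.punctuation) for w in message.lower().split()}
    return [topic for topic, keys in TOPIC_KEYWORDS.items() if words & keys]

print(suggest_chips("Want to grab dinner before the movie?"))  # ['movies', 'restaurants']
```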


IBM Research develops fingerprint sensor to monitor disease progression

Posted by | Gadgets, Health, IBM, machine learning, Parkinson's Disease, TC | No Comments

IBM today announced that it has developed a small sensor that sits on a person’s fingernail to help monitor the effectiveness of drugs used to combat the symptoms of Parkinson’s and other diseases. Together with the custom software that analyzes the data, the sensor measures how the nail warps as the user grips something. Because virtually any activity involves gripping objects, that creates a lot of data for the software to analyze.

Another way to get this data would be to attach a sensor to the skin and capture motion, as well as the health of muscles and nerves that way. The team notes that skin-based sensors can cause plenty of other problems, including infections, so it decided to look at using data from how a person’s fingernails bend instead.

For the most part, though, fingernails don’t bend all that much, so the sensor had to be rather sensitive. “It turns out that our fingernails deform — bend and move — in stereotypic ways when we use them for gripping, grasping, and even flexing and extending our fingers,” the researchers explain. “This deformation is usually on the order of single digit microns and not visible to the naked eye. However, it can easily be detected with strain gauge sensors. For context, a typical human hair is between 50 and 100 microns across and a red blood cell is usually less than 10 microns across.”

In its current version, the researchers glue the prototype to the nail. Because fingernails are pretty tough, there’s very little risk in doing so, especially when compared to a sensor that would sit on the skin. The sensor then talks to a smartwatch that runs machine learning models to detect tremors and other symptoms of Parkinson’s disease. That model can detect what a wearer is doing (opening a doorknob, using a screwdriver, etc.). The data and the model are accurate enough to track when wearers write digits with their fingers.
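IBM hasn’t published the smartwatch models, but one simple ingredient of a tremor detector over a strain signal is how much of the signal’s energy sits in the roughly 4-to-6 Hz band typical of Parkinsonian rest tremor. A minimal NumPy sketch, where the threshold is an invented placeholder rather than anything IBM has reported:

```python
# Minimal sketch of tremor detection from a strain-gauge signal (not IBM's model).
# Looks at spectral power in the ~4-6 Hz band typical of Parkinsonian rest tremor.
import numpy as np

def tremor_band_power(strain: np.ndarray, sample_rate_hz: float, band=(4.0, 6.0)) -> float:
    """Fraction of total signal power that falls inside the tremor band."""
    strain = strain - strain.mean()                      # drop the DC offset
    spectrum = np.abs(np.fft.rfft(strain)) ** 2
    freqs = np.fft.rfftfreq(len(strain), d=1.0 / sample_rate_hz)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[in_band].sum() / (spectrum.sum() + 1e-12))

def looks_like_tremor(strain, sample_rate_hz, threshold=0.4):
    # threshold is purely illustrative
    return tremor_band_power(strain, sample_rate_hz) > threshold
```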

Over time, the team hopes that it can extend this prototype and the models that analyze the data to recognize other diseases as well. There’s no word on when this sensor could make it onto the market, though.


K Health raises $25M for its AI-powered primary care platform

Posted by | 14w, a.i., AI, Apps, artificial intelligence, Bessemer Venture Partners, boxgroup, Comcast Ventures, Community, consumer, Crowdfunding, doctors, funding, Fundings & Exits, Health, health app, health apps, healthcare, K Health, lerer hippeau, machine learning, Mangrove Capital Partners, Mobile, primary ventures, Recent Funding, series B, Series B funding, Startups, TC, Venture Capital | No Comments

K Health, the startup providing consumers with an AI-powered primary care platform, has raised $25 million in Series B funding. The round was led by 14W, Comcast Ventures and Mangrove Capital Partners, with participation from Lerer Hippeau, BoxGroup and Max Ventures — all previous investors from the company’s seed or Series A rounds. Other previous investors include Primary Ventures and Bessemer Venture Partners.

Co-founded and led by former Vroom CEO and Wix co-CEO Allon Bloch, K Health (previously Kang Health) looks to equip consumers with a free and easy-to-use application that can provide accurate, personalized, data-driven information about their symptoms and health.

“When your child says their head hurts, you can play doctor for the first two questions or so — where does it hurt? How does it hurt?” Bloch explained in a conversation with TechCrunch. “Then it gets complex really quickly. Are they nauseous or vomiting? Did anything unusual happen? Did you come back from a trip somewhere? Doctors then use differential diagnosis to prove that it’s a tension headache versus other things by ruling out a whole list of chronic or unusual conditions based on their deep knowledge sets.”

K Health’s platform, which currently focuses on primary care, effectively looks to perform a simulated, data-driven version of the differential diagnosis process. On the company’s free mobile app, users spend three to four minutes answering an average of 21 questions about their background and the symptoms they’re experiencing.

Using a data set of two billion historical health events over the past 20 years — compiled from doctors’ notes, lab results, hospitalizations, drug statistics and outcome data — K Health is able to compare users to those with similar symptoms and medical histories before zeroing in on a diagnosis. 
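K Health hasn’t published its algorithm, but the “compare you to people like you” step can be pictured as a nearest-neighbour lookup over encoded questionnaire answers, with the neighbours’ recorded outcomes aggregated into a ranked list. A toy sketch with invented fields, not the company’s method:

```python
# Toy sketch of the compare-to-similar-cases idea (not K Health's algorithm).
# Each historical case is a 0/1 answer vector plus the diagnosis that was recorded.
from collections import Counter
import numpy as np

def rank_conditions(user_answers, case_vectors, case_diagnoses, k=100):
    """Rank diagnoses by how often they appear among the k most similar past cases."""
    X = np.array(case_vectors, dtype=float)
    u = np.array(user_answers, dtype=float)
    distances = np.linalg.norm(X - u, axis=1)
    nearest = np.argsort(distances)[:k]
    counts = Counter(case_diagnoses[i] for i in nearest)
    total = sum(counts.values())
    return [(dx, n / total) for dx, n in counts.most_common()]
```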

With its expansive comparative approach, the platform hopes to offer vastly more thorough, precise and user-specific diagnostic information than existing consumer alternatives, like WebMD or what Bloch calls “Dr. Google,” which often produce broad, downright frightening and inaccurate diagnoses.

Ease and efficiency for both consumers and physicians

Users are able to see cases and diagnoses that had symptoms similar to their own, with K Health notifying users with serious conditions when to consider seeking immediate care. (K Health Press Image / K Health / https://www.khealth.ai)

In addition to pure peace of mind, the utility provided to consumers is clear. With more accurate at-home diagnostic information, users are able to make better preventative health decisions, avoid costly and unnecessary trips to in-person care centers or appointments with telehealth providers and engage in constructive conversations with physicians when they do opt for in-person consultations.

K Health isn’t looking to replace doctors, and, in fact, believes its platform can unlock tremendous value for physicians and the broader healthcare system by enabling better resource allocation. 

Without access to quality, personalized medical information at home, many defer to in-person doctor visits even when it may not be necessary. And with around one primary care physician per 1,000 people in the U.S., primary care practitioners are subsequently faced with an overwhelming number of patients and are unable to focus on more complex cases that may require more time and resources. The high volume of patients also forces physicians to allocate budgets for support staff to help interact with patients, collect initial background information and perform less-demanding tasks.

K Health believes that by providing an accurate alternative for those with lighter or more trivial symptoms, it can help lower unnecessary in-person visits, reduce costs for practices and allow physicians to focus on complicated, rare or resource-intensive cases, where their expertise can be most useful and where brute machine processing power is less valuable.

The startup is looking to enhance the platform’s symbiotic patient-doctor benefits further in early 2019, when it plans to launch in-app capabilities that allow users to share their AI-driven health conversations directly with physicians, hopefully reducing time spent on information gathering and enabling more-informed treatment.

With K Health’s AI and machine learning capabilities, the platform also gets smarter with every conversation as it captures more outcomes, hopefully enriching the system and becoming more valuable to all parties over time. Initial results seem promising, with K Health currently boasting around 500,000 users, most having joined since this past July.

Using access and affordability to improve global health outcomes

With the latest round, the company has raised a total of $37.5 million since its late-2016 founding. K Health plans to use the capital to ramp up marketing efforts, further refine its product and technology and perform additional research to identify methods for earlier detection and areas outside of primary care where the platform may be valuable.

Longer term, the platform has much broader aspirations of driving better health outcomes, normalizing better preventative health behavior and creating more efficient and affordable global healthcare systems.

The high costs of the American healthcare system and the impacts they have on health behavior have been well documented. With heavy co-pays, premiums and treatment costs, many avoid primary care altogether or opt for more reactionary treatment, leading to worse health outcomes overall.

Issues seen in the American healthcare system are also observable in many emerging market countries with less medical infrastructure. According to the World Health Organization, the international standard for the number of citizens per primary care physician is one for every 1,500 to 2,000 people, with some countries facing much steeper gaps — such as China, where there is only one primary care doctor for every 6,666 people.

The startup hopes it can help limit the immense costs associated with emerging countries educating millions of doctors for eight to 10 years, and help provide more efficient and accessible healthcare systems much more quickly.

By reducing primary care costs for consumers and operating costs for medical practices, while creating a more convenient diagnostic experience, K Health believes it can improve access to information, ultimately driving earlier detection and better health outcomes for consumers everywhere.


Prisma’s new AI-powered app, Lensa, helps the selfie camera lie

Posted by | AI, Android, Apps, artificial intelligence, Europe, machine learning, photo editing, Prisma, selfie, smartphone | No Comments

Prisma Labs, the startup behind the style transfer craze of a couple of years ago, has a new AI-powered iOS app for retouching selfies. An Android version of the app — which is called Lensa — is slated as coming in January.

It bills Lensa as a “one-button Photoshop”, offering a curated suite of photo-editing features intended to enhance portrait photos — including teeth whitening; eyebrow tinting; ‘face retouch’, which smooths skin tone and texture (but claims to do so naturally); and ‘eye contrast’, which is supposed to make your eye color pop a bit more (but doesn’t seem to do too much if, like me, you’re naturally dark eyed).

There’s also a background blur option for adding a little bokeh to make your selfie stand out from whatever unattractive clutter you’re surrounded by — much like the portrait mode that Apple added to iOS two years ago.

Lensa can also correct for lens distortion, such as if a selfie has been snapped too close. “Our algorithm reconstructs face in 3D and fixes those disproportions,” is how it explains that.

The last slider on the app’s face menu offers this feature, letting you play around with making micro-adjustments to the 3D mesh underpinning your face. (Which feels as weird to see as it sounds to type.)

Of course there’s no shortage of other smartphone apps out there in app stores — and/or baked right into smartphones’ native camera apps — offering to ‘beautify’ selfies.

But the push-button pull here is that Lensa automatically — and, it claims, professionally — performs AI-powered retouching of your selfie. So you don’t have to do any manual tweaking yourself (though you also can if you like).

If you just snap a selfie you’ll see an already enhanced version of you. Who said the camera never lies? Thanks AI…

Prisma Labs’ new app, Lensa, uses machine learning to automagically edit selfies

Lensa also lets you tweak visual parameters across the entire photo, as per a standard photo-editing app, via an ‘adjust’ menu — which (at launch) offers sliders for exposure, contrast, saturation, fade, sharpen, temperature, tint, highlights and shadows.

While Lensa is free to download, an in-app subscription (costing $4.99 per month) can let you get a bit more serious about editing its AI-enhanced selfies — by unlocking the ability to adjust all those parameters across just the face; or just the background.

Prisma Labs says that might be useful if, for example, you want to fix an underexposed selfie shot against a brighter background.

“Lensa utilizes a bunch of Machine Learning algorithms to precisely extract face skin from the image and then retouching portraits like a professional artist,” is how it describes the app, adding: “The process is fully automated, but the user can set up an intensity level of the effect.”

The startup says it’s drawn on its eponymous style transfer app for Lensa’s machine learning as the majority of photos snapped and processed in Prisma are selfies — giving it a relevant heap of face data to train the photo-editing algorithms.
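Prisma hasn’t published how the retouching works beyond that description, but the extract-the-skin-then-smooth pattern with a user-set intensity can be sketched roughly like this; the segmentation mask here is assumed to come from some separate face-parsing model, and none of the parameter values are Lensa’s:

```python
# Rough sketch of mask-based skin retouching with an intensity slider (not Lensa's code).
# `skin_mask` (0-255) is assumed to come from a separate face-parsing/segmentation model.
import cv2
import numpy as np

def retouch(image_bgr: np.ndarray, skin_mask: np.ndarray, intensity: float = 0.5) -> np.ndarray:
    """Smooth texture only where the mask says 'skin', blended in by intensity (0..1)."""
    smoothed = cv2.bilateralFilter(image_bgr, 9, 75, 75)          # edge-preserving smoothing
    weight = (skin_mask.astype(np.float32) / 255.0)[..., None] * intensity
    out = image_bgr.astype(np.float32) * (1.0 - weight) + smoothed.astype(np.float32) * weight
    return out.astype(np.uint8)
```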

Having played around with Lensa I can say its natural looking instant edits are pretty seductive — in that it’s not immediately clear algorithmic fingers have gone in and done any polishing. At a glance you might just think oh, that’s a nice photo.

On closer inspection you can of course see the airbrushing that’s gone on, but the polish is applied with enough subtlety that it can pass as naturally pleasing.

And natural edits is one of the USPs Prisma Labs is claiming for Lensa. “Our mission is to allow people to edit a portrait but keep it looking natural,” it tells us. (The other key feature it touts is automation, so it’s selling the time you’ll save not having to manually tweak your selfies.)

Anyone who suffers from a chronic skin condition might view Lensa as a welcome tool/alternative to make-up in an age of unrelenting selfies (when cameras that don’t lie can feel, well, exhausting).

But for those who object to AI stripping even skin-deep layers off of the onion of reality, Lensa’s subtle algorithmic fiddling might still come over as an affront.

This report was updated with a correction after Prisma told us it had decided to remove watermarks and ads from the free version of the app, so it is not necessary to pay for a subscription to remove them.
