
OpenAI Five crushes Dota2 world champs, and soon you can lose to it too


Dota2 is one of the most popular, and most complex, online games in the world, but an AI has once again shown it can surpass human skill. In matches over the weekend, OpenAI’s “Five” system soundly defeated two pro teams, and soon you’ll be able to test your own mettle against — or alongside — the ruthless agent.

In a blog post, OpenAI detailed how its game-playing agent has progressed from its younger self — it seems wrong to say previous version, since it really is the same extensive neural network as it was many months ago, just with much more training.

The version that played at Dota2’s premier tournament, The International, gets schooled by the new version 99 percent of the time. And it’s all down to more practice:

In total, the current version of OpenAI Five has consumed 800 petaflop/s-days and experienced about 45,000 years of Dota self-play over 10 realtime months (up from about 10,000 years over 1.5 realtime months as of The International), for an average of 250 years of simulated experience per day.

To the best of our knowledge, this is the first time an RL [reinforcement learning] agent has been trained using such a long-lived training run.

One is tempted to cry foul at a data center-spanning intelligence being allowed to train for 600 human lifespans. But really it’s more of a compliment to human cognition that we can accomplish the same thing with a handful of months or years, while still finding time to eat, sleep, socialize (well, some of us) and so on.

Dota2 is an intense and complex game with some rigid rules but a huge amount of fluidity, and representing it in a way that makes sense to a computer isn’t easy (which likely accounts partly for the volume of training required). Controlling five “heroes” at once on a large map with so much going on at any given time is enough to tax a team of five human brains. But teams work best when they’re acting as a single unit, which is more or less what Five was doing from the start. Rather than five heroes, it was more like five fingers of a hand to the AI.

Interestingly, OpenAI also recently discovered that Five is capable of playing cooperatively with humans as well as in competition. This was far from a sure thing — the whole system might have frozen up or misbehaved if it had a person in there gumming up the gears. But in fact it works pretty well.

You can watch the replays or get the pro commentary on the games if you want to hear exactly how the AI won (I’ve played but I’m far from good. I’m not even bad yet). I understand they had some interesting buy-back tactics and were very aggressive. Or, if you’re feeling masochistic, you can take on the AI yourself in a limited-time event later this week.

We’re launching OpenAI Five Arena, a public experiment where we’ll let anyone play OpenAI Five in both competitive and cooperative modes. We’d known that our 1v1 bot would be exploitable through clever strategies; we don’t know to what extent the same is true of OpenAI Five, but we’re excited to invite the community to help us find out!

Although a match against pros would mean all-out war using traditional tactics, low-stakes matches against curious players might reveal interesting patterns or exploits that the AI’s creators aren’t aware of. Results will be posted publicly, so be ready for that.

You’ll need to sign up ahead of time, though: The system will only be available to play from Thursday night at 6 PM to the very end of Sunday, Pacific time. They need to reserve the requisite amount of computing resources to run the thing, so sign up now if you want to be sure to get a spot.

OpenAI’s team writes that this is the last we’ll hear of this particular iteration of the system; it’s done competing (at least in tournaments) and will be described more thoroughly in a paper soon. They’ll continue to work in the Dota2 environment because it’s interesting, but what exactly the goals, means or limitations will be are yet to be announced.


Talk all things robotics and AI with TechCrunch writers


This Thursday, we’ll be hosting our third annual Robotics + AI TechCrunch Sessions event at UC Berkeley’s Zellerbach Hall. The day is packed start-to-finish with intimate discussions on the state of robotics and deep learning with key founders, investors, researchers and technologists.

The event will dig into recent developments in robotics and AI, which startups and companies are driving the market’s growth and how the evolution of these technologies may ultimately play out. In preparation for our event, TechCrunch’s Brian Heater spent time over the last several months visiting some of the top robotics companies in the country. Brian will be on the ground at the event, alongside Lucas Matney. On Friday at 11:00 am PT, Brian and Lucas will be sharing with Extra Crunch members (on a conference call) what they saw and what excited them most.

Tune in to find out what you might have missed and to ask Brian and Lucas anything else about robotics, AI or hardware. Want to attend the event in Berkeley this week? It’s not too late to get tickets.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.


The team behind Baidu’s first smart speaker is now using AI to make films


The HBO sci-fi blockbuster Westworld has been an inspiring look into what humanlike robots can do for us in meatspace. While current technologies are not quite advanced enough to make Westworld a reality, startups are attempting to replicate the sort of human-robot interaction it presents in virtual space.

Rct studio, which just graduated from Y Combinator and ranked among TechCrunch’s nine favorite picks from the batch, is one of them. The “Westworld” in the TV series, a far-future theme park staffed by highly convincing androids, lets visitors live out their heroic and sadistic fantasies free of consequences.

There are a few reasons why rct studio, which is keeping mum about the meaning of its deliberately lower-cased name for later revelation, is going for the computer-generated world. Besides the technical challenge, playing a fictional universe out virtually does away with the geographic constraint. The Westworld experience, in contrast, happens within a confined, meticulously built park.

“Westworld is built in a physical world. I think in this age and time, that’s not what we want to get into,” Xinjie Ma, who heads up marketing for rct, told TechCrunch. “Doing it in the physical environment is too hard, but we can build a virtual world that’s completely under control.”


Rct studio wants to build the Westworld experience in virtual worlds. / Image: rct studio

The startup appears suitable to undertake the task. The eight-person team is led by Cheng Lyu, the 29-year-old entrepreneur who goes by Jesse and helped Baidu build up its smart speaker unit from scratch after the Chinese search giant acquired his voice startup Raven in 2017. Along with several of Raven’s core members, Lyu left Baidu in 2018 to start rct.

“We appreciate a lot the support and opportunities given by Baidu and during the years we have grown up dramatically,” said Ma, who previously oversaw marketing at Raven.

Let AI write the script

Immersive films, or games, depending on how one wants to classify the emerging field, are already available with pre-written scripts for users to pick from. Rct wants to take the experience to the next level by recruiting artificial intelligence for screenwriting.

At the center of the project is the company’s proprietary engine, Morpheus. Rct feeds it mountains of data based on human-written storylines so the characters it powers know how to adapt to situations in real time. When the code is sophisticated enough, rct hopes the engine can self-learn and formulate its own ideas.

“It takes an enormous amount of time and effort for humans to come up with a story logic. With machines, we can quickly produce an infinite number of narrative choices,” said Ma.

To venture through rct’s immersive worlds, users wear a virtual reality headset and control their simulated self via voice. The choice of audio came as a natural step given the team’s experience with natural language processing, but the startup also welcomes the chance to develop new devices for more lifelike journeys.

“It’s sort of like how the film Ready Player One built its own gadgets for the virtual world. Or Apple, which designs its own devices to carry out superior software experience,” explained Ma.

On the creative front, rct believes Morpheus could be a productivity tool for filmmakers as it can take a story arc and dissect it into a decision-making tree within seconds. The engine can also render text to 3D images, so when a filmmaker inputs the text “the man throws the cup to the desk behind the sofa,” the computer can instantly produce the corresponding animation.

Path to monetization

Investors are buying into rct’s offering. The startup is about to close its Series A funding round just months after banking seed money from Y Combinator and Chinese venture capital firm Skysaga, the startup told TechCrunch.

The company has a few imminent tasks before achieving its Westworld dream. For one, it needs a lot of technical talent to train Morpheus with screenplay data. No one on the team had experience in filmmaking, so it’s on the lookout for a creative head who appreciates AI’s application in films.


Rct studio’s software takes a story arc and dissects it into a decision-making tree within seconds. / Image: rct studio

“Not all filmmakers we approach like what we do, which is understandable because it’s a very mature industry, while others get excited about tech’s possibility,” said Ma.

The startup’s entry into the fictional world was less about a passion for films than an imperative to shake up a traditional space with AI. Smart speakers were its first foray, but making changes to tangible objects that people are already accustomed to proved challenging. There has been some interest in voice-controlled speakers, but they are far from achieving ubiquity. Then movies crossed the team’s mind.

“There are two main routes to make use of AI. One is to target a vertical sector, like cars and speakers, but these things have physical constraints. The other application, like AlphaGo, largely exists in the lab. We wanted something that’s both free of physical limitation and holds commercial potential.”

The Beijing and Los Angeles-based startup isn’t content with just making the software. Eventually, it wants to release its own films. The company has inked a long-term partnership with Future Affairs Administration, a Chinese sci-fi publisher representing about 200 writers, including the Hugo award-winning Cixin Liu. The pair is expected to start co-producing interactive films within a year.

Rct’s path is reminiscent of a giant that precedes it: Pixar Animation Studios. The Chinese company didn’t exactly look to the California-based studio for inspiration, but the analogy was a useful shortcut when pitching to investors.

“A confident company doesn’t really draw parallels with others, but we do share similarities to Pixar, which also started as a tech company, publishes its own films, and has built its own engine,” said Ma. “A lot of studios are asking how much we price our engine at, but we are targeting the consumer market. Making our own films carry so many more possibilities than simply selling a piece of software.”


The Google Assistant on Android gets more visual responses


About half a year ago, Google gave the Assistant on phones a major visual refresh. Today, the company is following up with a couple of small but welcome tweaks that’ll see the Assistant on Android provide more and better visual responses that are more aligned with what users already expect to see from other Google services.

That means when you ask for events now, for example, the response will look exactly like what you’d see if you tried the same query from your mobile browser. Until now, Google showed a somewhat pared-down version in the Assistant.

Also — and this is going to be a bit of a controversial change — when the Assistant decides that the best answer is simply a list of websites (or when it falls back to those results because it simply doesn’t have any other answer), the Assistant used to show you a couple of boxes in a vertical layout that were not exactly user-friendly. Now, the Assistant will simply show the standard Google Search layout.

Seems like a good idea, so why would that be controversial? Together with the search results, Google will also show its usual Search ads. This marks the first time that Google is showing ads in the Assistant experience. To be fair, the Assistant will only show these kinds of results for a very small number of queries, but users will likely worry that Google will bring more ads to the rest of the Assistant.

Google tells me that advertisers can’t target their ads to Assistant users and won’t get any additional information about them.

The Assistant will now also show built-in mortgage calculators, color pickers, a tip calculator and a bubble level when you ask for those. Also, when you ask for a stock quote, you’ll now see a full interactive graph, not just the current price of the quote.

These new features are rolling out to Android phones in the U.S. now. As usual, it may take a bit before you see them pop up on your own phone.


MIT’s ‘cyber-agriculture’ optimizes basil flavors


The days when you could simply grow a basil plant from a seed by placing it on your windowsill and watering it regularly are gone — there’s no point now that machine learning-optimized hydroponic “cyber-agriculture” has produced a superior plant with more robust flavors. The future of pesto is here.

This research didn’t come out of a desire to improve sauces, however. It’s a study from MIT’s Media Lab and the University of Texas at Austin aimed at understanding how to both improve and automate farming.

In the study, published today in PLOS ONE, the question being asked was whether a growing environment could find and execute a growing strategy that resulted in a given goal — in this case, basil with stronger flavors.

Such a task is one with numerous variables to modify — soil type, plant characteristics, watering frequency and volume, lighting and so on — and a measurable outcome: concentration of flavor-producing molecules. That means it’s a natural fit for a machine learning model, which from that variety of inputs can make a prediction as to which will produce the best output.

“We’re really interested in building networked tools that can take a plant’s experience, its phenotype, the set of stresses it encounters, and its genetics, and digitize that to allow us to understand the plant-environment interaction,” explained MIT’s Caleb Harper in a news release. The better you understand those interactions, the better you can design the plant’s lifecycle, perhaps increasing yield, improving flavor or reducing waste.

In this case the team limited the machine learning model to analyzing and switching up the type and duration of light experienced by the plants, with the goal of increasing flavor concentration.

A first round of nine plants had light regimens designed by hand based on prior knowledge of what basil generally likes. The plants were harvested and analyzed. Then a simple model was used to make similar but slightly tweaked regimens that took the results of the first round into account. Then a third, more sophisticated model was created from the data and given significantly more leeway in its ability to recommend changes to the environment.
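That iterative loop is, in essence, surrogate-model optimization: grow, measure, fit a model to the results, then let the model propose the next set of conditions. Below is a minimal Python sketch of the idea, using a Gaussian process surrogate and an upper-confidence-bound rule; the variable names, bounds and numbers are illustrative stand-ins, not the study's actual parameters.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical observations from earlier rounds:
# [photoperiod_hours, uv_intensity] -> measured flavor score.
X_observed = np.array([[16.0, 0.2], [18.0, 0.4], [20.0, 0.6], [22.0, 0.8]])
y_observed = np.array([1.0, 1.3, 1.7, 2.1])

# Fit a Gaussian-process surrogate to the harvest results so far.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_observed, y_observed)

# Candidate light regimens for the next round of plants.
photoperiods = np.linspace(12, 24, 25)   # hours of light per day
uv_levels = np.linspace(0.0, 1.0, 21)    # normalized UV intensity
candidates = np.array([[p, u] for p in photoperiods for u in uv_levels])

# Upper-confidence-bound acquisition: prefer regimens predicted to score well
# or that the surrogate is still uncertain about.
mean, std = gp.predict(candidates, return_std=True)
best = candidates[np.argmax(mean + 1.5 * std)]
print(f"Next regimen to try: {best[0]:.1f} h light, UV level {best[1]:.2f}")
```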

To the researchers’ surprise, the model recommended a highly extreme measure: Keep the plant’s UV lights on 24/7.

Naturally this isn’t how basil grows in the wild, since, as you may know, there are few places where the sun shines all day long and all night strong. And the Arctic and Antarctic, while fascinating ecosystems, aren’t known for their flavorful herbs and spices.

Nevertheless, the “recipe” of keeping the lights on was followed (it was an experiment, after all), and incredibly, this produced a massive increase in flavor molecules, doubling the amount found in control plants.

“You couldn’t have discovered this any other way,” said co-author John de la Parra. “Unless you’re in Antarctica, there isn’t a 24-hour photoperiod to test in the real world. You had to have artificial circumstances in order to discover that.”

But while a more flavorful basil is a welcome result, it’s not really the point. The team is happier that the method yielded good data, validating the platform and software they used.

“You can see this paper as the opening shot for many different things that can be applied, and it’s an exhibition of the power of the tools that we’ve built so far,” said de la Parra. “With systems like ours, we can vastly increase the amount of knowledge that can be gained much more quickly.”

If we’re going to feed the world, it’s not going to be done with amber waves of grain, i.e. with traditional farming methods. Vertical, hydroponic, computer-optimized — we’ll need all these advances and more to bring food production into the 21st century.


Fleksy’s AI keyboard is getting a store to put mini apps at chatters’ fingertips


Remember Fleksy? The customizable Android keyboard app has a new trick up its sleeve: It’s adding a store where users can find and add lightweight third party apps to enhance their typing experience.

For now it has launched a taster, preloading a selection of ‘mini apps’ into the keyboard — some from very familiar brand names, some a little less so — so users can start to see how it works.

The first in-keyboard apps are Yelp (local services search); Skyscanner (flight search); Giphy (animated Gif search); GifNote (music Gifs; launching for U.S. users only for rights reasons); Vlipsy (reaction video clips); and Emogi (stickers) — with “many more” branded apps slated as coming in the next few months.

They’re not saying exactly what other brands are coming but there are plenty of familiar logos to be spotted in their press materials — from Spotify to Uber to JustEat to Tripadvisor to PayPal and more…

The full keyboard store itself — which will let users find and add and/or delete apps — will be launching at the end of this month.

The latest version of the Fleksy app can be downloaded for free via the Play Store.

Mini apps made for messaging

The core idea for these mini apps (aka Fleksyapps) is to offer lightweight additions designed to serve the messaging use case.

Say, for example, you’re chatting about where to eat and a friend suggests sushi. The Yelp Fleksyapp might pop up a contextual suggestion for a nearby Japanese restaurant that can be shared directly into the conversation — thereby saving time by doing away with the need for someone to cut out of the chat, switch apps, find some relevant info and cut and paste it back into the chat.

Fleksyapps are intended to be helpful shortcuts that keep the conversation flowing. They also of course put brands back into the conversation.

“We couldn’t be more excited to bring the power of the world’s popular songs with GIFs, videos and photos to the new Fleksyapps platform,” says Gifnote co-founder, John vanSuchtelen, in a supporting statement.

Fleksy’s mini apps appear above the QWERTY keyboard — in much the same space as a next-word prediction. The user can scroll through the app stack (each a tiny branded circle until tapped on to expand) and choose one to interact with. It’s similar to the micro apps lodged in Apple’s iMessage, but on Android, where iMessage isn’t available. The team also plans for Fleksy to support a much wider range of branded apps — hence the Fleksyapps store.

In-keyboard apps is not a new concept for the dev team behind Fleksy; an earlier keyboard app of theirs (called ThingThing) offered micro apps they built themselves as a tool to extend its utility.

But now they’re hoping to garner backing and buy-in from third party brands excited about the exposure and reach they could gain by being where users spend the most device time: the keyboard.

“Think of it a bit like the iMessage equivalent but on Android across any app. Or the WeChat mini program but inside the keyboard, available everywhere — not only in one app,” CEO Olivier Plante tells TechCrunch. “That’s a problem of messaging apps these days. All of them are verticals but the keyboard is horizontal. So that’s the benefit for those brands. And the user will have the ability to move them around, add some, to remove some, to explore, to discover.”

“The brands that want to join our platform they have the option of being preloaded by default. The analogy is that by default on the home screen of a phone you are by default in our keyboard. And moving forward you’ll be able to have a membership — you’re becoming a ‘brand member’ of the Fleksyapps platform, and you can have your brand inside the keyboard,” he adds.

The first clutch of Fleksyapps were developed jointly, with the team working with the brands in question. But Plante says they’re planning to launch a tool in future so brands will be able to put together their own apps — in as little as just a few hours.

“We’re opening this array of functionalities and there’s a lot of verticals possible,” he continues. “In the future months we will embed new capabilities for the platform — new type of apps. You can think about professional apps, or cloud apps. Accessing your files from different types of clouds. You have the weather vertical. You have ecommerce vertical. You have so many verticals.

“What you have on the app store today will be reflected into the Fleksyappstore. But really with the focus of messaging and being useful in messaging. So it’s not the full app that we want to bring in — it’s really the core functionality of this app.”

The Yelp Fleksyapp, for example, only includes the ability to see nearby places and search for and share places. So it’s intentionally stripped down. “The core benefit for the brand is it gives them the ability to extend their reach,” says Plante. “We don’t want to compete with the app, per se, we just want to bring these types of app providers inside the messenger on Android across any app.”

On the user side, the main advantage he touts is that “it’s really, really fast” — fleshing that out to: “It’s very lightweight, it’s very, very fast and we want to become the fastest access to content across any app.”

Users of Fleksyapps don’t need to have the full app installed because the keyboard plugs directly into the API of each branded service. So they get core functionality in bite-sized form without a requirement to download the full app. (Of course they can if they wish.)

So Plante also notes the approach has benefits vis-a-vis data consumption — which could be an advantage in emerging markets where smartphone users’ choices may be hard-ruled by the costs of data and/or connectivity limits.

“For those types of users it gives them an ability to access content but in a very light way — where the app itself, loading the app, loading all the content inside the app can be megabits. In Fleksy you’re talking about kilobits,” he says.

Privacy-sensitive next app suggestions

While baking a bunch of third party apps into a keyboard might sound like a privacy nightmare, the dev team behind Fleksy have been careful to make sure users remain in control.

To wit: Also on board is an AI keyboard assistant (called Fleksynext) — aka “a neural deep learning engine” — which Plante says can detect the context, intention and sentiment of conversations in order to offer “very useful” app suggestions as the chat flows.

The idea is the AI supports the substance of the chat by offering useful functionality from whatever pick and mix of apps are available. Plante refers to these AI-powered ‘next app’ suggestions as “pops”.

And — crucially, from a privacy point of view — the Fleksynext suggestion engine operates locally, on device.

That means no conversation data is sent out of the keyboard. Indeed, Plante says nothing the user types in the keyboard itself is shared with brands (including suggestions that pop up but get ignored). So there’s no risk — as with some other keyboard apps — of users being continually strip-mined for personal data to profile them as they type.
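To make the privacy claim concrete, here is a toy sketch of what an on-device suggestion pipeline could look like: the typed text is processed locally and only an app suggestion ever surfaces. This is a conceptual illustration, not Fleksy's actual engine; the keyword lists and app mapping are placeholders.

```python
from typing import Optional

# Conceptual on-device "pop" engine: the typed text never leaves this
# function, and only the resulting app suggestion is shown to the user.
# Keyword lists and app names are placeholders, not Fleksy's real logic.
INTENT_KEYWORDS = {
    "restaurants": {"sushi", "dinner", "hungry", "eat", "lunch"},
    "flights": {"flight", "fly", "trip", "airport", "travel"},
    "gifs": {"lol", "haha", "omg", "mood"},
}

INTENT_TO_APP = {
    "restaurants": "Yelp",
    "flights": "Skyscanner",
    "gifs": "Giphy",
}

def suggest_app(typed_text: str) -> Optional[str]:
    """Return a mini app to 'pop', or None if nothing looks relevant."""
    words = set(typed_text.lower().split())
    scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    return INTENT_TO_APP[best_intent] if best_score > 0 else None

print(suggest_app("should we get sushi for dinner"))  # -> Yelp
```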

That said, if the user chooses to interact with a Fleksyapp (or its suggestive pop) they are then interacting with a third party’s API. So the usual tracking caveats apply.

“We interact with the web so there’s tracking everywhere,” admits Plante. “But, per se, there’s not specific sensitive data that is shared suddenly with someone. It is not related with the service itself — with the Fleksy app.”

The key point is that the keyboard user gets to choose which apps they want to use and which they don’t. So they can choose which third parties they want to share their plans and intentions with and which they don’t.

“We’re not interested in making this an advertising platform where the advertiser decides everything,” emphasizes Plante. “We want this to be really close to the user. So the user decides. My intentions. My sentiment. What I type decides. And that is really our goal. The user is able to power it. He can tap on the suggestion or ignore it. And then if he taps on it it’s a very good quality conversion because the user really wants to access restaurants nearby or explore flights for escaping his daily routine… or transfer money. That could be another use-case for instance.”

They won’t be selling brands a guaranteed number of conversions, either.

That’s clearly very important because — to win over users — Fleksynext suggestions will need to feel telepathically useful rather than like an irritating, misfired nag. Though the risk of that seems low, given that Fleksy users can customize the keyboard apps to only see stuff that’s useful to them.

“In a sense we’re starting to reshape a bit how advertising is seen by putting the user in the center,” suggests Plante. “And giving them a useful means of accessing content. This is the original vision and we’ve been very loyal to that — and we think it can reshape the landscape.”

“When you look into five years from now, the smartphone we have will be really, really powerful — so why process things in the cloud? When you can process things on the phone. That’s what we are betting on: Processing everything on the phone,” he adds.

When the full store launches, users will be able to add and delete (any) apps — including preloads. So they will be in the driving seat. (We asked Plante to confirm that the user will be able to delete all apps, including any preloads, and he said yes. So if you take him at his word, Fleksy will not be cutting any deals with OEMs or carriers to indelibly preload certain Fleksyapps. Or, to put it another way, crapware baked into the keyboard is most definitely not the plan.)

Depending on what other Fleksyapps launch in future a Fleksy keyboard user could choose to add, for example, a search service like DuckDuckGo or France’s Qwant to power a pro-privacy alternative to using Google search in the keyboard. Or they could choose Google.

Again the point is the choice is theirs.

Scaling a keyboard into a platform

The idea of keyboard-as-platform offers at least the possibility of reintroducing the choice and variety smartphone app stores had before attention-harvesting tech giants used their network effects and platform power to throttle the app economy.

The Android keyboard space was also a fertile experiment ground in years past. But it’s now dominated by Google’s Gboard and Microsoft-acquired SwiftKey. Which makes Fleksy the plucky upstart gunning to scale an independent alternative that’s not owned by big tech and is open to any third party that wants to join its mini apps party.

“It will be Bing search for Swiftkey, it will be Google search for Gboard, it will be Google Music, it will be YouTube. But on our side we can have YouTube, we can also have… other services that exist for video. The same way with pictures and the same way for file-sharing and drive. So you have Google Drive but you have Dropbox, you have OneDrive, there’s a lot of services in the cloud. And we want to be the platform that has them all, basically,” says Plante.

The original founding team of the Fleksy keyboard was acqui-hired by Pinterest back in 2016, leaving the keyboard app itself to languish with minimal updates. Then two years ago Barcelona-based keyboard app maker, ThingThing, stepped in to take over development.

Plante confirms it’s since fully acquired the Fleksy keyboard technology itself — providing a solid foundation for the keyboard-as-platform business it’s now hoping to scale with the launch of Fleksyapps.

Talking of scale, he tells us the startup is in the process of raising a multi-million Series A — aiming to close this summer. (ThingThing last took in $800,000 via equity crowdfunding last fall.)

The team’s investor pitch is that the keyboard is perhaps the only viable conduit left on mobile for resetting the playing field for brands: a route that cuts through tech giants’ walled gardens and reaches users where they spend most of their time and attention, i.e. typing and sharing stuff with their friends in private one-to-one and group chats.

That means the keyboard-as-platform has the potential to get brands of all stripes back in front of users — by embedding innovative, entertaining and helpful bite-sized utility where it can prove its worth and amass social currency on the dominant messaging platforms people use.

The next step for the rebooted Fleksy team is of course building scale by acquiring users for a keyboard which, as of half a year ago, only had around 1M active users from pure downloads.

Its strategy on this front is to target Android device makers to preload Fleksy as the default keyboard.

ThingThing’s business model is a revenue share on any suggestions the keyboard converts, which it argues represent valuable leads for brands — given the level of contextual intention. It is also intending to charge brands that want to be preloaded on the Fleksy keyboard by default.

Again, though, a revenue share model requires substantial scale to work. Not least because brands will need to see evidence of scale to buy into the Fleksyapps’ vision.

Plante isn’t disclosing active users of the Fleksy keyboard right now. But says he’s confident they’re on track to hit 30M-35M active users this year — on account of around ten deals he says are in the pipeline with device makers to preload Fleksy’s keyboard. (Palm was an early example, as we reported last year.)

The carrot for OEMs to join the Fleksyapps party is they’re cutting them in on the revenue share from user interactions with branded keyboard apps — playing to device makers’ needs to find ways to boost famously tight hardware margins.

“The fact that the keyboard can monetize and provide value to the phone brands — this is really massive for them,” argues Plante. “The phone brands can expect revenue flowing in their bank account because we give the brands distribution and the handset manufacturer will make money and we will make money.”

It’s a smart approach, and one that’s essentially only possible because Google’s own Gboard keyboard doesn’t come preloaded on the majority of Android devices. (Exceptions include its own Pixel brand devices.) So — unusually for a core phone app on Android — there’s a bit of an open door where the keyboard sits, instead of the usual preloaded Google wares. And that’s an opportunity.

Markets wise, ThingThing is targeting OEMs in all global regions with its Fleksy pitch — barring China (which Plante readily admits is too complex for a small startup to sensibly try jumping into).

Apps vs tech giants

In its stamping ground of Europe there are warm regulatory winds blowing too: a European Commission antitrust intervention last year saw Google hit with a $5BN fine over anti-competitive practices attached to its Android platform — forcing the company to change local licensing terms.

That antitrust decision means mobile makers finally have the chance to unbundle Google apps from devices they sell in the region.

Which translates into growing opportunities for OEMs to rethink their Android strategies. Even as Google remains under pressure not to get in the way by force feeding any more of its wares.

Really, a key component of this shift is that device makers are being told to think, to look around and see what else is out there. For the first time there looks to be a viable chance to profit off of Android without having to preload everything Google wants.

“For us it’s a super good sign,” says Plante of the Commission decision. “Every monopolistic situation is a problem. And the market needs to be fragmented. Because if not we’re just going to lose innovation. And right now Europe — and I see good progress for the US as well — are trying to dismantle the imposed power of those big guys. For the simple evolution of human being and technology and the future of us.”

“I think good things can happen,” he adds. “We’re in talks with handset manufacturers who are coming into Europe and they want to be the most respectful of the market. And with us they have this reassurance that you have a good partner that ensures there’s a revenue stream, there’s a business model behind it, there’s really a strong use-case for users.

“We can finally be where we always wanted to be: A choice, an alternative. But having Google imposing its way since start — and making sure that all the direct competition of Google is just a side, I think governments have now seen the problem. And we’re a winner of course because we’re a keyboard.”

But what about iOS? Plante says the team has plans to bring what they’re building with Fleksy to Apple’s mobile platform too, in time. But for now they’re fully focusing efforts on Android — to push for scale and execute on their vision of staking their claim to be the independent keyboard platform.

Apple has supported third party keyboards on iOS for years. Unfortunately, though, the experience isn’t great — with a flaky toggle to switch away from the default Apple keyboard, combined with heavy system warnings about the risks of using third party keyboards.

Meanwhile the default iOS keyboard ‘just works’ — and users have loads of extra features baked by default into Apple’s native messaging app, iMessage.

Clearly alternative keyboards have found it all but impossible to build any kind of scale in that iOS pincer.

“iOS is coming later because we need to focus on these distribution deals and we need to focus on the brands coming into the platform. And that’s why iOS right now we’re really focusing for later. What we can say is it will come later,” says Plante, adding: “Apple limits a lot keyboards. You can see it with other keyboard companies. It’s the same. The update cycle for iOS keyboard is really, really, really slow.”

Plus, of course, Fleksy being preloaded as a default keyboard on — the team hopes — millions of Android devices is a much more scalable proposition than just being another downloadable app languishing invisibly on the sidelines of another tech giant’s platform.


This self-driving AI faced off against a champion racer (kind of)


Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.

To set expectations here: this isn’t some stunt; it’s actually warranted given the nature of the research. And it’s not like they were trading positions, jockeying for entry lines and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!

The question which Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with an autonomous vehicle operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds, in good conditions. And most obstacle encounters are similarly ordinary.

If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?

The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface, and other conditions. But they are necessarily simplified and their assumptions are of the type to produce increasingly inaccurate results as values exceed ordinary limits.

Imagine a simulator that simplifies each wheel to a point or a line, even though during a slide it matters a great deal which part of the tire is experiencing the most friction. Such detailed simulations are beyond the ability of current hardware to run quickly or accurately enough. But the results of such simulations can be summarized into inputs and outputs, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.

The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. It’s fairly basic. The model then consults its training, but is also informed by the real-world results, which may perhaps differ from theory.

So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
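In control terms, that amounts to a learned feedforward steering command corrected by feedback on the measured deviation from the intended line. The sketch below illustrates the structure only; the Stanford controller uses a trained neural network and proper vehicle dynamics, whereas the stand-in model, gains and numbers here are made up.

```python
import numpy as np

def feedforward_steering(speed_mps: float, curvature: float,
                         weights: np.ndarray) -> float:
    """Stand-in for the learned model: map (speed, path curvature) to a
    nominal steering angle. In the research this role is played by a neural
    network trained on summarized simulation data."""
    features = np.array([1.0, speed_mps, curvature, speed_mps * curvature])
    return float(features @ weights)

def steering_command(speed_mps: float, curvature: float,
                     lateral_error_m: float, heading_error_rad: float,
                     weights: np.ndarray,
                     k_lat: float = 0.05, k_head: float = 0.5) -> float:
    """Feedforward from the learned model plus feedback on the measured
    deviation from the intended racing line (gains are purely illustrative)."""
    ff = feedforward_steering(speed_mps, curvature, weights)
    fb = k_lat * lateral_error_m + k_head * heading_error_rad
    return ff + fb

# Example: entering a turn at ~40 m/s while sitting 0.3 m wide of the line.
w = np.array([0.0, 0.0005, 2.0, 0.01])  # made-up weights for the sketch
print(steering_command(40.0, 0.02, lateral_error_m=0.3,
                       heading_error_rad=0.01, weights=w))
```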

And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.

The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:

Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.

Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.

In other words, while pulling a G and hitting 95, the self-driving Audi was never more than a foot and a half off its ideal racing line. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.

“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”

Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.

This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.

The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would be different than front-wheel on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.

The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.


Mobileye CEO clowns on Nvidia for allegedly copying self-driving car safety scheme


While creating self-driving car systems, it’s natural that different companies might independently arrive at similar methods or results — but the similarities in a recent “first of its kind” Nvidia proposal to work done by Mobileye two years ago were just too much for the latter company’s CEO to take politely.

Amnon Shashua, in a blog post on parent company Intel’s news feed cheekily titled “Innovation Requires Originality,” openly mocks Nvidia’s “Safety Force Field,” pointing out innumerable similarities to Mobileye’s “Responsibility Sensitive Safety” paper from 2017.

He writes:

It is clear Nvidia’s leaders have continued their pattern of imitation as their so-called “first-of-its-kind” safety concept is a close replica of the RSS model we published nearly two years ago. In our opinion, SFF is simply an inferior version of RSS dressed in green and black. To the extent there is any innovation there, it appears to be primarily of the linguistic variety.

Now, it’s worth considering the idea that the approach both seem to take is, like many in the automotive and autonomous fields and others, simply inevitable. Car makers don’t go around accusing each other of using the same basic setup of four wheels and two pedals. It’s partly for this reason, and partly because the safety model works better the more cars follow it, that when Mobileye published its RSS paper, it did so publicly and invited the industry to collaborate.

Many did, including, as Shashua points out, Nvidia, at least for a short time in 2018, after which Nvidia pulled out of collaboration talks. To do so and then, a year afterwards, propose a system that is, if not identical, then at least remarkably similar, without crediting or mentioning Mobileye, is suspicious to say the least.

The (highly simplified) foundation of both is calculating a set of standard actions corresponding to laws and human behavior that plan safe maneuvers based on the car’s own physical parameters and those of nearby objects and actors. But the similarities extend beyond these basics, Shashua writes (emphasis his):

RSS defines a safe longitudinal and a safe lateral distance around the vehicle. When those safe distances are compromised, we say that the vehicle is in a Dangerous Situation and must perform a Proper Response. The specific moment when the vehicle must perform the Proper Response is called the Danger Threshold.

SFF defines identical concepts with slightly modified terminology. Safe longitudinal distance is instead called “the SFF in One Dimension;” safe lateral distance is described as “the SFF in Higher Dimensions.”  Instead of Proper Response, SFF uses “Safety Procedure.” Instead of Dangerous Situation, SFF replaces it with “Unsafe Situation.” And, just to be complete, SFF also recognizes the existence of a Danger Threshold, instead calling it a “Critical Moment.”

This is followed by numerous other close parallels, and just when you think it’s done, he includes a whole separate document (PDF) showing dozens of other cases where Nvidia seems (it’s hard to tell in some cases if you’re not closely familiar with the subject matter) to have followed Mobileye and RSS’s example over and over again.
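For context, the central quantity both frameworks reason about is a minimum safe following distance. The published RSS paper defines the safe longitudinal distance roughly as in the sketch below, assuming the rear car accelerates at its maximum during its response time and then brakes at its guaranteed minimum while the lead car brakes at its maximum; the parameter values here are illustrative, not prescribed by either company.

```python
def rss_safe_longitudinal_distance(v_rear: float, v_front: float,
                                   response_time: float,
                                   a_max_accel: float,
                                   a_min_brake_rear: float,
                                   a_max_brake_front: float) -> float:
    """Minimum safe following distance in the spirit of the published RSS
    formulation. Worst case assumed: during its response time the rear car
    accelerates at a_max_accel, then brakes at its guaranteed minimum
    a_min_brake_rear, while the front car brakes at its maximum
    a_max_brake_front. Units: m, s, m/s, m/s^2."""
    v_rear_after = v_rear + response_time * a_max_accel
    d = (v_rear * response_time
         + 0.5 * a_max_accel * response_time ** 2
         + v_rear_after ** 2 / (2 * a_min_brake_rear)
         - v_front ** 2 / (2 * a_max_brake_front))
    return max(d, 0.0)

# Illustrative numbers: both cars at 30 m/s (roughly 108 km/h), 1 s response.
print(rss_safe_longitudinal_distance(30.0, 30.0, response_time=1.0,
                                     a_max_accel=2.0,
                                     a_min_brake_rear=4.0,
                                     a_max_brake_front=8.0))
# ~103 m of required headway under these assumptions.
```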

Theoretical work like this isn’t really patentable, and patenting wouldn’t be wise anyway, since widespread adoption of the basic ideas is the most desirable outcome (as both papers emphasize). But it’s common for one R&D group to push in one direction and have others refine or create counter-approaches.

You see it in computer vision, where for example Google boffins may publish their early and interesting work, which is picked up by FAIR or Uber and improved or added to in another paper 8 months later. So it really would have been fine for Nvidia to publicly say “Mobileye proposed some stuff, that’s great but here’s our superior approach.”

Instead there is no mention of RSS at all, which is strange considering their similarity, and the only citation in the SFF whitepaper is “The Safety Force Field, Nvidia, 2017,” in which, we are informed on the very first line, “the precise math is detailed.”

Just one problem: This paper doesn’t seem to exist anywhere. It certainly was never published publicly in any journal or blog post by the company. It has no DOI number and doesn’t show up in any searches or article archives. This appears to be the first time anyone has ever cited it.

It’s not required for rival companies to be civil with each other all the time, but in the research world this will almost certainly be considered poor form by Nvidia, and that can have knock-on effects when it comes to recruiting and overall credibility.

I’ve contacted Nvidia for comment (and to ask for a copy of this mysterious paper). I’ll update this post if I hear back.


The damage of defaults


Apple popped out a new pair of AirPods this week. The design looks exactly like the old pair of AirPods. Which means I’m never going to use them because Apple’s bulbous earbuds don’t fit my ears. Think square peg, round hole.

The only way I could rock AirPods would be to walk around with hands clamped to the sides of my head to stop them from falling out. Which might make a nice cut in a glossy Apple ad for the gizmo — suggesting a feeling of closeness to the music, such that you can’t help but cup; a suggestive visual metaphor for the aural intimacy Apple surely wants its technology to communicate.

But the reality of trying to use earbuds that don’t fit is not that at all. It’s just shit. They fall out at the slightest movement so you either sit and never turn your head or, yes, hold them in with your hands. Oh hai, hands-not-so-free-pods!

The obvious point here is that one size does not fit all — howsoever much Apple’s Jony Ive and his softly spoken design team believe they have devised a universal earbud that pops snugly in every ear and just works. Sorry, nope!

Hi @tim_cook, I fixed that sketch for you. Introducing #InPods — because one size doesn’t fit all 😉 pic.twitter.com/jubagMnwjt

— Natasha (@riptari) March 20, 2019

A proportion of iOS users — perhaps other petite women like me, or indeed men with less capacious ear holes — are simply being removed from Apple’s sales equation where earbuds are concerned. Apple is pretending we don’t exist.

Sure we can just buy another brand of more appropriately sized earbuds. The in-ear, noise-canceling kind are my preference. Apple does not make ‘InPods’. But that’s not a huge deal. Well, not yet.

It’s true, the consumer tech giant did also delete the headphone jack from iPhones. Thereby depreciating my existing pair of wired in-ear headphones (if I ever upgrade to a 3.5mm-jack-less iPhone). But I could just shell out for Bluetooth wireless in-ear buds that fit my shell-like ears and carry on as normal.

Universal in-ear headphones have existed for years, of course. A delightful design concept. You get a selection of different sized rubber caps shipped with the product and choose the size that best fits.

Unfortunately Apple isn’t in the ‘InPods’ business though. Possibly for aesthetic reasons. Most likely because — and there’s more than a little irony here — an in-ear design wouldn’t be naturally roomy enough to fit all the stuff Siri needs to, y’know, fake intelligence.

Which means people like me with small ears are being passed over in favor of Apple’s voice assistant. So that’s AI: 1, non-‘standard’-sized human: 0. Which also, unsurprisingly, feels like shit.

I say ‘yet’ because if voice computing does become the next major computing interaction paradigm, as some believe — given how Internet connectivity is set to get baked into everything (and sticking screens everywhere would be a visual and usability nightmare; albeit microphones everywhere is a privacy nightmare… ) — then the minority of humans with petite earholes will be at a disadvantage vs those who can just pop in their smart, sensor-packed earbud and get on with telling their Internet-enabled surroundings to do their bidding.

Will parents of future generations of designer babies select for adequately capacious earholes so their child can pop an AI in? Let’s hope not.

We’re also not at the voice computing singularity yet. Outside the usual tech bubbles it remains a bit of a novel gimmick. Amazon has drummed up some interest with in-home smart speakers housing its own voice AI Alexa (a brand choice that has, incidentally, caused a verbal headache for actual humans called Alexa). Though its Echo smart speakers appear to mostly get used as expensive weather checkers and egg timers. Or else for playing music — a function that a standard speaker or smartphone will happily perform.

Certainly a voice AI is not something you need with you 24/7 yet. Prodding at a touchscreen remains the standard way of tapping into the power and convenience of mobile computing for the majority of consumers in developed markets.

The thing is, though, it still grates to be ignored. To be told — even indirectly — by one of the world’s wealthiest consumer technology companies that it doesn’t believe your ears exist.

Or, well, that it’s weighed up the sales calculations and decided it’s okay to drop a petite-holed minority on the cutting room floor. So that’s ‘ear meet AirPod’. Not ‘AirPod meet ear’ then.

But the underlying issue is much bigger than Apple’s (in my case) oversized earbuds. Its latest shiny set of AirPods are just an ill-fitting reminder of how many technology defaults simply don’t ‘fit’ the world as claimed.

Because if cash-rich Apple’s okay with promoting a universal default (that isn’t), think of all the less well resourced technology firms chasing scale for other single-sized, ill-fitting solutions. And all the problems flowing from attempts to mash ill-mapped technology onto society at large.

When it comes to wrong-sized physical kit I’ve had similar issues with standard office computing equipment and furniture. Products that seem — surprise, surprise! — to have been designed by default with a 6ft strapping guy in mind. Keyboards so long they end up gifting the smaller user RSI. Office chairs that deliver chronic back-pain as a service. Chunky mice that quickly rack the hand with pain. (Apple is a historical offender there too, I’m afraid.)

The fix for such ergonomic design failures is simply not to use the kit. To find a better-sized (often DIY) alternative that does ‘fit’.

But a DIY fix may not be an option when discrepancy is embedded at the software level — and where a system is being applied to you, rather than you the human wanting to augment yourself with a bit of tech, such as a pair of smart earbuds.

With software, embedded flaws and system design failures may also be harder to spot because it’s not necessarily immediately obvious there’s a problem. Oftentimes algorithmic bias isn’t visible until damage has been done.

And there’s no shortage of stories already about how software defaults configured for a biased median have ended up causing real-world harm. (See for example: ProPublica’s analysis of the COMPAS recidivism tool — software it found incorrectly judging black defendants more likely to reoffend than white defendants. So software amplifying existing racial prejudice.)
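At its core, an audit like ProPublica’s compares error rates across groups, for example the false positive rate: how often people who did not reoffend were nonetheless flagged as high risk. A bare-bones sketch, with hypothetical column names and made-up data:

```python
import pandas as pd

# Hypothetical audit data: model risk flags vs. actual outcomes, by group.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1,   1,   0,   0,   0,   1,   0,   0],
    "reoffended":          [0,   1,   1,   0,   0,   1,   1,   0],
})

# False positive rate per group: share of people who did NOT reoffend
# but were still flagged as high risk by the model.
did_not_reoffend = df[df["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("group")["predicted_high_risk"].mean()
print(fpr_by_group)  # a large gap between groups is a red flag worth auditing
```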

Of course AI makes this problem so much worse.

Which is why the emphasis must be on catching bias in the datasets — before there is a chance for prejudice or bias to be ‘systematized’ and get baked into algorithms that can do damage at scale.

The algorithms must also be explainable. And outcomes auditable. Transparency as disinfectant; not secret blackboxes stuffed with unknowable code.

Doing all this requires huge up-front thought and effort on system design, and an even bigger change of attitude. It also needs massive, massive attention to diversity. An industry-wide championing of humanity’s multifaceted and multi-sized reality — and to making sure that’s reflected in both data and design choices (and therefore the teams doing the design and dev work).

You could say what’s needed is a recognition that there’s never, ever a one-size-fits-all plug.

Indeed, that all algorithmic ‘solutions’ are abstractions that make compromises on accuracy and utility. And that those trade-offs can become viciously cutting knives that exclude, deny, disadvantage, delete and damage people at scale.

Expensive earbuds that won’t stay put is just a handy visual metaphor.

And while discussion about the risks and challenges of algorithmic bias has stepped up in recent years, as AI technologies have proliferated — with mainstream tech conferences actively debating how to “democratize AI” and bake diversity and ethics into system design via a development focus on principles like transparency, explainability, accountability and fairness — the industry has not even begun to fix its diversity problem.

It’s barely moved the needle on diversity. And its products continue to reflect that fundamental flaw.

Stanford just launched their Institute for Human-Centered Artificial Intelligence (@StanfordHAI) with great fanfare. The mission: “The creators and designers of AI must be broadly representative of humanity.”

121 faculty members listed.

Not a single faculty member is Black. pic.twitter.com/znCU6zAxui

— Chad Loder ❁ (@chadloder) March 21, 2019

Many — if not most — of the tech industry’s problems can be traced back to the fact that inadequately diverse teams are chasing scale while lacking the perspective to realize their system design is repurposing human harm as a de facto performance measure. (Although ‘lack of perspective’ is the charitable interpretation in certain cases; moral vacuum may be closer to the mark.)

As WWW creator Sir Tim Berners-Lee has pointed out, system design is now society design. That means engineers, coders and AI technologists are all working at the frontline of ethics. The design choices they make have the potential to impact, influence and shape the lives of millions and even billions of people.

And when you’re designing society a median mindset and limited perspective cannot ever be an acceptable foundation. It’s also a recipe for product failure down the line.

The current backlash against big tech shows that the stakes and the damage are very real when poorly designed technologies get dumped thoughtlessly on people.

Life is messy and complex. People won’t fit a platform that oversimplifies and overlooks. And if your excuse for scaling harm is ‘we just didn’t think of that’ you’ve failed at your job and should really be headed out the door.

Because the consequences for being excluded by flawed system design are also scaling and stepping up as platforms proliferate and more life-impacting decisions get automated. Harm is being squared. Even as the underlying industry drum hasn’t skipped a beat in its prediction that everything will be digitized.

Which means that horribly biased parole systems are just the tip of the ethical iceberg. Think of healthcare, social welfare, law enforcement, education, recruitment, transportation, construction, urban environments, farming, the military — the list of what will be digitized, and of manual or human-overseen processes that will get systematized and automated, goes on.

Software — runs the industry mantra — is eating the world. That means badly designed technology products will harm more and more people.

But responsibility for sociotechnical misfit can’t just be scaled away as so much ‘collateral damage’.

So while an ‘elite’ design team led by a famous white guy might be able to craft a pleasingly curved earbud, such an approach cannot and does not automagically translate into AirPods with perfect, universal fit.

It’s someone’s standard. It’s certainly not mine.

We can posit that a more diverse Apple design team might have been able to rethink the AirPod design so as not to exclude those with smaller ears. Or make a case to convince the powers that be in Cupertino to add another size choice. We can but speculate.

What’s clear is the future of technology design can’t be so stubborn.

It must be radically inclusive and incredibly sensitive. Human-centric. Not locked to damaging defaults in its haste to impose a limited set of ideas.

Above all, it needs a listening ear on the world.

Indifference to difference and a blindspot for diversity will find no future here.


CoParenter helps divorced parents settle disputes using AI and human mediation

Posted by | AI, android apps, Apps, artifical intelligence, artificial intelligence, children, divorce, iOS apps, kids, Mobile, parenting, parents, Startups | No Comments

A former judge and family law educator has teamed up with tech entrepreneurs to launch an app they hope will help divorced parents better manage their co-parenting disputes, communications, shared calendar and other decisions within a single platform. The app, called coParenter, aims to be more comprehensive than its competitors, while also leveraging a combination of AI technology and on-demand human interaction to help co-parents navigate high-conflict situations.

The idea for coParenter emerged from the personal experiences of co-founder Hon. Sherrill A. Ellsworth and of entrepreneur Jonathan Verk, who had been through a divorce himself.

Ellsworth had been a presiding judge of the Superior Court in Riverside County, California for 20 years and a family law educator for 10. During this time, she saw firsthand how families were destroyed by today’s legal system.

“I witnessed countless families torn apart as they slogged through the family law system. I saw how families would battle over the simplest of disagreements like where their child will go to school, what doctor they should see and what their diet should be — all matters that belong at home, not in a courtroom,” she says.

Ellsworth also notes that 80 percent of the disagreements presented in the courtroom didn’t even require legal intervention — but most of the cases she presided over involved parents asking the judge to make the co-parenting decision.

As she came to the end of her career, she began to realize the legal system just wasn’t built for these sorts of situations.

She then met Jonathan Verk, previously EVP Strategic Partnerships at Shazam and now coParenter CEO. Verk had just divorced and had an idea about how technology could help make the co-parenting process easier. He already had on board his longtime friend and serial entrepreneur Eric Weiss, now COO, to help build the system. But he needed someone with legal expertise.

That’s how coParenter was born.

The app, also built by CTO Niels Hansen, today exists alongside a whole host of other tools built for different aspects of the co-parenting process.

That includes those apps designed to document communication, like OurFamilyWizard, Talking Parents, AppClose and Divvito Messenger; those for sharing calendars, like Custody Connection, Custody X Exchange and Alimentor; and even those that offer a combination of features like WeParent, 2houses, SmartCoparent and Fayr, among others.

But the team at coParenter argues that their app covers all aspects of co-parenting: communication, documentation, calendar and schedule sharing, location-based tools for logging pickups and drop-offs, expense tracking and reimbursements, schedule change requests, tools for making decisions on day-to-day parenting choices like haircuts, diet, allowance and use of media, and more.

Notably, coParenter also offers a “solo mode” — meaning you can use the app even if the other co-parent refuses to do the same. This is a key feature that many rival apps lack.

However, the biggest differentiator is how coParenter puts a mediator of sorts in your pocket.

The app begins by using AI, machine learning and sentiment analysis technology to keep conversations civil. The tech will jump in to flag curse words, inflammatory phrases and offensive names to keep a heated conversation from escalating — much like a human mediator would do when trying to calm two warring parties.

When conversations take a bad turn, the app will pop up a warning message that asks the parent if they’re sure they want to use that term, allowing them time to pause and think. (If only social media platforms had built features like this!)
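To make the idea concrete, here is a rough sketch of how a message screen like the one described could work in principle: check an outgoing message against a watch list and, on a hit, return a confirmation prompt instead of sending it straight through. This is purely illustrative; the word list, function names and prompt text are invented, and coParenter’s actual system is described as using machine learning and sentiment analysis rather than a simple keyword match.

# A rough sketch (not coParenter's actual code) of a message screen:
# match a draft message against a watch list of inflammatory terms and,
# if anything is found, ask the sender to confirm before it goes out.
# The watch list and prompt text are hypothetical.

import re

INFLAMMATORY_TERMS = ["idiot", "liar", "never see the kids"]  # hypothetical list

def flagged_terms(message):
    """Return any watch-list terms that appear in the message (case-insensitive)."""
    return [t for t in INFLAMMATORY_TERMS
            if re.search(re.escape(t), message, re.IGNORECASE)]

def screen_message(message):
    """Either clear the message or return a warning prompt asking the sender to reconsider."""
    hits = flagged_terms(message)
    if hits:
        return "Are you sure you want to send this? It contains: " + ", ".join(hits)
    return "OK to send"

print(screen_message("You're a liar and you'll NEVER SEE THE KIDS again"))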


When parents need more assistance, they can opt to use the app instead of turning to lawyers.

The company offers on-demand access to these professionals through either a monthly subscription ($12.99/month for 20 credits, enough for two mediations) or a yearly one ($119.99/year for 240 credits). Both parents can subscribe together for $199.99/year, with each receiving 240 credits.

“Comparatively, an average hour with a lawyer costs between $250 and upwards of $500, just to file a single motion,” Ellsworth says.

These professionals are not mediators, but are licensed in their respective fields — typically family law attorneys, therapists, social workers or other retired bench officers with strong conflict resolution backgrounds. Ellsworth oversees the professionals to ensure they have the proper guidance.

All communication between the parent and the professional is considered confidential and not subject to admission as evidence, as the goal is to stay out of the courts. However, all the history and documentation elsewhere in the app can be used in court, if the parents do end up there.

The app has been in beta for nearly a year, and officially launched this January. To date, coParenter claims it has already helped to resolve more than 4,000 disputes and more than 2,000 co-parents have used it for scheduling. Indeed, 81 percent of the disputing parents resolved all their issues in the app, without needing a professional mediator or legal professional, the company says.

CoParenter is available on both iOS and Android.
