machine learning

This 16-game arcade for AIs tests their playing prowess

Posted by | artificial intelligence, Gaming, machine learning, OpenAI, science | No Comments

Figuring out just what an AI is good at is one of the hardest things about understanding it. To help determine this, OpenAI has designed a set of games that can help researchers tell whether their machine learning agent is actually learning basic skills or, what is equally likely, has figured out how to rig the system in its favor.

It’s one of those aspects of AI research that never fails to delight: the ways an agent will bend or break the rules in its endeavors to appear good at whatever the researchers are asking it to do. Cheating may be thinking outside the box, but it isn’t always welcome, and one way to check is to change the rules a bit and see if the system breaks down.

What the agent actually learned can be determined by seeing if those “skills” can be applied when it’s put into new circumstances where only some of its knowledge is relevant.

For instance, say you want to know whether an AI has learned to play a Mario-like game where it travels right and jumps over obstacles. You could switch things around so it has to walk left; you could change the order of the obstacles; or you could change the game entirely and have monsters appear that the AI has to shoot while it travels right instead.

If the agent has really learned something about playing a game like this, it should be able to pick up the modified versions of the game much quicker than something entirely new. This is called “generalizing” — applying existing knowledge to a new set of circumstances — and humans do it constantly.
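
To make the idea concrete, here is a minimal, hypothetical sketch of how a researcher might quantify generalization: count how much experience a pre-trained agent needs to reach a score threshold on a modified level, and compare it to an agent learning from scratch. The `make_env`, `trained_agent` and `fresh_agent` names below are placeholders, not part of any real library.

```python
# Hypothetical sketch: generalization measured as how quickly an agent adapts
# to a modified game compared with learning that game from scratch.
def episodes_to_threshold(agent, env, threshold, max_episodes=500):
    for episode in range(1, max_episodes + 1):
        obs, done, score = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(agent.act(obs))
            score += reward
        agent.update()  # whatever learning step the agent implements
        if score >= threshold:
            return episode
    return max_episodes

# Placeholder environment/agent constructors, for illustration only
adapted = episodes_to_threshold(trained_agent, make_env(walk_left=True), threshold=10.0)
scratch = episodes_to_threshold(fresh_agent, make_env(walk_left=True), threshold=10.0)
print("adaptation speed-up:", scratch / max(adapted, 1))
```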

OpenAI researchers have encountered this many times in their research, and in order to test generalizable AI knowledge at a basic level, they’ve designed a sort of AI arcade where an agent has to prove its mettle in a variety of games with varying overlap of gameplay concepts.

The 16 game environments they designed are similar to games we know and love, like Pac-Man, Super Mario Bros., Asteroids and so on. The difference is that these environments have been built from the ground up for AI play, with simplified controls, rewards and graphics.

Each taxes an AI’s abilities in a different way. For instance, in one game there may be no penalty for sitting still and observing the environment for a few seconds, while in another doing so may put the agent in danger. In some the AI must explore the environment; in others it may be focused on a single big boss spaceship. But they’re all made to be unmistakably different games, not unlike (though obviously a bit different from) what you might find available for an Atari or NES console.

Here’s the full list, as seen in the gif below from top to bottom, left to right:

  • Ninja: Climb a tower while avoiding bombs or destroying them with throwing stars.
  • Coinrun: Get the coin at the right side of the level while avoiding traps and monsters.
  • Plunder: Fire cannonballs from the bottom of the screen to hit enemy ships and avoid friendlies.
  • Caveflyer: Navigate caves using Asteroids-style controls, shooting enemies and avoiding obstacles.
  • Jumper: Open-world platformer with a double-jumping rabbit and compass pointing towards the goal.
  • Miner: Dig through dirt to collect diamonds, amid boulders that obey Atari-era gravity rules.
  • Maze: Navigate randomly generated mazes of various sizes.
  • Bigfish: Eat fish smaller than you to become the bigger fish, while avoiding being eaten yourself.
  • Chaser: Like Pac-Man, eat the dots and use power pellets strategically to eat enemies.
  • Starpilot: Gradius-like shmup focused on dodging and quick elimination of enemy ships.
  • Bossfight: 1 on 1 battle with a boss ship with randomly selected attacks and replenishing shields.
  • Heist: Navigate a maze with colored locks and corresponding keys.
  • Fruitbot: Ascend through levels while collecting fruit and avoiding non-fruit.
  • Dodgeball: Move around a room without touching walls, hitting others with balls and avoiding getting hit.
  • Climber: Climb a series of platforms collecting stars along the way and avoiding monsters.
  • Leaper: Frogger-type lane-crossing game with cars, logs, etc.

You can imagine an AI being created that excels at the grid-based games like Heist, Maze and Chaser, but falls apart in Jumper, Coinrun and Bossfight. Just like a human, because each involves a different set of skills. But there are shared skills as well: understanding that collisions between the player character and moving objects have consequences, or that certain parts of the play area are inaccessible. An AI that can generalize and adapt quickly will learn to dominate all these games in less time than one that doesn’t generalize well.

The set of games and methods for observing and rating agent performance in them is called the ProcGen benchmark, since the environments and enemy placements in the games are procedurally generated. You can read more about them, or learn to build your own little AI arcade, at the project’s GitHub page.
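
The environments ship as a pip package; a quick sketch of driving one with random actions, based on the usage documented in the repo at the time (details may have changed since), looks something like this:

```python
# pip install procgen gym  -- the environments then register under the "procgen" namespace
import gym

env = gym.make("procgen:procgen-coinrun-v0", num_levels=200, start_level=0)
obs = env.reset()
for _ in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())  # random agent
    if done:
        obs = env.reset()
env.close()
```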


Sonos acquires voice assistant startup Snips, potentially to build out on-device voice control

Posted by | Amazon, Assistant, computing, consumer electronics, Gadgets, Google, hardware, ikea, M&A, machine learning, play:3, smart speakers, Snips, Sonos, Sonos Beam, TC, virtual assistant | No Comments

Sonos revealed during its quarterly earnings report that it has acquired voice assistant startup Snips in a $37 million cash deal, Variety reported on Wednesday. Snips, which had been developing dedicated smart device assistants that can operate primarily locally, instead of relying on consistently round-tripping voice data to the cloud, could help Sonos set up a voice control option for its customers that has “privacy in mind” and is focused more narrowly on music control than on being a general-purpose smart assistant.

Sonos has worked with both Amazon and Google and their voice assistants, providing support for either on its more recent products, including the Sonos Beam and Sonos One smart speakers. Both assistants require an active cloud connection to work, however, and both companies have come under scrutiny from consumers and consumer protection groups recently for how they handle the data they collect from users. They’ve introduced additional controls to help users manage their own data sharing, but Sonos CEO Patrick Spence noted in an interview with Variety that one of the things the company can do in building its own voice features is develop them “with privacy in mind.”

Notably, Sonos has introduced a version of its Sonos One that leaves out the microphone hardware altogether — the Sonos One SL, introduced earlier this fall. The fact that Sonos saw opportunity in a mic-less second version of the Sonos One suggests there are likely a decent number of customers who like the option of a product that isn’t round-tripping any information with a remote server. Spence also seemed quick to point out that Sonos wouldn’t seek to compete with its voice assistant partners, however, since anything it builds will be focused much more specifically on music.

You can imagine how local machine learning could handle commands like skipping, pausing playback and adjusting volume (and maybe even more advanced features like playing back a saved playlist) without having to connect to any kind of cloud service. What Spence envisions seems to be something along those lines: a system that provides basic controls while still letting customers enable one of the more full-featured voice assistants if they prefer.
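
Neither Sonos nor Snips has published what such a controller would look like, but as a toy illustration, a narrowly scoped, music-only command matcher that runs entirely on the speaker could be as simple as the sketch below (the intents and phrase table are invented for the example):

```python
# Toy on-device intent matcher: maps a transcribed utterance to a playback command
# without sending anything to the cloud. Purely illustrative.
INTENTS = {
    "pause": ("pause", "stop the music", "hold on"),
    "resume": ("play", "resume", "keep going"),
    "skip": ("next", "skip", "skip this song"),
    "volume_up": ("louder", "turn it up", "volume up"),
    "volume_down": ("quieter", "turn it down", "volume down"),
}

def match_intent(utterance):
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None  # unknown: hand off to a full cloud assistant, if the user enabled one

print(match_intent("hey, turn it up a bit"))  # -> "volume_up"
```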

Meanwhile, partnerships continue to prove lucrative for Sonos: Its team-up with Ikea resulted in 30,000 speakers sold on launch day, the company also shared alongside its earnings. That’s a lot to move in one day, especially in this category.


Ghost wants to retrofit your car so it can drive itself on highways in 2020

Posted by | Android, Argo AI, Automation, automotive, autonomous car, AV, california, controller, Emerging-Technologies, founders fund, Ghost Locomotion, gps, IBM, Keith Rabois, Khosla Ventures, Lyft, machine learning, Mike Speiser, National Highway Traffic Safety Administration, Pure Storage, robotics, self-driving cars, sutter hill ventures, TC, technology, Tesla, transport, Transportation, Uber, unmanned ground vehicles, waymo, zoox | No Comments

A new autonomous vehicle company is on the streets — and unbeknownst to most, has been since 2017. Unlike the majority in this burgeoning industry, this new entrant isn’t trying to launch a robotaxi service or sell a self-driving system to suppliers and automakers. It’s not aiming for autonomous delivery, either.

Ghost Locomotion, which emerged Thursday from stealth with $63.7 million in investment from Keith Rabois at Founders Fund, Vinod Khosla at Khosla Ventures and Mike Speiser at Sutter Hill Ventures, is targeting your vehicle.

Ghost is developing a kit that will allow privately owned passenger vehicles to drive autonomously on highways. And the company says it will deliver in 2020. A price has not been set, but the company says it will be less than what Tesla charges for its Autopilot package that includes “full self-driving” or FSD. FSD currently costs $7,000.

This kit isn’t going to give a vehicle a superior advanced driver assistance system. Instead, it will let human drivers hand control of their vehicle over to a computer, allowing them to do other things, such as look at their phone or even doze off.

The idea might sound similar to what Comma.ai is working on, Tesla hopes to achieve or even the early business model of Cruise. Ghost CEO and co-founder John Hayes says what they’re doing is different.

A different approach

The biggest players in the industry — companies like Waymo, Cruise, Zoox and Argo AI — are trying to solve a really hard problem, which is driving in urban areas, Hayes told TechCrunch in a recent interview.

“It didn’t seem like anyone was actually trying to solve driving on the highways,” said Hayes, who previously founded Pure Storage in 2009. “At the time, we were told that this is so easy that surely the automakers will solve this any day now. And that really hasn’t happened.”

Hayes noted that automakers have continued to make progress in advanced driver assistance systems. The more advanced versions of these systems provide what the SAE describes as Level 2 automation, which means two primary control functions are automated. Tesla’s Autopilot system is a good example of this; when engaged, it automatically steers and has traffic-aware cruise control, which maintains the car’s speed in relation to surrounding traffic. But like all Level 2 systems, the driver is still in the loop.

Ghost wants to take the human out of the loop when they’re driving on highways.

“We’re taking, in some ways, a classic startup attitude to this, which is ‘what is the simplest product that we can perfect, that will put self driving in the hands of ordinary consumers?’ ” Hayes said. “And so we take people’s existing cars and we make them self-driving cars.”

The kit

Ghost is tackling that challenge with software and hardware.

The kit involves hardware like sensors and a computer that is installed in the trunk and connected to the controller area network (CAN) of the vehicle. The CAN bus is essentially the nervous system of the car and allows various parts to communicate with each other.

Vehicles must have a CAN bus and electronic steering to be able to use the kit.
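
As a rough illustration of what “connected to the CAN bus” means in practice, the sketch below uses the python-can library to read and write frames. The channel name, arbitration IDs and byte layout are placeholders; the real mappings are vehicle-specific and are exactly the kind of thing Ghost would have to work out per model.

```python
# Illustrative only: reading and writing CAN frames with python-can.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Listen for a hypothetical steering-angle frame (the ID and scaling are made up)
msg = bus.recv(timeout=1.0)
if msg is not None and msg.arbitration_id == 0x1A0:
    raw_angle = int.from_bytes(msg.data[0:2], "little", signed=True)
    print("steering angle (raw counts):", raw_angle)

# Send a hypothetical command frame back onto the bus
command = can.Message(arbitration_id=0x2B0, data=[0x10, 0x27, 0x00, 0x00], is_extended_id=False)
bus.send(command)
bus.shutdown()
```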

The camera sensors are distributed throughout the vehicle. Cameras are integrated into what looks like a license plate holder at the back of the vehicle, as well as another set that are embedded behind the rearview mirror.

A third device with cameras is attached to the frame around the window of the door (see below).

Initially, this kit will be an aftermarket product; the company is starting with the 20 most popular car brands and will expand from there.

Ghost intends to set up retail spaces where a car owner can see the product and have it installed. But eventually, Hayes said, he believes the kit will become part of the vehicle itself, much like GPS or satellite radio has evolved.

While hardware is the most visible piece of Ghost, the company’s 75 employees have dedicated much of their time to the driving algorithm. It’s here, Hayes says, that Ghost stands apart.

How Ghost is building a driver

Unlike nearly every other AV company, Ghost is not testing its self-driving system on public roads. There are 63 companies in California that have received permits from the Department of Motor Vehicles to test autonomous vehicle technology (always with a human safety driver behind the wheel) on public roads.

Ghost’s entire approach is based on an axiom that the human driver is fundamentally correct. It begins by collecting mass amounts of video data from kits that are installed on the cars of high-mileage drivers. Ghost then uses models to figure out what’s going on in the scene and combines that with other data, including how the person is driving by measuring the actions they take.

It doesn’t take long or much data to model ordinary driving: actions like staying in a lane, braking and changing lanes on a highway. But that doesn’t “solve” self-driving on highways, because the hard part is building a driver that can handle odd occurrences, such as swerving, and correct for those bad behaviors.

Ghost’s system uses machine learning to find more interesting scenarios in the reams of data it collects and builds training models based on them.
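
Ghost hasn’t published its pipeline, but the general pattern it describes (treat the human driver as ground truth, then surface the moments the current model finds most surprising) can be sketched with stand-in data like this:

```python
# Hedged sketch: behavioral cloning on logged human driving, then mining the
# frames where the model's prediction diverges most from what the human did.
# File names and features are placeholders, not Ghost's actual data format.
import numpy as np
from sklearn.linear_model import Ridge

states = np.load("drive_states.npy")     # (N, d) scene features per frame
actions = np.load("drive_actions.npy")   # (N,) e.g. steering angle chosen by the human

model = Ridge(alpha=1.0).fit(states, actions)
surprise = np.abs(model.predict(states) - actions)   # per-frame prediction error

hard_frames = np.argsort(surprise)[-1000:]            # the most "interesting" moments
np.save("next_training_batch.npy", hard_frames)
```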

The company’s kits are already installed on the cars of high-mileage drivers like Uber and Lyft drivers and commuters. Ghost has recruited dozens of drivers and plans to have its kits in hundreds of cars by the end of the year. By next year, Hayes says the kits will be in thousands of cars, all for the purpose of collecting data.

The background of the executive team, including co-founder and CTO Volkmar Uhlig, as well as the rest of their employees, provides some hints as to how they’re approaching the software and its integration with hardware.

Its employees are data scientists and engineers, not roboticists. A dive into their resumes on LinkedIn shows that not one comes from another autonomous vehicle company, which is unusual in this era of talent poaching.

For instance, Uhlig, who started his career at IBM Watson Research, co-founded Adello and was the architect behind the company’s programmatic media trading platform. Before that, he built Teza Technologies, a high-frequency trading platform. While earning his PhD in computer science he was part of a team that architected the L4 Pistachio microkernel, which is commercially deployed in more than 3 billion mobile Apple and Android devices.

If Ghost is able to validate its system — which Hayes says is baked into its entire approach — privately owned self-driving cars could be on the highways by next year. While the National Highway Traffic Safety Administration could potentially step in, Ghost’s approach, like Tesla’s, hits a sweet spot of non-regulation. It’s a space, Hayes notes, where the government has not yet chosen to regulate.


Microsoft’s HoloLens 2 starts shipping

Posted by | augmented reality, Australia, barcelona, Canada, China, Computer Vision, computing, France, Gadgets, Germany, hardware, head-mounted displays, holography, hololens 2, ireland, Japan, machine learning, Microsoft, microsoft hardware, Microsoft HoloLens, Microsoft Ignite 2019, mixed reality, New Zealand, United Kingdom, Windows 10 | No Comments

Earlier this year, at Mobile World Congress in Barcelona, Microsoft announced the second generation of its HoloLens augmented reality visor. Today, the $3,500 HoloLens 2 is going on sale in the United States, Japan, China, Germany, Canada, United Kingdom, Ireland, France, Australia and New Zealand, the same countries where it was previously available for pre-order.

Ahead of the launch, I got to spend some time with the latest model after a brief demo in Barcelona earlier this year. Users will immediately notice the larger field of view, which still doesn’t cover your full field of view, but offers a far better experience compared to the first version (where you often felt like you were looking at the virtual objects through a stamp-sized window).

The team also greatly enhanced the overall feel of wearing the device. It’s not light, at 1.3 pounds, but with the front visor that flips up and the new mounting system it is far more comfortable.

In regular use, existing users will also immediately notice the new gestures for opening the Start menu (this is Windows 10, after all). Instead of a “bloom” gesture, which often resulted in false positives, you now simply tap on the palm of your hand, where a Microsoft logo now appears when you look at it.

Eye tracking, too, has been greatly improved and works well, even over large distances, and the new machine learning model also does a far better job at tracking all of your fingers. All of this is powered by a lot of custom hardware, including Microsoft’s second-generation “holographic processing unit.”

Microsoft has also enhanced some of the cloud tools it built for HoloLens, including Azure Spatial Anchors, which allow for persistent holograms in a given space that anybody else who is using a holographic app can then see in the same spot.

Taken together, all of the changes result in a more comfortable and smarter device, with reduced latency when you look at the various objects around you and interact with them.


The 7 most important announcements from Microsoft Ignite

Posted by | Android, Assistant, AWS, Bing, chromium, cloud computing, cloud infrastructure, computing, Cortana, Developer, Enterprise, GitHub, Google, google cloud, linux, machine learning, Microsoft, Microsoft Ignite 2019, microsoft windows, San Francisco, Satya Nadella, TC, voice assistant, Windows 10, Windows Phone | No Comments

It’s Microsoft Ignite this week, the company’s premier event for IT professionals and decision-makers. But it’s not just about new tools for role-based access. Ignite is also very much a forward-looking conference that keeps the changing role of IT in mind. And while there isn’t a lot of consumer news at the event, the company does tend to make a few announcements for developers, as well.

This year’s Ignite was especially news-heavy. Ahead of the event, the company provided journalists and analysts with an 87-page document that lists all of the news items. If I counted correctly, there were about 175 separate announcements. Here are the top seven you really need to know about.

Azure Arc: you can now use Azure to manage resources anywhere, including on AWS and Google Cloud

What was announced: Microsoft was among the first of the big cloud vendors to bet big on hybrid deployments. With Arc, the company is taking this a step further. It will let enterprises use Azure to manage their resources across clouds — including those of competitors like AWS and Google Cloud. It’ll work for Windows and Linux Servers, as well as Kubernetes clusters, and also allows users to take some limited Azure data services with them to these platforms.

Why it matters: With Azure Stack, Microsoft already allowed businesses to bring many of Azure’s capabilities into their own data centers. But because it’s basically a local version of Azure, it only worked on a limited set of hardware. Arc doesn’t bring all of the Azure Services, but it gives enterprises a single platform to manage all of their resources across the large clouds and their own data centers. Virtually every major enterprise uses multiple clouds. Managing those environments is hard. So if that’s the case, Microsoft is essentially saying, let’s give them a tool to do so — and keep them in the Azure ecosystem. In many ways, that’s similar to Google’s Anthos, yet with an obvious Microsoft flavor, less reliance on Kubernetes and without the managed services piece.

Microsoft launches Project Cortex, a knowledge network for your company

What was announced: Project Cortex creates a knowledge network for your company. It uses machine learning to analyze all of the documents and contracts in your various repositories — including those of third-party partners — and then surfaces them in Microsoft apps like Outlook, Teams and its Office apps when appropriate. It’s the company’s first new commercial service since the launch of Teams.

Why it matters: Enterprises these days generate tons of documents and data, but it’s often spread across numerous repositories and is hard to find. With this new knowledge network, the company aims to surface this information proactively, but it also looks at who the people are who work on them and tries to help you find the subject matter experts when you’re working on a document about a given subject, for example.


Microsoft launched Endpoint Manager to modernize device management

What was announced: Microsoft is combining its ConfigMgr and Intune services that allow enterprises to manage the PCs, laptops, phones and tablets they issue to their employees under the Endpoint Manager brand. With that, it’s also launching a number of tools and recommendations to help companies modernize their deployment strategies. ConfigMgr users will now also get a license to Intune to allow them to move to cloud-based management.

Why it matters: In this world of BYOD, where every employee uses multiple devices, as well as constant attacks against employee machines, effectively managing these devices has become challenging for most IT departments. They often use a mix of different tools (ConfigMgr for PCs, for example, and Intune for cloud-based management of phones). Now, they can get a single view of their deployments with the Endpoint Manager, which Microsoft CEO Satya Nadella described as one of the most important announcements of the event, and ConfigMgr users will get an easy path to move to cloud-based device management thanks to the Intune license they now have access to.

Microsoft’s Chromium-based Edge browser gets new privacy features, will be generally available January 15

What was announced: Microsoft’s Chromium-based version of Edge will be generally available on January 15. The release candidate is available now. That’s the culmination of a lot of work from the Edge team, and with today’s release the company is also adding a number of new privacy features to Edge that, in combination with Bing and its newly enhanced InPrivate browsing mode, offer some capabilities that some of Microsoft’s rivals can’t yet match.

Why it matters: Browsers are interesting again. After years of focusing on speed, the new focus is now privacy, and that’s giving Microsoft a chance to gain users back from Chrome (though maybe not Firefox). At Ignite, Microsoft also stressed that Edge’s business users will get to benefit from a deep integration with its updated Bing engine, which can now surface business documents, too.


You can now try Microsoft’s web-based version of Visual Studio

What was announced: At Build earlier this year, Microsoft announced that it would soon launch a web-based version of its Visual Studio development environment, based on the work it did on the free Visual Studio Code editor. This experience, with deep integrations into the Microsoft-owned GitHub, is now live in a preview.

Why it matters: Microsoft has long said that it wants to meet developers where they are. While Visual Studio Online isn’t likely to replace the desktop-based IDE for most developers, it’s an easy way for them to make quick changes to code that lives in GitHub, for example, without having to set up their IDE locally. As long as they have a browser, developers will be able to get their work done.

Microsoft launches Power Virtual Agents, its no-code bot builder

What was announced: Power Virtual Agents is Microsoft’s new no-code/low-code tool for building chatbots. It leverages a lot of Azure’s machine learning smarts to let you create a chatbot with the help of a visual interface. In case you outgrow that and want to get to the actual code, you can always do so, too.

Why it matters: Chatbots aren’t exactly at the top of the hype cycle, but they do have lots of legitimate uses. Microsoft argues that a lot of early efforts were hampered by the fact that the developers were far removed from the user. With a visual tool, though, anybody can come in and build a chatbot — and a lot of those builders will have a far better understanding of what their users are looking for than a developer who is far removed from that business group.

Cortana wants to be your personal executive assistant and read your emails to you, too

What was announced: Cortana lives — and it now also has a male voice. But more importantly, Microsoft launched a few new focused Cortana-based experiences that show how the company is focusing on its voice assistant as a tool for productivity. In Outlook on iOS (with Android coming later), Cortana can now read you a summary of what’s in your inbox — and you can have a chat with it to flag emails, delete them or dictate answers. Cortana can now also send you a daily summary of your calendar appointments, important emails that need answers and suggest focus time for you to get actual work done that’s not email.

Why it matters: In this world of competing assistants, Microsoft is very much betting on productivity. Cortana didn’t work out as a consumer product, but the company believes there is a large (and lucrative) niche for an assistant that helps you get work done. Because Microsoft doesn’t have a lot of consumer data, but does have lots of data about your work, that’s probably a smart move.


SAN FRANCISCO, CA – APRIL 02: Microsoft CEO Satya Nadella walks in front of the new Cortana logo as he delivers a keynote address during the 2014 Microsoft Build developer conference on April 2, 2014 in San Francisco, California (Photo by Justin Sullivan/Getty Images)

Bonus: Microsoft agrees with you and thinks meetings are broken — and often it’s the broken meeting room that makes meetings even harder. To battle this, the company today launched Managed Meeting Rooms, which for $50 per room/month lets you delegate to Microsoft the monitoring and management of the technical infrastructure of your meeting rooms.


Bosch’s new ‘ear’ for the Space Station’s Astrobee robot will let it ‘hear’ potential mechanical issues

Posted by | aerospace, artificial intelligence, astrobotic, Bosch, Gadgets, international space station, machine learning, northrop grumman, outer space, Space, spaceflight, TC | No Comments

Bosch is set to launch a new AI-based sensor system to the International Space Station that could change the way astronauts and ground crew monitor the ISS’s continued healthy operation. The so-called “SoundSee” module will be roughly the size of a lunch box, and will make its way to the ISS via Northrop Grumman’s forthcoming CRS-12 resupply mission, which is currently set for a November 2 launch.

The SoundSee module combines microphones with machine learning to analyze the sounds it picks up from the station. It uses that audio to establish a healthy baseline, then continually compares new recordings against it, so that changes that could signal problems give advance notice of potential mechanical issues.
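
Bosch hasn’t detailed the model, but the basic baseline-then-compare idea can be illustrated in a few lines of NumPy: summarize each audio clip as per-band spectral energy, fit a mean and spread from “healthy” clips, and flag new clips that drift too far from that baseline.

```python
# Illustrative anomaly check, not Bosch's actual SoundSee model.
import numpy as np

def band_energies(clip, n_bands=32):
    """Average spectral power in n_bands frequency bands for one mono clip."""
    spectrum = np.abs(np.fft.rfft(clip)) ** 2
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

def fit_baseline(healthy_clips):
    feats = np.stack([band_energies(c) for c in healthy_clips])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def is_anomalous(clip, baseline_mean, baseline_std, z_threshold=4.0):
    z = np.abs((band_energies(clip) - baseline_mean) / baseline_std)
    return bool(z.max() > z_threshold)
```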

SoundSee will be mobile via installation on Astrobee, an autonomous floating cube-shaped robot that took its first totally self-guided flight in reduced gravity in June of this year. Astrobee’s roving role is a perfect fit for SoundSee, which Bosch developed in partnership with Astrobotic and NASA. The system will eventually provide information about how systems on the ISS are performing and when specific ones might need maintenance or repairs — ideally before anything becomes an issue.

The first autonomous flight of Astrobee took place in June, 2019 on the ISS

As with other things that Astrobee is designed to help with, SoundSee’s ultimate aim is to automate things that the astronaut crew of the ISS currently have to do manually. Already, SoundSee has been undergoing extensive ground testing here on Earth in a simulated environment similar to what it will experience on the ISS, but once in space, it’ll face the real test of its intended use scenario.


This prosthetic arm combines manual control with machine learning

Posted by | artificial intelligence, EPFL, Gadgets, hardware, machine learning, Prosthetics, robotics, science | No Comments

Prosthetic limbs are getting better every year, but the strength and precision they gain doesn’t always translate to easier or more effective use, as amputees have only a basic level of control over them. One promising avenue being investigated by Swiss researchers is having an AI take over where manual control leaves off.

To visualize the problem, imagine a person with their arm amputated above the elbow controlling a smart prosthetic limb. With sensors placed on their remaining muscles, plus other signals, they may fairly easily be able to lift the arm and direct it to a position where they can grab an object on a table.

But what happens next? The many muscles and tendons that would have controlled the fingers are gone, and with them the ability to sense exactly how the user wants to flex or extend their artificial digits. If all the user can do is signal a generic “grip” or “release,” that loses a huge amount of what a hand is actually good for.

Here’s where researchers from École polytechnique fédérale de Lausanne (EPFL) take over. Being limited to telling the hand to grip or release isn’t a problem if the hand knows what to do next — sort of like how our natural hands “automatically” find the best grip for an object without our needing to think about it. Robotics researchers have been working on automatic detection of grip methods for a long time, and it’s a perfect match for this situation.


Prosthesis users train a machine learning model by having it observe their muscle signals while they attempt various motions and grips as best they can without an actual hand to do it with. With that basic information, the robotic hand knows what type of grasp it should attempt, and by monitoring and maximizing the area of contact with the target object, it improvises the best grip in real time. It also provides drop resistance, adjusting its grip in less than half a second should the object start to slip.
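
The published method is more involved, but the first stage (mapping muscle signals to an intended grasp) can be sketched with an off-the-shelf classifier. The feature files, labels and window format below are assumptions for illustration, not the EPFL team’s pipeline.

```python
# Minimal sketch: learn intended grasp type from EMG feature windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_train = np.load("emg_feature_windows.npy")   # (n_windows, n_features), hypothetical
y_train = np.load("grasp_labels.npy")          # e.g. "pinch", "power", "tripod"

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def intended_grasp(window_features):
    """Grasp the hand should pre-shape for, given one window of EMG features."""
    return clf.predict(window_features.reshape(1, -1))[0]
```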

The result is that the object is grasped strongly but gently for as long as the user continues gripping it with, essentially, their will. When they’re done with the object, having taken a sip of coffee or moved a piece of fruit from a bowl to a plate, they “release” the object and the system senses this change in their muscles’ signals and does the same.

It’s reminiscent of another approach, by students in Microsoft’s Imagine Cup, in which the arm is equipped with a camera in the palm that gives it feedback on the object and how it ought to grip it.

It’s all still very experimental, and done with a third-party robotic arm and not particularly optimized software. But this “shared control” technique is promising and could very well be foundational to the next generation of smart prostheses. The team’s paper is published in the journal Nature Machine Intelligence.


Google Travel adds flight price notifications and a limited-time flight price guarantee

Posted by | Android, computing, Google, google search, google travel, Google-Maps, machine learning, Pricing, TC, Transportation, United States, world wide web | No Comments

Google is building out its travel product with more features to convince you to use it to book flights and plan trips directly, instead of having to go anywhere else. The company is adding more sophisticated pricing features, including historical price comparison for specific itineraries and notifications about when a price is likely to spike or when it’s at its absolute lowest. It’s also offering a pricing guarantee for bookings made in the next couple of weeks, so you’ll be refunded the difference if Google says a flight price won’t drop and it subsequently does.

For any flight booked through Google between August 13 and September 2 that originates in the U.S. (regardless of destination), and for which Google sends an alert saying the price is predicted to be at its lowest, the company will notify you if the fare does drop and refund you the difference between what you paid and the lowest actual fare.
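
The guarantee itself reduces to simple arithmetic, something like the toy sketch below, where the refund is the gap between what you paid on the flagged fare and the lowest fare observed afterward (this is an illustration of the described policy, not Google’s implementation):

```python
# Toy illustration of the refund math as described.
def refund_due(paid_price, fares_observed_after_booking):
    lowest_seen = min(fares_observed_after_booking, default=paid_price)
    return max(paid_price - lowest_seen, 0.0)

print(refund_due(420.0, [435.0, 410.0, 399.0]))  # -> 21.0
```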

It’s an attractive deal, and the limited-time offer is probably only even available because this is new and Google wants to make sure people feel absolutely comfortable trusting their predictions. The company likely has the most readily available cross-airline information about flight availability, route popularity and price in the world, however, backed by some of the most sophisticated machine learning on the planet, so it sounds like it’s probably a pretty safe bet for them to make.

Google Travel is also adding a number of features for after you’ve actually booked your trip — it’ll suggest next steps for planning, and then help you find the best neighborhoods, hotels, restaurants and things to do. Plus, reservations and other trip details will automatically carry over to the Google Maps app on your iOS or Android device.

Overall, it’s clear that Google is making an aggressive play to own your overall travel and trip planning — and it has the advantage of having more data, better engineering and a whole lot more in the way of design skills when compared to just about every dedicated travel booking company out there.


Week-in-Review: Alexa’s indefinite memory and NASA’s otherworldly plans for GPS

Posted by | 4th of July, AI assistant, alex wong, Amazon, Andrew Kortina, Android, andy rubin, appeals court, Apple, apple inc, artificial intelligence, Assistant, China, enterprise software, Getty-Images, gps, here, iPhone, machine learning, Online Music Stores, operating systems, Sam Lessin, social media, Speech Recognition, TC, Tim Cook, Twitter, United States, Venmo, voice assistant | No Comments

Hello, weekenders. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.

Last week, I talked about the cult of Ive and the degradation of Apple design. On Sunday night, The Wall Street Journal published a report on how Ive had been moving away from the company, to the dismay of many on the design team. Tim Cook didn’t like the report very much. Our EIC gave a little breakdown on the whole saga in a nice piece.

Apple sans Ive



The big story

This week was a tad restrained in its eventfulness; seems like the newsmakers went on 4th of July vacations a little early. Amazon made a bit of news this week when the company confirmed that Alexa request logs are kept indefinitely.

Last week, an Amazon public policy exec answered some questions about Alexa in a letter sent to U.S. Senator Coons. His office published the letter on its site a few days ago and most of the details aren’t all that surprising, but the first answer really sets the tone for how Amazon sees Alexa activity:

Q: How long does Amazon store the transcripts of user voice recordings?

A: We retain customers’ voice recordings and transcripts until the customer chooses to delete them.

What’s interesting about this isn’t just that we’re only now getting this level of straightforward dialogue from Amazon on how long data is kept if it isn’t specifically deleted; it also makes one wonder why it is useful or feasible for the company to keep it indefinitely. (This assumes they actually are keeping it indefinitely; it seems likely that most of it isn’t, and that by saying this they’re protecting themselves legally, but I’m just going off the letter.)

After several years of “Hey Alexa,” the company doesn’t seem all that close to figuring out what it is.

Alexa seems to be a shit solution for commerce, so why does Amazon have 10,000 people working on it, according to a report this week in The Information? All signs point to the voice assistant experiment falling short of its short-term ambitions, even if advances in AI will eventually push its utility forward.

Training data is a big deal across AI teams looking to educate models on data sets of relevant information. The company seems to say as much. “Our speech recognition and natural language understanding systems use machine learning to adapt to customers’ speech patterns and vocabulary, informed by the way customers use Alexa in the real world. To work well, machine learning systems need to be trained using real world data.”

The company says it doesn’t anonymize any of this data because it has to stay associated with a user’s account in order for them to delete it. I’d feel a lot better if Amazon just effectively anonymized the data in the first place and used on-device processing to build a profile on my voice. What I’m more afraid of is Amazon having such a detailed voiceprint of everyone who has ever used an Alexa device.

If effortless voice-based e-commerce isn’t really the product anymore, what is? The answer is always us, but I don’t like the idea of indefinitely leaving Amazon with my data until they figure out the answer.

Send me feedback on Twitter @lucasmtny or email lucas@techcrunch.com

On to the rest of the week’s news.

Trends of the week

Here are a few big news items from big companies, with green links to all the sweet, sweet added context:

  • NASA’s GPS moonshot
    The U.S. government really did us a solid inventing GPS, but NASA has some bigger ideas on the table for the positioning platform, namely, taking it to the Moon. It might be a little complicated, but, unsurprisingly, scientists have some ideas here. Read more.
  • Apple has your eyes
    Most of the iOS beta updates are bug fixes, but the latest change to iOS 13 brought a very strange surprise: it changes how the eyes of users on an iPhone XS or XS Max look to people on the other end of the call. Instead of appearing to look below the camera, some software wizardry will now make it look like you’re staring directly at the camera. Apple hasn’t detailed how this works, but here’s what we do know.
  • Trump is having a Twitter party
    Donald Trump’s administration declared a couple of months ago that it was launching an exploratory survey to try to gain a sense of conservative voices that had been silenced on social media. Now @realdonaldtrump is having a get-together and inviting his friends to chat about the issue. It’s a real who’s who; check out some of the people attending here.
(Photo of Amazon CEO and Blue Origin founder Jeff Bezos by Alex Wong/Getty Images)

GAFA Gaffes

How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:

  1. Amazon is responsible for what it sells:
    [Appeals court rules Amazon can be held liable for third-party products]
  2. Android co-creator gets additional allegations filed:
    [Newly unsealed court documents reveal additional allegations against Andy Rubin]

Extra Crunch

Our premium subscription service had another week of interesting deep dives. TechCrunch reporter Kate Clark did a great interview with the ex-Facebook, ex-Venmo founding team behind Fin and how they’re thinking about the consumerization of the enterprise.

Sam Lessin and Andrew Kortina on their voice assistant’s workplace pivot

“…The thing is, developing an AI assistant capable of booking flights, arranging trips, teaching users how to play poker, identifying places to purchase specific items for a birthday party and answering wide-ranging zany questions like “can you look up a place where I can milk a goat?” requires a whole lot more human power than one might think. Capital-intensive and hard-to-scale, an app for “instantly offloading” chores wasn’t the best business. Neither Lessin nor Kortina will admit to failure, but Fin‘s excursion into B2B enterprise software eight months ago suggests the assistant technology wasn’t a billion-dollar idea.…”

Here are some of our other top reads this week for premium subscribers. This week, we talked a bit about asking for money and the future of China’s favorite tech platform:

Want more TechCrunch newsletters? Sign up here.


At last, a camera app that automatically removes all people from your photos

Posted by | Apps, Art, artificial intelligence, machine learning, Mobile, Photography | No Comments

As a misanthrope living in a vibrant city, I’m never short of things to complain about. And in particular the problem of people crowding into my photos, whatever I happen to shoot, is a persistent one. That won’t be an issue any more with Bye Bye Camera, an app that simply removes any humans from photos you take. Finally!

It’s an art project, though a practical one (art can be practical!), by Do Something Good. The collective, and in particular the artist damjanski, has worked on a variety of playful takes on the digital era, such as a CAPTCHA that excludes humans and a dialogue between two Google conversational agents.

The new app, damjanski told Artnome, is “an app for the post-human era… The app takes out the vanity of any selfie and also the person.” Fortunately, it leaves dogs intact.

Of course it’s all done in a self-conscious, arty way — are humans necessary? What defines one? What will the world be like without us? You can ponder those questions or not; fortunately, the app doesn’t require it of you.

Bye Bye Camera works using some of the AI tools that are already out there for the taking in the world of research. It uses YOLO (You Only Look Once), a very efficient object classifier that can quickly denote the outline of a person, and then a separate tool that performs what Adobe has called “context-aware fill.” Between the two of them a person is reliably — if a bit crudely — deleted from any picture you take and credibly filled in by background.
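
For a sense of how such a pipeline fits together, here’s a crude stand-in using OpenCV: a HOG-based person detector (instead of YOLO, purely to keep the example self-contained) plus basic inpainting in place of a proper context-aware fill. It will look far rougher than the app’s output.

```python
# Crude approximation of the detect-then-fill pipeline, for illustration only.
import cv2
import numpy as np

img = cv2.imread("street.jpg")

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
boxes, _ = hog.detectMultiScale(img, winStride=(8, 8))

mask = np.zeros(img.shape[:2], dtype=np.uint8)
for (x, y, w, h) in boxes:
    mask[y:y + h, x:x + w] = 255   # mark every detected person for removal

result = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)  # radius 5, Telea inpainting
cv2.imwrite("street_no_people.jpg", result)
```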

It’s a fun project (though the results are a mixed bag), and it speaks not only to the issues it supposedly raises about the nature of humanity, but also to the accessibility of tools under the broad category of “AI” and what they can and should be used for.

You can download Bye Bye Camera for $3 on the iOS App Store.
