
OpenAI Five crushes Dota2 world champs, and soon you can lose to it too


Dota2 is one of the most popular, and complex, online games in the world, but an AI has once again shown itself to surpass human skill. In matches over the weekend, OpenAI’s “Five” system defeated two pro teams soundly, and soon you’ll be able to test your own mettle against — or alongside — the ruthless agent.

In a blog post, OpenAI detailed how its game-playing agent has progressed from its younger self — it seems wrong to say previous version, since it really is the same extensive neural network as many months ago, but with much more training.

The version that played at Dota2’s premier tournament, The International, gets schooled by the new version 99 percent of the time. And it’s all down to more practice:

In total, the current version of OpenAI Five has consumed 800 petaflop/s-days and experienced about 45,000 years of Dota self-play over 10 realtime months (up from about 10,000 years over 1.5 realtime months as of The International), for an average of 250 years of simulated experience per day.

To the best of our knowledge, this is the first time an RL [reinforcement learning] agent has been trained using such a long-lived training run.

One is tempted to cry foul at a data center-spanning intelligence being allowed to train for 600 human lifespans. But really it’s more of a compliment to human cognition that we can accomplish the same thing with a handful of months or years, while still finding time to eat, sleep, socialize (well, some of us) and so on.

Dota2 is an intense and complex game with some rigid rules but a huge amount of fluidity, and representing it in a way that makes sense to a computer isn’t easy (which likely accounts partly for the volume of training required). Controlling five “heroes” at once on a large map with so much going on at any given time is enough to tax a team of five human brains. But teams work best when they’re acting as a single unit, which is more or less what Five was doing from the start. Rather than five heroes, it was more like five fingers of a hand to the AI.

Interestingly, OpenAI also discovered lately that Five is capable of playing cooperatively with humans as well as in competition. This was far from a sure thing — the whole system might have frozen up or misbehaved if it had a person in there gumming up the gears. But in fact it works pretty well.

You can watch the replays or get the pro commentary on the games if you want to hear exactly how the AI won (I’ve played but I’m far from good. I’m not even bad yet). I understand they had some interesting buy-back tactics and were very aggressive. Or, if you’re feeling masochistic, you can take on the AI yourself in a limited-time event later this week.

We’re launching OpenAI Five Arena, a public experiment where we’ll let anyone play OpenAI Five in both competitive and cooperative modes. We’d known that our 1v1 bot would be exploitable through clever strategies; we don’t know to what extent the same is true of OpenAI Five, but we’re excited to invite the community to help us find out!

Although a match against pros would mean all-out war using traditional tactics, low-stakes matches against curious players might reveal interesting patterns or exploits that the AI’s creators aren’t aware of. Results will be posted publicly, so be ready for that.

You’ll need to sign up ahead of time, though: The system will only be available to play from Thursday night at 6 PM to the very end of Sunday, Pacific time. They need to reserve the requisite amount of computing resources to run the thing, so sign up now if you want to be sure to get a spot.

OpenAI’s team writes that this is the last we’ll hear of this particular iteration of the system; it’s done competing (at least in tournaments) and will be described more thoroughly in a paper soon. They’ll continue to work in the Dota2 environment because it’s interesting, but what exactly the goals, means or limitations will be are yet to be announced.


Talk all things robotics and AI with TechCrunch writers


This Thursday, we’ll be hosting our third annual Robotics + AI TechCrunch Sessions event at UC Berkeley’s Zellerbach Hall. The day is packed start-to-finish with intimate discussions on the state of robotics and deep learning with key founders, investors, researchers and technologists.

The event will dig into recent developments in robotics and AI, which startups and companies are driving the market’s growth and how the evolution of these technologies may ultimately play out. In preparation for our event, TechCrunch’s Brian Heater spent time over the last several months visiting some of the top robotics companies in the country. Brian will be on the ground at the event alongside Lucas Matney. On Friday at 11:00 am PT, Brian and Lucas will be sharing with Extra Crunch members (on a conference call) what they saw and what excited them most.

Tune in to find out what you might have missed and to ask Brian and Lucas anything else about robotics, AI or hardware. And want to attend the event in Berkeley this week? It’s not too late to get tickets.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.


Israel’s Beresheet spacecraft is lost during historic lunar landing attempt


Israel’s SpaceIL almost made history today as its Beresheet spacecraft came within an ace of landing on the surface of the Moon, but suffered a last-minute failure during descent. Israel missed out on the chance to be the fourth country to make a controlled lunar landing, but getting 99 percent of the way there is still an extraordinary achievement for private spaceflight.

Beresheet (“Genesis”) launched in February as a secondary payload aboard a SpaceX Falcon 9 rocket, and after a month and a half spiraling outward, entered lunar orbit a week ago. Today’s final maneuver was an engine burn meant to bring down its velocity relative to the Moon, then brake to a soft landing in the Mare Serenitatis, or Sea of Serenity.

Everything was working fine up until the final moments, as is often the case in space. The craft, having made it perfectly to its intended point of descent, determined that all systems were ready and the landing process would go ahead as planned.

The team lost telemetry for a bit, and had to reset the craft to get the main engine back online… and then communication dropped when the lander was only a handful of kilometers from the surface. The “selfie” image the lander transmitted was taken from 22 km up, just a few minutes before that. The spacecraft was declared lost shortly afterwards.

Clearly disappointed but also exhilarated, the team quickly recovered its composure, saying “the achievement of getting to where we got is tremendous and we can be proud,” and of course, “if at first you don’t succeed… try, try again.”

The project began as an attempt to claim the Google Lunar Xprize, announced more than a decade ago, but which proved too difficult for teams to complete in the time frame specified. Although the challenge and its prize money lapsed, Israel’s SpaceIL team continued its work, bolstered by the support of Israel Aerospace Industries, the country’s state-owned aerospace company.

It’s worth noting that although Beresheet did enjoy considerable government support in this way, it’s a far cry from any other large-scale government-run mission, and can safely be considered “private” for all intents and purposes. The ~50-person team and $200 million budget are laughably small compared to practically any serious mission, let alone a lunar landing.

I spoke with Xprize founder Peter Diamandis and CEO Anousheh Ansari just before the landing attempt. Both were extremely excited and made it clear that the mission was already considered a huge success.

“What I’m seeing here is an incredible ‘Who’s Who’ from science, education and government who have gathered to watch this miracle take place,” Diamandis said. “We launched this competition now 11 years ago to inspire and educate engineers, and despite the fact that it ran out of time it has achieved 100 percent of its goal. Even if it doesn’t make it onto the ground fully intact it has ignited a level of electricity and excitement that reminds me of the Ansari Xprize 15 years ago.”

He’s not the only one. Ansari, who funded the famous spaceflight Xprize that bore her name, and who has herself visited space as one of the first tourist-astronauts aboard the International Space Station, felt a similar vibe.

“It’s an amazing moment, bringing so many great memories up,” she told me. “It reminds me of when we were all out in the Mojave waiting for the launch of Spaceship One.”

Ansari emphasized the feeling the landing evoked of moving forward as a people.

“Imagine, over the last 50 years only 500 people out of seven billion have been to space — that number will be thousands soon,” she said. “We believe there’s so much more that can be done in this area of technology, a lot of real business opportunities that benefit civilization but also humanity.”

Congratulations to the SpaceIL team for their achievement, and here’s hoping the next attempt makes it all the way down.


Flying taxis could be more efficient than gas and electric cars on long-distance trips


Flying cars definitely sound cool, but whether they’re actually a good idea is up for debate. Fortunately they do seem to have some surefire benefits, among which you can now count improved efficiency — in theory, and on long trips. But it’s something!

Air travel takes an enormous amount of energy, since you have to lift something heavy into the air and keep it there for a good while. This is often faster but rarely more efficient than ground transportation, where the road bears the vehicle’s weight.

Of course, once an aircraft gets up to altitude, it cruises at high speed with little friction to contend with, and whether you’re going 100 feet or 50 miles you only have to take off once. So University of Michigan researchers thought there might be a sweet spot where taking a flying car might actually save energy. Turns out there is… kind of. The team published their results today in Nature Communications.

The U-M engineers made an efficiency model for both ground transport and for electric vertical take-off and landing (VTOL) aircraft, based on specs from aerospace companies working on them.

“Our model represents general trends in the VTOL space and uses parameters from multiple studies and aircraft designs to specify weight, lift-to-drag ratio and battery-specific energy,” said study co-author Noah Furbush in a U-M news release.

They looked at how these theoretical vehicles performed when carrying different numbers of people over different distances, comparing the energy consumed.

As you might imagine, flying isn’t very practical for going a mile or two, since you use up all that energy getting to altitude and then have to come right back down. But at the 100-kilometer mark (about 62 miles) things look a little different.
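To make that trade-off concrete, here is a toy break-even calculation. It is not the U-M model; the fixed climb energy and the per-kilometer rates below are illustrative assumptions, chosen only so the crossover lands near the roughly 40 km the study reports.

# Toy break-even model, not the model from the Nature Communications paper.
# The fixed climb/hover energy and per-km rates are illustrative assumptions.

E_FIXED_KWH = 10.0              # assumed energy spent on vertical takeoff, climb and landing
VTOL_CRUISE_KWH_PER_KM = 0.25   # assumed cruise consumption once at altitude
CAR_KWH_PER_KM = 0.50           # assumed gasoline car, expressed as kWh of fuel energy

def vtol_energy(km):
    """Total trip energy for the flying taxi: fixed climb cost plus cruise."""
    return E_FIXED_KWH + VTOL_CRUISE_KWH_PER_KM * km

def car_energy(km):
    """Total trip energy for the ground car: no fixed cost, higher per-km rate."""
    return CAR_KWH_PER_KM * km

break_even_km = E_FIXED_KWH / (CAR_KWH_PER_KM - VTOL_CRUISE_KWH_PER_KM)

for km in (2, 20, 100):
    print(f"{km:>3} km: VTOL {vtol_energy(km):5.1f} kWh vs. car {car_energy(km):5.1f} kWh")
print(f"break-even at ~{break_even_km:.0f} km with these assumed numbers")

With these made-up numbers the flying taxi only pulls ahead once its fixed climb cost has been spread over a few dozen kilometers, which is the shape of the result described below.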

For a 100 km trip, a single passenger in a flying car uses 35 percent less energy than a gas-powered car, but still 28 percent more than an electric vehicle. In fact, the flying car is better than the gas one starting at around 40 km. But it never really catches up with the EVs for efficiency, though it gets close. Do you like charts?

ICEV: Internal combustion engine vehicle; VTOL: Vertical takeoff and landing; BEV: Battery electric vehicle. The vertical axis is emissions.

To make it better, they had to juice the numbers a bit, making the assumption that flying taxis would be more likely to operate at full capacity, with a pilot and three passengers, while ground vehicles were unlikely to have their average occupancy of 1.5 people change much. With that in mind, they found that a 100 km trip with three passengers just barely beats the per-person efficiency of EVs.

That may seem like a bit of a thin victory, but keep in mind that the flying car would be making the trip in likely a quarter of the time, unaffected by traffic and other issues. Plus there’s the view.

It’s all theoretical right now, naturally, but studies like this help companies looking to get into this business decide how their service will be organized and marketed. Reality might look a little different from theory, but I’ll take any reality with flying cars.


MIT’s ‘cyber-agriculture’ optimizes basil flavors


The days when you could simply grow a basil plant from a seed by placing it on your windowsill and watering it regularly are gone — there’s no point now that machine learning-optimized hydroponic “cyber-agriculture” has produced a superior plant with more robust flavors. The future of pesto is here.

This research didn’t come out of a desire to improve sauces, however. It’s a study from MIT’s Media Lab and the University of Texas at Austin aimed at understanding how to both improve and automate farming.

In the study, published today in PLOS ONE, the question being asked was whether a growing environment could find and execute a growing strategy that resulted in a given goal — in this case, basil with stronger flavors.

Such a task is one with numerous variables to modify — soil type, plant characteristics, watering frequency and volume, lighting and so on — and a measurable outcome: concentration of flavor-producing molecules. That means it’s a natural fit for a machine learning model, which from that variety of inputs can make a prediction as to which will produce the best output.

“We’re really interested in building networked tools that can take a plant’s experience, its phenotype, the set of stresses it encounters, and its genetics, and digitize that to allow us to understand the plant-environment interaction,” explained MIT’s Caleb Harper in a news release. The better you understand those interactions, the better you can design the plant’s lifecycle, perhaps increasing yield, improving flavor or reducing waste.

In this case the team limited the machine learning model to analyzing and switching up the type and duration of light experienced by the plants, with the goal of increasing flavor concentration.

A first round of nine plants had light regimens designed by hand based on prior knowledge of what basil generally likes. The plants were harvested and analyzed. Then a simple model was used to make similar but slightly tweaked regimens that took the results of the first round into account. Then a third, more sophisticated model was created from the data and given significantly more leeway in its ability to recommend changes to the environment.
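As a loose illustration of that fit-then-propose loop (and explicitly not the study’s actual pipeline), here is a minimal sketch. The two-variable encoding of a light regimen, the random-forest surrogate and every number in it are assumptions made up for the example.

# Minimal surrogate-model loop: fit a model on the regimens tried so far, then
# let it propose the next round. Illustrative only; not the MIT/UT Austin setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Round 1: hand-designed regimens encoded as (hours of white light, hours of UV)
# with made-up flavor scores standing in for the measured molecule concentrations.
X = np.array([[16, 0], [16, 2], [18, 2], [18, 4], [20, 4],
              [20, 6], [22, 6], [22, 8], [24, 8]], dtype=float)
y = np.array([1.0, 1.1, 1.2, 1.4, 1.5, 1.7, 1.8, 2.0, 2.1])

for round_id in (2, 3):
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Score every candidate regimen in the allowed range and pick the best predicted one.
    candidates = np.array([[w, uv] for w in range(12, 25) for uv in range(0, 13)], dtype=float)
    best = candidates[np.argmax(surrogate.predict(candidates))]
    print(f"round {round_id}: grow the regimen {best} next")

    # In the real experiment the proposed regimen would be grown and measured;
    # here a noisy prediction stands in for that measurement so the loop can continue.
    X = np.vstack([X, best])
    y = np.append(y, surrogate.predict([best])[0] + rng.normal(0, 0.05))

The real system had far more variables to play with, but the pattern (fit, propose, grow, measure, refit) is the same one described above.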

To the researchers’ surprise, the model recommended a highly extreme measure: Keep the plant’s UV lights on 24/7.

Naturally this isn’t how basil grows in the wild, since, as you may know, there are few places where the sun shines all day long and all night strong. And the Arctic and Antarctic, while fascinating ecosystems, aren’t known for their flavorful herbs and spices.

Nevertheless, the “recipe” of keeping the lights on was followed (it was an experiment, after all), and incredibly, this produced a massive increase in flavor molecules, doubling the amount found in control plants.

“You couldn’t have discovered this any other way,” said co-author John de la Parra. “Unless you’re in Antarctica, there isn’t a 24-hour photoperiod to test in the real world. You had to have artificial circumstances in order to discover that.”

But while a more flavorful basil is a welcome result, it’s not really the point. The team is happier that the method yielded good data, validating the platform and software they used.

“You can see this paper as the opening shot for many different things that can be applied, and it’s an exhibition of the power of the tools that we’ve built so far,” said de la Parra. “With systems like ours, we can vastly increase the amount of knowledge that can be gained much more quickly.”

If we’re going to feed the world, it’s not going to be done with amber waves of grain, i.e. with traditional farming methods. Vertical, hydroponic, computer-optimized — we’ll need all these advances and more to bring food production into the 21st century.


Apple sells wireless charging AirPods, cancels charger days later


“Works with AirPower mat”. Apparently not. It looks to me like Apple doesn’t treat customers with the same “high standard” of care it apparently reserves for its hardware quality. Nine days after launching its $199 wireless charging AirPods headphones that touted compatibility with the forthcoming Apple AirPower inductive charger mat, Apple has just scrapped AirPower entirely. It’s an uncharacteristically sloppy move for the “it just works” company. This time it didn’t.

Given how soon after the launch this cancellation came, there is a question about whether Apple knew AirPower was viable before launching the new AirPods wireless charging case on March 20th. Failing to be transparent about that is an abuse of customer trust. That’s especially damaging for a company constantly asking us to order newly announced products we haven’t touched when there’s always another iteration around the corner. It should really find some way to make it up to people, especially given it has $245 billion in cash on hand.

TechCrunch broke the news of AirPower’s demise. “After much effort, we’ve concluded AirPower will not achieve our high standards and we have cancelled the project. We apologize to those customers who were looking forward to this launch. We continue to believe that the future is wireless and are committed to push the wireless experience forward,” said Dan Riccio, Apple’s senior vice president of Hardware Engineering in an emailed statement today.

That comes as a pretty sour surprise for people who bought the $199 wireless charging AirPods that mention AirPower compatibility or the $79 standalone charging case with a full-on diagram of how to use AirPower drawn on the box.

Apple first announced the AirPower mat in 2017 saying it would arrive the next year along with a wireless charging case for AirPods. 2018 came and went. But when the new AirPods launched March 20th with no mention of AirPower in the press release, suspicions mounted. Now we know that issues with production, reportedly due to overheating, have caused it to be canceled. Apple decided not to ship what could become the next Galaxy Note 7 fire hazard.

The new AirPods with wireless charging case even had a diagram of AirPower on the box. Image via Ryan Jones

There are plenty of other charging mats that work with AirPods, and maybe Apple will release a future iPhone or MacBook that can wirelessly pass power to the pods. But anyone hoping to avoid janky third-party brands and keep it in the Apple family is out of luck for now.

Thankfully, some who bought the new AirPods with wireless charging case are still eligible for a refund. But typically if you get an Apple product personalized with an engraving (I had my phone number laser-etched on my AirPods since I constantly lose them), there are no refunds allowed. And then there are all the people who bought Apple Watches, or iPhone 8 or later models, who were anxiously awaiting AirPower. We’ve asked Apple if it will grant any return exceptions.

Combined with an apology for the disastrously fragile keyboards on newer MacBooks, an apology over the Mac Pro, an apology for handling the iPhone slowdown messaging wrong, Apple’s recent vaporware services event where it announced Apple TV+ and Arcade despite them being months from launch, and now an AirPower apology and cancellation, the world’s cash-richest company looks like a mess. Apple risks looking as unreliable as Android if it can’t get its act together.


Mars helicopter bound for the Red Planet takes to the air for the first time


The Mars 2020 mission is on track for launch next year, and nestled inside the new rover heading that direction is a high-tech helicopter designed to fly in the planet’s nearly non-existent atmosphere. The actual aircraft that will fly on the Martian surface just took its first flight, and its engineers are over the moon.

“The next time we fly, we fly on Mars,” said MiMi Aung, who manages the project at JPL, in a news release. An engineering model that was very close to final has over an hour of time in the air, but these two brief test flights were the first and last time the tiny craft will take flight until it does so on the distant planet (not counting its “flight” during launch).

“Watching our helicopter go through its paces in the chamber, I couldn’t help but think about the historic vehicles that have been in there in the past,” she continued. “The chamber hosted missions from the Ranger Moon probes to the Voyagers to Cassini, and every Mars rover ever flown. To see our helicopter in there reminded me we are on our way to making a little chunk of space history as well.”

Artist’s impression of how the helicopter will look when it’s flying on Mars

A helicopter flying on Mars is much like a helicopter flying on Earth, except of course for the slight differences that the other planet has about a third the gravity and 99 percent less air. It’s more like flying at 100,000 feet, Aung suggested.

It has its own solar panel so it can explore more or less on its own

The test rig they set up not only produces a near-vacuum, replacing the air with a thin, Mars-esque CO2 mix, but a “gravity offload” system simulates lower gravity by giving the helicopter a slight lift via a cable.

It flew at a whopping two inches of altitude for a total of a minute in two tests, which was enough to show the team that the craft (with all its 1,500 parts and four pounds) was ready to package up and send to the Red Planet.
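For a sense of what that offload cable actually has to do, it supplies the difference between the craft’s Earth weight and its Mars weight. A quick back-of-the-envelope calculation using the article’s four-pound figure:

# Rough estimate of the constant upward force the gravity-offload cable must
# supply so a roughly four-pound helicopter feels Mars-level gravity on Earth.
G_EARTH = 9.81         # m/s^2
G_MARS = 3.71          # m/s^2
MASS_KG = 4 * 0.4536   # the article's "four pounds" converted to kilograms

offload_force_n = MASS_KG * (G_EARTH - G_MARS)
mars_weight_n = MASS_KG * G_MARS

print(f"cable offloads ~{offload_force_n:.1f} N, leaving the rotors to lift "
      f"a Mars weight of only ~{mars_weight_n:.1f} N")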

“It was a heck of a first flight,” said tester Teddy Tzanetos. “The gravity offload system performed perfectly, just like our helicopter. We only required a 2-inch hover to obtain all the data sets needed to confirm that our Mars helicopter flies autonomously as designed in a thin Mars-like atmosphere; there was no need to go higher.”

A few months after the Mars 2020 rover has landed, the helicopter will detach and do a few test flights of up to 90 seconds. Those will be the first heavier-than-air flights on another planet — powered flight, in other words, rather than, say, a balloon filled with gaseous hydrogen.

The craft will operate mostly autonomously, since the half-hour round trip for commands would be far too long for an Earth-based pilot to operate it. It has its own solar cells and batteries, plus little landing feet, and will attempt flights of increasing distance from the rover over a 30-day period. It should go about three meters in the air and may eventually get hundreds of meters away from its partner.

Mars 2020 is estimated to be ready to launch next summer, arriving at its destination early in 2021. Of course, in the meantime, we’ve still got Curiosity and InSight up there, so if you want the latest from Mars, you’ve got plenty of options to choose from.


Ocean drone startup merger spawns Sofar, the DJI of the sea


What lies beneath the murky depths? SolarCity co-founder Peter Rive wants to help you and the scientific community find out. He’s just led a $7 million Series A for Sofar Ocean Technologies, a new startup formed from a merger he orchestrated between underwater drone maker OpenROV and sea sensor developer Spoondrift. Together, they’re pairing the 1080p Trident drone with the solar-powered Spotter sensor to let you collect data above and below the surface. They can help you shoot awesome video footage, track waves and weather, scout fishing and diving spots, inspect boats or infrastructure for damage, monitor aquaculture sites or catch smugglers.

Sofar’s Trident drone (left) and Spotter sensor (right)

“Aerial drones give us a different perspective of something we know pretty well. Ocean drones give us a view at something we don’t really know at all,” former Spoondrift and now Sofar CEO Tim Janssen tells me. “The Trident drone was created for field usage by scientists and is now usable by anyone. This is pushing the barrier towards the unknown.”

But while Rive has a soft spot for the ecological potential of DIY ocean exploration, the sea is crowded with competing drones. There are more expensive professional research-focused devices like the Saildrone, DeepTrekker and SeaOtter-2, as well as plenty of consumer-level devices like the $800 Robosea Biki, $1,000 Fathom ONE and $5,000 iBubble. The $1,700 Sofar Trident, which requires a cord to a surface buoy to power its three hours of dive time and two-meters-per-second top speed, sits in the middle of the pack, but Sofar co-founder David Lang thinks Trident can win with simplicity, robustness and durability. The question is whether Sofar can become the DJI of the water, leading the space, or if it will become just another commoditized hardware maker drowning in knock-offs.

From left: Peter Rive (chairman of Sofar), David Lang (co-founder of OpenROV) and Tim Janssen (co-founder and CEO of Sofar)

Spoondrift launched in 2016 and raised $350,000 to build affordable ocean sensors that can produce climate-tracking data. “These buoys (Spotters) are surprisingly easy to deploy, very light and easy to handle, and can be lowered in the water by hand using a line. As a result, you can deploy them in almost any kind of conditions,” says Dr. Aitana Forcén-Vázquez of MetOcean Solutions.

OpenROV (it stands for Remotely Operated Vehicle) started seven years ago and raised $1.3 million in funding from True Ventures and National Geographic, which was also one of its biggest Trident buyers. “Everyone who has a boat should have an underwater drone for hull inspection. Any dock should have its own weather station with wind and weather sensors,” Sofar’s new chairman Rive declares.

Spotter could unlock data about the ocean at scale

Sofar will need to scale to accomplish Rive’s mission to get enough sensors in the sea to give us more data on the progress of climate change and other ecological issues. “We know very little about our oceans since we have so little data, because putting systems in the ocean is extremely expensive. It can cost millions for sensors and for boats,” he tells me. We gave everyone GPS sensors and cameras and got better maps. The ability to put low-cost sensors on citizens’ rooftops unlocked tons of weather forecasting data. That’s more feasible with Spotter, which costs $4,900 compared to $100,000 for some sea sensors.

Sofar hardware owners do not have to share data back to the startup, but Rive says many customers are eager to. They’ve requested better data portability so they can share with fellow researchers. The startup believes it can find ways to monetize that data in the future, which is partly what attracted the funding from Rive and fellow investors True Ventures and David Sacks’ Craft Ventures. The funding will build up that data business and also help Sofar develop safeguards to make sure its Trident drones don’t go where they shouldn’t. That’s obviously important, given London’s Gatwick airport shutdown due to a trespassing drone.

Spotter can relay weather conditions and other climate data to your phone

“The ultimate mission of the company is to connect humanity to the ocean as we’re mostly conservationists at heart,” Rive concludes. “As more commercialization and business opportunities arise, we’ll have to have conversations about whether those are directly benefiting the ocean. It will be important to have our moral compass facing in the right direction to protect the earth.”


This self-driving AI faced off against a champion racer (kind of)


Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.

To set expectations here, this isn’t some stunt; it’s actually warranted given the nature of the research, and it’s not like they were trading positions, jockeying for entry lines and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!

The question which Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with an autonomous vehicle operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds, in good conditions. And most obstacle encounters are similarly ordinary.

If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?

The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface, and other conditions. But they are necessarily simplified and their assumptions are of the type to produce increasingly inaccurate results as values exceed ordinary limits.

Imagine if such a simulator simplified each wheel to a point or line when during a slide it is highly important which side of the tire is experiencing the most friction. Such detailed simulations are beyond the ability of current hardware to do quickly or accurately enough. But the results of such simulations can be summarized into an input and output, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.

The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. It’s fairly basic. The model then consults its training, but is also informed by the real-world results, which may perhaps differ from theory.

So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
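Here is a minimal sketch of that feedforward-plus-feedback structure in code. The stand-in “learned” model, the gains and the state fields are hypothetical placeholders meant to show the shape of the idea, not Stanford’s actual controller.

# Illustrative feedforward + feedback steering update. The learned_feedforward
# function stands in for the neural network trained on simulation summaries;
# the feedback term nudges the command based on measured tracking error.
from dataclasses import dataclass

@dataclass
class CarState:
    speed_mps: float          # current speed
    lateral_error_m: float    # signed offset from the intended racing line
    heading_error_rad: float  # signed heading difference from the line

def learned_feedforward(speed_mps, path_curvature):
    """Hypothetical stand-in for the trained network: nominal steering angle
    (radians) for this speed and path curvature."""
    wheelbase_m = 2.5  # assumed
    return wheelbase_m * path_curvature * (1.0 + 0.02 * speed_mps)

def steering_command(state, path_curvature):
    nominal = learned_feedforward(state.speed_mps, path_curvature)
    # Feedback: if the car is drifting off the intended line, add a correction.
    k_lateral, k_heading = 0.05, 0.4  # assumed gains
    correction = -k_lateral * state.lateral_error_m - k_heading * state.heading_error_rad
    return nominal + correction

print(steering_command(CarState(40.0, 0.3, 0.01), path_curvature=1 / 150))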

And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.

The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:

Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.

Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.

In other words, while pulling a G and hitting 95, the self-driving Audi was never more than a foot and a half off its ideal racing line. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.
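As a quick sense of scale for those numbers (assuming, purely for the arithmetic, that the quoted 0.95 g and 95 mph occur at the same moment), circular-motion arithmetic gives the corner radius involved:

# Sanity check of the quoted figures: a = v^2 / r, rearranged for r.
G = 9.81
speed_mps = 95 * 0.44704   # 95 mph in meters per second
lateral_accel = 0.95 * G

radius_m = speed_mps ** 2 / lateral_accel
print(f"~{radius_m:.0f} m corner radius at 95 mph and 0.95 g")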

“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”

Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.

This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.

The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would be different than front-wheel on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.

The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.


Gates-backed Lumotive upends lidar conventions using metamaterials


Pretty much every self-driving car on the road, not to mention many a robot and drone, uses lidar to sense its surroundings. But useful as lidar is, it also involves physical compromises that limit its capabilities. Lumotive is a new company with funding from Bill Gates and Intellectual Ventures that uses metamaterials to exceed those limits, perhaps setting a new standard for the industry.

The company is just now coming out of stealth, but it’s been in the works for a long time. I actually met with them back in 2017 when the project was very hush-hush and operating under a different name at IV’s startup incubator. If the terms “metamaterials” and “Intellectual Ventures” tickle something in your brain, it’s because the company has spawned several startups that use intellectual property developed there, building on the work of materials scientist David Smith.

Metamaterials are essentially specially engineered surfaces with microscopic structures — in this case, tunable antennas — embedded in them, working as a single device.

Echodyne is another company that used metamaterials to great effect, shrinking radar arrays to pocket size by engineering a radar transceiver that’s essentially 2D and can have its beam steered electronically rather than mechanically.

The principle works for pretty much any wavelength of electromagnetic radiation — i.e. you could use X-rays instead of radio waves — but until now no one has made it work at the optical wavelengths lidar uses. That’s Lumotive’s advance, and the reason it works so well.
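Lumotive hasn’t published the details of its liquid-crystal antenna design, but electronically steered arrays in general work by imposing a phase gradient across many small emitters. The relation below is the textbook phased-array steering formula; the 905 nm wavelength is the one discussed later in this piece, while the element spacing is an assumption for illustration.

# Textbook phased-array relation, not Lumotive's published design: a uniform
# phase step dphi between adjacent elements spaced d apart steers the beam to
# the angle theta where sin(theta) = wavelength * dphi / (2 * pi * d).
import math

WAVELENGTH_M = 905e-9        # near-infrared wavelength discussed later in the article
ELEMENT_SPACING_M = 450e-9   # assumed sub-wavelength antenna spacing

def steering_angle_deg(phase_step_rad):
    s = WAVELENGTH_M * phase_step_rad / (2 * math.pi * ELEMENT_SPACING_M)
    return math.degrees(math.asin(s))

for step in (0.5, 1.0, 2.0):  # radians of phase shift per element
    print(f"phase step {step:.1f} rad -> beam steered to {steering_angle_deg(step):4.1f} degrees")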

Flash, 2D and 1D lidar

Lidar basically works by bouncing light off the environment and measuring how and when it returns; this can be accomplished in several ways.

Flash lidar basically sends out a pulse that illuminates the whole scene with near-infrared light (905 nanometers, most likely) at once. This provides a quick measurement of the whole scene, but limited distance as the power of the light being emitted is limited.

2D or raster scan lidar takes an NIR laser and plays it over the scene incredibly quickly, left to right, down a bit, then does it again, again and again… scores or hundreds of times. Focusing the power into a beam gives these systems excellent range, but similar to a CRT TV with an electron beam tracing out the image, it takes rather a long time to complete the whole scene. Turnaround time is naturally of major importance in driving situations.

1D or line scan lidar strikes a balance between the two, using a vertical line of laser light that only has to go from one side to the other to complete the scene. This sacrifices some range and resolution but significantly improves responsiveness.
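A rough way to see why the raster approach struggles with turnaround time: each measurement has to wait out the light’s round trip, and a single beam has to visit every point. The dwell time and the one-beam/one-column assumptions below are illustrative, not any particular product’s specs.

# Toy timing comparison of 2D raster vs. 1D line scanning. Assumes a single
# beam for the raster scanner and one simultaneously-measured column for the
# line scanner; the dwell time is a hypothetical per-measurement budget
# covering the light's round trip plus overhead.
H_POINTS, V_POINTS = 1000, 256   # resolution figure quoted later in the article
DWELL_S = 2e-6                   # assumed ~2 microseconds per measurement

raster_frame_s = H_POINTS * V_POINTS * DWELL_S   # the beam visits every point
line_frame_s = H_POINTS * DWELL_S                # a whole column is measured at once

print(f"raster scan: {raster_frame_s * 1000:.0f} ms per frame (~{1 / raster_frame_s:.1f} Hz)")
print(f"line scan:   {line_frame_s * 1000:.1f} ms per frame (~{1 / line_frame_s:.0f} Hz)")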

Lumotive offered the following diagram, which helps visualize the systems, although obviously “suitability” and “too short” and “too slow” are somewhat subjective:

The main problem with the latter two is that they rely on a mechanical platform to actually move the laser emitter or mirror from place to place. It works fine for the most part, but there are inherent limitations. For instance, it’s difficult to stop, slow or reverse a beam that’s being moved by a high-speed mechanism. If your 2D lidar system sweeps over something that could be worth further inspection, it has to go through the rest of its motions before coming back to it… over and over.

This is the primary advantage offered by a metamaterial system over existing ones: electronic beam steering. In Echodyne’s case the radar could quickly sweep over its whole range like normal, and upon detecting an object could immediately switch over and focus 90 percent of its cycles tracking it in higher spatial and temporal resolution. The same thing is now possible with lidar.

Imagine a deer jumping out around a blind curve. Every millisecond counts because the earlier a self-driving system knows the situation, the more options it has to accommodate it. All other things being equal, an electronically steered lidar system would detect the deer at the same time as the mechanically steered ones, or perhaps a bit sooner; upon noticing this movement, it could not just make more time for evaluating it on the next “pass,” but a microsecond later be backing up the beam and specifically targeting just the deer with the majority of its resolution.

Just for illustration. The beam isn’t some big red thing that comes out.

Targeted illumination would also improve the estimation of direction and speed, further improving the driving system’s knowledge and options — meanwhile, the beam can still dedicate a portion of its cycles to watching the road, requiring no complicated mechanical hijinks to do so. Meanwhile, it has an enormous aperture, allowing high sensitivity.

In terms of specs, it depends on many things, but if the beam is just sweeping normally across its 120×25 degree field of view, the standard unit will have about a 20Hz frame rate, with a 1000×256 resolution. That’s comparable to competitors, but keep in mind that the advantage is in the ability to change that field of view and frame rate on the fly. In the example of the deer, it may maintain a 20Hz refresh for the scene at large but concentrate more beam time on a 5×5 degree area, giving it a much faster rate.
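To put rough numbers on that “concentrate on the deer” idea, treat the scanner as having a fixed measurement budget per second (derived from the frame rate and resolution just quoted) and redirect part of it to a small region of interest. The 50/50 split and the equal-sampling-density assumption are mine, for illustration only.

# Toy beam-time budget based on the figures quoted above: ~20 Hz at 1000x256
# over a 120x25-degree field of view. Redirecting half the budget to a
# 5x5-degree region of interest (an assumed split) refreshes that region far faster.
FULL_RES = 1000 * 256
FULL_RATE_HZ = 20
MEASUREMENTS_PER_S = FULL_RES * FULL_RATE_HZ

FOV_DEG2 = 120 * 25
ROI_DEG2 = 5 * 5
roi_points = int(FULL_RES * ROI_DEG2 / FOV_DEG2)   # same angular sampling density

roi_share = 0.5                                    # assumed fraction of beam time for the ROI
roi_rate_hz = roi_share * MEASUREMENTS_PER_S / roi_points
scene_rate_hz = (1 - roi_share) * MEASUREMENTS_PER_S / FULL_RES

print(f"5x5-degree ROI refreshed at ~{roi_rate_hz:.0f} Hz, rest of the scene at ~{scene_rate_hz:.0f} Hz")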

Meta doesn’t mean mega-expensive

Naturally one would assume that such a system would be considerably more expensive than existing ones. Pricing is still a ways out — Lumotive just wanted to show that its tech exists for now — but this is far from exotic tech.

CG render of a lidar metamaterial chip.

The team told me in an interview that their engineering process was tricky specifically because they designed it for fabrication using existing methods. It’s silicon-based, meaning it can use cheap and ubiquitous 905nm lasers rather than the rarer 1550nm, and its fabrication isn’t much more complex than making an ordinary display panel.

CTO and co-founder Gleb Akselrod explained: “Essentially it’s a reflective semiconductor chip, and on the surface we fabricate these tiny antennas to manipulate the light. It’s made using a standard semiconductor process, then we add liquid crystal, then the coating. It’s a lot like an LCD.”

An additional bonus of the metamaterial basis is that it works the same regardless of the size or shape of the chip. While an inch-wide rectangular chip is best for automotive purposes, Akselrod said, they could just as easily make one a quarter the size for robots that don’t need the wider field of view, or a larger or custom-shape one for a specialty vehicle or aircraft.

The details, as I said, are still being worked out. Lumotive has been working on this for years and decided it was time to just get the basic information out there. “We spend an inordinate amount of time explaining the technology to investors,” noted CEO and co-founder Bill Colleran. He, it should be noted, is a veteran innovator in this field, having headed Impinj most recently, and before that was at Broadcom, but is perhaps best known for being CEO of Innovent when it created the first CMOS Bluetooth chip.

Right now the company is seeking investment after running on a 2017 seed round funded by Bill Gates and IV, which (as with other metamaterial-based startups it has spun out) is granting Lumotive an exclusive license to the tech. There are partnerships and other things in the offing, but the company wasn’t ready to talk about them; the product is currently in prototype but very showable form for the inevitable meetings with automotive and tech firms.
