
Mobileye CEO clowns on Nvidia for allegedly copying self-driving car safety scheme


While creating self-driving car systems, it’s natural that different companies might independently arrive at similar methods or results — but the similarities in a recent “first of its kind” Nvidia proposal to work done by Mobileye two years ago were just too much for the latter company’s CEO to take politely.

Amnon Shashua, in a blog post on parent company Intel’s news feed cheekily titled “Innovation Requires Originality,” openly mocks Nvidia’s “Safety Force Field,” pointing out innumerable similarities to Mobileye’s “Responsibility-Sensitive Safety” (RSS) paper from 2017.

He writes:

It is clear Nvidia’s leaders have continued their pattern of imitation as their so-called “first-of-its-kind” safety concept is a close replica of the RSS model we published nearly two years ago. In our opinion, SFF is simply an inferior version of RSS dressed in green and black. To the extent there is any innovation there, it appears to be primarily of the linguistic variety.

Now, it’s worth considering the idea that the approach both seem to take is, like many methods in the automotive and autonomous fields and elsewhere, simply inevitable. Car makers don’t go around accusing each other of copying the familiar setup of four wheels and two pedals. It’s partly for this reason, and partly because the safety model works better the more cars follow it, that when Mobileye published its RSS paper, it did so publicly and invited the industry to collaborate.

Many did, including, as Shashua points out, Nvidia, at least for a short time in 2018, after which Nvidia pulled out of collaboration talks. To do so and then, a year afterward, propose a system that is, if not identical, then at least remarkably similar, without crediting or mentioning Mobileye, is suspicious to say the least.

The (highly simplified) foundation of both is calculating a set of standard actions corresponding to laws and human behavior that plan safe maneuvers based on the car’s own physical parameters and those of nearby objects and actors. But the similarities extend beyond these basics, Shashua writes (emphasis his):

RSS defines a safe longitudinal and a safe lateral distance around the vehicle. When those safe distances are compromised, we say that the vehicle is in a Dangerous Situation and must perform a Proper Response. The specific moment when the vehicle must perform the Proper Response is called the Danger Threshold.

SFF defines identical concepts with slightly modified terminology. Safe longitudinal distance is instead called “the SFF in One Dimension;” safe lateral distance is described as “the SFF in Higher Dimensions.”  Instead of Proper Response, SFF uses “Safety Procedure.” Instead of Dangerous Situation, SFF replaces it with “Unsafe Situation.” And, just to be complete, SFF also recognizes the existence of a Danger Threshold, instead calling it a “Critical Moment.”
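For the curious, the core longitudinal rule is compact enough to write out. Below is a minimal Python sketch of the safe longitudinal distance formula as published in the 2017 RSS paper; the parameter values (response time, acceleration and braking bounds) are illustrative assumptions, not figures from Mobileye or Nvidia.

```python
def rss_safe_longitudinal_distance(
    v_rear: float,             # speed of the following (rear) car, m/s
    v_front: float,            # speed of the lead (front) car, m/s
    rho: float = 0.5,          # rear car's response time, s (assumed value)
    a_accel_max: float = 3.0,  # max acceleration during response time, m/s^2 (assumed)
    a_brake_min: float = 4.0,  # min braking the rear car commits to, m/s^2 (assumed)
    a_brake_max: float = 8.0,  # max braking the front car might apply, m/s^2 (assumed)
) -> float:
    """RSS safe following distance: assume the worst case, in which the rear
    car accelerates for rho seconds before braking at its committed minimum
    rate, while the front car brakes as hard as possible."""
    v_rear_worst = v_rear + rho * a_accel_max          # rear speed after response time
    rear_stop = v_rear_worst ** 2 / (2 * a_brake_min)  # rear car's stopping distance
    front_stop = v_front ** 2 / (2 * a_brake_max)      # front car's stopping distance
    d = v_rear * rho + 0.5 * a_accel_max * rho ** 2 + rear_stop - front_stop
    return max(d, 0.0)                                 # a safe distance can't be negative

# Two cars at highway speed (~30 m/s, or 67 mph) need about 83 m of gap
# under these assumed parameters.
print(f"{rss_safe_longitudinal_distance(30.0, 30.0):.1f} m")
```

When the actual gap shrinks below this value, the car is in RSS’s Dangerous Situation (SFF’s “Unsafe Situation”) and must execute the Proper Response (SFF’s “Safety Procedure”).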

This is followed by numerous other close parallels, and just when you think it’s done, he includes a whole separate document (PDF) showing dozens of other cases where Nvidia seems (it’s hard to tell in some cases if you’re not closely familiar with the subject matter) to have followed Mobileye and RSS’s example over and over again.

Theoretical work like this isn’t really patentable, and patenting wouldn’t be wise anyway, since widespread adoption of the basic ideas is the most desirable outcome (as both papers emphasize). But it’s common for one R&D group to push in one direction and have others refine or create counter-approaches.

You see it in computer vision, where, for example, Google boffins may publish their early and interesting work, which is picked up by FAIR or Uber and improved on or added to in another paper eight months later. So it really would have been fine for Nvidia to publicly say, “Mobileye proposed some stuff; that’s great, but here’s our superior approach.”

Instead there is no mention of RSS at all, which is strange considering their similarity, and the only citation in the SFF whitepaper is “The Safety Force Field, Nvidia, 2017,” in which, we are informed on the very first line, “the precise math is detailed.”

Just one problem: This paper doesn’t seem to exist anywhere. It certainly was never published publicly in any journal or blog post by the company. It has no DOI number and doesn’t show up in any searches or article archives. This appears to be the first time anyone has ever cited it.

It’s not required for rival companies to be civil with each other all the time, but in the research world this will almost certainly be considered poor form on Nvidia’s part, and that can have knock-on effects when it comes to recruiting and overall credibility.

I’ve contacted Nvidia for comment (and to ask for a copy of this mysterious paper). I’ll update this post if I hear back.


Gates-backed Lumotive upends lidar conventions using metamaterials


Pretty much every self-driving car on the road, not to mention many a robot and drone, uses lidar to sense its surroundings. But useful as lidar is, it also involves physical compromises that limit its capabilities. Lumotive is a new company with funding from Bill Gates and Intellectual Ventures that uses metamaterials to exceed those limits, perhaps setting a new standard for the industry.

The company is just now coming out of stealth, but it’s been in the works for a long time. I actually met with them back in 2017, when the project was very hush-hush and operating under a different name at IV’s startup incubator. If the terms “metamaterials” and “Intellectual Ventures” tickle something in your brain, it’s because IV has spawned several startups that use intellectual property developed there, building on the work of materials scientist David Smith.

Metamaterials are essentially specially engineered surfaces with microscopic structures — in this case, tunable antennas — embedded in them, working as a single device.

Echodyne is another company that used metamaterials to great effect, shrinking radar arrays to pocket size by engineering a radar transceiver that’s essentially 2D and can have its beam steered electronically rather than mechanically.

The principle works for pretty much any wavelength of electromagnetic radiation — e.g. you could use X-rays instead of radio waves — but until now no one has made it work with visible light. That’s Lumotive’s advance, and the reason it works so well.

Flash, 2D and 1D lidar

Lidar basically works by bouncing light off the environment and measuring how and when it returns; this can be accomplished in several ways.

Flash lidar sends out a pulse that illuminates the whole scene with near-infrared light (905 nanometers, most likely) at once. This provides a quick measurement of the whole scene, but limited range, since the power of the emitted light is spread across the entire scene at once.

2D or raster scan lidar takes an NIR laser and plays it over the scene incredibly quickly, left to right, down a bit, then does it again, again and again… scores or hundreds of times. Focusing the power into a beam gives these systems excellent range, but similar to a CRT TV with an electron beam tracing out the image, it takes rather a long time to complete the whole scene. Turnaround time is naturally of major importance in driving situations.

1D or line scan lidar strikes a balance between the two, using a vertical line of laser light that only has to go from one side to the other to complete the scene. This sacrifices some range and resolution but significantly improves responsiveness.
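Whatever the scanning scheme, the underlying ranging is the same time-of-flight arithmetic: distance is half the round-trip time multiplied by the speed of light. A minimal sketch, with an assumed return time purely for illustration:

```python
C = 299_792_458  # speed of light, m/s

def target_range(round_trip_seconds: float) -> float:
    """Range to a reflecting surface from a lidar pulse's round-trip time.
    The light covers the distance twice (out and back), hence the halving."""
    return C * round_trip_seconds / 2

# A return arriving one microsecond after the pulse fires puts the
# target about 150 meters away.
print(f"{target_range(1e-6):.0f} m")
```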

Lumotive offered a diagram that helps visualize the systems, although obviously “suitability” and “too short” and “too slow” are somewhat subjective.

The main problem with the latter two is that they rely on a mechanical platform to actually move the laser emitter or mirror from place to place. It works fine for the most part, but there are inherent limitations. For instance, it’s difficult to stop, slow or reverse a beam that’s being moved by a high-speed mechanism. If your 2D lidar system sweeps over something that could be worth further inspection, it has to go through the rest of its motions before coming back to it… over and over.

This is the primary advantage offered by a metamaterial system over existing ones: electronic beam steering. In Echodyne’s case the radar could quickly sweep over its whole range like normal, and upon detecting an object could immediately switch over and focus 90 percent of its cycles tracking it in higher spatial and temporal resolution. The same thing is now possible with lidar.

Imagine a deer jumping out around a blind curve. Every millisecond counts, because the earlier a self-driving system knows the situation, the more options it has to accommodate it. All other things being equal, an electronically steered lidar system would detect the deer at the same time as the mechanically steered ones, or perhaps a bit sooner; upon noticing this movement, it could not only allot more time to evaluating it on the next “pass,” but a microsecond later back the beam up and target just the deer with the majority of its resolution.

Just for illustration. The beam isn’t some big red thing that comes out.

Targeted illumination would also improve the estimation of direction and speed, further improving the driving system’s knowledge and options; meanwhile, the beam can still dedicate a portion of its cycles to watching the road, requiring no complicated mechanical hijinks to do so. And the system has an enormous aperture, allowing high sensitivity.

In terms of specs, it depends on many things, but if the beam is just sweeping normally across its 120×25 degree field of view, the standard unit will have about a 20Hz frame rate, with a 1000×256 resolution. That’s comparable to competitors, but keep in mind that the advantage is in the ability to change that field of view and frame rate on the fly. In the example of the deer, it may maintain a 20Hz refresh for the scene at large but concentrate more beam time on a 5×5 degree area, giving it a much faster rate.
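To put rough numbers on that trade-off, here’s a back-of-the-envelope sketch using the specs above. The even split of beam time between the full scene and the region of interest is my assumption for illustration, not a Lumotive figure:

```python
# Nominal scan: 1000 x 256 points over a 120 x 25 degree field at 20 Hz.
POINTS_PER_FRAME = 1000 * 256
FOV_DEG2 = 120 * 25
POINT_BUDGET = POINTS_PER_FRAME * 20  # ~5.1M points per second in total

def refresh_rates(roi_deg2: float, roi_share: float = 0.5):
    """Frame rates for the full scene and a small region of interest (ROI)
    when beam time is split between them, keeping the nominal scan's
    angular point density. The 50/50 split is an assumption."""
    scene_hz = POINT_BUDGET * (1 - roi_share) / POINTS_PER_FRAME
    roi_points = POINTS_PER_FRAME * roi_deg2 / FOV_DEG2  # points per ROI frame
    roi_hz = POINT_BUDGET * roi_share / roi_points
    return scene_hz, roi_hz

# A 5x5 degree patch (the deer) refreshes at ~1,200 Hz while the rest of
# the scene still gets a 10 Hz sweep.
scene_hz, roi_hz = refresh_rates(5 * 5)
print(f"scene: {scene_hz:.0f} Hz, ROI: {roi_hz:.0f} Hz")
```

Under those assumptions, the patch around the deer gets revisited about 60 times more often than anything does in the nominal scan.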

Meta doesn’t mean mega-expensive

Naturally one would assume that such a system would be considerably more expensive than existing ones. Pricing is still a ways out — Lumotive just wanted to show that its tech exists for now — but this is far from exotic tech.

CG render of a lidar metamaterial chip.

The team told me in an interview that their engineering process was tricky specifically because they designed it for fabrication using existing methods. It’s silicon-based, meaning it can use cheap and ubiquitous 905nm lasers rather than the rarer 1550nm, and its fabrication isn’t much more complex than making an ordinary display panel.

CTO and co-founder Gleb Akselrod explained: “Essentially it’s a reflective semiconductor chip, and on the surface we fabricate these tiny antennas to manipulate the light. It’s made using a standard semiconductor process, then we add liquid crystal, then the coating. It’s a lot like an LCD.”

An additional bonus of the metamaterial basis is that it works the same regardless of the size or shape of the chip. While an inch-wide rectangular chip is best for automotive purposes, Akselrod said, they could just as easily make one a quarter the size for robots that don’t need the wider field of view, or a larger or custom-shaped one for a specialty vehicle or aircraft.

The details, as I said, are still being worked out. Lumotive has been working on this for years and decided it was time to just get the basic information out there. “We spend an inordinate amount of time explaining the technology to investors,” noted CEO and co-founder Bill Colleran. He, it should be noted, is a veteran innovator in this field, having most recently headed Impinj and, before that, worked at Broadcom; he is perhaps best known for being CEO of Innovent when it created the first CMOS Bluetooth chip.

Right now the company is seeking investment after running on a 2017 seed round funded by Bill Gates and IV, which (as with other metamaterial-based startups it has spun out) is granting Lumotive an exclusive license to the tech. There are partnerships and other things in the offing, but the company wasn’t ready to talk about them; the product is currently in prototype but very showable form for the inevitable meetings with automotive and tech firms.


Don’t worry, this rocket-launching Chinese robo-boat is strictly for science


It seems inevitable that the high seas will eventually play host to a sort of proxy war as automated vessels clash over territory for the algae farms we’ll soon need to feed the growing population. But this rocket-launching robo-boat is a peacetime vessel concerned only with global weather patterns.

The craft is what’s called an unmanned semi-submersible vehicle, or USSV, and it functions as a mobile science base — and now, a rocket launch platform. For meteorological sounding rockets, of course, nothing scary.

It solves a problem we’ve seen addressed by other seagoing robots like the Saildrone: that the ocean is very big, and very dangerous — so monitoring it properly is equally big and dangerous. You can’t have a crew out in the middle of nowhere all the time, even if it would be critical to understanding the formation of a typhoon or the like. But you can have a fleet of robotic ships systematically moving around the ocean.

In fact this is already done in a variety of ways and by numerous countries and organizations, but much of the data collection is both passive and limited in range. A solar-powered buoy drifting on the currents is a great resource, but you can’t exactly steer it, and it’s limited to sampling the water around it. And weather balloons are nice, too, if you don’t mind flying them out to where they need to be first.

A robotic boat, on the other hand, can go where you need it and deploy instruments in a variety of ways, dropping or projecting them deep into the water or, in the case of China’s new USSV, firing them 20,000 feet into the air.

“Launched from a long-duration unmanned semi-submersible vehicle, with strong mobility and large coverage of the sea area, rocketsonde can be used under severe sea conditions and will be more economical and applicable in the future,” said Jun Li, a researcher at the Chinese Academy of Sciences, in a news release.

The 24-foot craft, which has completed a handful of near-land cruises in Bohai Bay, was announced in a recently published paper. You may wonder what “semi-submersible” means: essentially, as much of the craft as possible sits under the water, with only instruments, hatches and other necessary items above the surface. That minimizes the effect of rough weather on the craft — but it is still self-righting in case it capsizes in major wave action.

The USSV’s early travels

It runs on a diesel engine, so it’s not exactly the latest tech there, but for a large craft going long distances, solar is still a bit difficult to manage. The diesel on board will last it about 10 days and take it around 3,000 km, or 1,800 miles.

The rocketsondes are essentially small rockets that shoot up to a set altitude and then drop a “driftsonde,” a sensor package attached to a balloon, parachute or some other descent-slowing method. The craft can carry up to 48 of these, meaning it could launch one every five hours for its entire 10-day cruise.
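The cruise arithmetic checks out as a quick sketch (the uniform launch spacing is of course an assumption; a real campaign would presumably cluster launches around weather events):

```python
CRUISE_DAYS = 10
RANGE_KM = 3000
SONDES = 48

hours = CRUISE_DAYS * 24
print(f"average speed: {RANGE_KM / hours:.1f} km/h")    # ~12.5 km/h, about 6.7 knots
print(f"launch cadence: {hours / SONDES:.1f} h/sonde")  # one launch every 5 hours
```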

The researchers’ findings were published in the journal Advances in Atmospheric Sciences. This is just a prototype, but its success suggests we can expect a few more at the very least to be built and deployed. I’ve asked Li a few questions about the craft and will update this post if I hear back.


The new era in mobile

Joe Apprendi, Contributor

Joe Apprendi is a general partner at Revel Partners.

A future dominated by autonomous vehicles (AVs) is, for many experts, a foregone conclusion. Declarations that the automobile will become the next living room are almost as common — but they are imprecise. In our inevitable driverless future, the more apt comparison is to the mobile device. As with smartphones, operating systems will go a long way toward determining what autonomous vehicles are and what they could be. For mobile app companies trying to seize on the coming AV opportunity, their future depends on how the OS landscape shapes up.

By most measures, the mobile app economy is still growing, yet the time people spend using their apps is actually starting to dip. A recent study reported that overall app session activity grew only 6 percent in 2017, down from the 11 percent growth it reported in 2016. This trend suggests users are reaching a saturation point in terms of how much time they can devote to apps. The AV industry could reverse that. But just how mobile apps will penetrate this market and who will hold the keys in this new era of mobility is still very much in doubt.

When it comes to a driverless future, multiple factors are now converging. Over the last few years, while app usage showed signs of stagnation, the push for driverless vehicles has only intensified. More cities are live-testing driverless software than ever, and investments in autonomous vehicle technology and software by tech giants like Google and Uber (measured in the billions) are starting to mature. And, after some reluctance, automakers have now embraced this idea of a driverless future. Expectations from all sides point to a “passenger economy” of mobility-as-a-service, which, by some estimates, may be worth as much as $7 trillion by 2050.

For mobile app companies this suggests several interesting questions: Will smart cars, like smartphones before them, be forced to go “exclusive” with a single OS of record (Google, Apple, Microsoft, Amazon/AGL), or will they be able to offer multiple OSes/platforms of record based on app maturity or functionality? Or will automakers simply step in to create their own closed-loop operating systems, fragmenting the market completely?


Complicating the picture even further is the potential significance of an OS’s ability to support multiple Digital Assistants of Record (independent of the OS), as we see with Google Assistant now working on iOS. Obviously, voice NLP/NLU will be even more critical for smart car applications than for smart speakers and phones. Even in those established arenas, the battle for OS dominance is only just beginning; opening a new front in driverless vehicles could have a fascinating impact. Either way, the implications for mobile app companies are significant.

Looking at the driverless landscape today, there are several indications as to which direction the OSes in AVs will ultimately go. For example, after some initial inroads developing its own fleet of autonomous vehicles, Google has now focused almost all its efforts on autonomous driving software while striking numerous partnership deals with traditional automakers. Some automakers, however, are moving forward developing their own OSes. Volkswagen, for instance, announced that vw.OS will be introduced in VW brand electric cars from 2020 onward, with an eye toward autonomous driving functions. (VW also plans to launch a fleet of autonomous cars in 2019 to rival Uber.) Tesla, a leader in AV, is building its own unified hardware-software stack. Companies like Udacity, meanwhile, are building “open-source” self-driving car tech. Mobileye and Baidu have a partnership in place to provide software to automobile manufacturers.

Clearly, most smartphone apps would benefit from native integration, but there are several categories beyond music, voice and navigation that require significant hardware investment to natively integrate. Will automakers be interested in the Tesla model? If not, how will smart cars and apps (independent of OS/voice assistant) partner up? Given the hardware requirements necessary to enable native app functionality and optimal user experience, how will this force smart car manufacturers to work more seamlessly with platforms like AGL to ensure competitive advantage and differentiation? And, will this commoditize the OS dominance we see in smartphones today?

It’s clearly still early days and — at least in the near term — multiple OS solutions will likely be employed until preferred solutions rise to the top. Regardless, automakers and tech companies clearly recognize the importance of “connected mobility.” Connectivity and vehicular mobility will very likely replace traditional auto values like speed, comfort and power. The combination of Wi-Fi hotspot and autonomous vehicles (let alone consumer/business choice of on-demand vehicles) will propel instant conversion/personalization of smart car environments to passenger preferences. And, while questions remain around the how and the who in this new era in mobile, it’s not hard to see the why.

Americans already spend an average of 293 hours per year inside a car, and the average commute time has jumped around 20 percent since 1980. In a recent survey conducted by Ipsos/GenPop, researchers found that in a driverless future people would spend roughly a third of that time communicating with friends and family, conducting business or shopping online. By 2030, it’s estimated that autonomous cars “will free up a mind-blowing 1.9 trillion minutes for passengers.” Another analysis suggested that even with just 10 percent adoption, driverless cars could account for $250 billion in driver productivity alone.

Productivity in this sense extends well beyond personal entertainment and commerce and into the realm of business. Integrated displays (screen and heads-up) and voice will enable business multitasking: video conferencing, search, messaging, scheduling, travel booking, e-commerce and navigation. First-mover advantage goes to the mobile app companies that first bundle information density, content access and mobility into a single compelling package. An app company that can claim 10 to 15 percent of this market will be a significant player.

For now, investors are throwing lots of money at possible winners in the autonomous automotive race, who, in turn, are beginning to define the shape of the mobile app landscape in a driverless future. In fact, what we’re seeing now looks a lot like the early days of smartphones, with companies like Tesla applying an Apple-esque strategy to the smart car rather than the smartphone. Will these OS/app marketplaces be dominated by a Tesla, or a Google for that matter, and command a 30 percent revenue share from apps, or will auto manufacturers with proprietary platforms capitalize on this opportunity? Questions like these, alongside uncertainty over just who the winners and losers in AV will be, make investment and entrepreneurship in the mobile app sector an extremely lucrative but risky gamble.


Waymo reportedly applies to put autonomous cars on California roads with no safety drivers


Waymo has become the second company to apply for the newly available permit to deploy autonomous vehicles without safety drivers on some California roads, the San Francisco Chronicle reports. It would be putting its cars — well, minivans — on streets around Mountain View, where it already has an abundance of data.

The company already has driverless cars in play over in Phoenix, as it showed in a few promotional videos last month. So this isn’t the first public demonstration of its confidence.

California only began accepting applications for permits allowing autonomous vehicles without safety drivers on April 2; one other company has applied in addition to Waymo, but it’s unclear which. The new permit type also allows for vehicles lacking any kind of traditional manual controls, but for now the company is sticking with its modified Chrysler Pacificas. Hey, they’re practical.

The recent fatal collision of an Uber self-driving car with a pedestrian, plus another fatality in a Tesla operating in semi-autonomous mode, make this something of an awkward time to introduce vehicles to the road minus safety drivers. Of course, it must be said that both of those cars had people behind the wheel at the time of their crashes.

Assuming the permit is granted, Waymo’s vehicles will be limited to the Mountain View area, which makes sense — the company has been operating there essentially since its genesis as a research project within Google. So there should be no shortage of detail in the data, and the local authorities will be familiar with the people necessary for handling any issues like accidents, permit problems, and so on.

No details yet on what exactly the cars will be doing, or whether you’ll be able to ride in one. Be patient.


Massterly aims to be the first full-service autonomous marine shipping company


Logistics may not be the most exciting application of autonomous vehicles, but it’s definitely one of the most important. And the marine shipping industry — one of the oldest industries in the world, as you can imagine — is ready for it. Or at least two major Norwegian shipping companies are: they’re building an autonomous shipping venture called Massterly from the ground up.

“Massterly” isn’t just a pun on mass; “Maritime Autonomous Surface Ship” is the term Wilhelmsen and Kongsberg coined to describe the self-captaining boats that will ply the seas of tomorrow.

These companies, with “a combined 360 years of experience,” as their video puts it, are trying to get the jump on the next phase of shipping, starting with creating the world’s first fully electric and autonomous container ship, the Yara Birkeland. It’s a modest vessel by shipping standards — 250 feet long and capable of carrying 120 containers, according to the concept — but it will be capable of loading, navigating and unloading without a crew.

(One assumes there will be some people on board or nearby to intervene if anything goes wrong, of course. Why else would there be railings up front?)

The ships will carry major radar and lidar units, visible-light and IR cameras, satellite connectivity and so on.

Control centers will be on land, where the ships will be administered much like air traffic, and ships can be taken over for manual intervention if necessary.

At first there will be limited trials, naturally: the Yara Birkeland will stay within 12 nautical miles of the Norwegian coast, shuttling between Larvik, Brevik and Herøya. It’ll only be going 6 knots — so don’t expect it to make any overnight deliveries.

“As a world-leading maritime nation, Norway has taken a position at the forefront in developing autonomous ships,” said Wilhelmsen group CEO Thomas Wilhelmsen in a press release. “We take the next step on this journey by establishing infrastructure and services to design and operate vessels, as well as advanced logistics solutions associated with maritime autonomous operations. Massterly will reduce costs at all levels and be applicable to all companies that have a transport need.”

The Yara Birkeland is expected to be seaworthy by 2020, though Massterly should be operating as a company by the end of the year.


Finnish autonomous car goes for a leisurely cruise in the driving snow


It’s one thing for an autonomous car to strut its stuff on a smooth, warm California tarmac, and quite another to do so on the frozen winter mix of northern Finland. Martti, a self-driving vehicle system homegrown in Finland, demonstrated just this in a record-setting drive along a treacherous (to normal drivers) Lapland road.


Driverless shuttle in Las Vegas gets in fender bender within an hour


A driverless shuttle set free in downtown Las Vegas was involved in a minor accident less than an hour after it hit the streets, reported the local NBC affiliate KSNV. Not really the kind of publicity you want, or that self-driving cars need.


Uber shows off its autonomous driving program’s snazzy visualization tools


Uber’s engineering blog has just posted an interesting piece on the company’s web-based tool for exploring and visualizing data from self-driving car research. It’s a smart look at an impressive platform, and definitely has nothing to do with a long piece published last week lauding a similar platform in use by one of Uber’s most serious rivals, Waymo.


Crunch Report | Uber Responds to iPhone Tracking Report


Today’s stories from the April 24 Crunch Report:

Uber responds to report that it tracked devices after its app was deleted
LinkedIn hits 500M member milestone for its social network for the working world
Amazon’s driverless tech team focuses not on building it, but on how to use it
DJI’s new FPV goggles let you control your drone with head movements
The NYT brings its news – and a mini crossword…
