Luminar eyes production vehicles with $100M round and new Iris lidar platform

Luminar is one of the major players in the new crop of lidar companies that have sprung up all over the world, and it’s moving fast to outpace its peers. Today the company announced a new $100 million funding round, bringing its total raised to more than $250 million — as well as a perception platform and a new, compact lidar unit aimed at inclusion in actual cars. Big day!

The new hardware, called Iris, looks to be about a third of the size of the test unit Luminar has been sticking on vehicles thus far. That one was about the size of a couple hardbacks stacked up, and Iris is more like a really thick sandwich.

Size is very important, of course, as few cars just have caverns of unused space hidden away in prime surfaces like the corners and windshield area. Other lidar makers have lowered the profiles of their hardware in various ways; Luminar seems to have compactified in a fairly straightforward fashion, getting everything into a package smaller in every dimension.

Test model, left; Iris on the right.

Photos of Iris put it in various positions: below the headlights on one car, attached to the rear-view mirror in another and high up atop the cabin on a semi truck. It’s small enough that it won’t have to displace other components too much, although of course competitors are aiming to make theirs even more easy to integrate. That won’t matter, Luminar founder and CEO Austin Russell told me recently, if they can’t get it out of the lab.

“The development stage is a huge undertaking — to actually move it towards real-world adoption and into true series production vehicles,” he said (among many other things). The company that gets there first will lead the industry, and naturally he plans to make Luminar that company.

Part of that is of course the production process, which has been vastly improved over the last couple of years. These units can be made quickly enough that they can be supplied by the thousands rather than dozens, and the cost has dropped precipitously — by design.

Iris will cost less than $1,000 per unit for production vehicles seeking serious autonomy, and for $500 you can get a more limited version for more limited purposes like driver assistance, or ADAS. Luminar says Iris is “slated to launch commercially on production vehicles beginning in 2022,” but that doesn’t necessarily mean they’re shipping to customers right now. The company is negotiating more than a billion dollars in contracts at present, a representative told me, and 2022 would be the earliest that vehicles with Iris could be made available.

The Iris units are about a foot below the center of the headlight units here. Note that this is not a production vehicle, just a test one.

Another part of integration is software. The signal from the sensor has to go somewhere, and while some lidar companies have indicated they plan to let the carmaker or whoever deal with it their own way, others have opted to build up the tech stack and create “perception” software on top of the lidar. Perception software can be a range of things: something as simple as drawing boxes around objects identified as people would count, as would a much richer process that flags intentions and gaze directions, characterizes motions, predicts likely next actions and so on.
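To make the simple end of that spectrum concrete, here is a toy sketch of perception output as labeled bounding boxes. The class and field names are entirely hypothetical, not Luminar's or any vendor's actual API:

```python
# Toy illustration of the simplest kind of perception output described
# above: labeled boxes around detected objects. All names and fields
# here are invented for the example.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", "vehicle"
    x: float            # box center in the sensor frame, meters
    y: float
    width: float
    height: float
    confidence: float   # detector confidence, 0..1

def pedestrians(detections: list[Detection], min_conf: float = 0.5) -> list[Detection]:
    """Filter one frame's detections down to likely people."""
    return [d for d in detections if d.label == "person" and d.confidence >= min_conf]

frame = [
    Detection("person", 3.2, 0.1, 0.6, 1.7, 0.92),
    Detection("vehicle", 12.0, -1.4, 1.9, 1.5, 0.88),
    Detection("person", 8.5, 2.0, 0.6, 1.6, 0.31),  # below threshold, filtered out
]
print(len(pedestrians(frame)))  # 1
```

A richer perception stack would attach much more to each detection (tracked velocity, predicted trajectory, gaze direction), but the basic shape of the output is the same: structured objects derived from raw sensor returns.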

Luminar has opted to build into perception, or rather has revealed that it has been working on it for some time. It now has 60 people on the task, split between Palo Alto and Orlando, and has hired a new VP of software: Christoph Schroder, formerly head of Daimler’s robo-taxi program.

What exactly will be the nature and limitations of Luminar’s perception stack? There are dangers waiting if you decide to take it too far, because at some point you begin to compete with your customers, carmakers that have their own perception and control stacks that may or may not overlap with yours. The company gave very few details as to what specifically would be covered by its platform, but no doubt that will become clearer as the product itself matures.

Last and certainly not least is the matter of the $100 million in additional funding. This brings Luminar to a total of over a quarter of a billion dollars in the last few years, matching its competitor Innoviz, which has made similar decisions regarding commercialization and development.

The list of investors has gotten quite long, so I’ll just quote Luminar here:

G2VP, Moore Strategic Ventures, LLC, Nick Woodman, The Westly Group, 1517 Fund / Peter Thiel, Canvas Ventures, along with strategic investors Corning Inc, Cornes, and Volvo Cars Tech Fund.

The board has also grown, with former Broadcom exec Scott McGregor and G2VP’s Ben Kortlang joining the table.

We may have already passed “peak lidar” as far as sheer number of deals and startups in the space, but that doesn’t mean things are going to cool down. If anything, the opposite, as established companies battle over lucrative partnerships and begin eating one another to stay competitive. Seems like Luminar has no plans to become a meal.

Powered by WPeMatico

Waymo has now driven 10 billion autonomous miles in simulation

Alphabet’s Waymo autonomous driving company announced a new milestone at TechCrunch Sessions: Mobility on Wednesday: 10 billion miles driven in simulation. This is a significant achievement for the company, because all those simulated miles on the road for its self-driving software add up to considerable training experience.

Waymo also probably has the most experience when it comes to actual, physical road miles driven — the company is always quick to point out that it’s been doing this far longer than just about anyone else working in autonomous driving, thanks to its head start as Google’s self-driving car moonshot project.

“At Waymo, we’ve driven more than 10 million miles in the real world, and over 10 billion miles in simulation,” Waymo CTO Dmitri Dolgov told TechCrunch’s Kirsten Korosec on the Sessions: Mobility stage. “And the amount of driving you do in both of those is really a function of the maturity of your system, and the capability of your system. If you’re just getting started, it doesn’t matter – you’re working on the basics, you can drive a few miles or a few thousand or tens of thousands of miles in the real world, and that’s plenty to tell you and give you information that you need to know to improve your system.”

Dolgov’s point is that the more advanced your autonomous driving system becomes, the more miles you actually need to drive to have impact, because you’ve handled the basics and are moving on to edge cases, advanced navigation and ensuring that the software works in any and every scenario it encounters. Plus, your simulation becomes more sophisticated and more accurate as you accumulate real-world driving miles, which means the results of your virtual testing are more reliable for use back in your cars driving on actual roads.

This is what leads Dolgov to the conclusion that Waymo’s simulation is likely better than a lot of comparable simulation training at other autonomous driving companies.

“I think what makes it a good simulator, and what makes it powerful is two things,” Dolgov said onstage. “One [is] fidelity. And by fidelity, I mean, not how good it looks. It’s how well it behaves, and how representative it is of what you will encounter in the real world. And then second is scale.”

In other words, experience isn’t just a matter of volume; it’s about sophistication, maturity and readiness for commercial deployment.

Startups at the speed of light: Lidar CEOs put their industry in perspective

As autonomous cars and robots loom over the landscapes of cities and jobs alike, the technologies that empower them are forming sub-industries of their own. One of those is lidar, which has become an indispensable tool to autonomy, spawning dozens of companies and attracting hundreds of millions in venture funding.

But like all industries built on top of fast-moving technologies, lidar and the sensing business is by definition built somewhat upon a foundation of shifting sands. New research appears weekly advancing the art, and no less frequently are new partnerships minted, as car manufacturers like Audi and BMW scramble to keep ahead of their peers in the emerging autonomy economy.

To compete in the lidar industry means not just to create and follow through on difficult research and engineering, but to be prepared to react with agility as the market shifts in response to trends, regulations, and disasters.

I talked with several CEOs and investors in the lidar space to find out how the industry is changing, how they plan to compete, and what the next few years have in store.

Their opinions and predictions sometimes synced up and at other times diverged completely. For some, the future lies manifestly in partnerships they have already established and hope to nurture, while others feel that it’s too early for automakers to commit, and they’re stringing startups along one non-exclusive contract at a time.

All agreed that the technology itself is obviously important, but not so important that investors will wait forever for engineers to get it out of the lab.

And while some felt a sensor company has no business building a full-stack autonomy solution, others suggested that’s the only way to attract customers navigating a strange new market.

It’s a flourishing market but one, they all agreed, that will experience a major consolidation in the next year. In short, it’s a wild west of ideas, plentiful money, and a bright future — for some.

The evolution of lidar

I’ve previously written an introduction to lidar, but in short, lidar units project lasers out into the world and measure how they are reflected, producing a 3D picture of the environment around them.
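The time-of-flight principle behind that 3D picture is simple enough to sketch in a few lines. This is a toy illustration, not any vendor's code, and the pulse timing value is made up for the example:

```python
# Toy sketch of lidar ranging by time of flight:
# the pulse travels out and back, so range = (c * round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance in meters to a reflecting surface, given the pulse's
    round-trip travel time."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after roughly 667 nanoseconds reflected off
# something about 100 meters away.
print(round(range_from_time_of_flight(667e-9)))  # ~100
```

Real units repeat this measurement millions of times per second across many directions, which is where the engineering difficulty (and the point cloud) comes from.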

Simone Giertz’s converted Tesla Model 3 pickup truck is wonderful

YouTuber Simone Giertz, celebrated DIY inventor, roboticist and general maker of cool stuff, decided not to wait for Tesla’s forthcoming pickup truck. Instead, she bought a new Tesla Model 3 directly from the company, then used elbow grease, ingenuity, some help from friends and power tools to turn it into a two-seater with a flatbed.

The amazing thing is, unlike some of the robots Giertz is famous for making, the final product looks terrific — both in terms of the detail work and in terms of its functionality. Giertz also installed a cage over the truck bed, and a tailgate that can double as a work bench. Plus, as you can see from this fake commercial for the so-called “Truckla,” the thing still rips both on and off-road.

Along with her crew, Giertz rented a dedicated workshop to do the build, which took around two weeks and a lot of sawing at the metal chassis. The team had to rebuild crucial components like the roll cage to ensure that the finished product was still safe.

There’s still work to be done in terms of waterproofing, lifting up the vehicle, giving it a paint makeover and more, per Giertz, but the finished product looks amazing, and potentially better than whatever sci-fi nightmare Elon Musk is putting together for the actual Tesla pickup.

Tesla’s in-car touchscreens are getting YouTube support

Tesla has consistently been adding software to its in-car touchscreen infotainment displays — including sometimes things that probably leave a lot of people scratching their heads. During a special Q&A today at annual gaming event E3 in LA, Tesla CEO Elon Musk revealed that Tesla’s in-car display will support YouTube someday soon.

This isn’t the first time the Tesla CEO has suggested YouTube might one day have a home in the company’s cars: In response to a fan’s question on Twitter last August he noted that version 10 of the company’s in-car software would provide support for third-party video streaming. The company debuted its Software Version 9.0 last year.

Musk specifically said YouTube would be coming to cars during the E3 event today, at which he revealed that Bethesda’s “Fallout Shelter” would be coming to the infotainment displays, and unveiled a demo video of Android game Beach Buggy Racer running on a display in a Tesla Model 3.

On a recent podcast, the Tesla CEO also said the company would consider opening the platform more broadly to third-party developers for both apps and games. The company has done a lot on its own to add software “Easter Eggs” to the dash display, but turning it into a true platform is a much more ambitious vision.

On its face, adding attention-heavy apps like streaming video services to a car definitely seems counterintuitive, but to be fair to Tesla, a large number of drivers today use their phones for in-car navigation, and those can also all technically display YouTube at any time. It does seem like a case of Musk’s mind racing ahead to a day when his cars are fully autonomous, something he recently reiterated he expects to happen within the next couple of years.

Tesla is bringing the ‘Fallout Shelter’ game to its cars

As part of the gaming option for Tesla’s cars, Todd Howard, the director of Bethesda Games, said that the company’s “Fallout Shelter” game will be coming to Tesla displays.

Elon Musk is a huge fan of the Fallout series, saying in an interview at the E3 gaming conference that he’d explored “every inch” of Fallout 3.

Earlier this year, Tesla announced that it was adding “2048” and “Atari’s Super Breakout” to the list of games that drivers and passengers can play on the company’s dashboard display.

The company added Atari games to its slate of apps and services last August via a software update. At the time, the initial slate of games included “Missile Command,” “Asteroids,” “Lunar Lander” and “Centipede.”

Sense Photonics flashes onto the lidar scene with a new approach and $26M

Lidar is a critical part of many autonomous cars and robotic systems, but the technology is also evolving quickly. A new company called Sense Photonics just emerged from stealth mode today with a $26M A round, touting a whole new approach that allows for an ultra-wide field of view and (literally) flexible installation.

Still in the prototype phase, but clearly far enough along to attract eight figures of investment, Sense Photonics’ lidar doesn’t look dramatically different from others at first, but the changes are both under the hood and, in a way, on both sides of it.

Early popular lidar systems like those from Velodyne use a spinning module that emits and detects infrared laser pulses, finding the range of the surroundings by measuring the light’s time of flight. Subsequent ones have replaced the spinning unit with something less mechanical, like a DLP-type mirror or even metamaterials-based beam steering.

All these systems are “scanning” systems in that they sweep a beam, column, or spot of light across the scene in some structured fashion — faster than we can perceive, but still piece by piece. Few companies, however, have managed to implement what’s called “flash” lidar, which illuminates the whole scene with one giant, well, flash.

That’s what Sense has created, and it claims to have avoided the usual shortcomings of such systems — namely limited resolution and range. Not only that, but by separating the laser emitting part and the sensor that measures the pulses, Sense’s lidar could be simpler to install without redesigning the whole car around it.

I talked with CEO and co-founder Scott Burroughs, a veteran engineer of laser systems, about what makes Sense’s lidar a different animal from the competition.

“It starts with the laser emitter,” he said. “We have some secret sauce that lets us build a massive array of lasers — literally thousands and thousands, spread apart for better thermal performance and eye safety.”

These tiny laser elements are stuck on a flexible backing, meaning the array can be curved — providing a vastly improved field of view. Lidar units (except for the 360-degree ones) tend to be around 120 degrees horizontally, since that’s what you can reliably get from a sensor and emitter on a flat plane, and perhaps 50 or 60 degrees vertically.

“We can go as high as 90 degrees for vertical, which I think is unprecedented, and as high as 180 degrees for horizontal,” said Burroughs proudly. “And that’s something automakers we’ve talked to have been very excited about.”

Here it is worth mentioning that lidar systems have also begun to bifurcate into long-range, forward-facing lidar (like those from Luminar and Lumotive) for detecting things like obstacles or people 200 meters down the road, and more short-range, wider-field lidar for more immediate situational awareness — a dog behind the vehicle as it backs up, or a car pulling out of a parking spot just a few meters away. Sense’s devices are very much geared toward the second use case.

These are just prototype units, but they work and you can see they’re more than just renders.

Particularly because of the second interesting innovation they’ve included: the sensor, normally part and parcel with the lidar unit, can exist totally separately from the emitter, and is little more than a specialized camera. That means that while the emitter can be integrated into a curved surface like the headlight assembly, the tiny detectors can be stuck in places where there are already traditional cameras: side mirrors, bumpers and so on.

The camera-like architecture is more than convenient for placement; it also fundamentally affects the way the system reconstructs the image of its surroundings. Because the sensor they use is so close to an ordinary RGB camera’s, images from the former can be matched to the latter very easily.

The depth data and traditional camera image correspond pixel-to-pixel right out of the system.

Most lidars output a 3D point cloud, the result of the beam finding millions of points with different ranges. This is a very different form of “image” than a traditional camera, and it can take some work to convert or compare the depths and shapes of a point cloud to a 2D RGB image. Sense’s unit not only outputs a 2D depth map natively, but that data can be synced with a twin camera so the visible light image matches pixel for pixel to the depth map. It saves on computing time and therefore on delay — always a good thing for autonomous platforms.
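For a rough sense of the conversion work that a native depth map can sidestep on the other end, here is a toy back-projection from a 2D depth map to a 3D point cloud using a pinhole camera model. The intrinsics and depth values are invented for illustration and have nothing to do with Sense's actual hardware:

```python
# Toy sketch: turning a 2D depth map into a 3D point cloud with a
# pinhole camera model. The intrinsics (fx, fy, cx, cy) and depth
# values below are made up for the example.
from typing import List, Tuple

def depth_map_to_points(
    depth: List[List[float]],   # depth[v][u] in meters; 0 means "no return"
    fx: float, fy: float,       # focal lengths in pixels
    cx: float, cy: float,       # principal point in pixels
) -> List[Tuple[float, float, float]]:
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # skip pixels with no lidar return
            # Standard pinhole back-projection of pixel (u, v) at depth z.
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A tiny 2x2 depth map with the principal point at its center.
pts = depth_map_to_points([[2.0, 0.0], [0.0, 4.0]], 1.0, 1.0, 0.5, 0.5)
print(pts)  # [(-1.0, -1.0, 2.0), (2.0, 2.0, 4.0)]
```

The appeal of a camera-like depth output is that this kind of transform, and the alignment against an RGB image, becomes trivial when the two sensors already share a pixel grid.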

Sense Photonics’ unit also can output a point cloud, as you see here.

The benefits of Sense’s system are manifest, but of course right now the company is still working on getting the first units to production. To that end it has of course raised the $26 million A round, “co-led by Acadia Woods and Congruent Ventures, with participation from a number of other investors, including Prelude Ventures, Samsung Ventures and Shell Ventures,” as the press release puts it.

Cash on hand is always good. But it has also partnered with Infineon and others, including an unnamed tier-1 automotive company, which is no doubt helping shape the first commercial Sense Photonics product. The details will have to wait until later this year when that offering solidifies, and production should start a few months after that — no hard timeline yet, but expect this all before the end of the year.

“We are very appreciative of this strong vote of investor confidence in our team and our technology,” Burroughs said in the press release. “The demand we’ve encountered – even while operating in stealth mode – has been extraordinary.”

Google Assistant comes to Waze navigation app

Ever since Google acquired Waze back in 2013, features have been slowly making their way back and forth between it and Google Maps — and today Waze gets a big upgrade with Google Assistant integration, which means you can use the smart voice companion within the app.

Google Assistant in Waze will provide access to your usual Assistant features, like playback of music and podcasts, but it’ll also offer access to many Waze-specific abilities, including letting you ask it to report traffic conditions, or specifying that you want to avoid tolls when routing to your destination.

Google has done a good job of rolling out support for Assistant in its own Android Auto in-car software, and even brought it to Google Maps on Apple’s competing CarPlay system earlier this year. The benefits of having Assistant work natively within Waze are many, but the number one might be its potential to reduce distractions while on the road.

Waze remains a top choice among drivers, and anecdotally most Uber and Lyft drivers I encounter still swear by its supremacy over the competition, including Google’s other own-branded Maps solution.

Google Assistant will be available via a rollout starting today in the U.S., in English only to start and on Android smartphones. Expect that availability to expand over time.

Audi proves two little screens are better than one big screen

I’m spending some time in the new Audi Q8, and the car company equipped the crossover with its latest infotainment system. I love it, fingerprints, dust and all.

The grimy screens are part of the story. I could have cleaned up the screens for the photos, but I thought it was essential to show the screens after a couple of weeks of use.

There are two screens placed in the center stack of the Q8. The top one features controls for the radio, mapping system and vehicle settings. The bottom screen is for climate controls and additional controls like garage door opener and the vehicle’s cameras. Both have haptic feedback, so the buttons feel nearly real.

Both screens are tilted at the right angle, and the shifter is built in a way that provides a handy spot to rest your wrist, steadying it as you hit the screens.

Car companies are turning to touchscreens over physical buttons. It makes sense on some level, as screens are less expensive and scalable across vehicles. With screens, car companies do not need to design and manufacture knobs, buttons and sliders but instead create a software user interface.

Tesla took it to the next level with the debut of the Model S in 2012. The car company stuck a massive touchscreen in the center stack. It’s huge. I’m not a fan. I find the large screen uncomfortable and impractical to use while driving. Other car companies must agree, as few have included similar touchscreens in their vehicles. Instead of a single touchscreen, most car makers are using a combination of a touchscreen with physical knobs and buttons. For the most part, this is an excellent compromise, as the knobs and buttons are used for functions that will always be needed, like climate control.

Audi applies similar thinking in its latest infotainment system. The bottom screen is always on and always displays the climate controls. There’s a button that reveals shortcuts, too, so if the top screen is turned off, the driver can still change the radio to a preset. The top screen houses buttons for the radio, mapping and lesser-used settings.

The user interface uses a dark theme. The black levels are fantastic, even in direct sunlight, and this color scheme makes it easy to use during the day or night.

The touchscreens have downsides, but none that aren’t present on other touchscreens. Glare is often an issue, and these screens are fingerprint magnets. I also found the screen to run hot to the touch after a few minutes in the sun.

Apple CarPlay remains a source of frustration. The Q8 has the latest CarPlay option, which allows an iPhone to run CarPlay wirelessly. It only works sometimes. And sometimes, when it does work, various apps like Spotify do not work in their typical fashion. Thankfully, Apple just announced a big update for CarPlay that will hopefully improve the connectivity and stability.

The infotainment system is now a critical component. Automakers must build a system that’s competent, feels natural to the driver, and can evolve as features are added to vehicles through over-the-air updates; it has to work today and keep working years from now.

Audi’s latest infotainment system is impressive. It does everything right: it’s not a distraction, it’s easy to use and features fantastic haptic feedback.

Google refreshes Android Auto with new features and a darker look

Android Auto — the in-car platform that brings the look and functions of a smartphone to the vehicle’s central screen — is getting a new look and improved navigation and communication features that will roll out this summer.

The improvements and new look were revealed Monday during Google I/O 2019, the annual developer conference.

The most noticeable change might be the overall look of Android Auto. It now has a dark theme, new fonts and color accents designed to make it easier for drivers to quickly and more easily see the content on the car’s central screen.

The new version of Android Auto has also improved its notifications. Drivers can choose to view, listen and respond to messages and calls more easily.

Engineers have also updated the software to make it more seamless. Previously, the system, if properly enabled, would pop up on the car’s screen once the vehicle was turned on, but the user still had to restart their media or navigation app. Now, Android Auto will continue playing the media and navigation app of the driver’s choice. Drivers can tap on a suggested location or say “Hey Google” to navigate to a new place.

The navigation bar on Android Auto has changed, as well. Drivers will be able to see their turn-by-turn directions and control apps and phone on the same screen.

Finally, the platform has been adjusted so it will fit various screen sizes. Android Auto now maximizes the in-car display to show more information, like next-turn directions, playback controls and ongoing calls.

Android Auto is not an operating system. It’s a secondary interface — or HMI layer — that sits on top of an operating system. Google released Android Auto in 2015. Rival Apple introduced its own in-car platform, Apple CarPlay, that same year.

Automakers that wanted to give consumers a better in-car experience without giving Google or Apple total access quickly adopted the platform. Even some holdouts, such as Toyota, have come around. Today, Android Auto is available in more than 500 car models from 50 different brands, according to Android Auto product manager Rod Lopez.

Google has since developed an operating system called Android Automotive OS that’s modeled after its open-source mobile operating system that runs on Linux. Instead of running smartphones and tablets, Google modified it so it could be used in cars. Polestar, Volvo’s standalone performance electric car brand, is going to produce a new vehicle, the Polestar 2, that has an infotainment system powered by Android Automotive OS.
