Lidar

Gates-backed Lumotive upends lidar conventions using metamaterials

Pretty much every self-driving car on the road, not to mention many a robot and drone, uses lidar to sense its surroundings. But useful as lidar is, it also involves physical compromises that limit its capabilities. Lumotive is a new company with funding from Bill Gates and Intellectual Ventures that uses metamaterials to exceed those limits, perhaps setting a new standard for the industry.

The company is just now coming out of stealth, but it’s been in the works for a long time. I actually met with them back in 2017, when the project was very hush-hush and operating under a different name at IV’s startup incubator. If the terms “metamaterials” and “Intellectual Ventures” tickle something in your brain, it’s because IV has spawned several startups that use intellectual property developed there, building on the work of materials scientist David Smith.

Metamaterials are specially engineered surfaces with microscopic structures — in this case, tunable antennas — embedded in them, all working together as a single device.

Echodyne is another company that used metamaterials to great effect, shrinking radar arrays to pocket size by engineering a radar transceiver that’s essentially 2D and can have its beam steered electronically rather than mechanically.

The principle works for pretty much any wavelength of electromagnetic radiation — e.g. you could use X-rays instead of radio waves — but until now no one had made it work at the optical wavelengths lidar uses. That’s Lumotive’s advance, and the reason it works so well.

Flash, 2D and 1D lidar

Lidar basically works by bouncing light off the environment and measuring how and when it returns; this can be accomplished in several ways.
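
However the light is delivered, the ranging arithmetic underneath is the same. Here is a minimal sketch of the time-of-flight calculation (purely illustrative, not anyone’s production code):

```python
# Time-of-flight ranging: a pulse travels out and back, so the
# measured range is half the round trip at the speed of light.
C = 299_792_458  # speed of light in m/s

def tof_range_m(round_trip_seconds: float) -> float:
    """Range implied by the round-trip time of a laser echo."""
    return C * round_trip_seconds / 2

# An echo arriving ~667 nanoseconds after the pulse means a target
# roughly 100 meters away:
print(tof_range_m(667e-9))  # ~99.98
```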

Flash lidar sends out a pulse that illuminates the whole scene with near-infrared light (905 nanometers, most likely) at once. This provides a quick measurement of the whole scene, but range is limited, since the emitted power is spread across the entire field of view.

2D or raster scan lidar takes an NIR laser and plays it over the scene incredibly quickly, left to right, down a bit, then does it again, again and again… scores or hundreds of times. Focusing the power into a beam gives these systems excellent range, but, like the electron beam tracing out the image on a CRT TV, it takes a relatively long time to complete the whole scene. Turnaround time is naturally of major importance in driving situations.

1D or line scan lidar strikes a balance between the two, using a vertical line of laser light that only has to go from one side to the other to complete the scene. This sacrifices some range and resolution but significantly improves responsiveness.

Lumotive offered the following diagram, which helps visualize the systems, although obviously “suitability” and “too short” and “too slow” are somewhat subjective:

The main problem with the latter two is that they rely on a mechanical platform to actually move the laser emitter or mirror from place to place. It works fine for the most part, but there are inherent limitations. For instance, it’s difficult to stop, slow or reverse a beam that’s being moved by a high-speed mechanism. If your 2D lidar system sweeps over something that could be worth further inspection, it has to go through the rest of its motions before coming back to it… over and over.

This is the primary advantage offered by a metamaterial system over existing ones: electronic beam steering. In Echodyne’s case the radar could quickly sweep over its whole range like normal, and upon detecting an object could immediately switch over and focus 90 percent of its cycles tracking it in higher spatial and temporal resolution. The same thing is now possible with lidar.

Imagine a deer jumping out around a blind curve. Every millisecond counts, because the earlier a self-driving system knows the situation, the more options it has to accommodate it. All other things being equal, an electronically steered lidar system would detect the deer at about the same time as a mechanically steered one, or perhaps a bit sooner. But upon noticing the movement, it could do more than allot extra time to the deer on the next “pass”: a microsecond later, it could back the beam up and target just the deer with the majority of its resolution.

Targeted illumination would also improve estimates of the deer’s direction and speed, further expanding the driving system’s knowledge and options; meanwhile, the beam can still dedicate a portion of its cycles to watching the road, with no complicated mechanical hijinks required. The system also has an enormous aperture, allowing high sensitivity.

In terms of specs, it depends on many things, but if the beam is just sweeping normally across its 120×25 degree field of view, the standard unit will have about a 20Hz frame rate, with a 1000×256 resolution. That’s comparable to competitors, but keep in mind that the advantage is in the ability to change that field of view and frame rate on the fly. In the example of the deer, it may maintain a 20Hz refresh for the scene at large but concentrate more beam time on a 5×5 degree area, giving it a much faster rate.
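
To put rough numbers on that, here is a back-of-the-envelope sketch of the beam-time arithmetic. Treating the point rate as a fixed budget, and diverting half of it to the region of interest, are my assumptions for illustration, not Lumotive’s published behavior:

```python
# Rough beam-time arithmetic for an electronically steered scanner.
# Assumption (mine): a fixed budget of points per second that can be
# spent anywhere in the field of view.
FOV_H_DEG, FOV_V_DEG = 120, 25   # full field of view, degrees
RES_H, RES_V = 1000, 256         # points per full-frame scan
FULL_FRAME_HZ = 20               # normal sweep rate

point_budget = RES_H * RES_V * FULL_FRAME_HZ   # ~5.12M points/second

# A 5x5-degree region of interest at the same angular sampling density:
roi_points = (5 / FOV_H_DEG * RES_H) * (5 / FOV_V_DEG * RES_V)  # ~2133

# Divert half the budget to the ROI (the wide scan gives up resolution
# or rate to pay for it; exactly what trades off isn't specified):
roi_refresh_hz = (point_budget / 2) / roi_points
print(f"ROI refresh ~ {roi_refresh_hz:.0f} Hz")  # ~1200 Hz
```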

Meta doesn’t mean mega-expensive

Naturally one would assume that such a system would be considerably more expensive than existing ones. Pricing is still a ways out — Lumotive just wanted to show that its tech exists for now — but this is far from exotic tech.

CG render of a lidar metamaterial chip.

The team told me in an interview that their engineering process was tricky specifically because they designed it for fabrication using existing methods. It’s silicon-based, meaning it can use cheap and ubiquitous 905nm lasers rather than the rarer 1550nm, and its fabrication isn’t much more complex than making an ordinary display panel.

CTO and co-founder Gleb Akselrod explained: “Essentially it’s a reflective semiconductor chip, and on the surface we fabricate these tiny antennas to manipulate the light. It’s made using a standard semiconductor process, then we add liquid crystal, then the coating. It’s a lot like an LCD.”
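
That LCD-like structure hints at how the steering works: program a gradient of phase delays across the antennas and the reflected beam tilts. Below is a generic sketch of that phased-array relationship; the antenna pitch is a made-up value, and nothing here is Lumotive’s actual design data:

```python
import numpy as np

# Generic optical-phased-array steering: a constant phase step between
# neighboring antennas deflects the reflected beam. Values illustrative.
WAVELENGTH = 905e-9   # meters; the NIR laser wavelength mentioned above
PITCH = 400e-9        # antenna spacing (a made-up illustrative value)

def steer_angle_deg(phase_step_rad: float) -> float:
    """Beam deflection for a given per-antenna phase step."""
    s = WAVELENGTH * phase_step_rad / (2 * np.pi * PITCH)
    return float(np.degrees(np.arcsin(s)))

# A quarter-wave (pi/2) phase step between neighbors:
print(steer_angle_deg(np.pi / 2))  # ~34.4 degrees off axis
```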

An additional bonus of the metamaterial basis is that it works the same regardless of the size or shape of the chip. While an inch-wide rectangular chip is best for automotive purposes, Akselrod said, they could just as easily make one a quarter the size for robots that don’t need the wider field of view, or a larger or custom-shape one for a specialty vehicle or aircraft.

The details, as I said, are still being worked out. Lumotive has been working on this for years and decided it was time to just get the basic information out there. “We spend an inordinate amount of time explaining the technology to investors,” noted CEO and co-founder Bill Colleran. He, it should be noted, is a veteran innovator in this field: he most recently headed Impinj, was at Broadcom before that, and is perhaps best known for being CEO of Innovent when it created the first CMOS Bluetooth chip.

Right now the company is seeking investment after running on a 2017 seed round funded by Bill Gates and IV, which (as with other metamaterial-based startups it has spun out) is granting Lumotive an exclusive license to the tech. There are partnerships and other things in the offing, but the company wasn’t ready to talk about them; the product is currently in prototype but very showable form for the inevitable meetings with automotive and tech firms.

How aerial lidar illuminated a Mayan megalopolis

Archaeology may not be the most likely place to find the latest in technology — AI and robots are of dubious utility in the painstaking fieldwork involved — but lidar has proven transformative. The latest accomplishment using laser-based imaging maps thousands of square kilometers of an ancient Mayan city once millions strong, but the researchers make it clear that there’s no technological substitute for experience and a good eye.

The Pacunam Lidar Initiative began two years ago, bringing together a group of scholars and local authorities to undertake the largest-yet survey of a protected and long-studied region in Guatemala. Some 2,144 square kilometers of the Maya Biosphere Reserve in Petén were scanned, inclusive of and around areas known to be settled, developed or otherwise of importance.

Preliminary imagery and data illustrating the success of the project were announced earlier this year, but the researchers have now performed their actual analyses on the data, and the resulting paper summarizing their wide-ranging results has been published in the journal Science.

The areas covered by the initiative spread over perhaps a fifth of the country.

“We’ve never been able to see an ancient landscape at this scale all at once. We’ve never had a data set like this. But in February really we hadn’t done any analysis, really, in a quantitative sense,” co-author Francisco Estrada-Belli, of Tulane University, told me. He worked on the project with numerous others, including Tulane colleague Marcello Canuto. “Basically we announced we had found a huge urban sprawl, that we had found agricultural features on a grand scale. After another nine months of work we were able to quantify all that and to get some numerical confirmations for the impressions we’d gotten.”

“It’s nice to be able to confirm all our claims,” he said. “They may have seemed exaggerated to some.”

The lidar data was collected not by self-driving cars, which seem to be the only vehicles bearing lidar we ever hear about, nor even by drones, but by traditional airplane. That may sound cumbersome, but the distances and landscapes involved permitted nothing else.

“A drone would never have worked — it could never have covered that area,” Estrada-Belli explained. “In our case it was actually a twin-engine plane flown down from Texas.”

The plane made dozens of passes over a given area, a chosen “polygon” perhaps 30 kilometers long and 20 wide. Mounted underneath was “a Teledyne Optech Titan MultiWave multichannel, multi-spectral, narrow-pulse width lidar system,” which pretty much says it all: this is a heavy-duty instrument, the size of a refrigerator. But you need that kind of system to pierce the canopy and image the underlying landscape.

The many overlapping passes were then collated and calibrated into a single digital landscape of remarkable detail.

“It identified features that I had walked over — a hundred times!” he laughed. “Like a major causeway, I walked over it, but it was so subtle, and it was covered by huge vegetation, underbrush, trees, you know, jungle — I’m sure that in another 20 years I wouldn’t have noticed it.”

But these structures don’t identify themselves. There’s no computer labeling system that looks at the 3D model and says, “this is a pyramid, this is a wall,” and so on. That’s a job that only archaeologists can do.

“It actually begins with manipulating the surface data,” Estrada-Belli said. “We get these surface models of the natural landscape; each pixel in the image is basically the elevation. Then we do a series of filters to simulate light being projected on it from various angles to enhance the relief, and we combine these visualizations with transparencies and different ways of sharpening or enhancing them. After all this process, basically looking at the computer screen for a long time, then we can start digitizing it.”
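
That filtering step, simulating light projected from various angles, is essentially the classic hillshade technique from terrain analysis. Here is a minimal NumPy sketch of the general method, my reconstruction rather than the team’s actual pipeline:

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Simulated illumination of an elevation raster (standard hillshade)."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)   # surface gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))   # steepness
    aspect = np.arctan2(-dz_dx, dz_dy)          # downslope direction
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# Averaging shades lit from several azimuths keeps subtle relief (a
# buried causeway, say) from hiding in any single shadow direction:
# relief = np.mean([hillshade(dem, a) for a in (0, 90, 180, 270)], axis=0)
```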

“The first step is to visually identify features. Of course, pyramids are easy, but there are subtler features that, even once you identify them, it’s hard to figure out what they are.”

The lidar imagery revealed, for example, lots of low linear features that could be man-made or natural. It’s not always easy to tell the difference, but context and existing scholarship fill in the gaps.

“Then we proceeded to digitize all these features… there were 61,000 structures, and everything had to be done manually,” Estrada-Belli said — in case you were wondering why it took nine months. “There’s really no automation because the digitizing has to be done based on experience. We looked into AI, and we hope that maybe in the near future we’ll be able to apply that, but for now an experienced archaeologist’s eye can discern the features better than a computer.”

You can see the density of the annotations on the maps. It should be noted that many of these features had by this point been verified by field expeditions. By consulting existing maps and getting ground truth in person, they had made sure that these weren’t phantom structures or wishful thinking. “We’re confident that they’re all there,” he told me.

“Next is the quantitative step,” he continued. “You measure the length and the areas and you put it all together, and you start analyzing them like you’d analyze any other data set: the structure density of some area, the size of urban sprawl or agricultural fields. Finally we even figured a way to quantify the potential production of agriculture.”
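
In spirit, that aggregation step looks something like the toy sketch below; the field names and numbers are hypothetical, not the study’s data:

```python
from collections import defaultdict

# Hypothetical digitized structures, each assigned to a survey polygon.
structures = [
    {"polygon": "A", "footprint_m2": 120.0},
    {"polygon": "A", "footprint_m2": 45.5},
    {"polygon": "B", "footprint_m2": 300.0},
]
surveyed_km2 = {"A": 12.0, "B": 30.0}   # lidar coverage per polygon

counts = defaultdict(int)
for s in structures:
    counts[s["polygon"]] += 1

# Structure density is the kind of number that can then be compared
# across the whole cross-section of the lowlands:
for name, km2 in surveyed_km2.items():
    print(f"polygon {name}: {counts[name] / km2:.2f} structures/km^2")
```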

This is the point where the imagery starts to go from point cloud to academic study. After all, it’s well known that the Maya had a large city in this area; it’s been intensely studied for decades. But the Pacunam (which stands for Patrimonio Cultural y Natural Maya) study was meant to advance beyond the traditional methods employed previously.

“It’s a huge data set. It’s a huge cross-section of the Maya lowlands,” Estrada-Belli said. “Big data is the buzzword now, right? You truly can see things that you would never see if you only looked at one site at a time. We could never have put together these grand patterns without lidar.”

“For example, in my area, I was able to map 47 square kilometers over the course of 15 years,” he said, slightly wistfully. “And in two weeks the lidar produced 308 square kilometers, to a level of detail that I could never match.”

As a result the paper includes all kinds of new theories and conclusions, from population and economy estimates, to cultural and engineering knowledge, to the timing and nature of conflicts with neighbors.

The resulting report doesn’t just advance the knowledge of Mayan culture and technology, but the science of archaeology itself. It’s iterative, of course, like everything else — Estrada-Belli noted that they were inspired by work done by colleagues in Belize and Cambodia; their contribution, however, exemplifies new approaches to handling large areas and large data sets.

The more experiments and field work there are, the more established these methods will become, and the more widely they will be accepted and replicated. They have already proven invaluable, and this study is perhaps the best example of lidar’s potential in the field.

“We simply would not have seen these massive fortifications. Even on the ground, many of their details remain unclear. Lidar makes most human-made features clear, coherent, understandable,” explained co-author Stephen Houston, of Brown University, in an email. “AI and pattern recognition may help to refine the detection of features, and drones may, we hope, bring down the cost of this technology.”

“These technologies are important not only for discovery, but also for conservation,” pointed out co-author Thomas Garrison, of Ithaca College, in an email. “3D scanning of monuments and artifacts provides detailed records and also allows for the creation of replicas via 3D printing.”

Lidar imagery can also show the extent of looting, he wrote, and help cultural authorities guard against it by knowing about relics and sites before the looters do.

The researchers are already planning a second, even larger set of flyovers, founded on the success of the first experiment. Perhaps by the time the initial physical work is done the trendier tools of the last few years will make themselves applicable.

“I doubt the airplanes are going to get less expensive but the instruments will be more powerful,” Estrada-Belli suggested. “The other line is the development of artificial intelligence that can speed up the project; at least it can rule out areas, so we don’t waste any time, and we can zero in on the areas with the greatest potential.”

He’s also excited by the idea of putting the data online so citizen archaeologists can help pore over it. “Maybe they don’t have the same experience we do, but like artificial intelligence they can certainly generate a lot of good data in a short time,” he said.

But as his colleagues point out, even years of this kind of work are necessarily preliminary.

“We have to emphasize: it’s a first step, leading to innumerable ideas to test. Dozens of doctoral dissertations,” wrote Houston. “Yet there must always be excavation to look under the surface and to extract clear dates from the ruins.”

“Like many disciplines in the social sciences and humanities, archaeology is embracing digital technologies. Lidar is just one example,” wrote Garrison. “At the same time, we need to be conscious of issues in digital archiving (particularly the problem of obsolete file formatting) and be sure to use technology as a complement to, and not a replacement for, methods of documentation that have proven tried and true for over a century.”

The researchers’ paper was published today in Science; you can learn about their conclusions (which are of more interest to the archaeologists and anthropologists among our readers) there, and follow other work being undertaken by the Fundación Pacunam at its website.

‘Scanner Sombre’ arms you with lidar for a gorgeous, creepy explore-’em-up

Something about the polychromatic pointillism of lidar imagery has always intrigued me, but as a writer I can’t say I have many opportunities to use the technology. That’s why I’m excited about a lovely-looking new game from the creators of Prison Architect, in which you explore a pitch-black cave with nothing but lidar.

WTF is lidar?

Long ago, people believed that the eye emitted invisible rays that struck the world outside, causing it to become visible to the beholder. That’s not the case, of course, but that doesn’t mean it wouldn’t be a perfectly good way to see. In fact, it’s the basic idea behind lidar, a form of digital imaging that’s proven very useful in everything from archaeology…

New LIDAR package makes it easier to add smarts to your smart car

Osram Opto Semiconductors has announced the availability of a LIDAR package – essentially the spinning laser array found on self-driving and mapping vehicles – that costs $5 and works as well as $70,000 tower systems and hockey-puck-sized $8,000 systems. This mini-LIDAR has four laser diodes connected together to ensure accuracy without tuning. The kit also includes tiny mirrors…
