Lidar

Voyant Photonics raises $4.3M to fit lidar on the head of a pin

Lidar is a critical method by which robots and autonomous vehicles sense the world around them, but the lasers and sensors generally take up a considerable amount of space. Not so with Voyant Photonics, which has created a lidar system that you really could conceivably balance on the head of a pin.

Before getting into the science, it’s worth noting why this is important. Lidar is most often used as a way for a car to sense things at a medium distance — far away, radar can outperform it, and up close, ultrasonics and other methods are more compact. But from a few feet to a couple hundred feet out, lidar is very useful.

Unfortunately, even the most compact lidar solutions today are still, roughly, the size of a hand, and the ones ready for use in production vehicles are still larger. A very small lidar unit that could be hidden on every corner of a car, or even inside the cabin, could provide rich positional data about everything in and around the car with little power and no need to disrupt the existing lines and design. (And that’s not getting into the many, many other industries that could use this.)

Lidar began with the idea of, essentially, a single laser being swept across a scene multiple times per second, its reflection carefully measured to track the distances of objects. But mechanically steered lasers are bulky, slow and prone to failure, so newer companies are attempting other techniques, like illuminating the whole scene at once (flash lidar) or steering the beam with complex electronic surfaces (metamaterials) instead.

One discipline that seems primed to join in the fun is silicon photonics, which is essentially the manipulation of light on a chip for various purposes — for instance, to replace electricity in logic gates to provide ultra-fast, low-heat processing. Voyant, however, has pioneered a technique to apply silicon photonics to lidar.

In the past, attempts in chip-based photonics to send out a coherent laser-like beam from a surface of lightguides (elements used to steer light around or emit it) have been limited by a low field of view and power because the light tends to interfere with itself at close quarters.

Voyant’s version of these “optical phased arrays” sidesteps that problem by carefully altering the phase of the light traveling through the chip. The result is a strong beam of non-visible light that can be played over a wide swathe of the environment at high speed with no moving parts at all — yet it emerges from a chip dwarfed by a fingertip.
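
For the curious, the underlying phased-array principle is simple enough to sketch, even though Voyant hasn’t disclosed its actual design: apply a linear phase ramp across the emitters and the combined wavefront tilts toward the angle you want. The wavelength, emitter pitch and element count below are illustrative assumptions only.

```python
import numpy as np

# Minimal optical-phased-array sketch: a linear phase ramp across the emitters
# tilts the combined wavefront toward the chosen angle. All values here are
# illustrative assumptions, not Voyant's actual design parameters.
wavelength = 1.55e-6       # meters (a common lidar wavelength, assumed)
pitch = 0.775e-6           # emitter spacing, meters (assumed)
num_elements = 64          # emitters in the array (assumed)
steer = np.deg2rad(20)     # where we want the beam to point

n = np.arange(num_elements)
phase = -2 * np.pi * n * pitch * np.sin(steer) / wavelength  # per-emitter phase

# Far-field "array factor": where the emitted waves add up constructively
angles = np.deg2rad(np.linspace(-90, 90, 2001))
field = np.exp(1j * (2 * np.pi * n[:, None] * pitch * np.sin(angles)[None, :] / wavelength
                     + phase[:, None])).sum(axis=0)
print(f"beam peaks near {np.rad2deg(angles[np.argmax(np.abs(field))]):.1f} degrees")
```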

Image: the lidar chip on a fingertip.

“This is an enabling technology because it’s so small,” said Voyant co-founder Steven Miller. “We’re talking cubic centimeter volumes. There’s a lot of electronics that can’t accommodate a lidar the size of a softball — think about drones and things that are weight-sensitive, or robotics, where it needs to be on the tip of its arm.”

Lest you think this is just a couple yahoos who think they’ve one-upped years of research, Miller and co-founder Chris Phare came out of the Lipson Nanophotonics Group at Columbia University.

“This lab basically invented silicon photonics,” said Phare. “We’re all deeply ingrained with the physics and devices-level stuff. So we were able to step back and look at lidar, and see what we needed to fix and make better to make this a reality.”

The advances they’ve made frankly lie outside my area of expertise, so I won’t attempt to characterize them too closely, except to say that their design solves the interference issues and uses a frequency-modulated continuous-wave (FMCW) technique, which lets it measure velocity as well as distance (Blackmore does this as well). At any rate, their unique approach to moving and emitting light from the chip lets them create a device that is not only compact, but combines transmitter and receiver in one piece, and has good performance — not just good for its size, they claim, but good.
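
Voyant didn’t walk me through the math, but the general FMCW idea is worth a quick sketch: the laser’s frequency is swept up and down, the return is mixed with the outgoing light, and the resulting beat frequency encodes range, while any Doppler shift from a moving target skews that beat in opposite directions on the two ramps. Every parameter below is an assumption for illustration, not Voyant’s (or Blackmore’s) actual figures.

```python
# Generic triangular-sweep FMCW math, shown only to illustrate why the technique
# yields velocity as well as distance. All parameters are assumed.
c = 3.0e8             # speed of light, m/s
wavelength = 1.55e-6  # optical wavelength, m (assumed)
bandwidth = 4.0e9     # frequency sweep excursion, Hz (assumed)
chirp_time = 10e-6    # duration of one up- or down-ramp, s (assumed)

def range_and_velocity(f_beat_up, f_beat_down):
    """Recover range and radial velocity from the beat frequencies measured
    during the up-chirp and down-chirp of a triangular frequency sweep."""
    f_range = (f_beat_up + f_beat_down) / 2      # range-induced component
    f_doppler = (f_beat_down - f_beat_up) / 2    # Doppler-induced component
    rng = f_range * c * chirp_time / (2 * bandwidth)
    vel = f_doppler * wavelength / 2             # positive = target approaching
    return rng, vel

# Example: a target 50 m away, closing at 10 m/s (made-up numbers)
true_range, true_vel = 50.0, 10.0
f_r = 2 * true_range * bandwidth / (c * chirp_time)
f_d = 2 * true_vel / wavelength
print(range_and_velocity(f_r - f_d, f_r + f_d))  # -> (50.0, 10.0)
```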

“It’s a misconception that small lidars need to be low-performance,” explained Phare. “The silicon photonic architecture we use lets us build a very sensitive receiver on-chip that would be difficult to assemble in traditional optics. So we’re able to fit a high-performance lidar into that tiny package without any additional or exotic components. We think we can achieve specs comparable to lidars out there, but just make them that much smaller.”

The chip-based lidar in its test bed.

It’s even able to be manufactured in a normal fashion like other photonics chips. That’s a huge plus when you’re trying to move from research to product development.

With this first round of funding, the team plans to expand and get this tech out of the lab and into the hands of engineers and developers. The exact specs, dimensions, power requirements and so on are all very different depending on the application and industry, so Voyant can make decisions based on feedback from people in other fields.

In addition to automotive (“It’s such a big application that no one can make lidar and not look at that space,” Miller said), the team is in talks with numerous potential partners.

Although being at this stage while others are raising nine-figure rounds might seem daunting, Voyant has the advantage that it has created something totally different from what’s out there, a product that can safely exist alongside popular big lidars from companies like Innoviz and Luminar.

“We’re definitely talking to big players in a lot of these places, drones and robotics, perhaps augmented reality. We’re trying to suss out exactly where this is most interesting to people,” said Phare. “We see the evolution here being something like bringing room-size computers down to chips.”

The $4.3 million raised by Voyant comes from Contour Venture Partners, LDV Capital and DARPA, which naturally would be interested in something like this.

Luminar eyes production vehicles with $100M round and new Iris lidar platform

Luminar is one of the major players in the new crop of lidar companies that have sprung up all over the world, and it’s moving fast to outpace its peers. Today the company announced a new $100 million funding round, bringing its total raised to more than $250 million — as well as a perception platform and a new, compact lidar unit aimed at inclusion in actual cars. Big day!

The new hardware, called Iris, looks to be about a third of the size of the test unit Luminar has been sticking on vehicles thus far. That one was about the size of a couple hardbacks stacked up, and Iris is more like a really thick sandwich.

Size is very important, of course, as few cars just have caverns of unused space hidden away in prime surfaces like the corners and windshield area. Other lidar makers have lowered the profiles of their hardware in various ways; Luminar seems to have compactified in a fairly straightforward fashion, getting everything into a package smaller in every dimension.

Test model on the left; Iris on the right.

Photos of Iris put it in various positions: below the headlights on one car, attached to the rear-view mirror in another and high up atop the cabin on a semi truck. It’s small enough that it won’t have to displace other components too much, although of course competitors are aiming to make theirs even easier to integrate. That won’t matter, Luminar founder and CEO Austin Russell told me recently, if they can’t get it out of the lab.

“The development stage is a huge undertaking — to actually move it towards real-world adoption and into true series production vehicles,” he said (among many other things). The company that gets there first will lead the industry, and naturally he plans to make Luminar that company.

Part of that is of course the production process, which has been vastly improved over the last couple of years. These units can be made quickly enough that they can be supplied by the thousands rather than dozens, and the cost has dropped precipitously — by design.

Iris will cost less than $1,000 per unit for production vehicles seeking serious autonomy, while a more limited $500 version will serve purposes like driver assistance, or ADAS. Luminar says Iris is “slated to launch commercially on production vehicles beginning in 2022,” but that doesn’t necessarily mean they’re shipping to customers right now. The company is negotiating more than a billion dollars in contracts at present, a representative told me, and 2022 would be the earliest that vehicles with Iris could be made available.

The Iris units are about a foot below the center of the headlight units here. Note that this is not a production vehicle, just a test one.

Another part of integration is software. The signal from the sensor has to go somewhere, and while some lidar companies have indicated they plan to let the carmaker or whoever deal with it their own way, others have opted to build up the tech stack and create “perception” software on top of the lidar. Perception software can be a range of things: something as simple as drawing boxes around objects identified as people would count, as would a much richer process that flags intentions and gaze directions, characterizes motions, anticipates likely next actions and so on.

Luminar has opted to build into perception, or rather has revealed that it has been working on it for some time. It now has 60 people on the task, split between Palo Alto and Orlando, and has hired a new VP of software: Christoph Schroder, formerly head of Daimler’s robo-taxi program.

What exactly will be the nature and limitations of Luminar’s perception stack? There are dangers waiting if you decide to take it too far, because at some point you begin to compete with your customers, carmakers that have their own perception and control stacks that may or may not overlap with yours. The company gave very few details as to what specifically would be covered by its platform, but no doubt that will become clearer as the product itself matures.

Last and certainly not least is the matter of the $100 million in additional funding. This brings Luminar to a total of over a quarter of a billion dollars in the last few years, matching its competitor Innoviz, which has made similar decisions regarding commercialization and development.

The list of investors has gotten quite long, so I’ll just quote Luminar here:

G2VP, Moore Strategic Ventures, LLC, Nick Woodman, The Westly Group, 1517 Fund / Peter Thiel, Canvas Ventures, along with strategic investors Corning Inc, Cornes, and Volvo Cars Tech Fund.

The board has also grown, with former Broadcom exec Scott McGregor and G2VP’s Ben Kortlang joining the table.

We may have already passed “peak lidar” as far as the sheer number of deals and startups in the space goes, but that doesn’t mean things are going to cool down. If anything, the opposite, as established companies battle over lucrative partnerships and begin eating one another to stay competitive. Seems like Luminar has no plans to become a meal.

Startups at the speed of light: Lidar CEOs put their industry in perspective

As autonomous cars and robots loom over the landscapes of cities and jobs alike, the technologies that empower them are forming sub-industries of their own. One of those is lidar, which has become an indispensable tool to autonomy, spawning dozens of companies and attracting hundreds of millions in venture funding.

But like all industries built on top of fast-moving technologies, the lidar and sensing business is by definition built somewhat upon a foundation of shifting sands. New research advancing the art appears weekly, and no less frequently are new partnerships minted, as car manufacturers like Audi and BMW scramble to keep ahead of their peers in the emerging autonomy economy.

To compete in the lidar industry means not just to create and follow through on difficult research and engineering, but to be prepared to react with agility as the market shifts in response to trends, regulations, and disasters.

I talked with several CEOs and investors in the lidar space to find out how the industry is changing, how they plan to compete, and what the next few years have in store.

Their opinions and predictions sometimes synced up and at other times diverged completely. For some, the future lies manifestly in partnerships they have already established and hope to nurture, while others feel that it’s too early for automakers to commit, and they’re stringing startups along one non-exclusive contract at a time.

All agreed that the technology itself is obviously important, but not so important that investors will wait forever for engineers to get it out of the lab.

And while some felt a sensor company has no business building a full-stack autonomy solution, others suggested that’s the only way to attract customers navigating a strange new market.

It’s a flourishing market but one, they all agreed, that will experience a major consolidation in the next year. In short, it’s a wild west of ideas, plentiful money, and a bright future — for some.

The evolution of lidar

I’ve previously written an introduction to lidar, but in short, lidar units project lasers out into the world and measure how they are reflected, producing a 3D picture of the environment around them.
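
And for the one-line version of how a reflection becomes a distance: the unit times the round trip and multiplies by the speed of light. A trivial sketch, with a made-up echo time:

```python
# The core of time-of-flight ranging: distance is half the round-trip travel
# time multiplied by the speed of light. The 667 ns echo is a made-up example.
C = 299_792_458  # speed of light, m/s

def tof_distance(round_trip_seconds):
    return C * round_trip_seconds / 2

print(tof_distance(667e-9))  # a ~667 ns echo corresponds to roughly 100 m
```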

Sense Photonics flashes onto the lidar scene with a new approach and $26M

Lidar is a critical part of many autonomous cars and robotic systems, but the technology is also evolving quickly. A new company called Sense Photonics just emerged from stealth mode today with a $26M A round, touting a whole new approach that allows for an ultra-wide field of view and (literally) flexible installation.

Still in the prototype phase but clearly far enough along to attract eight figures of investment, Sense Photonics’ lidar doesn’t look dramatically different from others at first, but the changes are both under the hood and, in a way, on both sides of it.

Early popular lidar systems like those from Velodyne use a spinning module that emits and detects infrared laser pulses, finding the range of the surroundings by measuring the light’s time of flight. Subsequent ones have replaced the spinning unit with something less mechanical, like a DLP-type mirror or even metamaterials-based beam steering.

All these systems are “scanning” systems in that they sweep a beam, column, or spot of light across the scene in some structured fashion — faster than we can perceive, but still piece by piece. Few companies, however, have managed to implement what’s called “flash” lidar, which illuminates the whole scene with one giant, well, flash.

That’s what Sense has created, and it claims to have avoided the usual shortcomings of such systems — namely limited resolution and range. Not only that, but by separating the laser emitting part and the sensor that measures the pulses, Sense’s lidar could be simpler to install without redesigning the whole car around it.

I talked with CEO and co-founder Scott Burroughs, a veteran engineer of laser systems, about what makes Sense’s lidar a different animal from the competition.

“It starts with the laser emitter,” he said. “We have some secret sauce that lets us build a massive array of lasers — literally thousands and thousands, spread apart for better thermal performance and eye safety.”

These tiny laser elements are stuck on a flexible backing, meaning the array can be curved — providing a vastly improved field of view. Lidar units (except for the 360-degree ones) tend to be around 120 degrees horizontally, since that’s what you can reliably get from a sensor and emitter on a flat plane, and perhaps 50 or 60 degrees vertically.

“We can go as high as 90 degrees for vert, which I think is unprecedented, and as high as 180 degrees for horizontal,” said Burroughs proudly. “And that’s something automakers we’ve talked to have been very excited about.”

Here it is worth mentioning that lidar systems have also begun to bifurcate into long-range, forward-facing lidar (like those from Luminar and Lumotive) for detecting things like obstacles or people 200 meters down the road, and more short-range, wider-field lidar for more immediate situational awareness — a dog behind the vehicle as it backs up, or a car pulling out of a parking spot just a few meters away. Sense’s devices are very much geared toward the second use case.

These are just prototype units, but they work and you can see they’re more than just renders.

Particularly because of the second interesting innovation they’ve included: the sensor, normally part and parcel with the lidar unit, can exist totally separately from the emitter, and is little more than a specialized camera. That means that while the emitter can be integrated into a curved surface like the headlight assembly, the tiny detectors can be stuck in places where there are already traditional cameras: side mirrors, bumpers and so on.

The camera-like architecture is more than convenient for placement; it also fundamentally affects the way the system reconstructs the image of its surroundings. Because the sensor they use is so close to an ordinary RGB camera’s, images from the former can be matched to the latter very easily.

The depth data and traditional camera image correspond pixel-to-pixel right out of the system.

Most lidars output a 3D point cloud, the result of the beam finding millions of points with different ranges. This is a very different form of “image” than a traditional camera, and it can take some work to convert or compare the depths and shapes of a point cloud to a 2D RGB image. Sense’s unit not only outputs a 2D depth map natively, but that data can be synced with a twin camera so the visible light image matches pixel for pixel to the depth map. It saves on computing time and therefore on delay — always a good thing for autonomous platforms.
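
To see why pixel-aligned depth saves work, compare it with the usual route of projecting a lidar point cloud into the camera frame. This is a generic illustration only; the array shapes and camera intrinsics below are placeholders, not Sense Photonics’ actual output format.

```python
import numpy as np

# Illustrative comparison (placeholder shapes and intrinsics). With a native,
# co-registered depth map, building an RGB-D image is a single stack; with a
# point cloud you first have to project every point through the camera model.
h, w = 480, 640
rgb = np.zeros((h, w, 3), dtype=np.uint8)        # image from the twin camera
depth = np.full((h, w), 5.0, dtype=np.float32)   # native per-pixel depth, meters

# Case 1: pixel-aligned depth map -> RGB-D in one step
rgbd = np.dstack([rgb.astype(np.float32), depth])

# Case 2: a generic point cloud must be projected with camera intrinsics first
fx = fy = 525.0          # focal lengths in pixels (assumed)
cx, cy = w / 2, h / 2    # principal point (assumed)
points = np.random.uniform([-2, -2, 1], [2, 2, 10], size=(100_000, 3))  # x, y, z

z = points[:, 2]
u = np.round(points[:, 0] * fx / z + cx).astype(int)
v = np.round(points[:, 1] * fy / z + cy).astype(int)
valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

projected_depth = np.full((h, w), np.inf, dtype=np.float32)
np.minimum.at(projected_depth, (v[valid], u[valid]), z[valid])  # keep nearest hit
```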

Sense Photonics’ unit also can output a point cloud, as you see here.

The benefits of Sense’s system are manifest, but of course right now the company is still working on getting the first units to production. To that end it has of course raised the $26 million A round, “co-led by Acadia Woods and Congruent Ventures, with participation from a number of other investors, including Prelude Ventures, Samsung Ventures and Shell Ventures,” as the press release puts it.

Cash on hand is always good. But it has also partnered with Infineon and others, including an unnamed tier-1 automotive company, which is no doubt helping shape the first commercial Sense Photonics product. The details will have to wait until later this year when that offering solidifies, and production should start a few months after that — no hard timeline yet, but expect this all before the end of the year.

“We are very appreciative of this strong vote of investor confidence in our team and our technology,” Burroughs said in the press release. “The demand we’ve encountered – even while operating in stealth mode – has been extraordinary.”

Gates-backed Lumotive upends lidar conventions using metamaterials

Pretty much every self-driving car on the road, not to mention many a robot and drone, uses lidar to sense its surroundings. But useful as lidar is, it also involves physical compromises that limit its capabilities. Lumotive is a new company with funding from Bill Gates and Intellectual Ventures that uses metamaterials to exceed those limits, perhaps setting a new standard for the industry.

The company is just now coming out of stealth, but it’s been in the works for a long time. I actually met with them back in 2017 when the project was very hush-hush and operating under a different name at IV’s startup incubator. If the terms “metamaterials” and “Intellectual Ventures” tickle something in your brain, it’s because IV has spawned several startups that use intellectual property developed there, building on the work of materials scientist David Smith.

Metamaterials are essentially specially engineered surfaces with microscopic structures — in this case, tunable antennas — embedded in them, working as a single device.

Echodyne is another company that used metamaterials to great effect, shrinking radar arrays to pocket size by engineering a radar transceiver that’s essentially 2D and can have its beam steered electronically rather than mechanically.

The principle works for pretty much any wavelength of electromagnetic radiation — i.e. you could use X-rays instead of radio waves — but until now no one has made it work with visible light. That’s Lumotive’s advance, and the reason it works so well.

Flash, 2D and 1D lidar

Lidar basically works by bouncing light off the environment and measuring how and when it returns; this can be accomplished in several ways.

Flash lidar basically sends out a pulse that illuminates the whole scene with near-infrared light (905 nanometers, most likely) at once. This provides a quick measurement of the whole scene, but limited distance as the power of the light being emitted is limited.

2D or raster scan lidar takes an NIR laser and plays it over the scene incredibly quickly, left to right, down a bit, then does it again, again and again… scores or hundreds of times. Focusing the power into a beam gives these systems excellent range, but similar to a CRT TV with an electron beam tracing out the image, it takes rather a long time to complete the whole scene. Turnaround time is naturally of major importance in driving situations.

1D or line scan lidar strikes a balance between the two, using a vertical line of laser light that only has to go from one side to the other to complete the scene. This sacrifices some range and resolution but significantly improves responsiveness.

Lumotive offered the following diagram, which helps visualize the systems, although obviously “suitability” and “too short” and “too slow” are somewhat subjective:

The main problem with the latter two is that they rely on a mechanical platform to actually move the laser emitter or mirror from place to place. It works fine for the most part, but there are inherent limitations. For instance, it’s difficult to stop, slow or reverse a beam that’s being moved by a high-speed mechanism. If your 2D lidar system sweeps over something that could be worth further inspection, it has to go through the rest of its motions before coming back to it… over and over.

This is the primary advantage offered by a metamaterial system over existing ones: electronic beam steering. In Echodyne’s case the radar could quickly sweep over its whole range like normal, and upon detecting an object could immediately switch over and focus 90 percent of its cycles tracking it in higher spatial and temporal resolution. The same thing is now possible with lidar.

Imagine a deer jumping out around a blind curve. Every millisecond counts because the earlier a self-driving system knows the situation, the more options it has to accommodate it. All other things being equal, an electronically steered lidar system would detect the deer at the same time as the mechanically steered ones, or perhaps a bit sooner; upon noticing this movement, it could not just make more time for evaluating it on the next “pass,” but a microsecond later be backing up the beam and specifically targeting just the deer with the majority of its resolution.

Just for illustration. The beam isn’t some big red thing that comes out.

Targeted illumination would also improve the estimation of direction and speed, further improving the driving system’s knowledge and options — meanwhile, the beam can still dedicate a portion of its cycles to watching the road, requiring no complicated mechanical hijinks to do so. The system also has an enormous aperture, allowing high sensitivity.

In terms of specs, it depends on many things, but if the beam is just sweeping normally across its 120×25 degree field of view, the standard unit will have about a 20Hz frame rate, with a 1000×256 resolution. That’s comparable to competitors, but keep in mind that the advantage is in the ability to change that field of view and frame rate on the fly. In the example of the deer, it may maintain a 20Hz refresh for the scene at large but concentrate more beam time on a 5×5 degree area, giving it a much faster rate.
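
Those figures imply a fixed budget of measurements per second; electronic steering just lets the system spend it unevenly. A quick back-of-the-envelope using the numbers quoted above, where the region-of-interest resolution and the 50/50 split are my own assumptions:

```python
# Back-of-the-envelope point-budget math using the figures quoted above; the
# region-of-interest resolution and the 50/50 split are assumptions.
full_res = 1000 * 256          # points in a full 120x25-degree frame
full_rate_hz = 20              # full-scene frames per second
budget = full_res * full_rate_hz
print(budget)                  # 5,120,000 measurements per second to spend

# One way to spend it: keep sweeping the whole scene at 20 Hz but at half
# resolution, and pour the freed-up half of the budget into a 5x5-degree ROI.
scene_points = (500 * 256) * full_rate_hz    # 2,560,000 pts/s on the wide view
roi_res = 100 * 50                           # assumed ROI resolution
roi_rate_hz = (budget - scene_points) / roi_res
print(roi_rate_hz)                           # ~512 Hz refresh on the ROI
```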

Meta doesn’t mean mega-expensive

Naturally one would assume that such a system would be considerably more expensive than existing ones. Pricing is still a ways out — Lumotive just wanted to show that its tech exists for now — but this is far from exotic tech.

CG render of a lidar metamaterial chip.

The team told me in an interview that their engineering process was tricky specifically because they designed it for fabrication using existing methods. It’s silicon-based, meaning it can use cheap and ubiquitous 905nm lasers rather than the rarer 1550nm, and its fabrication isn’t much more complex than making an ordinary display panel.

CTO and co-founder Gleb Akselrod explained: “Essentially it’s a reflective semiconductor chip, and on the surface we fabricate these tiny antennas to manipulate the light. It’s made using a standard semiconductor process, then we add liquid crystal, then the coating. It’s a lot like an LCD.”

An additional bonus of the metamaterial basis is that it works the same regardless of the size or shape of the chip. While an inch-wide rectangular chip is best for automotive purposes, Akselrod said, they could just as easily make one a quarter the size for robots that don’t need the wider field of view, or a larger or custom-shape one for a specialty vehicle or aircraft.

The details, as I said, are still being worked out. Lumotive has been working on this for years and decided it was time to just get the basic information out there. “We spend an inordinate amount of time explaining the technology to investors,” noted CEO and co-founder Bill Colleran. He, it should be noted, is a veteran innovator in this field, having headed Impinj most recently, and before that was at Broadcom, but is perhaps best known for being CEO of Innovent when it created the first CMOS Bluetooth chip.

Right now the company is seeking investment after running on a 2017 seed round funded by Bill Gates and IV, which (as with other metamaterial-based startups it has spun out) is granting Lumotive an exclusive license to the tech. There are partnerships and other things in the offing, but the company wasn’t ready to talk about them; the product is currently in prototype but very showable form for the inevitable meetings with automotive and tech firms.

How aerial lidar illuminated a Mayan megalopolis

Archaeology may not be the most likely place to find the latest in technology — AI and robots are of dubious utility in the painstaking fieldwork involved — but lidar has proven transformative. The latest accomplishment using laser-based imaging maps thousands of square kilometers of an ancient Mayan city once millions strong, but the researchers make it clear that there’s no technological substitute for experience and a good eye.

The Pacunam Lidar Initiative began two years ago, bringing together a group of scholars and local authorities to undertake the largest-yet survey of a protected and long-studied region in Guatemala. Some 2,144 square kilometers of the Maya Biosphere Reserve in Petén were scanned, inclusive of and around areas known to be settled, developed or otherwise of importance.

Preliminary imagery and data illustrating the success of the project were announced earlier this year, but the researchers have now performed their actual analyses on the data, and the resulting paper summarizing their wide-ranging results has been published in the journal Science.

The areas covered by the initiative, as you can see, spread over perhaps a fifth of the country.

“We’ve never been able to see an ancient landscape at this scale all at once. We’ve never had a data set like this. But in February really we hadn’t done any analysis, really, in a quantitative sense,” co-author Francisco Estrada-Belli, of Tulane University, told me. He worked on the project with numerous others, including Tulane colleague Marcello Canuto. “Basically we announced we had found a huge urban sprawl, that we had found agricultural features on a grand scale. After another nine months of work we were able to quantify all that and to get some numerical confirmations for the impressions we’d gotten.”

“It’s nice to be able to confirm all our claims,” he said. “They may have seemed exaggerated to some.”

The lidar data was collected not by self-driving cars, which seem to be the only vehicles bearing lidar we ever hear about, nor even by drones, but by traditional airplane. That may sound cumbersome, but the distances and landscapes involved permitted nothing else.

“A drone would never have worked — it could never have covered that area,” Estrada-Belli explained. “In our case it was actually a twin-engine plane flown down from Texas.”

The plane made dozens of passes over a given area, a chosen “polygon” perhaps 30 kilometers long and 20 wide. Mounted underneath was “a Teledyne Optech Titan MultiWave multichannel, multi-spectral, narrow-pulse width lidar system,” which pretty much says it all: this is a heavy-duty instrument, the size of a refrigerator. But you need that kind of system to pierce the canopy and image the underlying landscape.

The many overlapping passes were then collated and calibrated into a single digital landscape of remarkable detail.

“It identified features that I had walked over — a hundred times!” he laughed. “Like a major causeway, I walked over it, but it was so subtle, and it was covered by huge vegetation, underbrush, trees, you know, jungle — I’m sure that in another 20 years I wouldn’t have noticed it.”

But these structures don’t identify themselves. There’s no computer labeling system that looks at the 3D model and says, “this is a pyramid, this is a wall,” and so on. That’s a job that only archaeologists can do.

“It actually begins with manipulating the surface data,” Estrada-Belli said. “We get these surface models of the natural landscape; each pixel in the image is basically the elevation. Then we do a series of filters to simulate light being projected on it from various angles to enhance the relief, and we combine these visualizations with transparencies and different ways of sharpening or enhancing them. After all this process, basically looking at the computer screen for a long time, then we can start digitizing it.”
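
Those relief-enhancing filters are essentially hillshading, a standard step in terrain visualization: compute slope and aspect from the elevation grid, then simulate a light source at a chosen azimuth and altitude, and blend several light angles so no feature hides in one shadow. Here is a minimal sketch of the idea; the parameters and the random stand-in terrain are generic, not the Pacunam team’s exact pipeline.

```python
import numpy as np

def hillshade(elevation, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Simulate light hitting a digital elevation model from one direction,
    the standard way relief is enhanced before visual inspection."""
    az = np.deg2rad(azimuth_deg)
    alt = np.deg2rad(altitude_deg)
    dy, dx = np.gradient(elevation, cellsize)      # terrain slope components
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1)

# Blend shadings from several azimuths so features aren't hidden by one angle
dem = np.random.rand(512, 512)  # stand-in for a lidar-derived elevation grid
composite = np.mean([hillshade(dem, az) for az in (315, 45, 135, 225)], axis=0)
```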

“The first step is to visually identify features. Of course, pyramids are easy, but there are subtler features that, even once you identify them, it’s hard to figure out what they are.”

The lidar imagery revealed, for example, lots of low linear features that could be man-made or natural. It’s not always easy to tell the difference, but context and existing scholarship fill in the gaps.

“Then we proceeded to digitize all these features… there were 61,000 structures, and everything had to be done manually,” Estrada-Belli said — in case you were wondering why it took nine months. “There’s really no automation because the digitizing has to be done based on experience. We looked into AI, and we hope that maybe in the near future we’ll be able to apply that, but for now an experienced archaeologist’s eye can discern the features better than a computer.”

You can see the density of the annotations on the maps. It should be noted that many of these features had by this point been verified by field expeditions. By consulting existing maps and getting ground truth in person, they had made sure that these weren’t phantom structures or wishful thinking. “We’re confident that they’re all there,” he told me.

“Next is the quantitative step,” he continued. “You measure the lengths and the areas and you put it all together, and you start analyzing them like you’d analyze any other data set: the structure density of some area, the size of urban sprawl or agricultural fields. Finally we even figured out a way to quantify the potential production of agriculture.”

This is the point where the imagery starts to go from point cloud to academic study. After all, it’s well known that the Maya had a large city in this area; it’s been intensely studied for decades. But the Pacunam (which stands for Patrimonio Cultural y Natural Maya) study was meant to advance beyond the traditional methods employed previously.

“It’s a huge data set. It’s a huge cross-section of the Maya lowlands,” Estrada-Belli said. “Big data is the buzzword now, right? You truly can see things that you would never see if you only looked at one site at a time. We could never have put together these grand patterns without lidar.”

“For example, in my area, I was able to map 47 square kilometers over the course of 15 years,” he said, slightly wistfully. “And in two weeks the lidar produced 308 square kilometers, to a level of detail that I could never match.”

As a result the paper includes all kinds of new theories and conclusions, from population and economy estimates, to cultural and engineering knowledge, to the timing and nature of conflicts with neighbors.

The resulting report doesn’t just advance the knowledge of Mayan culture and technology, but the science of archaeology itself. It’s iterative, of course, like everything else — Estrada-Belli noted that they were inspired by work done by colleagues in Belize and Cambodia; their contribution, however, exemplifies new approaches to handling large areas and large data sets.

The more experiments and field work, the more established these methods will become, and the greater they will be accepted and replicated. Already they have proven themselves invaluable, and this study is perhaps the best example of lidar’s potential in the field.

“We simply would not have seen these massive fortifications. Even on the ground, many of their details remain unclear. Lidar makes most human-made features clear, coherent, understandable,” explained co-author Stephen Houston, of Brown University, in an email. “AI and pattern recognition may help to refine the detection of features, and drones may, we hope, bring down the cost of this technology.”

“These technologies are important not only for discovery, but also for conservation,” pointed out co-author, Ithaca College’s Thomas Garrison, in an email. “3D scanning of monuments and artifacts provide detailed records and also allow for the creation of replicas via 3D printing.”

Lidar imagery can also show the extent of looting, he wrote, and help cultural authorities provide against it by being aware of relics and sites before the looters are.

The researchers are already planning a second, even larger set of flyovers, founded on the success of the first experiment. Perhaps by the time the initial physical work is done the trendier tools of the last few years will make themselves applicable.

“I doubt the airplanes are going to get less expensive but the instruments will be more powerful,” Estrada-Belli suggested. “The other line is the development of artificial intelligence that can speed up the project; at least it can rule out areas, so we don’t waste any time, and we can zero in on the areas with the greatest potential.”

He’s also excited by the idea of putting the data online so citizen archaeologists can help pore over it. “Maybe they don’t have the same experience we do, but like artificial intelligence they can certainly generate a lot of good data in a short time,” he said.

But as his colleagues point out, even years in this line of work are necessarily preliminary.

“We have to emphasize: it’s a first step, leading to innumerable ideas to test. Dozens of doctoral dissertations,” wrote Houston. “Yet there must always be excavation to look under the surface and to extract clear dates from the ruins.”

“Like many disciplines in the social sciences and humanities, archaeology is embracing digital technologies. Lidar is just one example,” wrote Garrison. “At the same time, we need to be conscious of issues in digital archiving (particularly the problem of obsolete file formatting) and be sure to use technology as a complement to, and not a replacement for methods of documentation that have proven tried and true for over a century.”

The researchers’ paper was published today in Science; you can learn about their conclusions (which are of more interest to the archaeologists and anthropologists among our readers) there, and follow other work being undertaken by the Fundación Pacunam at its website.

‘Scanner Sombre’ arms you with lidar for a gorgeous, creepy explore-’em-up

Something about the polychromatic pointillism of lidar imagery has always intrigued me, but as a writer I can’t say I have many opportunities to use the technology. That’s why I’m excited about a lovely-looking new game from the creators of Prison Architect in which you explore a pitch-black cave with nothing but lidar.

WTF is lidar?

Long ago, people believed that the eye emitted invisible rays that struck the world outside, causing it to become visible to the beholder. That’s not the case, of course, but that doesn’t mean it wouldn’t be a perfectly good way to see. In fact, it’s the basic idea behind lidar, a form of digital imaging that’s proven very useful in everything from archaeology…

New LIDAR package makes it easier to add smarts to your smart car

Osram Opto Semiconductors has announced the availability of a LIDAR package – essentially the spinning laser array found on self-driving and mapping vehicles – that costs $5 and works as well as $70,000 tower systems and hockey-puck-sized $8,000 systems. This mini-LIDAR has four laser diodes connected together to ensure accuracy without tuning. The kit also includes tiny mirrors…
