
Bumblebees bearing high-tech backpacks act as a living data collection platform


There’s lots of research going into tiny drones, but one of the many hard parts is keeping them in the air for any real amount of time. Why not hitch a ride on something that already flies all day? That’s the idea behind this project that equips bumblebees with sensor-filled backpacks that charge wirelessly and collect data on the fields they visit.

A hive full of these cyber-bees could help monitor the health of a field by checking temperature and humidity, as well as watching for signs of rot or distress in the crops. A lot of this is done manually now, and of course drones are being set to work doing it, but if the bees are already there, why not get them to help out?

The “Living IoT” backpack, a tiny wafer loaded with electronics and a small battery, was designed by University of Washington engineers led by Shyam Gollakota. He’s quick to note that although the research does to a certain extent take advantage of these clumsy, fuzzy creatures, they were careful to “follow best methods for care and handling.”

Part of that is minimizing the mass of the pack; other experiments have put RFID antennas and such on the backs of bees and other insects, but this is much more sophisticated.

The chip has sensors and an integrated battery that lets it run for seven hours straight, yet weighs just 102 milligrams. A full-grown bumblebee, for comparison, could weigh anywhere from two to six times that.

They’re strong fliers, if not graceful ones, and can carry three-quarters of their body weight in pollen and nectar when returning to the hive. So the backpack, while far from unnoticeable, is still well within their capabilities; the team checked with biologists in the know first, of course.

“We showed for the first time that it’s possible to actually do all this computation and sensing using insects in lieu of drones,” explained Gollakota in a UW news release. “We decided to use bumblebees because they’re large enough to carry a tiny battery that can power our system, and they return to a hive every night where we could wirelessly recharge the batteries.”

The backpacks can track location passively by monitoring the varying strengths of signals from nearby antennas, up to a range of about 80 meters. The data they collect is transferred while they’re in the hive via an energy-efficient backscatter method that Gollakota has used in other projects.
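The localization math isn't detailed here, but the general idea of estimating position from the relative strengths of several fixed transmitters can be sketched with a log-distance path-loss model and a weighted centroid. Everything below (antenna layout, transmit power, path-loss exponent) is a hypothetical illustration, not the system described in the UW paper:

```python
import math

# Hypothetical fixed antennas (x, y in meters) around a field.
ANTENNAS = [(0.0, 0.0), (80.0, 0.0), (0.0, 80.0), (80.0, 80.0)]

def rssi_to_distance(rssi_dbm, tx_power_dbm=-30.0, path_loss_exp=2.2):
    """Invert a log-distance path-loss model: rssi = tx - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def estimate_position(rssi_readings):
    """Weighted centroid of the antennas: transmitters that sound
    'closer' (stronger signal, shorter inferred distance) pull harder."""
    weights = [1.0 / max(rssi_to_distance(r), 1e-6) for r in rssi_readings]
    total = sum(weights)
    x = sum(w * ax for w, (ax, _) in zip(weights, ANTENNAS)) / total
    y = sum(w * ay for w, (_, ay) in zip(weights, ANTENNAS)) / total
    return x, y
```

With four antennas around a field, readings that are strongest near one corner pull the estimate toward that corner; a real system would refine this with calibration and filtering.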

The applications are many and various, though obviously limited to what can be observed while the bees go about their normal business. It could even help keep the bees themselves healthy.

“It would be interesting to see if the bees prefer one region of the farm and visit other areas less often,” said co-author Sawyer Fuller. “Alternatively, if you want to know what’s happening in a particular area, you could also program the backpack to say: ‘Hey bees, if you visit this location, take a temperature reading.’ ”

It is of course just in prototype form right now, but one can easily imagine the tech being deployed by farmers in the near future, or perhaps in a more sinister way by three-letter agencies wanting to put a bee on the wall near important conversations. The team plans to present their work (PDF) at the ACM MobiCom conference next year.

Powered by WPeMatico

Voyager 2 joins its twin in interstellar space


Voyager 2, the multi-planetary exploratory probe launched in 1977, has finally entered interstellar space, some six years after its twin, Voyager 1, did the same. It’s now about 11 billion miles from Earth, the second-farthest-out human-made object in space.

Interstellar space starts where the sun’s “heliosphere” ends — the big ball of radiation and plasma in which the planets bathe and by which they are protected. Both Voyagers have instruments on board that monitor all this stuff, and both have shown a major drop-off in electrical and plasma activity, suggesting they’ve crossed over.

The exact border of interstellar space is a matter of debate, a great deal of which occurred while Voyager 1 was on the very edge and scientists were arguing whether it was out or not. A consensus was reached, however, and most agree that both probes have now left the heliosphere.

They have not, however, left the solar system, defined more or less by the extent of the Oort cloud, an enormous collection of dust and small objects caught in the sun’s gravity (but just barely). Until the Voyagers leave that, in perhaps 30,000 years, they’re still technically in-system.

Interestingly, although Voyager 2 was the second to enter interstellar space, it was actually the first to launch. The risk of failure for these complex, ambitious probes was high enough that NASA felt it should build two and send them out one after another, and it so happened that Voyager 2 launched 16 days before Voyager 1. However, the latter’s trajectory caused it to exit the ecliptic (the flat disk in which most of the solar system’s objects are found) earlier and at a different angle.

That makes Voyager 2 NASA’s longest-running mission (though not the longest-lived object in space — early satellites are still floating around up there), and those working on it couldn’t be happier.

“I think we’re all happy and relieved that the Voyager probes have both operated long enough to make it past this milestone,” said Voyager project manager Suzanne Dodd at JPL, in a NASA news release. “This is what we’ve all been waiting for. Now we’re looking forward to what we’ll be able to learn from having both probes outside the heliopause.”

Both Voyagers should continue operating for at least a few more years; their power sources are likely to go out around 2025. At that point they’ll have been in space sending back data for nearly 50 years. Congratulations to the team and, really, to humanity, for doing something so amazing.


Rolling, hopping robots explore Earthly analogs of distant planets


Before we send any planet-trotting robot to explore the landscape of Mars or Venus, we need to test it here on Earth. Two such robotic platforms being developed for future missions are undergoing testing at European Space Agency facilities: one that rolls, and one that hops.

The rolling one is actually on the books to head to the Red Planet as part of the ESA’s ExoMars 2020 program. It just wrapped a week of testing in the Spanish desert, one of many Mars analogs the space program uses. It looks nice. The gravity’s a little different, of course, and there’s a bit more atmosphere, but it’s close enough to test a few things.

The team controlling Charlie, which is what they named the prototype, was doing so from hundreds of miles away, in the U.K. — not quite an interplanetary distance, but they did of course think to simulate the delay operators would encounter if the rover were actually on Mars. It would also have a ton more instruments on board.

Exploration and navigation were still done entirely using information collected by the rover via radar and cameras, and the rover’s drill was also put to work. It rained one day, which is extraordinarily unlikely to happen on Mars, but the operators presumably pretended it was a dust storm and rolled with it.

Another Earth-analog test is scheduled for February in Chile’s Atacama desert. You can learn more about the ExoMars rover and the ExoMars 2020 mission here.

The other robot that the ESA publicized this week isn’t theirs but was developed by ETH Zurich: the SpaceBok —  you know, like springbok. The researchers there think that hopping around like that well-known ungulate could be a good way to get around on other planets.

It’s nice to roll around on stable wheels, sure, but it’s no use when you want to get to the far side of some boulder or descend into a ravine to check out an interesting mineral deposit. SpaceBok is meant to be a highly stable jumping machine that can traverse rough terrain or walk with a normal quadrupedal gait as needed (well, normal for robots).

“This is not particularly useful on Earth,” admits SpaceBok team member Elias Hampp, but “it could reach a height of four meters on the Moon. This would allow for a fast and efficient way of moving forward.”

It was doing some testing at the ESA’s “Mars Yard sandbox,” a little pen filled with Mars-like soil and rocks. The team is looking into improving autonomy with better vision — the better it can see where it lands, the better SpaceBok can stick that landing.

Interplanetary missions are very much in vogue now, and we may soon even see some private trips to the Moon and Mars. So even if NASA or the ESA doesn’t decide to take SpaceBok (or some similarly creative robot) out into the solar system, perhaps a generous sponsor will.


Mars Lander InSight sends the first of many selfies after a successful touchdown


Last night’s 10 minutes of terror as the InSight Mars Lander descended to the Martian surface at 12,300 MPH were a nail-biter for sure, but now the robotic science platform is safe and sound — and has sent pics back to prove it.

The first thing it sent was a couple of pictures of its surroundings: Elysium Planitia, a rather boring-looking, featureless plain that is nevertheless perfect for InSight’s drilling and seismic activity work.

The images, taken with its Instrument Context Camera, are hardly exciting on their own merits — a dirty landscape viewed through a dusty tube. But when you consider that it’s of an unexplored territory on a distant planet, and that it’s Martian dust and rubble occluding the lens, it suddenly seems pretty amazing!

Decelerating from interplanetary velocity and making a perfect landing was definitely the hard part, but it was by no means InSight’s last challenge. After touching down, it still needs to set itself up and make sure that none of its many components and instruments were damaged during the long flight and short descent to Mars.

And the first good news arrived shortly after landing, relayed via NASA’s Odyssey spacecraft in orbit: a partial selfie showing that it was intact and ready to roll. The image shows, among other things, the large mobile arm folded up on top of the lander, and a big copper dome covering some other components.

Telemetry data sent around the same time show that InSight has also successfully deployed its solar panels and is collecting power with which to continue operating. These fragile fans are crucial to the lander, of course, and it’s a great relief to hear they’re working properly.

These are just the first of many images the lander will send, though unlike Curiosity and the other rovers, it won’t be traveling around taking snapshots of everything it sees. Its data will be collected from deep inside the planet, offering us insight into the planet’s — and our solar system’s — origins.


That night, a forest flew: DroneSeed is planting trees from the air


Wildfires are consuming our forests and grasslands faster than we can replace them. It’s a vicious cycle of destruction and inadequate restoration rooted, so to speak, in decades of neglect of the institutions and technologies needed to keep these environments healthy.

DroneSeed is a Seattle-based startup that aims to combat this growing problem with a modern toolkit that scales: drones, artificial intelligence and biological engineering. And it’s even more complicated than it sounds.

Trees in decline

A bit of background first. The problem of disappearing forests is a complex one, but it boils down to a few major factors: climate change, outdated methods and shrinking budgets (and as you can imagine, all three are related).

Forest fires are a natural occurrence, of course. And they’re necessary, as you’ve likely read, to sort of clear the deck for new growth to take hold. But climate change, monoculture growth, population increases, lack of controlled burns and other factors have led to these events taking place not just more often, but more extensively and to more permanent effect.

On average, the U.S. is losing 7 million acres a year. That’s not easy to replace to begin with — and as budgets for the likes of national and state forest upkeep have shrunk continually over the last half century, there have been fewer and fewer resources with which to combat this trend.

The most effective and common reforestation technique for a recently burned woodland is human planters carrying sacks of seedlings and manually selecting and placing them across miles of landscapes. This back-breaking work is rarely done by anyone for more than a year or two, so labor is scarce and turnover is intense.

Even if the labor were available on tap, the trees might not be. Seedlings take time to grow in nurseries, and a major wildfire might necessitate the purchase and planting of millions of new trees. It’s impossible for nurseries to anticipate this demand, and the risk associated with growing such numbers on speculation is more than many can afford. One missed guess could put the whole operation underwater.

Meanwhile, if nothing gets planted, invasive weeds move in with a vengeance, claiming huge areas that were once old growth forests. Lacking the labor and tree inventory to stem this possibility, forest keepers resort to a stopgap measure: use helicopters to drench the area in herbicides to kill weeds, then saturate it with fast-growing cheatgrass or the like. (The alternative to spraying is, again, the manual approach: machetes.)

At least then, in a year, instead of a weedy wasteland, you have a grassy monoculture — not a forest, but it’ll do until the forest gets here.

One final complication: helicopter spraying is a horrendously dangerous profession. These pilots fly at sub-100-foot elevations, performing high-speed maneuvers so that their sprays reach the very edge of burn zones without crashing head-on into the trees; 80 to 100 crashes occur every year in the U.S. alone.

In short, there are more and worse fires and we have fewer resources — and dated ones at that — with which to restore forests after them.

These are facts anyone in forest ecology or logging is familiar with, but perhaps not as well known among technologists. We do tend to stay in areas with cell coverage. But it turns out that a boost from the cloistered knowledge workers of the tech world — specifically those in the Emerald City — may be exactly what the industry and ecosystem require.

Simple idea, complex solution

So what’s the solution to all this? Automation, right?

Automation, especially via robotics, is proverbially suited for jobs that are “dull, dirty, and dangerous.” Restoring a forest is dirty and dangerous to be sure. But dull isn’t quite right. It turns out that the process requires far more intelligence than anyone was willing, it seems, to apply to the problem — with the exception of those planters. That’s changing.

Earlier this year, DroneSeed was awarded the first multi-craft, over-55-pounds unmanned aerial vehicle license ever issued by the FAA. Its custom UAV platforms, equipped with multispectral camera arrays, high-end lidar, six-gallon tanks of herbicide and proprietary seed dispersal mechanisms have been hired by several major forest management companies, with government entities eyeing the service as well.

These drones scout a burned area, mapping it down to centimeter accuracy, including objects and plant species, fumigate it efficiently and autonomously, identify where trees would grow best, then deploy painstakingly designed seed-nutrient packages to those locations. It’s cheaper than people, less wasteful and dangerous than helicopters and smart enough to scale to national forests currently at risk of permanent damage.

I met with the company’s team at their headquarters near Ballard, where complete and half-finished drones sat on top of their cases and the air was thick with capsaicin (we’ll get to that).

The idea for the company began when founder and CEO Grant Canary burned through a few sustainable startup ideas after his last company was acquired, and was told, in his despondency, that he might have to just go plant trees. Canary took his friend’s suggestion literally.

“I started looking into how it’s done today,” he told me. “It’s incredibly outdated. Even at the most sophisticated companies in the world, planters are superheroes that use bags and a shovel to plant trees. They’re being paid to move material over mountainous terrain and be a simple AI and determine where to plant trees where they will grow — microsites. We are now able to do both these functions with drones. This allows those same workers to address much larger areas faster without the caloric wear and tear.”

It may not surprise you to hear that investors are not especially hot on forest restoration (I joked that it was a “growth industry,” but for the reasons above it’s really in dire straits).

But investors are interested in automation, machine learning, drones and especially government contracts. So the pitch took that form. With the money DroneSeed secured, it has built its modestly sized but highly accomplished team and produced the prototype drones with which it has captured several significant contracts before even announcing that it exists.

“We definitely don’t fit the mold or metrics most startups are judged on. The nice thing about not fitting the mold is people double take and then get curious,” Canary said. “Once they see we can actually execute and have been with 3 of the 5 largest timber companies in the U.S. for years, they get excited and really start advocating hard for us.”

The company went through Techstars, and Social Capital helped them get on their feet, with Spero Ventures joining up after the company got some groundwork done.

If things go as DroneSeed hopes, these drones could be deployed all over the world by trained teams, allowing spraying and planting efforts in nurseries and natural forests to take place exponentially faster and more efficiently than they are today. It’s genuine change-the-world-from-your-garage stuff, which is why this article is so long.

Hunter (weed) killers

The job at hand isn’t simple or even straightforward. Every landscape differs from every other, not just in the shape and size of the area to be treated but the ecology, native species, soil type and acidity, type of fire or logging that cleared it and so on. So the first and most important task is to gather information.

For this, DroneSeed has a special craft equipped with a sophisticated imaging stack. This first pass is done using waypoints set on satellite imagery.

The information collected at this point is far more detailed than what’s strictly necessary. The lidar, for instance, captures spatial information at a resolution well beyond what’s needed to understand the shape of the terrain and major obstacles. It produces a 3D map of the vegetation as well as the terrain, allowing the system to identify stumps, roots, bushes, new trees, erosion and other important features.

This works hand in hand with the multispectral camera, which collects imagery not just in the visible bands — useful for identifying things — but also in those outside the human range, which allows for in-depth analysis of the soil and plant life.
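To make the multispectral point concrete: a standard remote-sensing index, NDVI, contrasts near-infrared and red reflectance, because healthy vegetation reflects strongly in NIR and absorbs red. This is a generic technique and a simplified sketch, not a detail of DroneSeed’s actual pipeline (the 0.3 threshold is an arbitrary placeholder):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for a single pixel.
    Inputs are reflectance values in [0, 1]; output is in [-1, 1].
    Values near +1 suggest dense, healthy vegetation; near 0, bare soil."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

def classify_pixel(nir, red, threshold=0.3):
    """Crude per-pixel call: vegetation vs. not (threshold is illustrative)."""
    return "vegetation" if ndvi(nir, red) > threshold else "soil/rock"
```

A real pipeline would fuse indices like this with the lidar-derived terrain model rather than thresholding single pixels.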

The resulting map of the area is not just useful for drone navigation, but for the surgical strikes that are necessary to make this kind of drone-based operation worth doing in the first place. No doubt there are researchers who would love to have this data as well.

Now, spraying and planting are very different tasks. The first tends to be done indiscriminately using helicopters, and the second by laborers who burn out after a couple of years — as mentioned above, it’s incredibly difficult work. The challenge in the first case is to improve efficiency and efficacy; in the second, it’s to automate something that requires considerable intelligence.

Spraying is in many ways simpler. Identifying invasive plants isn’t easy, exactly, but it can be done with imagery like what the drones are collecting. Having identified patches of a plant to be eliminated, the drones can calculate a path and expend only as much herbicide as is necessary to kill them, instead of dumping hundreds of gallons indiscriminately on the entire area. It’s cheaper and more environmentally friendly. Naturally, the opposite approach could be used for distributing fertilizer or some other agent.
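The article doesn’t describe DroneSeed’s planner, but the flavor of “calculate a path over just the problem patches” can be sketched with a greedy nearest-neighbor ordering of patch centroids (a deliberately naive stand-in for a real route optimizer):

```python
import math

def plan_spray_route(patch_centroids, start=(0.0, 0.0)):
    """Order weed-patch centroids by repeatedly flying to the nearest
    unvisited one. Real planners do better (a greedy tour can be
    noticeably longer than optimal), but it shows the targeted idea:
    the drone only travels to, and doses, the identified patches."""
    remaining = list(patch_centroids)
    route, pos = [], start
    while remaining:
        nearest = min(remaining,
                      key=lambda p: math.hypot(p[0] - pos[0], p[1] - pos[1]))
        remaining.remove(nearest)
        route.append(nearest)
        pos = nearest
    return route
```

Dosing then scales with each patch’s mapped area instead of with the whole site, which is where the herbicide savings come from.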

I’m making it sound easy again. This isn’t a plug and play situation — you can’t buy a DJI drone and hit the “weedkiller” option in its control software. A big part of this operation was the creation not only of the drones themselves, but the infrastructure with which to deploy them.

Conservation convoy

The drones themselves are unique, but not alarmingly so. They’re heavy-duty craft, capable of lifting well over the 57 pounds of payload they carry (the FAA limits them to 115 pounds).

“We buy and gut aircraft, then retrofit them,” Canary explained simply. Their head of hardware would probably like to think there’s a bit more to it than that, but really the problem they’re solving isn’t “make a drone” but “make drones plant trees.” To that end, Canary explained, “the most unique engineering challenge was building a planting module for the drone that functions with the software.” We’ll get to that later.

DroneSeed deploys drones in swarms, which means as many as five drones in the air at once — which in turn means they need two trucks and trailers with their boxes, power supplies, ground stations and so on. The company’s VP of operations comes from a military background where managing multiple aircraft onsite was part of the job, and she’s brought her rigorous command of multi-aircraft environments to the company.

The drones take off and fly autonomously, but always under direct observation by the crew. If anything goes wrong, they’re there to take over, though of course there are plenty of autonomous behaviors for what to do in case of, say, a lost positioning signal or bird strike.

They fly in patterns calculated ahead of time to be the most efficient, spraying at problem areas when they’re over them, and returning to the ground stations to have power supplies swapped out before returning to the pattern. It’s key to get this process down pat, since efficiency is a major selling point. If a helicopter does it in a day, why shouldn’t a drone swarm? It would be sad if they had to truck the craft back to a hangar and recharge them every hour or two. It also increases logistics costs like gas and lodging if it takes more time and driving.

This means the team involves several people, as well as several drones. Qualified pilots and observers are needed, as well as people familiar with the hardware and software that can maintain and troubleshoot on site — usually with no cell signal or other support. Like many other forms of automation, this one brings its own new job opportunities to the table.

AI plays Mother Nature

The actual planting process is deceptively complex.

The idea of loading up a drone with seeds and setting it free on a blasted landscape is easy enough to picture. Hell, it’s been done. There are efforts going back decades to essentially load seeds or seedlings into guns and fire them out into the landscape at speeds high enough to bury them in the dirt: in theory this combines the benefits of manual planting with the scale of carpeting the place with seeds.

But whether it was slapdash placement or the shock of being fired out of a seed gun, this approach never seemed to work.

Forestry researchers have shown the effectiveness of finding the right “microsite” for a seed or seedling; in fact, it’s why manual planting works as well as it does. Trained humans find perfect spots to put seedlings: in the lee of a log; near but not too near the edge of a stream; on the flattest part of a slope, and so on. If you really want a forest to grow, you need optimal placement, perfect conditions and preventative surgical strikes with pesticides.

Although it’s difficult, it’s also the kind of thing that a machine learning model can become good at. Sorting through messy, complex imagery and finding local minima and maxima is a specialty of today’s ML systems, and the aerial imagery from the drones is rich in relevant data.

The company’s CTO led the creation of an ML model that determines the best locations to put trees at a site — though this task can be highly variable depending on the needs of the forest. A logging company might want a tree every couple of feet, even if that means putting them in sub-optimal conditions — but a few inches to the left or right may make all the difference. On the other hand, national forests may want more sparse deployments or specific species in certain locations to curb erosion or establish sustainable firebreaks.
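DroneSeed’s model is proprietary, so purely as an illustration: one simple way to turn per-cell suitability scores into a planting plan is to accept the best-scoring cells greedily while enforcing a client-specified minimum spacing. All names and numbers here are hypothetical:

```python
import math

def pick_microsites(scored_cells, min_spacing, max_sites):
    """scored_cells: list of ((x, y), score) candidates from the terrain map.
    Greedily accept the highest-scoring cells that stay at least
    min_spacing meters from every site already chosen."""
    chosen = []
    for (x, y), _score in sorted(scored_cells, key=lambda c: -c[1]):
        if len(chosen) >= max_sites:
            break
        if all(math.hypot(x - cx, y - cy) >= min_spacing
               for cx, cy in chosen):
            chosen.append((x, y))
    return chosen
```

A logging client’s dense grid would use a small min_spacing; a national forest fighting erosion might use a large one, which is the kind of variability the article describes.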

Once the data has been crunched, the map is loaded into the drones’ hive mind and the convoy goes to the location, where the craft are loaded with seeds instead of herbicides.

But not just any old seeds! You see, that’s one more wrinkle. If you just throw a sagebrush seed on the ground, even if it’s in the best spot in the world, it could easily be snatched up by an animal, roll or wash down to a nearby crevasse, or simply fail to find the right nutrients in time despite the planter’s best efforts.

That’s why DroneSeed’s head of Planting and his team have been working on a proprietary seed packet that they were unbelievably reticent to detail.

From what I could gather, they’ve put a ton of work into packaging the seeds into nutrient-packed little pucks held together with a biodegradable fiber. The outside is dusted with capsaicin, the chemical that makes spicy food spicy (and also what makes bear spray do what it does). If they hadn’t told me, I might have guessed, since the workshop area was hazy with it, leading us all to cough and tear up a little. If I were a marmot, I’d learn to avoid these things real fast.

The pucks, or “seed vessels,” can and must be customized for the location and purpose — you have to match the content and acidity of the soil, things like that. DroneSeed will have to make millions of these things, but it doesn’t plan to be the manufacturer.

Finally these pucks are loaded in a special puck-dispenser which, closely coordinating with the drone, spits one out at the exact moment and speed needed to put it within a few centimeters of the microsite.

All these factors should improve the survival rate of seedlings substantially. That means that the company’s methods will not only be more efficient, but more effective. Reforestation is a numbers game played at scale, and even slight improvements — and DroneSeed is promising more than that — are measured in square miles and millions of tons of biomass.

Proof of life

DroneSeed has already signed several big contracts for spraying, and planting is next. Unfortunately, the timing meant they missed this year’s planting season, though by doing a few small sites and showing off the results, they’ll be in pole position for next year.

After demonstrating the effectiveness of the planting technique, the company expects to expand its business substantially. That’s the scaling part — again, not easy, but easier than hiring another couple thousand planters every year.

Ideally the hardware can be assigned to local teams that do the on-site work, producing loci of activity around major forests from which jobs can be deployed at large or small scales. A set of five or six drones does the work of one helicopter, roughly speaking, so depending on the volume requested by a company or forestry organization, you may need dozens on demand.

That’s all yet to be explored, but DroneSeed is confident that the industry will see the writing on the wall when it comes to the old methods, and recognize its approach as a solution that fits the future.

If it sounds like I’m cheerleading for this company, that’s because I am. It’s not often in the world of tech startups that you find a group of people not just attempting to solve a serious problem — it’s common enough to find companies hitting this or that issue — but who have spent the time, gathered the expertise and really done the dirty, boots-on-the-ground work that needs to happen so it goes from great idea to real company.

That’s what I felt was the case with DroneSeed, and here’s hoping their work pays off — for their sake, sure, but mainly for ours.


Limiting social media use reduced loneliness and depression in new experiment


The idea that social media can be harmful to our mental and emotional well-being is not a new one, but little has been done by researchers to directly measure the effect; surveys and correlative studies are at best suggestive. A new experimental study out of the University of Pennsylvania, however, directly links more social media use to worse emotional states, and less use to better.

To be clear on the terminology here, a simple survey might ask people to self-report that using Instagram makes them feel bad. A correlative study would, for example, find that people who report more social media use are more likely to also experience depression. An experimental study compares the results from an experimental group with their behavior systematically modified, and a control group that’s allowed to do whatever they want.

This study, led by Melissa Hunt at the University of Pennsylvania’s psychology department, is the latter — which, despite intense interest in this field and phenomenon, is quite rare. The researchers identified only two other experimental studies, both of which addressed only Facebook use.

One hundred and forty-three students from the school were monitored for three weeks after being assigned to either limit their social media use to about 10 minutes per app (Facebook, Snapchat and Instagram) per day or continue using it as they normally would. They were monitored for a baseline before the experimental period and assessed weekly on a variety of standard tests for depression, social support and so on. Social media usage was monitored via the iOS battery use screen, which shows app use.

The results are clear. As the paper, published in the latest Journal of Social and Clinical Psychology, puts it:

The limited use group showed significant reductions in loneliness and depression over three weeks compared to the control group. Both groups showed significant decreases in anxiety and fear of missing out over baseline, suggesting a benefit of increased self-monitoring.

Our findings strongly suggest that limiting social media use to approximately 30 minutes per day may lead to significant improvement in well-being.
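The article doesn’t reproduce the paper’s statistics, but group differences like these are typically summarized as an effect size on change scores. Here’s a toy sketch using Cohen’s d, a standard standardized mean difference; the numbers are made up and are not the study’s data:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: difference in group means divided by the pooled
    sample standard deviation. |d| around 0.2 is small, 0.8 large."""
    nt, nc = len(treatment), len(control)
    pooled = (((nt - 1) * stdev(treatment) ** 2 +
               (nc - 1) * stdev(control) ** 2) / (nt + nc - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical three-week changes in a depression score (negative = improved).
limited_group = [-5, -4, -6, -5]   # made-up change scores
control_group = [0, 1, -1, 0]      # made-up change scores
effect = cohens_d(limited_group, control_group)
```

A strongly negative d here would mean the limited-use group improved much more than the control group, which is the shape of result the quoted passage describes.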

It’s not the final word in this, however. Some scores did not see improvement, such as self-esteem and social support. And follow-ups to see whether feelings reverted or habit changes proved temporary were limited, because most of the subjects couldn’t be compelled to return. (Psychology, often summarized as “the study of undergraduates,” relies on student volunteers who have no reason to take part except for course credit, and once that’s given, they’re out.)

That said, it’s a straightforward causal link between limiting social media use and improving some aspects of emotional and social health. The exact nature of the link, however, is something about which Hunt could only speculate:

Some of the existing literature on social media suggests there’s an enormous amount of social comparison that happens. When you look at other people’s lives, particularly on Instagram, it’s easy to conclude that everyone else’s life is cooler or better than yours.

When you’re not busy getting sucked into clickbait social media, you’re actually spending more time on things that are more likely to make you feel better about your life.

The researchers acknowledge the limited nature of their study and suggest numerous directions for colleagues in the field to take it from here. A more diverse population, for instance, or including more social media platforms. Longer experimental times and comprehensive follow-ups well after the experiment would help, as well.

The 30-minute limit was chosen as a conveniently measurable one, but the team does not intend to say that it is by any means the “correct” amount. Perhaps half or twice as much time would yield similar or even better results, they suggest: “It may be that there is an optimal level of use (similar to a dose response curve) that could be determined.”

Until then, we can use common sense, Hunt suggested: “In general, I would say, put your phone down and be with the people in your life.”

Powered by WPeMatico

Subterranean drone mapping startup Emesent raises $2.5M to autonomously delve the deep

Posted by | artificial intelligence, Australia, Automation, csiro, drones, funding, Fundings & Exits, Gadgets, hardware, robotics, science, Startups, TC | No Comments

Seemingly every industry is finding ways to use drones in some way or another, but deep underground it’s a different story. In the confines of a mine or pipeline, with no GPS and little or no light, off-the-shelf drones are helpless — but an Australian startup called Emesent is giving them the spatial awareness and intelligence to navigate and map those spaces autonomously.

Drones that work underground or in areas otherwise inaccessible by GPS and other common navigation techniques are being made possible by a confluence of technology and computing power, explained Emesent CEO and co-founder Stefan Hrabar. The work they would take over from people is the epitome of “dull, dirty, and dangerous” — the trifecta for automation.

The mining industry is undoubtedly the most interested in this sort of thing; mining is necessarily a very systematic process and one that involves repeated measurements of areas being blasted, cleared, and so on. Frequently these measurements must be made manually and painstakingly in dangerous circumstances.

One mining technique has ore being blasted from the vertical space between two tunnels; the resulting cavities, called “stopes,” have to be inspected regularly to watch for problems and note progress.

“The way they scan these stopes is pretty archaic,” said Hrabar. “These voids can be huge, like 40-50 meters horizontally. They have to go to the edge of this dangerous underground cliff and sort of poke this stick out into it and try to get a scan. It’s very sparse information and from only one point of view, there’s a lot of missing data.”

Emesent’s solution, Hovermap, involves equipping a standard DJI drone with a lidar sensor and a powerful onboard computing rig that performs simultaneous localization and mapping (SLAM) fast enough that the craft can fly using it. You put it down near the stope and it takes off and does its thing.

“The surveyors aren’t at risk and the data is orders of magnitude better. Everything is running onboard the drone in real time for path planning — that’s our core IP,” Hrabar said. “The dev team’s background is in drone autonomy, collision avoidance, terrain following — basically the drone sensing its environment and doing the right thing.”

As you can see in the video below, the drone can pilot itself through horizontal tunnels (imagine cave systems or transportation infrastructure) or vertical ones (stopes and sinkholes), slowly working its way along and returning minutes later with the data necessary to build a highly detailed map. I don’t know about you, but if I could send a drone ahead into the inky darkness to check for pits and other scary features, I wouldn’t think twice.
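To give a rough sense of the mapping half of SLAM, here’s a toy sketch, emphatically not Emesent’s actual pipeline: given an estimated 2D pose for each lidar scan, the range-and-bearing returns are projected into a shared world frame, so scans taken from different positions accumulate into one consistent point cloud. Real SLAM must also estimate those poses from the scans themselves; this sketch assumes they’re given.

```python
import math

def scan_to_world(pose, scan):
    """pose = (x, y, heading_rad); scan = iterable of (range_m, bearing_rad)
    measured relative to the drone. Returns world-frame (x, y) points."""
    x, y, th = pose
    return [(x + r * math.cos(th + b), y + r * math.sin(th + b))
            for r, b in scan]

# Two scans taken from different poses land in one consistent map:
cloud = []
cloud += scan_to_world((0.0, 0.0, 0.0), [(2.0, 0.0), (2.0, math.pi / 2)])
cloud += scan_to_world((1.0, 0.0, 0.0), [(1.0, 0.0)])
# The same wall point at (2, 0) is observed from both poses.
print(cloud[0], cloud[2])
```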

The idea is to sell the whole stack to mining companies as a plug-and-play solution, but work on commercializing the SLAM software separately for those who want to license and customize it. A data play is also in the works, naturally:

“At the end of the day, mining companies don’t want a point cloud, they want a report. So it’s not just collecting the data but doing the analytics as well,” said Hrabar.

Emesent emerged from Data61, the tech arm of Commonwealth Scientific and Industrial Research Organisation, or CSIRO, an Australian agency not unlike our national lab system. Hrabar worked there for over a decade on various autonomy projects, and three years ago started on what would become this company, eventually passing through the agency’s “ON” internal business accelerator.

Data collected from a pass through a cave system.

“Just last week, actually, is when we left the building,” Hrabar noted. “We’ve raised the funding we need for 18 months of runway with no revenue. We really are already generating revenue, though.”

The $3.5 million (Australian) round comes largely from a new $200M CSIRO Innovation fund managed by Main Sequence Ventures. Hrabar suggested that another round might be warranted in a year or two when the company decides to scale and expand into other verticals.

DARPA will be making its own contribution after a fashion through its Subterranean Challenge, should (as seems likely) Emesent achieve success in it (they’re already an approved participant). Hrabar was confident. “It’s pretty fortuitous,” he said. “We’ve been doing underground autonomy for years, and then DARPA announces this challenge on exactly what we’re doing.”

We’ll be covering the challenge and its participants separately. You can read more about Emesent at its website.


Reef-rejuvenating LarvalBot spreads coral babies by the millions

Posted by | climate change, conservation, Gadgets, GreenTech, robotics, science | No Comments

The continuing die-off of the world’s coral reefs is a depressing reminder of the reality of climate change, but it’s also something we can actively push back on. Conservationists have a new tool to do so with LarvalBot, an underwater robot platform that may greatly accelerate efforts to re-seed old corals with healthy new polyps.

The robot has a history going back to 2015, when a prototype known as COTSbot was introduced, capable of autonomously finding and destroying the destructive crown-of-thorns starfish (hence the name). It has since been upgraded and revised by the team at the Queensland University of Technology, and in its hunter-killer form is known as the RangerBot.

But the same systems that let it safely navigate and monitor corals for invasive fauna also make it capable of helping these vanishing ecosystems more directly.

Great Barrier Reef corals spawn yearly in a mass event that sees the waters off north Queensland filled with eggs and sperm. Researchers at Southern Cross University have been studying how to reap this harvest and sow a new generation of corals. They collect the eggs and sperm and sequester them in floating enclosures, where they are given a week or so to develop into viable coral babies (not my term, but I like it). These coral babies are then transplanted carefully to endangered reefs.

LarvalBot comes into play in that last step.

“We aim to have two or three robots ready for the November spawn. One will carry about 200,000 larvae and the other about 1.2 million,” explained QUT’s Matthew Dunbabin in a news release. “During operation, the robots will follow preselected paths at constant altitude across the reef and a person monitoring will trigger the release of the larvae to maximise the efficiency of the dispersal.”

It’s something a diver would normally have to do, so the robot acts as a force multiplier — one that requires neither food nor oxygen. A few of these could do the work of dozens of rangers or volunteers.

“The surviving corals will start to grow and bud and form new colonies which will grow large enough after about three years to become sexually reproductive and complete the life cycle,” said Southern Cross’s Peter Harrison, who has been developing the larval restoration technique.

It’s not a quick fix by any means, but this artificial spreading of corals could vastly improve the chances of a given reef or area surviving the next few years and eventually becoming self-sufficient again.


Watch this little robot transform to get the job done

Posted by | artificial intelligence, Gadgets, hardware, robotics, robots, science | No Comments

Robots just want to get things done, but it’s frustrating when their rigid bodies simply don’t allow them to do so. Solution: bodies that can be reconfigured on the fly! Sure, it’s probably bad news for humanity in the long run, but in the meantime it makes for fascinating research.

A team of graduate students from Cornell University and the University of Pennsylvania made this idea their focus and produced both the modular, self-reconfiguring robot itself and the logic that drives it.

Think about how you navigate the world: If you need to walk somewhere, you sort of initiate your “walk” function. But if you need to crawl through a smaller space, you need to switch functions and shapes. Similarly, if you need to pick something up off a table, you can just use your “grab” function, but if you need to reach around or over an obstacle you need to modify the shape of your arm and how it moves. Naturally you have a nearly limitless “library” of these functions that you switch between at will.

That’s really not the case for robots, which are much more rigidly designed both in hardware and software. This research, however, aims to create a similar — if considerably smaller — library of actions and configurations that a robot can use on the fly to achieve its goals.

In their paper published today in Science Robotics, the team documents the groundwork they undertook, and although it’s still extremely limited, it hints at how this type of versatility will be achieved in the future.

The robot itself, called SMORES-EP, might be better described as a collection of robots: small cubes (it’s a popular form factor) equipped with wheels and magnets that can connect to each other and cooperate when one or all of them won’t do the job. The brains of the operation lie in a central unit equipped with a camera and depth sensor it uses to survey the surroundings and decide what to do.

If it sounds a little familiar, that’s because the same team demonstrated a different aspect of this system earlier this year, namely the ability to identify spaces it can’t navigate and deploy items to remedy that. The current paper focuses on the underlying system the robot uses to perceive its surroundings and interact with them.

Let’s put this in more concrete terms. Say a robot like this one is given the goal of collecting the shoes from around your apartment and putting them back in your closet. It gets around your apartment fine but ultimately identifies a target shoe that’s underneath your bed. It knows that it’s too big to fit under there because it can perceive dimensions and understands its own shape and size. But it also knows that it has functions for accessing enclosed areas, and it can tell that by arranging its parts in such and such a way it should be able to reach the shoe and bring it back out.

The flexibility of this approach and the ability to make these decisions autonomously are where the paper’s advances lie. This isn’t a narrow “shoe-under-bed-getter” function; it’s a general tool for accessing areas the robot itself can’t fit into, whether that means pushing a recessed button, lifting a cup sitting on its side, or reaching between condiments to grab one in the back.
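The “library of functions” idea can be sketched in a few lines. The configuration names and fields below are invented for illustration and are not the SMORES-EP API; the point is just that a planner can pick the first configuration that both supports the required behavior and fits the measured clearance:

```python
# Hypothetical behavior library: each entry pairs a robot configuration
# with the behaviors it supports and the vertical clearance it needs.
LIBRARY = [
    {"config": "car", "behaviors": {"drive", "carry"}, "height_cm": 12},
    {"config": "proboscis", "behaviors": {"reach", "grab"}, "height_cm": 6},
    {"config": "snake", "behaviors": {"crawl", "reach"}, "height_cm": 4},
]

def select_configuration(behavior, clearance_cm):
    # Return the first configuration that offers the behavior and fits.
    for entry in LIBRARY:
        if behavior in entry["behaviors"] and entry["height_cm"] <= clearance_cm:
            return entry["config"]
    return None  # nothing fits: replan or give up

# A shoe under a bed with 5 cm of clearance: the default shape is too
# tall, so the planner falls through to the low-profile option.
print(select_configuration("reach", 5))  # snake
```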

A visualization of how the robot perceives its environment.

As with just about everything in robotics, this is harder than it sounds, and it doesn’t even sound easy. The “brain” needs to be able to recognize objects, accurately measure distances, and fundamentally understand physical relationships between objects. In the shoe-grabbing situation above, what’s stopping a robot from trying to lift the bed and leave it floating above the ground while it drives underneath? Artificial intelligences have no inherent understanding of basic concepts like these, so many must be hard-coded, or algorithms must be created that reliably make the right choice.

Don’t worry, the robots aren’t quite at the “collect shoes” or “collect remaining humans” stage yet. The tests to which the team subjected their little robot were more like “get around these cardboard boxes and move any pink-labeled objects to the designated drop-off area.” Even this type of carefully delineated task is remarkably difficult, but the bot did just fine — though rather slowly, as lab-based bots tend to be.

The authors of the paper have since finished their grad work and moved on to new (though surely related) things. Tarik Tosun, one of the authors with whom I talked for this article, explained that he’s now working on advancing the theoretical side of things as opposed to, say, building cube-modules with better torque. To that end he helped author VSPARC, a simulator environment for modular robots. Although it is tangential to the topic immediately at hand, the importance of this aspect of robotics research can’t be overstated.

You can find a pre-published version of the paper here in case you don’t have access to Science Robotics.


Inspired by spiders and wasps, these tiny drones pull 40x their own weight

Posted by | drones, Gadgets, robotics, science, stanford, Stanford University, UAVs | No Comments

If we want drones to do our dirty work for us, they’re going to need to get pretty good at hauling stuff around. But due to the pesky yet unavoidable constraints of physics, it’s hard for them to muster the forces necessary to do so while airborne — so these drones brace themselves against the ground to get the requisite torque.

The drones, created by engineers at Stanford and Switzerland’s EPFL, were inspired by wasps and spiders that need to move prey from place to place but can’t actually lift it, so they drag it instead. Grippy feet and strong threads or jaws let them pull objects many times their weight along the ground, just as you might slide a dresser along rather than pick it up and put it down again. So I guess it could have also just been inspired by that.

Whatever the inspiration, these “FlyCroTugs” (a combination of flying, micro and tug presumably) act like ordinary tiny drones while in the air, able to move freely about and land wherever they need to. But they’re equipped with three critical components: an anchor to attach to objects, a winch to pull on that anchor and sticky feet to provide sure grip while doing so.

“By combining the aerodynamic forces of our vehicle and the interactive forces generated by the attachment mechanisms, we were able to come up with something that is very mobile, very strong and very small,” said Stanford grad student Matthew Estrada, lead author of the paper published in Science Robotics.

The idea is that one or several of these ~100-gram drones could attach their anchors to something they need to move, be it a lever or a piece of trash. Then they take off and land nearby, spooling out thread as they do so. Once they’re back on terra firma they activate their winches, pulling the object along the ground — or up over obstacles that would have been impossible to navigate with tiny wheels or feet.

Using this technique — assuming they can get a solid grip on whatever surface they land on — the drones are capable of moving objects 40 times their weight — for a 100-gram drone like that shown, that would be about 4 kilograms, or nearly 9 pounds. Not quickly, but that may not always be a necessity. What if a handful of these things flew around the house when you were gone, picking up bits of trash or moving mail into piles? They would have hours to do it.
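For what it’s worth, the payload arithmetic checks out:

```python
# Sanity check of the figures above: a 100-gram drone pulling
# 40x its own mass moves about 4 kg, or nearly 9 pounds.
DRONE_MASS_G = 100
PULL_RATIO = 40
KG_TO_LB = 2.20462

payload_kg = DRONE_MASS_G * PULL_RATIO / 1000
payload_lb = payload_kg * KG_TO_LB

print(payload_kg)            # 4.0
print(round(payload_lb, 1))  # 8.8
```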

As you can see in the video below, they can even team up to do things like open doors.

“People tend to think of drones as machines that fly and observe the world,” said co-author of the paper, EPFL’s Dario Floreano, in a news release. “But flying insects do many other things, such as walking, climbing, grasping and building. Social insects can even work together and combine their strength. Through our research, we show that small drones are capable of anchoring themselves to surfaces around them and cooperating with fellow drones. This enables them to perform tasks typically assigned to humanoid robots or much larger machines.”

Unless you’re prepared to wait for humanoid robots to take on tasks like this (and it may be a decade or two), you may have to settle for drone swarms in the meantime.
