
How Squishy Robotics created a robot that can be safely dropped out of a helicopter


If you want to build a robot that can fall hundreds of feet and come out none the worse for wear, legs are pretty much out of the question. The obvious answer, then, is a complex web of cable-actuated rods. Obvious to Squishy Robotics, anyway, whose robots look delicate but are in fact among the most durable out there.

The startup has been operating more or less in stealth mode, emerging publicly today onstage at our Robotics + AI Sessions event in Berkeley, Calif. It began, co-founder and CEO Alice Agogino told me, as a project connected to NASA Ames a few years back.

“The original idea was to have a robot that could be dropped from a spacecraft and survive the fall,” said Agogino. “But I could tell this tech had earthly applications.”

Her reason for thinking so was learning that first responders were losing their lives due to poor situational awareness in the areas where they were being deployed. It’s hard to tell, without actually being right there, that a toxic gas is lying close to the ground, or that there is a downed electrical line hidden under a fallen tree, and so on.

Robots are well-suited to this type of reconnaissance, but it’s a bit of a Catch-22: You have to get close to deploy a robot, but you need the robot there to get close enough in the first place. Unless, of course, you can somehow deploy the robot from the air. This is already done, but it’s rather clumsy: picture a wheeled bot floating down under a parachute, missing its mark by a hundred feet due to high winds or getting tangled in its own cords.

“We interviewed a number of first responders,” said Agogino. “They told us they want us to deploy ground sensors before they get there, to know what they’re getting into; then when they get there they want something to walk in front of them.”

Squishy’s solution can’t quite be dropped from orbit, as the original plan for exploring Saturn’s moon Titan called for, but its robots can fall from 600 feet, and likely much more than that, and function perfectly well afterwards. It’s all because of the unique “tensegrity structure,” which looks like a game of pick-up-sticks crossed with cat’s cradle. (Only the freshest references for you, reader.)

If it looks familiar, you’re probably thinking of the structures famously studied by Buckminster Fuller, and they’re related but quite different. This one had to be engineered not just to withstand great force from dropping, but to shift in such a way that it can walk or crawl along the ground and even climb low obstacles. That’s a nontrivial shift away from the buckyball and other geodesic types.

“We looked at lots of different tensegrity structures — there are an infinite number,” Agogino said. “It has six compressive elements, which are the bars, and 24 other elements, which are the cables or wires. But they could be shot out of a cannon and still protect the payload. And they’re so compliant, you could throw them at children, basically.” (That’s not the mission, obviously. But there are in fact children’s toys with tensegrity-type designs.)

Inside the bars are wires that can be pulled or slackened to move the various points of contact with the ground, shifting the center of gravity and causing the robot to roll or spin in the desired direction. A big part of the engineering work was making the tiny motors that control the cables, and then essentially inventing a method of locomotion for this strange shape.

“On the one hand it’s a relatively simple structure, but it’s complicated to control,” said Agogino. “To get from A to B there are any number of solutions, so you can just play around — we even had kids do it. But to do it quickly and accurately, we used machine learning and AI techniques to come up with an optimum technique. First we just created lots of motions and observed them. And from those we found patterns, different gaits. For instance if it has to squeeze between rocks, it has to change its shape to be able to do that.”
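To give a flavor of that “generate lots of motions and see what works” approach, here’s a toy sketch: a random search over cyclic cable-tension patterns, scored by a stand-in surrogate function. The real system would score candidates with a physics simulator or the robot itself; nothing below is Squishy Robotics’ actual code, and the scoring heuristic is invented purely for illustration.

```python
# Toy sketch of gait discovery for a six-bar, 24-cable tensegrity robot.
# The surrogate scoring function is a made-up stand-in for a real simulator.
import random

NUM_CABLES = 24      # tension elements, per the article
PATTERN_STEPS = 8    # actuation steps in one candidate "gait"

def random_gait():
    """A candidate gait: a target tension (0..1) for each cable at each step."""
    return [[random.random() for _ in range(NUM_CABLES)] for _ in range(PATTERN_STEPS)]

def surrogate_displacement(gait):
    """Stand-in for simulation: reward asymmetric but smoothly varying tension
    patterns, the kind that tend to shift the center of mass and roll the ball."""
    score = 0.0
    for step, nxt in zip(gait, gait[1:] + gait[:1]):
        asymmetry = abs(sum(step[:NUM_CABLES // 2]) - sum(step[NUM_CABLES // 2:]))
        jerkiness = sum(abs(a - b) for a, b in zip(step, nxt)) / NUM_CABLES
        score += 0.1 * asymmetry - jerkiness
    return score

# "Create lots of motions and observe them," then keep the best-scoring one.
best = max((random_gait() for _ in range(2000)), key=surrogate_displacement)
print("best surrogate score:", round(surrogate_displacement(best), 3))
```

In the real pipeline, the patterns that score well become the library of gaits the machine learning system refines; here the point is only the shape of the search.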

The mobile version would be semi-autonomous, meaning it would be controlled more or less directly but figure out on its own the best way to accomplish “go forward” or “go around this wall.” The payload can be customized to have various sensors and cameras, depending on the needs of the client — one being deployed at a chemical spill needs a different loadout than one dropping into a radioactive area, for instance.

To be clear, these things aren’t going to win in an all-out race against a Spot or a wheeled robot on unbroken pavement. But for one thing, those are built specifically for certain environments and there’s room for more all-purpose, adaptable types. And for another, neither one of those can be dropped from a helicopter and survive. In fact, almost no robots at all can.

“No one can do what we do,” Agogino preened. At a recent industry demo day where robot makers showed off air-drop models, “we were the only vendor that was able to do a successful drop.”

And although the tests only went up to a few hundred feet, there’s no reason that Squishy’s bots shouldn’t be able to be dropped from 1,000, or for that matter 50,000 feet up. They hit terminal velocity after a relatively short distance, meaning they’re already hitting the ground as hard as they ever will, and they work just fine afterwards. That has plenty of parties interested in what Squishy is selling.
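For a rough sense of why the drop height stops mattering, some back-of-the-envelope physics helps; the mass, drag coefficient and frontal area below are my assumptions for illustration, not the company’s specs.

```python
# Back-of-the-envelope terminal velocity for a drop-tested robot.
# All physical parameters here are assumed, not Squishy Robotics' figures.
import math

m = 2.0      # kg, assumed robot mass
g = 9.81     # m/s^2, gravity
rho = 1.225  # kg/m^3, sea-level air density
Cd = 1.0     # drag coefficient, assumed for an open lattice shape
A = 0.2      # m^2, assumed effective frontal area

v_t = math.sqrt(2 * m * g / (rho * Cd * A))
print(f"terminal velocity ≈ {v_t:.1f} m/s")

# Constant-drag model: v(t) = v_t * tanh(g*t / v_t). Reaching 95% of v_t
# means g*t / v_t = atanh(0.95) ≈ 1.832; the distance fallen by then is:
x = 1.832
d95 = (v_t ** 2 / g) * math.log(math.cosh(x))
print(f"~95% of terminal velocity after roughly {d95:.0f} m of fall")
```

With numbers in that ballpark, the robot reaches essentially full speed within the first hundred feet or so of fall, which is why 600 feet and 50,000 feet look much the same on impact.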

The company is still extremely small and has very little funding: mainly a $500,000 grant from NASA and $225,000 from the National Science Foundation’s SBIR fund. But they’re also working from UC Berkeley’s Skydeck accelerator, which has already put them in touch with a variety of resources and entrepreneurs, and the upcoming May 14 demo day will put their unique robotics in front of hundreds of VCs eager to back the latest academic spin-offs.

You can keep up with the latest from the company at its website, or of course this one.


Talk all things robotics and AI with TechCrunch writers


This Thursday, we’ll be hosting our third annual Robotics + AI TechCrunch Sessions event at UC Berkeley’s Zellerbach Hall. The day is packed start-to-finish with intimate discussions on the state of robotics and deep learning with key founders, investors, researchers and technologists.

The event will dig into recent developments in robotics and AI, which startups and companies are driving the market’s growth and how the evolution of these technologies may ultimately play out. In preparation for our event, TechCrunch’s Brian Heater spent time over the last several months visiting some of the top robotics companies in the country. Brian will be on the ground at the event, alongside Lucas Matney. On Friday at 11:00 am PT, Brian and Lucas will be sharing with Extra Crunch members (on a conference call) what they saw and what excited them most.

Tune in to find out what you might have missed and to ask Brian and Lucas anything else about robotics, AI or hardware. And want to attend the event in Berkeley this week? It’s not too late to get tickets.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.


Mars helicopter bound for the Red Planet takes to the air for the first time


The Mars 2020 mission is on track for launch next year, and nesting inside the new rover headed that way is a high-tech helicopter designed to fly in the planet’s nearly non-existent atmosphere. The actual aircraft that will fly on the Martian surface just took its first flight, and its engineers are over the moon.

“The next time we fly, we fly on Mars,” said MiMi Aung, who manages the project at JPL, in a news release. An engineering model that was very close to final has over an hour of time in the air, but these two brief test flights were the first and last time the tiny craft will take flight until it does so on the distant planet (not counting its “flight” during launch).

“Watching our helicopter go through its paces in the chamber, I couldn’t help but think about the historic vehicles that have been in there in the past,” she continued. “The chamber hosted missions from the Ranger Moon probes to the Voyagers to Cassini, and every Mars rover ever flown. To see our helicopter in there reminded me we are on our way to making a little chunk of space history as well.”

Artist’s impression of how the helicopter will look when it’s flying on Mars

A helicopter flying on Mars is much like a helicopter flying on Earth, except of course for the slight differences that the other planet has about a third of Earth’s gravity and 99 percent less air. It’s more like flying at 100,000 feet, Aung suggested.

It has its own solar panel so it can explore more or less on its own

The test rig they set up not only produces a near-vacuum, replacing the air with a thin, Mars-esque CO2 mix, but a “gravity offload” system simulates lower gravity by giving the helicopter a slight lift via a cable.

It flew at a whopping two inches of altitude for a total of a minute in two tests, which was enough to show the team that the craft (with all its 1,500 parts and four pounds) was ready to package up and send to the Red Planet.

“It was a heck of a first flight,” said tester Teddy Tzanetos. “The gravity offload system performed perfectly, just like our helicopter. We only required a 2-inch hover to obtain all the data sets needed to confirm that our Mars helicopter flies autonomously as designed in a thin Mars-like atmosphere; there was no need to go higher.”
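For a sense of what that offload rig has to carry, here’s the quick arithmetic, using the roughly four-pound mass mentioned above and standard gravity figures.

```python
# Rough arithmetic for the "gravity offload" cable tension.
# The ~4 lb mass comes from the article; Mars surface gravity is ~3.71 m/s^2.
LB_TO_KG = 0.4536
mass = 4 * LB_TO_KG           # ~1.8 kg
g_earth, g_mars = 9.81, 3.71  # m/s^2

weight_earth = mass * g_earth
weight_mars = mass * g_mars
offload = weight_earth - weight_mars  # constant upward pull the cable supplies

print(f"Earth weight: {weight_earth:.1f} N, Mars weight: {weight_mars:.1f} N")
print(f"cable must offload ≈ {offload:.1f} N ({offload / weight_earth:.0%} of Earth weight)")
```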

A few months after the Mars 2020 rover has landed, the helicopter will detach and do a few test flights of up to 90 seconds. Those will be the first heavier-than-air flights on another planet — powered flight, in other words, rather than, say, a balloon filled with gaseous hydrogen.

The craft will operate mostly autonomously, since the half-hour round trip for commands would be far too long for an Earth-based pilot to operate it. It has its own solar cells and batteries, plus little landing feet, and will attempt flights of increasing distance from the rover over a 30-day period. It should go about three meters in the air and may eventually get hundreds of meters away from its partner.
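The arithmetic behind that command round trip is simple enough if you want to check it; the distances below are approximate figures for Mars at its closest approach and near its farthest.

```python
# Light-time round trip for commands to Mars at two approximate distances.
C_KM_S = 299_792  # speed of light, km/s

for label, distance_km in [("Mars at closest approach", 54.6e6),
                           ("Mars near its farthest",   401e6)]:
    round_trip_min = 2 * distance_km / C_KM_S / 60
    print(f"{label}: ~{round_trip_min:.0f} minute round trip")
```

Depending on where the two planets are in their orbits, that’s anywhere from about six minutes to the better part of an hour, so the half-hour figure sits comfortably in the middle of the range.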

Mars 2020 is estimated to be ready to launch next summer, arriving at its destination early in 2021. Of course, in the meantime, we’ve still got Curiosity and InSight up there, so if you want the latest from Mars, you’ve got plenty of options to choose from.


This self-driving AI faced off against a champion racer (kind of)


Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.

To set expectations here, this isn’t some stunt, it’s actually warranted given the nature of the research, and it’s not like they were trading positions, jockeying for entry lines, and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!

The question which Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with an autonomous vehicle operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds, in good conditions. And most obstacle encounters are similarly ordinary.

If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?

The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface, and other conditions. But they are necessarily simplified and their assumptions are of the type to produce increasingly inaccurate results as values exceed ordinary limits.

Imagine a simulator that simplifies each wheel to a point or a line, when during a slide it matters a great deal which side of the tire is experiencing the most friction. Such detailed simulations are beyond the ability of current hardware to do quickly or accurately enough. But the results of such simulations can be summarized into an input and output, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.

The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. It’s fairly basic. The model then consults its training, but is also informed by the real-world results, which may perhaps differ from theory.

So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
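Stripped way down, the control idea looks something like the sketch below: a learned feedforward model proposes a steering angle for the upcoming piece of the racing line, and a feedback term corrects for the measured tracking error. The “learned model” here is a stand-in function and the gains are invented; this is the shape of the approach, not Stanford’s actual controller.

```python
# Sketch of feedforward-plus-feedback steering. The feedforward function is a
# placeholder for a trained neural network; all constants are assumed values.
def learned_feedforward(speed_mps, path_curvature):
    """Pretend network: maps (speed, curvature) to a steering angle in radians.
    The real model is trained on simulated and recorded vehicle behavior."""
    WHEELBASE = 2.5          # m, assumed
    UNDERSTEER_GAIN = 0.002  # assumed; handling effects grow with speed
    return path_curvature * WHEELBASE + UNDERSTEER_GAIN * speed_mps ** 2 * path_curvature

def steering_command(speed_mps, path_curvature, lateral_error_m, heading_error_rad):
    """Feedforward from the model plus feedback on the measured tracking error."""
    K_LAT, K_HEAD = 0.05, 0.8  # feedback gains, assumed
    ff = learned_feedforward(speed_mps, path_curvature)
    fb = -K_LAT * lateral_error_m - K_HEAD * heading_error_rad
    return ff + fb

# Example: 40 m/s through a gentle left-hander, drifting 0.3 m wide of the line.
print(f"steering command: {steering_command(40.0, 0.01, 0.3, 0.02):+.3f} rad")
```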

And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.

The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:

Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.

Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.

In other words, while pulling a G and hitting 95, the self-driving Audi was never more than a foot and a half off its ideal racing line. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.

“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”

Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.

This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.

The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would be different than front-wheel on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.

The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.


Mobileye CEO clowns on Nvidia for allegedly copying self-driving car safety scheme


While creating self-driving car systems, it’s natural that different companies might independently arrive at similar methods or results — but the similarities in a recent “first of its kind” Nvidia proposal to work done by Mobileye two years ago were just too much for the latter company’s CEO to take politely.

Amnon Shashua, in a blog post on parent company Intel’s news feed cheekily titled “Innovation Requires Originality,” openly mocks Nvidia’s “Safety Force Field,” pointing out innumerable similarities to Mobileye’s “Responsibility Sensitive Safety” paper from 2017.

He writes:

It is clear Nvidia’s leaders have continued their pattern of imitation as their so-called “first-of-its-kind” safety concept is a close replica of the RSS model we published nearly two years ago. In our opinion, SFF is simply an inferior version of RSS dressed in green and black. To the extent there is any innovation there, it appears to be primarily of the linguistic variety.

Now, it’s worth considering the idea that the approach both seem to take is, like many in the automotive and autonomous fields and elsewhere, simply inevitable. Car makers don’t go around accusing each other of using a similar setup of four wheels and two pedals. It’s partly for this reason, and partly because the safety model works better the more cars follow it, that when Mobileye published its RSS paper, it did so publicly and invited the industry to collaborate.

Many did, including, as Shashua points out, Nvidia, at least for a short time in 2018, after which Nvidia pulled out of collaboration talks. To do so and then, a year afterward, propose a system that is, if not identical, then at least remarkably similar, without crediting or mentioning Mobileye, is suspicious to say the least.

The (highly simplified) foundation of both is calculating a set of standard actions corresponding to laws and human behavior that plan safe maneuvers based on the car’s own physical parameters and those of nearby objects and actors. But the similarities extend beyond these basics, Shashua writes (emphasis his):

RSS defines a safe longitudinal and a safe lateral distance around the vehicle. When those safe distances are compromised, we say that the vehicle is in a Dangerous Situation and must perform a Proper Response. The specific moment when the vehicle must perform the Proper Response is called the Danger Threshold.

SFF defines identical concepts with slightly modified terminology. Safe longitudinal distance is instead called “the SFF in One Dimension;” safe lateral distance is described as “the SFF in Higher Dimensions.”  Instead of Proper Response, SFF uses “Safety Procedure.” Instead of Dangerous Situation, SFF replaces it with “Unsafe Situation.” And, just to be complete, SFF also recognizes the existence of a Danger Threshold, instead calling it a “Critical Moment.”

This is followed by numerous other close parallels, and just when you think it’s done, he includes a whole separate document (PDF) showing dozens of other cases where Nvidia seems (it’s hard to tell in some cases if you’re not closely familiar with the subject matter) to have followed Mobileye and RSS’s example over and over again.
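If you’re curious what both frameworks are actually formalizing, the RSS paper’s safe longitudinal distance has a concrete published formula; below is a sketch of it with illustrative parameter values, which are my assumptions rather than anything Mobileye or Nvidia ships.

```python
# Sketch of RSS's safe longitudinal distance, per the 2017 RSS paper
# (Shalev-Shwartz et al.). Parameter values are illustrative assumptions.
def rss_safe_longitudinal_distance(v_rear, v_front,
                                   response_time=0.5,  # s, assumed
                                   a_max_accel=3.0,    # m/s^2, assumed
                                   a_min_brake=4.0,    # m/s^2, rear car, assumed
                                   a_max_brake=8.0):   # m/s^2, front car, assumed
    """Minimum gap so the rear car can always stop in time, even if the front
    car brakes as hard as possible while the rear car is still reacting."""
    v_rear_worst = v_rear + response_time * a_max_accel
    d = (v_rear * response_time
         + 0.5 * a_max_accel * response_time ** 2
         + v_rear_worst ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Two cars at highway speed, about 30 m/s each:
print(f"safe gap ≈ {rss_safe_longitudinal_distance(30.0, 30.0):.1f} m")
```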

Theoretical work like this isn’t really patentable, and patenting wouldn’t be wise anyway, since widespread adoption of the basic ideas is the most desirable outcome (as both papers emphasize). But it’s common for one R&D group to push in one direction and have others refine or create counter-approaches.

You see it in computer vision, where for example Google boffins may publish their early and interesting work, which is picked up by FAIR or Uber and improved or added to in another paper 8 months later. So it really would have been fine for Nvidia to publicly say “Mobileye proposed some stuff, that’s great but here’s our superior approach.”

Instead there is no mention of RSS at all, which is strange considering their similarity, and the only citation in the SFF whitepaper is “The Safety Force Field, Nvidia, 2017,” in which, we are informed on the very first line, “the precise math is detailed.”

Just one problem: This paper doesn’t seem to exist anywhere. It certainly was never published publicly in any journal or blog post by the company. It has no DOI number and doesn’t show up in any searches or article archives. This appears to be the first time anyone has ever cited it.

It’s not required for rival companies to be civil with each other all the time, but in the research world this will almost certainly be considered poor form by Nvidia, and that can have knock-on effects when it comes to recruiting and overall credibility.

I’ve contacted Nvidia for comment (and to ask for a copy of this mysterious paper). I’ll update this post if I hear back.


Gates-backed Lumotive upends lidar conventions using metamaterials


Pretty much every self-driving car on the road, not to mention many a robot and drone, uses lidar to sense its surroundings. But useful as lidar is, it also involves physical compromises that limit its capabilities. Lumotive is a new company with funding from Bill Gates and Intellectual Ventures that uses metamaterials to exceed those limits, perhaps setting a new standard for the industry.

The company is just now coming out of stealth, but it’s been in the works for a long time. I actually met with them back in 2017, when the project was very hush-hush and operating under a different name at IV’s startup incubator. If the terms “metamaterials” and “Intellectual Ventures” tickle something in your brain, it’s because IV has spawned several startups that use intellectual property developed there, building on the work of materials scientist David Smith.

Metamaterials are essentially specially engineered surfaces with microscopic structures — in this case, tunable antennas — embedded in them, working as a single device.

Echodyne is another company that used metamaterials to great effect, shrinking radar arrays to pocket size by engineering a radar transceiver that’s essentially 2D and can have its beam steered electronically rather than mechanically.

The principle works for pretty much any wavelength of electromagnetic radiation — i.e. you could use X-rays instead of radio waves — but until now no one has made it work with visible light. That’s Lumotive’s advance, and the reason it works so well.

Flash, 2D and 1D lidar

Lidar basically works by bouncing light off the environment and measuring how and when it returns; this can be accomplished in several ways.

Flash lidar basically sends out a pulse that illuminates the whole scene with near-infrared light (905 nanometers, most likely) at once. This provides a quick measurement of the whole scene, but limited distance as the power of the light being emitted is limited.

2D or raster scan lidar takes an NIR laser and plays it over the scene incredibly quickly, left to right, down a bit, then does it again, again and again… scores or hundreds of times. Focusing the power into a beam gives these systems excellent range, but similar to a CRT TV with an electron beam tracing out the image, it takes rather a long time to complete the whole scene. Turnaround time is naturally of major importance in driving situations.

1D or line scan lidar strikes a balance between the two, using a vertical line of laser light that only has to go from one side to the other to complete the scene. This sacrifices some range and resolution but significantly improves responsiveness.

Lumotive offered the following diagram, which helps visualize the systems, although obviously “suitability” and “too short” and “too slow” are somewhat subjective:

The main problem with the latter two is that they rely on a mechanical platform to actually move the laser emitter or mirror from place to place. It works fine for the most part, but there are inherent limitations. For instance, it’s difficult to stop, slow or reverse a beam that’s being moved by a high-speed mechanism. If your 2D lidar system sweeps over something that could be worth further inspection, it has to go through the rest of its motions before coming back to it… over and over.

This is the primary advantage offered by a metamaterial system over existing ones: electronic beam steering. In Echodyne’s case the radar could quickly sweep over its whole range like normal, and upon detecting an object could immediately switch over and focus 90 percent of its cycles tracking it in higher spatial and temporal resolution. The same thing is now possible with lidar.

Imagine a deer jumping out around a blind curve. Every millisecond counts because the earlier a self-driving system knows the situation, the more options it has to accommodate it. All other things being equal, an electronically steered lidar system would detect the deer at the same time as the mechanically steered ones, or perhaps a bit sooner; upon noticing this movement, it could not just make more time for evaluating it on the next “pass,” but a microsecond later be backing up the beam and specifically targeting just the deer with the majority of its resolution.

Just for illustration. The beam isn’t some big red thing that comes out.

Targeted illumination would also improve the estimation of direction and speed, further improving the driving system’s knowledge and options — meanwhile, the beam can still dedicate a portion of its cycles to watching the road, requiring no complicated mechanical hijinks to do so. Meanwhile, it has an enormous aperture, allowing high sensitivity.

In terms of specs, it depends on many things, but if the beam is just sweeping normally across its 120×25 degree field of view, the standard unit will have about a 20Hz frame rate, with a 1000×256 resolution. That’s comparable to competitors, but keep in mind that the advantage is in the ability to change that field of view and frame rate on the fly. In the example of the deer, it may maintain a 20Hz refresh for the scene at large but concentrate more beam time on a 5×5 degree area, giving it a much faster rate.
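Some quick arithmetic shows what that on-the-fly tradeoff buys, using the numbers above; the 70/30 time split for the deer scenario is my assumption, not Lumotive’s.

```python
# Point budget and region-of-interest refresh rate, using the article's specs:
# 120x25 degree field of view, 1000x256 points per frame, 20 Hz.
fov_points = 1000 * 256
frame_rate = 20                          # Hz
point_budget = fov_points * frame_rate   # points the unit can place per second
print(f"point budget: {point_budget / 1e6:.2f} M points/s")

# Assume 70% of beam time stays on the full scene and 30% revisits a
# 5x5 degree patch (the deer) at the same angular resolution.
roi_fraction = (5 * 5) / (120 * 25)
roi_points = fov_points * roi_fraction
roi_rate = (0.30 * point_budget) / roi_points
scene_rate = (0.70 * point_budget) / fov_points
print(f"full-scene refresh ≈ {scene_rate:.0f} Hz, ROI refresh ≈ {roi_rate:.0f} Hz")
```

In other words, giving up a few frames per second on the wide view can buy refresh rates in the hundreds of hertz on the patch that matters.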

Meta doesn’t mean mega-expensive

Naturally one would assume that such a system would be considerably more expensive than existing ones. Pricing is still a ways out — Lumotive just wanted to show that its tech exists for now — but this is far from exotic tech.

CG render of a lidar metamaterial chip.

The team told me in an interview that their engineering process was tricky specifically because they designed it for fabrication using existing methods. It’s silicon-based, meaning it can use cheap and ubiquitous 905nm lasers rather than the rarer 1550nm, and its fabrication isn’t much more complex than making an ordinary display panel.

CTO and co-founder Gleb Akselrod explained: “Essentially it’s a reflective semiconductor chip, and on the surface we fabricate these tiny antennas to manipulate the light. It’s made using a standard semiconductor process, then we add liquid crystal, then the coating. It’s a lot like an LCD.”

An additional bonus of the metamaterial basis is that it works the same regardless of the size or shape of the chip. While an inch-wide rectangular chip is best for automotive purposes, Akselrod said, they could just as easily make one a quarter the size for robots that don’t need the wider field of view, or a larger or custom-shape one for a specialty vehicle or aircraft.

The details, as I said, are still being worked out. Lumotive has been working on this for years and decided it was time to just get the basic information out there. “We spend an inordinate amount of time explaining the technology to investors,” noted CEO and co-founder Bill Colleran. He, it should be noted, is a veteran innovator in this field, having headed Impinj most recently, and before that was at Broadcom, but he is perhaps best known for being CEO of Innovent when it created the first CMOS Bluetooth chip.

Right now the company is seeking investment after running on a 2017 seed round funded by Bill Gates and IV, which (as with other metamaterial-based startups it has spun out) is granting Lumotive an exclusive license to the tech. There are partnerships and other things in the offing, but the company wasn’t ready to talk about them; the product is currently in prototype but very showable form for the inevitable meetings with automotive and tech firms.


Tiny claws let drones perch like birds and bats


Drones are useful in countless ways, but that usefulness is often limited by the time they can stay in the air. Shouldn’t drones be able to take a load off too? With these special claws attached, they can perch or hang with ease, conserving battery power and vastly extending their flight time.

The claws, created by a highly multinational team of researchers I’ll list at the end, are inspired by birds and bats. The team noted that many flying animals have specially adapted feet or claws suited to attaching the creature to its favored surface. Sometimes they sit, sometimes they hang, sometimes they just kind of lean on it and don’t have to flap as hard.

As the researchers write:

In all of these cases, some suitably shaped part of the animal’s foot interacts with a structure in the environment and facilitates that less lift needs to be generated or that power flight can be completely suspended. Our goal is to use the same concept, which is commonly referred to as “perching,” for UAVs [unmanned aerial vehicles].

“Perching,” you say? Go on…

We designed a modularized and actuated landing gear framework for rotary-wing UAVs consisting of an actuated gripper module and a set of contact modules that are mounted on the gripper’s fingers.

This modularization substantially increased the range of possible structures that can be exploited for perching and resting as compared with avian-inspired grippers.

Instead of trying to build one complex mechanism, like a pair of articulating feet, the team gave the drones a set of specially shaped 3D-printed static modules and one big gripper.

The drone surveys its surroundings using lidar or some other depth-aware sensor. This lets it characterize surfaces nearby and match those to a library of examples that it knows it can rest on.

Squared-off edges like those on the top right can be rested on as in A, while a pole can be balanced on as in B.

If the drone sees and needs to rest on a pole, it can grab it from above. If it’s a horizontal bar, it can grip it and hang below, flipping up again when necessary. If it’s a ledge, it can use a little cutout to steady itself against the corner, letting it shut off some or all of its motors. These modules can easily be swapped out or modified depending on the mission.
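The “match what you see to a library of perchable structures” step might look something like the toy sketch below; the surface classes, size checks and strategies are illustrative assumptions, not the authors’ actual perception pipeline.

```python
# Toy sketch of matching detected structures to a library of perch strategies.
from dataclasses import dataclass

@dataclass
class Surface:
    kind: str        # e.g. "pole", "horizontal_bar", "ledge"
    width_m: float   # characteristic dimension estimated from the depth sensor

PERCH_LIBRARY = {
    "pole":           {"max_width_m": 0.05, "strategy": "grip from above"},
    "horizontal_bar": {"max_width_m": 0.05, "strategy": "grip and hang below"},
    "ledge":          {"max_width_m": 1.00, "strategy": "brace cutout against corner"},
}

def choose_perch(surfaces):
    """Return the first detected surface the gripper and contact modules can use."""
    for s in surfaces:
        entry = PERCH_LIBRARY.get(s.kind)
        if entry and s.width_m <= entry["max_width_m"]:
            return s.kind, entry["strategy"]
    return None, "keep flying"

detected = [Surface("ledge", 0.4), Surface("pole", 0.03)]
print(choose_perch(detected))  # -> ('ledge', 'brace cutout against corner')
```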

I have to say the whole thing actually seems to work remarkably well for a prototype. The hard part appears to be the recognition of useful surfaces and the precise positioning required to land on them properly. But it’s useful enough — in professional and military applications especially, one suspects — that it seems likely to be a common feature in a few years.

The paper describing this system was published in the journal Science Robotics. I don’t want to leave anyone out, so it’s by: Kaiyu Hang, Ximin Lyu, Haoran Song, Johannes A. Stork, Aaron M. Dollar, Danica Kragic and Fu Zhang, from Yale, the Hong Kong University of Science and Technology, the University of Hong Kong, and the KTH Royal Institute of Technology.


Prototype prosthesis proffers proper proprioceptive properties


Researchers have created a prosthetic hand that offers its users the ability to feel where it is and how the fingers are positioned — a sense known as proprioception. The headline may be in jest, but the advance is real and may help amputees more effectively and naturally use their prostheses.

Prosthesis rejection is a real problem for amputees, and many choose to simply live without these devices, electronic or mechanical, as they can complicate as much as they simplify. Part of that is the simple fact that, unlike their natural limbs, artificial ones have no real sensation — or if there is any, it’s nowhere near the level someone had before.

Touch and temperature detection are important, of course, but what’s even more critical to ordinary use is simply knowing where your limb is and what it’s doing. If you close your eyes, you can tell where each digit is, how many you’re holding up, whether they’re gripping a small or large object and so on. That’s currently impossible with a prosthesis, even one that’s been integrated with the nervous system to provide feedback — meaning users have to watch what they’re doing at all times. (That is, if the arm isn’t watching for you.)

This prosthesis, built by Swiss, Italian and German neurologists and engineers, is described in a recent issue of Science Robotics. It takes the existing concept of sending touch information to the brain through electrodes patched into the nerves of the arm, and adapts it to provide real-time proprioceptive feedback.

“Our study shows that sensory substitution based on intraneural stimulation can deliver both position feedback and tactile feedback simultaneously and in real time. The brain has no problem combining this information, and patients can process both types in real time with excellent results,” explained Silvestro Micera, of the École Polytechnique Fédérale de Lausanne, in a news release.

It’s been the work of a decade to engineer and demonstrate this possibility, which could be of enormous benefit. Having a natural, intuitive understanding of the position of your hand, arm or leg would likely make prostheses much more useful and comfortable for their users.

Essentially the robotic hand relays its telemetry to the brain through the nerve pathways that would normally be bringing touch to that area. Unfortunately it’s rather difficult to actually recreate the proprioceptive pathways, so the team used what’s called sensory substitution instead. This uses other pathways, like ordinary touch, as ways to present different sense modalities.

(Diagram modified from original to better fit, and to remove some rather bloody imagery.)

A simple example would be a machine that touched your arm in a different location depending on where your hand is. In the case of this research it’s much finer, but still essentially presenting position data as touch data. It sounds weird, but our brains are actually really good at adapting to this kind of thing.

As evidence, witness that after some training two amputees using the system were able to tell the difference between four differently shaped objects being grasped, with their eyes closed, with 75 percent accuracy. Chance would be 25 percent, of course, meaning the sensation of holding objects of different sizes came through loud and clear — clear enough for a prototype, anyway. Amazingly, the team was able to add actual touch feedback to the existing pathways and the users were not overly confused by it. So there’s precedent now for multi-modal sensory feedback from an artificial limb.
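To make the substitution idea concrete, here’s a toy sketch of encoding a prosthetic finger’s bend angle as stimulation intensity on a touch channel; the ranges and the simple linear mapping are assumptions for illustration, not the study’s actual encoding or safety limits.

```python
# Toy sketch of sensory substitution: joint angle in, stimulation amplitude out.
# All ranges are illustrative assumptions.
def encode_position_as_stimulation(joint_angle_deg,
                                   angle_range=(0.0, 90.0),
                                   amplitude_range_ua=(20.0, 80.0)):
    """Linearly map a joint angle (degrees) to stimulation amplitude (microamps)."""
    lo_a, hi_a = angle_range
    lo_s, hi_s = amplitude_range_ua
    t = (min(max(joint_angle_deg, lo_a), hi_a) - lo_a) / (hi_a - lo_a)
    return lo_s + t * (hi_s - lo_s)

for angle in (0, 30, 60, 90):
    print(f"{angle:>2} deg -> {encode_position_as_stimulation(angle):.0f} uA")
```

The brain’s job, after training, is to learn that a stronger sensation on a given channel means the finger is more closed, which is broadly the kind of adaptation the study relied on.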

The study has well-defined limitations, such as the number and type of fingers it was able to relay information from, and the granularity and type of that data. And the “installation” process is still very invasive. But it’s pioneering work nevertheless: this type of research is very iterative and global, progressing by small steps until, all of a sudden, prosthetics as a science has made huge strides. And the people who use prosthetic limbs will be making strides, as well.


This robotics museum in Korea will construct itself (in theory)


The planned Robot Science Museum in Seoul will have a humdinger of a first exhibition: its own robotic construction. It’s very much a publicity stunt, though a fun one — but who knows? Perhaps robots putting buildings together won’t be so uncommon in the next few years, in which case Korea will just be an early adopter.

The idea for robotic construction comes from Melike Altinisik Architects, the Turkish firm that won a competition to design the museum. Their proposal took the form of an egg-like shape covered in panels that can be lifted into place by robotic arms.

“From design, manufacturing to construction and services robots will be in charge,” wrote the firm in the announcement that they had won the competition. Now, let’s be honest: this is obviously an exaggeration. The building has clearly been designed by the talented humans at MAA, albeit with a great deal of help from computers. But it has been designed with robots in mind, and they will be integral to its creation.

The parts will all be designed digitally, and robots will “mold, assemble, weld and polish” the plates for the outside, according to World Architecture, after which of course they will also be put in place by robots. The base and surrounds will be produced by an immense 3D printer laying down concrete.

So while much of the project will unfortunately have to be done by people, it will certainly serve as a demonstration of those processes that can be accomplished by robots and computers.

Construction is set to begin in 2020, with the building opening its (likely human-installed) doors in 2022 as a branch of the Seoul Metropolitan Museum. Though my instincts tell me that this kind of unprecedented combination of processes is more likely than not to produce significant delays. Here’s hoping the robots cooperate.


Deploy the space harpoon


Watch out, starwhales. There’s a new weapon for the interstellar dwellers whom you threaten with your planet-crushing gigaflippers, undergoing testing as we speak. This small-scale version may only be good for removing dangerous orbital debris, but in time it will pierce your hypercarbon hides and irredeemable sun-hearts.

Literally a space harpoon. (Credit: Airbus)

However, it would be irresponsible of me to speculate beyond what is possible today with the technology, so let a summary of the harpoon’s present capabilities suffice.

The space harpoon is part of the RemoveDEBRIS project, a multi-organization European effort to create and test methods of reducing space debris. There are thousands of little pieces of who knows what clogging up our orbital neighborhood, ranging in size from microscopic to potentially catastrophic.

There are as many ways to take down these rogue items as there are sizes and shapes of space junk; perhaps it’s enough to use a laser to edge a small piece down toward orbital decay, but larger items require more hands-on solutions. And seemingly all nautical in origin: RemoveDEBRIS has a net, a sail and a harpoon. No cannon?

You can see how the three items are meant to operate here:

The harpoon is meant for larger targets, for example full-size satellites that have malfunctioned and are drifting from their orbit. A simple mass driver could knock them toward the Earth, but capturing them and guiding their descent is a far more controlled technique.

While an ordinary harpoon would simply be hurled by the likes of Queequeg or Daggoo, in space it’s a bit different. Sadly it’s impractical to suit up a harpooner for EVA missions. So the whole thing has to be automated. Fortunately the organization is also testing computer vision systems that can identify and track targets. From there it’s just a matter of firing the harpoon at it and reeling it in, which is what the satellite demonstrated today.

This Airbus-designed little item is much like a toggling harpoon, which has a piece that flips out once it pierces the target. Obviously it’s a single-use device, but it’s not particularly large and several could be deployed on different interception orbits at once. Once reeled in, a drag sail (seen in the video above) could be deployed to hasten reentry. The whole thing could be done with little or no propellant, which greatly simplifies operation.

Obviously it’s not yet a threat to the starwhales. But we’ll get there. We’ll get those monsters good one day.
