Stanford University

Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself


Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically the Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, while keeping costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved, but by sampling the forces on the legs 8,000 times per second and responding as quickly, the motors can act like virtual springs.
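That spring-like behavior boils down to a fast loop: sense the leg's deflection, command a restoring torque, repeat thousands of times a second. Here is a minimal sketch of the idea in Python — the gains and function name are illustrative assumptions, not anything from the Doggo codebase:

```python
# Virtual spring-damper: no physical spring, just a high-rate control loop
# that commands motor torque based on sensed leg deflection and velocity.
# All constants here are made-up illustrative values.

K_SPRING = 80.0   # virtual stiffness (N·m per rad of deflection)
K_DAMP = 0.5      # virtual damping (N·m per rad/s)
LOOP_HZ = 8000    # the article says forces are sampled ~8,000 times/second

def virtual_spring_torque(angle, velocity, rest_angle):
    """Torque command that makes the motor behave like a spring-damper."""
    deflection = angle - rest_angle
    # Push back proportionally to deflection, and damp the motion.
    return -K_SPRING * deflection - K_DAMP * velocity
```

Because the "spring" is just numbers in a loop running at 8,000 Hz, its stiffness and damping can be retuned in software rather than by swapping hardware.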

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be both improving on the capabilities of Doggo by collaborating with the university’s Robotic Exploration Lab, and also working on a similar robot but twice the size — Woofer.

Powered by WPeMatico

This self-driving AI faced off against a champion racer (kind of)


Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.

To set expectations here, this isn’t some stunt, it’s actually warranted given the nature of the research, and it’s not like they were trading positions, jockeying for entry lines, and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!

The question Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with an autonomous vehicle operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds, in good conditions. And most obstacle encounters are similarly ordinary.

If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?

The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface, and other conditions. But they are necessarily simplified, and their simplifying assumptions produce increasingly inaccurate results as values exceed ordinary limits.

Imagine a simulator that reduces each wheel to a point or a line, when during a slide it matters greatly which part of the tire is bearing the most friction. Simulations detailed enough to capture that are beyond the ability of current hardware to run quickly or accurately enough. But the results of such simulations can be summarized into inputs and outputs, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.

The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. It’s fairly basic. The model then consults its training, but is also informed by the real-world results, which may perhaps differ from theory.

So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
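That feedforward-plus-feedback idea can be sketched in a few lines. This is an illustrative toy, not the authors' actual controller; the gain, signature, and sign convention are all assumptions:

```python
# Combine a learned model's theoretical steering command with a correction
# proportional to the observed tracking error. Names and gain are made up.

def steering_command(model_steer, lateral_error, k_feedback=0.3):
    """Steering angle = what theory says, nudged by what the sensors report.

    model_steer: steering angle (rad) the trained model predicts for this point
    lateral_error: metres the car has drifted off the intended line
                   (sign chosen so a positive error needs a positive correction)
    """
    return model_steer + k_feedback * lateral_error
```

With zero drift the car simply follows the model; as the real world diverges from theory, the correction term turns the wheel a bit more, or less, as the case may be.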

And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.

The team had the racer (a “champion amateur race car driver,” as they put it) drive around Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:

Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.

Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.

In other words, while pulling a G and hitting 95, the self-driving Audi stayed within about a foot and a half of its ideal racing line on average. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.

“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”

Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.

This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.

The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would be different than front-wheel on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.

The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.


Inspired by spiders and wasps, these tiny drones pull 40x their own weight


If we want drones to do our dirty work for us, they’re going to need to get pretty good at hauling stuff around. But due to the pesky yet unavoidable constraints of physics, it’s hard for them to muster the forces necessary to do so while airborne — so these drones brace themselves against the ground to get the requisite torque.

The drones, created by engineers at Stanford and Switzerland’s EPFL, were inspired by wasps and spiders that need to drag prey from place to place but can’t actually lift it, so they drag it instead. Grippy feet and strong threads or jaws let them pull objects many times their weight along the ground, just as you might slide a dresser along rather than pick it up and put it down again. So I guess it could have also just been inspired by that.

Whatever the inspiration, these “FlyCroTugs” (a combination of flying, micro and tug presumably) act like ordinary tiny drones while in the air, able to move freely about and land wherever they need to. But they’re equipped with three critical components: an anchor to attach to objects, a winch to pull on that anchor and sticky feet to provide sure grip while doing so.

“By combining the aerodynamic forces of our vehicle and the interactive forces generated by the attachment mechanisms, we were able to come up with something that is very mobile, very strong and very small,” said Stanford grad student Matthew Estrada, lead author of the paper published in Science Robotics.

The idea is that one or several of these ~100-gram drones could attach their anchors to something they need to move, be it a lever or a piece of trash. Then they take off and land nearby, spooling out thread as they do so. Once they’re back on terra firma they activate their winches, pulling the object along the ground — or up over obstacles that would have been impossible to navigate with tiny wheels or feet.

Using this technique — assuming they can get a solid grip on whatever surface they land on — the drones are capable of moving objects 40 times their weight — for a 100-gram drone like that shown, that would be about 4 kilograms, or nearly 9 pounds. Not quickly, but that may not always be a necessity. What if a handful of these things flew around the house when you were gone, picking up bits of trash or moving mail into piles? They would have hours to do it.
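The 40x arithmetic checks out. A quick back-of-envelope in Python, using only figures from the article (the constant names are mine):

```python
# Sanity-check the claimed payload, from the article's own numbers.

DRONE_MASS_KG = 0.1    # a ~100-gram FlyCroTug
PAYLOAD_RATIO = 40     # reported to move objects 40x its own weight

payload_kg = DRONE_MASS_KG * PAYLOAD_RATIO   # 4.0 kg
payload_lb = payload_kg * 2.20462            # kg-to-lb conversion, ~8.8 lb
```

About 8.8 pounds, matching the "nearly 9 pounds" figure — far beyond what a 100-gram craft could lift with rotor thrust alone, which is exactly why the winch-and-anchor approach is needed.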

As you can see in the video below, they can even team up to do things like open doors.

“People tend to think of drones as machines that fly and observe the world,” said co-author of the paper, EPFL’s Dario Floreano, in a news release. “But flying insects do many other things, such as walking, climbing, grasping and building. Social insects can even work together and combine their strength. Through our research, we show that small drones are capable of anchoring themselves to surfaces around them and cooperating with fellow drones. This enables them to perform tasks typically assigned to humanoid robots or much larger machines.”

Unless you’re prepared to wait for humanoid robots to take on tasks like this (and it may be a decade or two), you may have to settle for drone swarms in the meantime.


VR optics could help old folks keep the world in focus


The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. There are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
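The adjustment decision can be sketched with standard optics: lens power in diopters is the reciprocal of distance in metres. The near-point threshold and function shape below are illustrative assumptions, not details of the Stanford prototype:

```python
# Given the depth at the gaze point, decide how much extra lens power the
# wearer needs. Standard relation: power (diopters) = 1 / distance (metres).

def lens_power_diopters(gaze_depth_m, near_point_m=0.5):
    """Extra lens power to focus an object the wearer's eyes can't reach.

    gaze_depth_m: distance of the object the user is looking at (metres)
    near_point_m: closest distance the wearer can focus unaided
                  (0.5 m is roughly the 20-inch example above)
    """
    if gaze_depth_m >= near_point_m:
        return 0.0  # within the wearer's natural range; no correction needed
    # Supply the difference between the power required and what the eye can do.
    return 1.0 / gaze_depth_m - 1.0 / near_point_m
```

Looking across the room returns zero correction; glancing down at a newspaper 25 cm away returns a couple of diopters of help, potentially computed independently per eye.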

The whole process of checking the gaze, the depth of the selected object and the adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happening, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.


Autonomous cars could peep around corners via bouncing laser


Autonomous cars gather up tons of data about the world around them, but even the best computer vision systems can’t see through brick and mortar. But by carefully monitoring the reflected light of a laser bouncing off a nearby surface, they might be able to see around corners — that’s the idea behind recently published research from Stanford engineers.


High-speed camera rig captures 3D images of birds’ wings in flight


You don’t have to be an ornithologist to know that birds are pretty good at flying. But while we know how they do it in general, the millimeter- and microsecond-level details are difficult to pin down. Researchers at Stanford are demystifying bird flight with a custom camera/projector setup, and hoping to eventually replicate its adaptability in unpredictable air currents.


This 20-cent whirligig toy can replace a $1,000 medical centrifuge


Centrifuges are found in medical labs worldwide. But a good one could run you a couple grand and, of course, requires electricity — neither of which are things you’re likely to find in a rural clinic in an impoverished country. Stanford researchers have created an alternative that costs just a few cents and runs without a charge, based on a children’s toy with surprising…


Stanford’s ‘Jackrabbot’ robot will attempt to learn the arcane and unspoken rules of pedestrians


It’s hard enough for a grown human to figure out how to navigate a crowd sometimes — so what chance does a clumsy and naive robot have? To prevent future collisions and awkward “do I go left or right” situations, Stanford researchers are hoping their “Jackrabbot” robot can learn the rules of the road.
