Stanford University

Flexible stick-on sensors could wirelessly monitor your sweat and pulse


As people strive ever harder to minutely quantify everything they do, the sensors that monitor those actions are growing lighter and less invasive. Two prototype sensors from Bay Area rivals Stanford and Berkeley stick right to the skin and provide a wealth of physiological data.

Stanford’s stretchy wireless “BodyNet” isn’t just flexible in order to survive being worn on the shifting surface of the body; that flexing is where its data comes from.

The sensor is made of metallic ink laid on top of a flexible material like that in an adhesive bandage. But unlike phones and smartwatches, which use tiny accelerometers or optical tricks to track the body, this system relies on how it is itself stretched and compressed. These movements cause tiny changes in how electricity passes through the ink, changes that are relayed to a processor nearby.

Naturally if one is placed on a joint, as some of these electronic stickers were, it can report back whether and how much that joint has been flexed. But the system is sensitive enough that it can also detect the slight changes the skin experiences during each heartbeat, or the broader changes that accompany breathing.
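To get a feel for how a raw resistance trace becomes something like a pulse reading, here is a minimal sketch in Python. It assumes the sticker exposes a stream of resistance samples at a fixed rate; the sampling rate, smoothing window and peak threshold are illustrative choices, not values from the Stanford paper.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_from_strain(resistance, fs=100.0):
    """Estimate pulse rate from a strain-style resistance trace.

    `resistance` is a 1-D array of sensor samples (ohms) and `fs` is the
    sampling rate in Hz; both are assumptions made for this sketch.
    """
    # Remove the slow baseline (posture shifts, breathing) with a ~1 s
    # moving average, leaving the small per-heartbeat fluctuations.
    window = int(fs)
    baseline = np.convolve(resistance, np.ones(window) / window, mode="same")
    pulse_band = resistance - baseline

    # Each heartbeat shows up as a small peak; a 0.4 s refractory gap keeps
    # us from double-counting (caps detection at 150 bpm).
    peaks, _ = find_peaks(pulse_band, distance=int(0.4 * fs),
                          height=np.std(pulse_band))
    minutes = len(resistance) / fs / 60.0
    return len(peaks) / minutes  # beats per minute
```

Breathing could be pulled out of the same trace by watching the slower baseline rather than the individual peaks.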

The problem comes when you have to get that signal off the skin. Using a wire is annoying and definitely very ’90s. But antennas don’t work well when they’re flexed in odd directions: efficiency drops off a cliff, and there’s very little power to begin with, since the skin sensor is powered by harvesting RFID signals, a technique that yields very little in the way of voltage.

The BodyNet sticker and its receiver.

The second part of their work, then, and the part that is clearly most in need of further improvement and miniaturization, is the receiver, which collects and re-transmits the sensor’s signal to a phone or other device. Although they managed to create a unit that’s light enough to be clipped to clothes, it’s still not the kind of thing you’d want to wear to the gym.

The good news is that’s an engineering and design limitation, not a theoretical one; a couple of years of work and progress on the electronics front could produce a much more attractive system.

“We think one day it will be possible to create a full-body skin-sensor array to collect physiological data without interfering with a person’s normal behavior,” Stanford professor Zhenan Bao said in a news release.

Over at Cal, a project in a similar domain is working its way from prototype to production. Researchers there have spent several years developing a sweat monitor that can detect a number of physiological factors.

The sensor worn on the forehead.

Normally you’d just collect sweat every 15 minutes or so and analyze each batch separately. But that doesn’t really give you very good temporal resolution — what if you want to know how the sweat changes minute by minute or less? By putting the sweat collection and analysis systems together right on the skin, you can do just that.
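A quick back-of-the-envelope comparison makes the point. The numbers below are invented purely for illustration (a short sodium spike around the half-hour mark), but they show how a 15-minute batch average flattens a feature that a continuous on-skin reading would catch.

```python
import numpy as np

# Hypothetical "true" sweat-sodium trace, sampled once per minute for an hour,
# with a brief spike around minute 30. All values are illustrative only.
minutes = np.arange(60)
sodium = 40 + 15 * np.exp(-((minutes - 30) / 3.0) ** 2)  # mM

# Batch collection: pool 15 minutes of sweat, get one averaged value per batch.
batch_means = sodium.reshape(4, 15).mean(axis=1)
print("batch averages (mM):", np.round(batch_means, 1))

# A continuous on-skin measurement sees the spike itself.
print("peak seen continuously (mM):", round(sodium.max(), 1),
      "vs. best batch average:", round(batch_means.max(), 1))
```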

While the sensor has been in the works for a while, it’s only recently that the team has started moving toward user testing at scale to see what exactly sweat measurements have to offer.

“The goal of the project is not just to make the sensors but start to do many subject studies and see what sweat tells us — I always say ‘decoding’ sweat composition. For that we need sensors that are reliable, reproducible, and that we can fabricate to scale so that we can put multiple sensors in different spots of the body and put them on many subjects,” explained Ali Javey, Berkeley professor and head of the project.

As anyone who’s working in hardware will tell you, going from a hand-built prototype to a mass-produced model is a huge challenge. So the Berkeley team tapped their Finnish friends at VTT Technical Research Center, who make a specialty of roll-to-roll printing.

For flat, relatively simple electronics, roll-to-roll is a great technique, essentially printing the sensors right onto a flexible plastic substrate that can then simply be cut to size. This way they can make hundreds or thousands of the sensors quickly and cheaply, making them much simpler to deploy at arbitrary scales.

These are far from the only flexible or skin-mounted electronics projects out there, but it’s clear that we’re approaching the point when they begin to leave the lab and head out to hospitals, gyms and homes.

The paper describing Stanford’s flexible sensor appeared this week in the journal Nature Electronics, while Berkeley’s sweat tracker was in Science Advances.


KickSat-2 project launches 105 cracker-sized satellites


Move over, Starlink. SpaceX’s global internet play might have caught the world’s attention with its 60-satellite launch last month, but little did we know that it had already been upstaged — at least in terms of sheer numbers. The KickSat-2 project put 105 tiny “femtosats” into space at once months earlier, the culmination of a years-long project begun by a grad student.

KickSat-2 was the second attempt by Zac Manchester, now a professor at Stanford, to test what he believes is an important piece of the coming new space economy: ultra-tiny satellites.

Sure, the four-inch CubeSat standard is small… and craft like Swarm Technologies’ SpaceBEEs are even smaller. But the satellites tested by Manchester are tiny. We’re talking Triscuit size here — perhaps Wheat Thin, or even Cheez-It.

The KickSat project started back in 2011, when Manchester and his colleagues ran a Kickstarter to raise funds for about 300 “Sprite” satellites that would be launched to space and deployed on behalf of backers. The campaign was a success, but unfortunately a glitch after launch caused the satellites to burn up before they could be deployed. Manchester was undeterred and the project continued.

He worked with Cornell University and NASA Ames to redesign the setup, and as part of that he and collaborator Andy Filo collected a prize for their clever 3D-printed deployment mechanism. The Sprites themselves are relatively simple things: essentially an unshielded bit of PCB with a solar panel, antennas and electronics on board to send and receive signals.

The “mothership” launched in November to the ISS, where it sat for several months awaiting an opportunity to be deployed. That opportunity came on March 17: all 105 Sprites were sprung out into low Earth orbit, where they began communicating with each other and (just barely) with ground stations.

Deployment would have looked like this… kind of. Probably a little slower.

This isn’t the start of a semi-permanent thousands-strong constellation, though — the satellites all burned up a few days later, as planned.

“This was mostly a test of deployment and communication systems for the Sprites,” Manchester explained in an email to TechCrunch. The satellites were testing two different signals: “Specially designed CDMA signals that enable hundreds of Sprites to simultaneously communicate with a single ground station at very long range and with very low power,” and “simpler signals for short-range networking between Sprites in orbit.”
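The article doesn’t spell out the waveform design, but the basic CDMA trick (many transmitters sharing one channel, each with its own spreading code) can be sketched in a few lines. The counts and code length below are toy values, not anything taken from the KickSat-2 radios.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SPRITES, CHIPS = 8, 256  # toy numbers; the real deployment had 105 Sprites

# Each Sprite gets its own pseudorandom +/-1 spreading code.
codes = rng.choice([-1.0, 1.0], size=(N_SPRITES, CHIPS))

# Each Sprite transmits one data bit (+1 or -1), spread over its code.
bits = rng.choice([-1.0, 1.0], size=N_SPRITES)
channel = (bits[:, None] * codes).sum(axis=0)   # signals overlap on the air
channel += rng.normal(scale=2.0, size=CHIPS)    # plus receiver noise

# The ground station despreads by correlating against each known code.
recovered = np.sign(channel @ codes.T / CHIPS)
print("all bits recovered:", np.array_equal(recovered, bits))
```

Because the ground station correlates the received jumble against each Sprite’s known code, the overlapping signals can be pulled apart even when each one arrives with very little power, which is the whole point for transmitters this small.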

The Cygnus spacecraft with the KickSat-2 CubeSat attached — it’s the little gold thing right by where the docking arm connects.

This proof of concept is an important one — it seems logical and practical to pack dozens or hundreds of these things into future missions, where they can be released into controlled trajectories providing sensing or communications relay capabilities to other spacecraft. And, of course, as we’ve already seen, the smaller and cheaper the spacecraft, the easier it is for people to access space for any reason: scientific, economic or just for the heck of it.

“We’ve shown that it’s possible for swarms of cheap, tiny satellites to one day carry out tasks now done by larger, costlier satellites, making it affordable for just about anyone to put instruments or experiments into orbit,” Manchester said in a Stanford news release. With launch costs dropping, it might not be long before you’ll be able to take ownership of a Sprite of your own.


Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself


Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically the Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, but keep costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved, but by sampling the forces on the legs 8,000 times per second and responding as quickly, the motors can act like virtual springs.
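In spirit, that ‘virtual spring’ is just a very fast torque loop. Below is a stripped-down sketch of the idea; the gains, the leg and motor interfaces and the timing are hypothetical stand-ins, not anything from the Doggo firmware.

```python
import time

# Virtual-spring (impedance) control: no physical spring, just fast torque
# commands computed from the measured leg state. Gains are illustrative.
K_SPRING = 120.0   # N*m/rad, virtual stiffness
K_DAMP = 1.5       # N*m*s/rad, virtual damping
LOOP_HZ = 8000     # the article cites ~8,000 force samples per second

def control_step(theta, theta_dot, theta_rest, motor):
    """One tick of the loop: push the leg back toward its rest angle the way
    a spring-damper would. `motor` is a hypothetical driver object."""
    torque = -K_SPRING * (theta - theta_rest) - K_DAMP * theta_dot
    motor.set_torque(torque)

def run(leg, motor, theta_rest=0.0):
    period = 1.0 / LOOP_HZ
    while True:
        theta, theta_dot = leg.read_state()   # hypothetical sensor read
        control_step(theta, theta_dot, theta_rest, motor)
        time.sleep(period)  # a real controller would use a hard real-time timer
```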

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be improving Doggo’s capabilities in collaboration with the university’s Robotic Exploration Lab, and working on a similar robot at twice the size, called Woofer.


This self-driving AI faced off against a champion racer (kind of)


Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.

To set expectations here: this isn’t some stunt; it’s actually warranted given the nature of the research. And it’s not like they were trading positions, jockeying for entry lines, and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!

The question Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with an autonomous vehicle operating under extreme conditions. The simple fact is that the vast majority of miles these systems drive are at normal speeds, in good conditions. And most obstacle encounters are similarly ordinary.

If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?

The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface, and other conditions. But they are necessarily simplified, and their assumptions produce increasingly inaccurate results as values stray beyond ordinary limits.

Imagine a simulator that simplifies each wheel to a point or a line, when during a slide it is highly important which side of the tire is experiencing the most friction. Simulations detailed enough to capture that are beyond the ability of current hardware to run quickly or accurately enough. But the results of such simulations can be summarized into an input and an output, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.

The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. The model then consults its training, but is also informed by the real-world results, which may differ from theory.

So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
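Schematically, that is a feedforward term from the learned model plus a feedback correction based on the measured tracking error. The sketch below only shows that structure; the function names, interfaces and gain are invented for illustration and are not taken from the Stanford controller.

```python
def steering_command(state, path, model, k_feedback=0.3):
    """One schematic control step: a model (physics-based or learned from
    simulation data) proposes a steering angle, and the measured path error
    nudges it. All names and the gain here are illustrative.
    """
    # Feedforward: what the model thinks the wheel angle should be for the
    # upcoming piece of the racing line at the current speed.
    delta_ff = model.predict_steering(state.speed, path.curvature_ahead(state))

    # Feedback: the car's sensors report how far it has drifted off the line.
    lateral_error = path.lateral_offset(state.position)  # meters, signed
    delta_fb = -k_feedback * lateral_error

    return delta_ff + delta_fb
```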

And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.

The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:

Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.

Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.

In other words, while pulling a G and hitting 95, the self-driving Audi was never more than a foot and a half off its ideal racing line. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.

“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”

Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.

This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.

The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would be different than front-wheel on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.

The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.


Inspired by spiders and wasps, these tiny drones pull 40x their own weight


If we want drones to do our dirty work for us, they’re going to need to get pretty good at hauling stuff around. But due to the pesky yet unavoidable restraints of physics, it’s hard for them to muster the forces necessary to do so while airborne — so these drones brace themselves against the ground to get the requisite torque.

The drones, created by engineers at Stanford and Switzerland’s EPFL, were inspired by wasps and spiders that need to move prey from place to place but can’t actually lift it, so they drag it instead. Grippy feet and strong threads or jaws let them pull objects many times their weight along the ground, just as you might slide a dresser along rather than pick it up and put it down again. So I guess it could have also just been inspired by that.

Whatever the inspiration, these “FlyCroTugs” (a combination of flying, micro and tug presumably) act like ordinary tiny drones while in the air, able to move freely about and land wherever they need to. But they’re equipped with three critical components: an anchor to attach to objects, a winch to pull on that anchor and sticky feet to provide sure grip while doing so.

“By combining the aerodynamic forces of our vehicle and the interactive forces generated by the attachment mechanisms, we were able to come up with something that is very mobile, very strong and very small,” said Stanford grad student Matthew Estrada, lead author of the paper published in Science Robotics.

The idea is that one or several of these ~100-gram drones could attach their anchors to something they need to move, be it a lever or a piece of trash. Then they take off and land nearby, spooling out thread as they do so. Once they’re back on terra firma they activate their winches, pulling the object along the ground — or up over obstacles that would have been impossible to navigate with tiny wheels or feet.

Using this technique — assuming they can get a solid grip on whatever surface they land on — the drones are capable of moving objects 40 times their weight — for a 100-gram drone like that shown, that would be about 4 kilograms, or nearly 9 pounds. Not quickly, but that may not always be a necessity. What if a handful of these things flew around the house when you were gone, picking up bits of trash or moving mail into piles? They would have hours to do it.
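The arithmetic is simple enough to write out. The 40x figure and the drone mass come from the article; the thrust-to-weight ratio below is an assumed, typical value for a small multirotor, included just to show how little pulling force is left once the drone also has to hold itself up.

```python
# Figures from the article, plus one assumption flagged below.
DRONE_MASS_KG = 0.1     # ~100-gram FlyCroTug
PULL_RATIO = 40         # can move 40x its own weight when anchored

anchored_pull_kg = DRONE_MASS_KG * PULL_RATIO   # about 4 kg
anchored_pull_lb = anchored_pull_kg * 2.205     # nearly 9 lb

# Assumption: a 2:1 thrust-to-weight ratio, typical for a small multirotor.
# Hovering consumes half the thrust, so only the rest is available for towing.
THRUST_TO_WEIGHT = 2.0
airborne_pull_kg = DRONE_MASS_KG * (THRUST_TO_WEIGHT - 1.0)  # about 0.1 kg

print(f"anchored to the ground: ~{anchored_pull_kg:.1f} kg ({anchored_pull_lb:.1f} lb)")
print(f"pulling while hovering: ~{airborne_pull_kg:.1f} kg")
```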

As you can see in the video below, they can even team up to do things like open doors.

“People tend to think of drones as machines that fly and observe the world,” said co-author of the paper, EPFL’s Dario Floreano, in a news release. “But flying insects do many other things, such as walking, climbing, grasping and building. Social insects can even work together and combine their strength. Through our research, we show that small drones are capable of anchoring themselves to surfaces around them and cooperating with fellow drones. This enables them to perform tasks typically assigned to humanoid robots or much larger machines.”

Unless you’re prepared to wait for humanoid robots to take on tasks like this (and it may be a decade or two), you may have to settle for drone swarms in the meantime.


VR optics could help old folks keep the world in focus


The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. And there are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
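Put together, the decision loop is easy to sketch. The code below is illustrative only: the 20-inch (roughly half-meter) near point echoes the example above, while the names, the depth-map lookup and the direct use of the thin-lens diopter formula are assumptions about how such a system could be wired up, not a description of the Stanford prototype.

```python
def lens_adjustment(depth_map_m, gaze_px, near_point_m=0.5):
    """Decide how much focusing help to add for the object being looked at.

    `depth_map_m` is a 2-D array of distances in meters, `gaze_px` the (x, y)
    pixel the eye tracker reports, and `near_point_m` the closest distance the
    wearer can focus unaided (about 20 inches in the example above).
    """
    x, y = gaze_px
    distance = depth_map_m[y][x]   # how far away is the thing they're looking at?

    if distance >= near_point_m:
        return 0.0                 # the eye can handle it; keep the usual correction

    # Extra optical power, in diopters, needed to make an object at `distance`
    # focusable for someone whose near point is `near_point_m`: 1/d - 1/d_near.
    return 1.0 / distance - 1.0 / near_point_m
```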

The whole process of checking the gaze, the depth of the selected object and the adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to more quantitatively measure the improvements derived from this system, and to check for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method, and that despite its early stage it’s highly promising. We can expect to hear more from them when the full paper is published.


Autonomous cars could peep around corners via bouncing laser


Autonomous cars gather up tons of data about the world around them, but even the best computer vision systems can’t see through brick and mortar. By carefully monitoring the reflected light of a laser bouncing off a nearby surface, though, they might be able to see around corners — that’s the idea behind recently published research from Stanford engineers.


High-speed camera rig captures 3D images of birds’ wings in flight


You don’t have to be an ornithologist to know that birds are pretty good at flying. But while we know how they do it in general, the millimeter- and microsecond-level details are difficult to pin down. Researchers at Stanford are demystifying bird flight with a custom camera/projector setup, and hoping to eventually replicate its adaptability in unpredictable air currents.


This 20-cent whirligig toy can replace a $1,000 medical centrifuge


Centrifuges are found in medical labs worldwide. But a good one could run you a couple grand and, of course, requires electricity — neither of which are things you’re likely to find in a rural clinic in an impoverished country. Stanford researchers have created an alternative that costs just a few cents and runs without a charge, based on a children’s toy with surprising…
