
Flexible stick-on sensors could wirelessly monitor your sweat and pulse


As people strive ever harder to minutely quantify their every action, the sensors that monitor those actions are growing lighter and less invasive. Two prototype sensors from Bay Area rivals Stanford and Berkeley stick right to the skin and provide a wealth of physiological data.

Stanford’s stretchy wireless “BodyNet” isn’t just flexible in order to survive being worn on the shifting surface of the body; that flexing is where its data comes from.

The sensor is made of metallic ink laid on top of a flexible material like that in an adhesive bandage. But unlike phones and smartwatches, which use tiny accelerometers or optical tricks to track the body, this system relies on how it is itself stretched and compressed. These movements cause tiny changes in how electricity passes through the ink, changes that are relayed to a processor nearby.

Naturally if one is placed on a joint, as some of these electronic stickers were, it can report back whether and how much that joint has been flexed. But the system is sensitive enough that it can also detect the slight changes the skin experiences during each heartbeat, or the broader changes that accompany breathing.
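To make the principle concrete, here's a minimal sketch of how such a resistance signal could be turned into a beat count. Everything in it is illustrative — the simulated resistance trace, the strain threshold and the sampling rate are assumptions for demonstration, not details of the BodyNet hardware:

```python
import math

def simulated_resistance(t, r0=1000.0):
    """Stand-in for reading the ink's resistance: baseline plus a tiny
    periodic stretch from an assumed ~72 bpm pulse (illustrative only)."""
    pulse_strain = 0.002 * max(math.sin(2 * math.pi * 1.2 * t), 0.0)
    return r0 * (1.0 + pulse_strain)

def count_beats(duration_s=10.0, sample_hz=200, r0=1000.0, thresh=0.001):
    """Count heartbeats by thresholding fractional resistance change."""
    beats, in_pulse = 0, False
    for i in range(int(duration_s * sample_hz)):
        strain = (simulated_resistance(i / sample_hz, r0) - r0) / r0
        if strain > thresh and not in_pulse:   # rising edge = one beat
            beats, in_pulse = beats + 1, True
        elif strain < thresh * 0.5:            # hysteresis avoids double counts
            in_pulse = False
    return beats

print(count_beats())  # ~12 beats in 10 seconds at the assumed 72 bpm
```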

The problem comes when you have to get that signal off the skin. Using a wire is annoying and definitely very ’90s. But antennas don’t work well when flexed in odd directions — efficiency falls off a cliff — and there’s very little power to begin with, since the skin sensor runs on harvested RFID signals, a technique that yields very little voltage.

[Image: the BodyNet sticker and its receiver]

The second part of their work, then, and the part that is clearly most in need of further improvement and miniaturization, is the receiver, which collects and re-transmits the sensor’s signal to a phone or other device. Although they managed to create a unit that’s light enough to be clipped to clothes, it’s still not the kind of thing you’d want to wear to the gym.

The good news is that’s an engineering and design limitation, not a theoretical one — a couple of years of work and progress on the electronics front could yield a much more attractive system.

“We think one day it will be possible to create a full-body skin-sensor array to collect physiological data without interfering with a person’s normal behavior,” Stanford professor Zhenan Bao said in a news release.

Over at Cal, a project in a similar domain is working its way from prototype to production. Researchers there have spent several years on a sweat monitor that can detect a number of physiological factors.

[Image: the sweat sensor worn on the forehead]

Normally you’d just collect sweat every 15 minutes or so and analyze each batch separately. But that doesn’t really give you very good temporal resolution — what if you want to know how the sweat changes minute by minute or less? By putting the sweat collection and analysis systems together right on the skin, you can do just that.
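A quick numerical sketch shows what's at stake. Below, a toy analyte signal with a two-minute spike is sampled two ways: averaged into 15-minute batches, which smears the spike into one slightly elevated reading, and minute by minute, which preserves it. The signal values are invented for illustration:

```python
import numpy as np

t = np.arange(3600)                # one hour of 1-second samples
signal = 50 + 0.005 * t            # toy analyte level with slow drift
signal[1800:1920] += 40            # a brief 2-minute event at the 30-minute mark

# Batch collection: one averaged reading per 15 minutes
print(signal.reshape(4, 900).mean(axis=1).round(1))
# -> approximately [52.2, 56.7, 66.6, 65.7]; the spike is smeared away

# On-skin continuous analysis: minute-by-minute readings keep the spike
print(signal.reshape(60, 60).mean(axis=1).max().round(1))
# -> ~99.1, the transient is clearly visible
```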

While the sensor has been in the works for a while, it’s only recently that the team has started moving toward user testing at scale to see what exactly sweat measurements have to offer.

[Image: sensors being printed roll-to-roll]

“The goal of the project is not just to make the sensors but start to do many subject studies and see what sweat tells us — I always say ‘decoding’ sweat composition. For that we need sensors that are reliable, reproducible, and that we can fabricate to scale so that we can put multiple sensors in different spots of the body and put them on many subjects,” explained Ali Javey, Berkeley professor and head of the project.

As anyone who’s working in hardware will tell you, going from a hand-built prototype to a mass-produced model is a huge challenge. So the Berkeley team tapped their Finnish friends at VTT Technical Research Center, who make a specialty of roll-to-roll printing.

For flat, relatively simple electronics, roll-to-roll is a great technique, essentially printing the sensors right onto a flexible plastic substrate that can then simply be cut to size. This way they can make hundreds or thousands of the sensors quickly and cheaply, making them much simpler to deploy at arbitrary scales.

These are far from the only flexible or skin-mounted electronics projects out there, but it’s clear that we’re approaching the point when they begin to leave the lab and head out to hospitals, gyms and homes.

The paper describing Stanford’s flexible sensor appeared this week in the journal Nature Electronics, while Berkeley’s sweat tracker was in Science Advances.


WW launches Kurbo, a hotly debated ‘healthy eating’ app aimed at kids


Kurbo Health, a mobile weight loss solution designed to tackle childhood obesity that was acquired for $3 million by WW (the rebranded Weight Watchers), has now relaunched as Kurbo by WW — and not without controversy. Pre-acquisition, the startup was focused on democratizing access to research, behavior modification techniques and other tools that were previously available only through expensive programs run by hospitals or other centers.

As a WW product, however, there are concerns that parents putting kids on “diets” will lead to increased anxiety, stress and disordered eating — in other words, Kurbo will make the problem worse, rather than solving it.

*If* you are worried about your child’s health/lifestyle, give them plenty of nutritious food and make sure they get plenty of fun exercise that helps their mental health. And don’t weigh them. Don’t burden them with numbers, charts or “success/failure.” It’s a slippery slope.

— Jameela Jamil 🌈 (@jameelajamil) August 14, 2019

The Kurbo app first launched at TechCrunch Disrupt NY 2014. Founder Joanna Strober, a venture investor and board member at Blue Nile and eToys, explained that she was driven to develop Kurbo after struggling to find help for her own child. The programs she came across cost money, were held at inconvenient times for working parents or were dubbed “obesity centers” — a label no child wanted to be associated with.

Her child found eventual success with the Stanford Pediatric Weight Loss Program, but this involved in-person visits and pen-and-paper documentation.

Together with Kurbo Health’s co-founder Thea Runyan, who has a Master’s in Public Health and had worked at the Stanford center for 12 years, the team realized the opportunity to bring the research to more people by creating a mobile, data-driven program for kids and families.

They licensed Stanford’s program, which then became Kurbo Health.

[Image: Kurbo's food logging screen]

The company raised funds from investors, including Signia Ventures, Data Collective, Bessemer Venture Partners and Promus Ventures, as well as angels like Susan Wojcicki, CEO of YouTube; Greg Badros, former VP Engineering and Product at Facebook; and Esther Dyson (EdVenture), among others.

At launch, the app was designed to encourage healthier eating patterns without parents actually being able to see the child’s food diary. Instead, parents set a reward that was doled out simply for the child’s participation. That is, the parents couldn’t see what the child ate, specifically, which allowed them to stop playing “food police.”

[Image: Kurbo's profile and streaks screen]

Unlike adult-oriented apps such as MyFitnessPal or Noom, kids wouldn’t see metrics like calories, sugars, carbs and fat; instead, their food choices were categorized as “red,” “yellow” and “green.” No foods were designated as “off limits,” however — the app simply encouraged fewer reds and more greens.

The program also included an option for virtual coaching.

As a WW product, the program has remained somewhat the same. There are still the color-coded food categorizations and optional live coaching, via a subscription. Parents are still involved, now with updates after coaching calls or the option to join coaching sessions. The app also now includes tools that teach meditation, recipe videos and games that focus on healthy lifestyles. Subscribers gain access to one-on-one 15-minute virtual sessions with coaches whose professional backgrounds include counseling, fitness and other nutrition-related fields.

However, there are also things like a place to track measurements, goals like “lose weight” and Snapchat-style “tracking streaks.”

[Image: Kurbo's home tracking screen]

While the original program was designed to be a solution for parents with children who would have otherwise had to seek expensive medical help for obesity issues, the association with parent company and acquirer WW has led to some backlash.

[Image: Kurbo's coaching chat screen]

Today, body positivity and fat acceptance movements have gone mainstream, encouraging people to be confident in their own bodies and not hate themselves for being overweight. The general thinking is that when people respect themselves, they become more likely to care for themselves — and this will extend to making healthier food and lifestyle choices.

Meanwhile, food tracking and dieting programs often lead to failure and shame — especially when people start to think of some food as “bad” or a “cheat,” instead of just something to be eaten in moderation. And excessive tracking can even lead to disordered eating patterns for some people, studies have found.

In addition, WW has already been under fire for extending its weight loss program to teens 13-17 for free, and the launch of what’s seen as a “dieting app for kids” as part of WW’s broader family-focused agenda certainly isn’t helping the backlash.

That said, when positive reinforcement is used correctly, it can work for weight loss. As TIME reported, the red-yellow-green traffic light approach proved effective for adults in an independent study by Massachusetts General Hospital, and another study, presented at the Biennial Childhood Obesity Conference, found it worked in children, with 84% of participants reducing their BMI after 21 weeks.

“According to recent reports from the World Health Organization, childhood obesity is one of the most serious public health challenges of the 21st century. This is a global public health crisis that needs to be addressed at scale,” said Joanna Strober, co-founder of Kurbo, in a statement about the launch. “As a mom whose son struggled with his weight at a young age, I can personally attest to the importance and significance of having a solution like Kurbo by WW, which is inherently designed to be simple, fun and effective,” she said.

KURBO.

I thought that I hated Weight Watchers. I have not hated them as much as I do right now.

Making weight loss trendy for children is making the development of eating disorders easier and trendier. I am not here for this.

— Anna Sweeney MS, CEDRD-S (@DietitianAnna) August 13, 2019

That said, it’s one thing for a parent to work in conjunction with a doctor to help a child with a health issue, but parents who foist a food tracking app on their kids may not get the same results. In fact, they may even cause the child to develop eating disorders that weren’t present before. (And no, just because a child is overweight, that doesn’t necessarily mean they’re suffering from an “eating disorder.”)

Weight Watchers has a new dieting app for kids as young as 8 and it is truly disturbing https://t.co/GjPl4PHwSv pic.twitter.com/ZMkZmFr9X6

— Dr. Yasmin (@DoctorYasmin) August 14, 2019

Many factors beyond an appetite for high-calorie foods can cause a child’s unexpected weight gain: health ailments, hormone or chemical imbalances, medication side effects, puberty and other growth spurts (which can’t always be read from the BMI changes the app tracks), genetics and more.
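The BMI point is easy to see from the formula itself — BMI is weight in kilograms divided by height in meters squared, so a growth spurt can pull the number down with no dieting at all. The figures below are invented for illustration:

```python
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

# Same hypothetical child three months apart: a 5 cm growth spurt with
# modest, healthy weight gain shows up as a BMI *drop* — no diet involved.
print(round(bmi(45.0, 1.50), 1))   # 20.0 before the spurt
print(round(bmi(46.0, 1.55), 1))   # 19.1 after it
```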

Parents may also be part of the problem, by simply bringing unhealthy food into the house because it’s more affordable or because they aren’t aware of things like hidden sugars or how to avoid them. Or perhaps they’re putting money into a child’s school lunch account, without realizing the child is able to spend it on vending machine snacks, sodas or off-menu items like pizza and chips.

The child may also suffer from health problems like asthma or allergies that have become an underlying issue, making it more difficult for them to be active.

In other words, a program like this is something that parents should approach with caution. And it’s certainly one where the child’s doctor should be involved at every stage — including in determining whether or not an app is actually needed at all.


KickSat-2 project launches 105 cracker-sized satellites


Move over, Starlink. SpaceX’s global internet play might have caught the world’s attention with its 60-satellite launch last month, but little did we know that it had already been upstaged — at least in terms of sheer numbers. The KickSat-2 project put 105 tiny “femtosats” into space at once months earlier, the culmination of a years-long project begun by a grad student.

KickSat-2 was the second attempt by Zac Manchester, now a professor at Stanford, to test what he believes is an important piece of the coming new space economy: ultra-tiny satellites.

Sure, the four-inch CubeSat standard is small… and craft like Swarm Technologies’ SpaceBEEs are even smaller. But the satellites tested by Manchester are tiny. We’re talking Triscuit size here — perhaps Wheat Thin, or even Cheez-It.

The KickSat project started back in 2011, when Manchester and his colleagues ran a Kickstarter to fund about 300 “Sprite” satellites that would be launched to space and deployed on behalf of backers. The campaign was a success, but unfortunately, once the satellites were launched, a glitch caused them to burn up before they could be deployed. Manchester was undeterred and the project continued.

He worked with Cornell University and NASA Ames to redesign the setup, and as part of that he and collaborator Andy Filo collected a prize for their clever 3D-printed deployment mechanism. The Sprites themselves are relatively simple things: essentially an unshielded bit of PCB with a solar panel, antennas and electronics on board to send and receive signals.

The “mothership” launched to the ISS in November, where it sat for several months awaiting an opportunity to be deployed. That opportunity came on March 17: all 105 Sprites were sprung out into low Earth orbit, where they began communicating with each other and (just barely) with ground stations.

Deployment would have looked like this… kind of. Probably a little slower.

This isn’t the start of a semi-permanent thousands-strong constellation, though — the satellites all burned up a few days later, as planned.

“This was mostly a test of deployment and communication systems for the Sprites,” Manchester explained in an email to TechCrunch. The satellites were testing two different signals: “Specially designed CDMA signals that enable hundreds of Sprites to simultaneously communicate with a single ground station at very long range and with very low power,” and “simpler signals for short-range networking between Sprites in orbit.”
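The paper doesn't spell out the coding scheme beyond calling it CDMA, but the core trick — each transmitter spreading its bits with its own pseudorandom code so a receiver can pick them apart by correlation — can be sketched in a few lines. The code length, noise level and Sprite count here are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1024                                       # chips per data bit (processing gain)

codes = rng.choice([-1.0, 1.0], size=(3, N))   # one pseudorandom code per Sprite
bits = np.array([1.0, -1.0, 1.0])              # one data bit per Sprite

# All Sprites transmit simultaneously; their signals overlap, and the
# channel noise dwarfs any individual transmission
channel = (bits[:, None] * codes).sum(axis=0) + rng.normal(0, 4.0, N)

# The ground station correlates against each known code: despreading
# recovers every Sprite's bit despite the dreadful raw signal-to-noise
for i in range(3):
    corr = channel @ codes[i] / N
    print(f"Sprite {i}: correlation {corr:+.2f} -> bit {1 if corr > 0 else -1}")
```

The longer the code, the more simultaneous low-power transmitters a single ground station can disentangle — which is exactly the regime a hundred-Sprite swarm lives in.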

The Cygnus spacecraft with the KickSat-2 CubeSat attached — it’s the little gold thing right by where the docking arm is attached.

This proof of concept is an important one — it seems logical and practical to pack dozens or hundreds of these things into future missions, where they can be released into controlled trajectories providing sensing or communications relay capabilities to other spacecraft. And, of course, as we’ve already seen, the smaller and cheaper the spacecraft, the easier it is for people to access space for any reason: scientific, economic or just for the heck of it.

“We’ve shown that it’s possible for swarms of cheap, tiny satellites to one day carry out tasks now done by larger, costlier satellites, making it affordable for just about anyone to put instruments or experiments into orbit,” Manchester said in a Stanford news release. With launch costs dropping, it might not be long before you’ll be able to take ownership of a Sprite of your own.


Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself


Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically its Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, while keeping costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved, but by sampling the forces on the legs 8,000 times per second and responding as quickly, the motors can act like virtual springs.
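In control terms, that means computing a spring-damper torque in software at the sensing rate instead of bolting on hardware. A minimal sketch of such a loop follows; the gains, the 8 kHz rate constant and the `read_leg_state`/`send_torque` hooks are hypothetical, not Doggo's actual interface:

```python
CONTROL_HZ = 8000  # match the ~8,000-per-second force sampling rate

def virtual_spring_torque(pos, vel, rest_pos, k=400.0, damping=8.0):
    """Motor torque emulating a spring-damper at the leg joint.

    k (N·m/rad) and damping (N·m·s/rad) are illustrative gains only."""
    return -k * (pos - rest_pos) - damping * vel

def control_step(read_leg_state, send_torque, rest_pos=0.0):
    """One tick of the loop, run CONTROL_HZ times per second.

    read_leg_state and send_torque are hypothetical hardware hooks: the
    first returns (joint angle, angular velocity) from the motor encoders,
    the second commands the motor driver."""
    pos, vel = read_leg_state()
    send_torque(virtual_spring_torque(pos, vel, rest_pos))
```

Raising k stiffens the virtual spring; raising damping kills the bounce — tuning that would otherwise mean swapping physical parts happens entirely in software.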

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be improving Doggo’s capabilities in collaboration with the university’s Robotic Exploration Lab, and working on a similar robot at twice the scale — Woofer.


Snap CEO’s sister Caroline Spiegel starts a no-visuals porn site


If you took the photos and videos out of pornography, could it appeal to a new audience? Caroline Spiegel’s first startup Quinn aims to bring some imagination to adult entertainment. Her older brother, Snapchat CEO Evan Spiegel, spent years trying to convince people his app wasn’t just for sexy texting. Now Caroline is building a website dedicated to sexy text and audio. The 22-year-old college senior tells TechCrunch that on April 13th she’ll launch Quinn, which she describes as “a much less gross, more fun Pornhub for women.”

TechCrunch checked out Quinn’s private beta site, which is pretty bare bones right now. Caroline tells us she’s already raised less than a million dollars for the project. But given her brother’s success spotting the next generation’s behavior patterns and turning them into beloved products, Caroline might find investors are eager to throw cash at Quinn. That’s especially true given she’s taking a contrarian approach. There will be no imagery on Quinn.

Caroline explains that “There’s no visual content on the site — just audio and written stories. And the whole thing is open source, so people can submit content and fantasies, etc. Everything is vetted by us before it goes on the site.” The computer science major is building Quinn with a three-woman team of best friends she met at Stanford, including Greta Meyer; they plan to relocate to LA after graduation.

“His dream girl was named ‘Quinn’”

The idea for Quinn sprung from a deeply personal need. “I came up with it because I had to leave Stanford my junior year because I was struggling with anorexia and sexual dysfunction that came along with that,” Caroline tells me. “I started to do a lot of research into sexual dysfunction cures. There are about 30 FDA-approved drugs for sexual dysfunction for men but zero for women, and that’s a big bummer.”

She believes there’s still a stigma around women pleasuring themselves, leading to a lack of products offering assistance. Sure, there are plenty of porn sites, but few are explicitly designed for women, and fewer stray outside of visual content. Caroline says photos and videos can create body image pressure, but with text and audio, anyone can imagine themselves in a scene. “Most visual media perpetuates the male gaze … all mainstream porn tells one story … You don’t have to fit one idea of what a woman should look like.”

That concept fits with the startup’s name “Quinn,” which Caroline says one of her best guy friends thought up. “He said this girl he met — his dream girl — was named ‘Quinn.’ ”

Caroline took to Reddit and Tumblr to find Quinn’s first creators. Reddit stuck to text and links for much of its history, fostering the kinky literature and audio communities. And when Tumblr banned porn in December, it left a legion of adult content makers looking for a new home. “Our audio ranges from guided masturbation to overheard sex, and there’s also narrated stories. It’s literally everything. Different strokes for different folks, know what I mean?” Caroline says with a cheeky laugh.

To establish its brand, Quinn is running social media influencer campaigns where “The basic idea is to make people feel like it’s okay to experience pleasure. It’s hard to make something like masturbation cool, so that’s a little bit of a lofty goal. We’re just trying to make it feel okay, and even more okay than it is for men.”

As for the business model, Caroline’s research found younger women were embarrassed to pay for porn. Instead, Quinn plans to run ads, though there could be commerce opportunities too. And because the site doesn’t bombard users with nude photos or hardcore videos, it might be able to attract sponsors that most porn sites can’t.

Evan is “very supportive”

Until monetization spins up, Quinn has the sub-$1 million in funding that Caroline won’t reveal the source of, though she confirms it’s not from her brother. “I wouldn’t say that he’s particularly involved other than he’s one of the most important people in my life and I talk to him all the time. He gives me the best advice I can imagine,” the younger sibling says. “He doesn’t have any qualms, he’s very supportive.”

Quinn will need all the morale it can get, as Caroline bluntly admits, “We have a lot of competitors.” There’s the traditional stuff like Pornhub, user-generated content sites like Make Love Not Porn and spontaneous communities like on Reddit. She calls $5 million-funded audio porn startup Dipsea “an exciting competitor,” though she notes that “we sway a little more erotic than they do, but we’re so supportive of their mission.” How friendly.

Quinn’s biggest rival will likely be the outdated but institutionalized Literotica, which SimilarWeb ranks as the 60th most popular adult website and 631st most visited site overall, with some 53 million hits per month. The fact that Literotica looks like a web 1.0 forum yet pulls that much traffic signals a massive opportunity for Quinn. And with app store rules effectively barring Quinn from launching native mobile apps, it will have to put all its effort into making its website stand out if it’s going to survive.

But more than competition, Caroline fears that Quinn will have to convince women to give its style of porn a try. “Basically, there’s this idea that for men, masturbation is an innate drive and for women it’s a ‘could do without it, could do with it.’ Quinn is going to have to make a market alongside a product and that terrifies me,” Caroline says, her voice building with enthusiasm. “But that’s what excites me the most about it, because what I’m banking on is if you’ve never had chocolate before, you don’t know. But once you have it, you start craving it. A lot of women haven’t experienced raw, visceral pleasure before, [but once we help them find it] we’ll have momentum.”

Most importantly, Quinn wants all women to feel they have rightful access to whatever they fancy. “It’s not about deserving to feel great. You don’t have to do Pilates to use this. You don’t have to always eat right. There’s no deserving with our product. Our mission is for women to be more in touch with themselves and feel fucking great. It’s all about pleasure and good vibes.”


This self-driving AI faced off against a champion racer (kind of)


Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.

To set expectations here, this isn’t some stunt, it’s actually warranted given the nature of the research, and it’s not like they were trading positions, jockeying for entry lines, and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!

The question Nathan Spielberg and his colleagues at Stanford wanted to answer concerns autonomous vehicles operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds, in good conditions. And most obstacle encounters are similarly ordinary.

If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?

The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface and other conditions, but they are necessarily simplified, and their assumptions produce increasingly inaccurate results as values exceed ordinary limits.

Imagine a simulator that reduces each wheel to a point or a line, when during a slide it matters enormously which side of the tire is experiencing the most friction. Such detailed simulations are beyond the ability of current hardware to run quickly or accurately enough. But the results of such simulations can be summarized into inputs and outputs, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.

The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. The model then consults its training, but is also informed by the real-world results, which may differ from theory.

So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
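Structurally, then, the steering command combines three terms: a simplified physics feedforward, a learned residual capturing what the simple model misses near the limits, and a feedback correction from the measured tracking error. The sketch below is one reading of that architecture, not the team's actual controller; the stand-in models and the gain are invented:

```python
def steering_command(speed, curvature, lateral_error,
                     physics_model, learned_correction, k_fb=0.5):
    """Steering angle = physics feedforward + learned residual + feedback."""
    feedforward = physics_model(speed, curvature)    # what theory prescribes
    residual = learned_correction(speed, curvature)  # what theory gets wrong
    feedback = -k_fb * lateral_error                 # pull back toward the line
    return feedforward + residual + feedback

# Toy usage with invented stand-ins for the two models:
bicycle = lambda v, c: 2.7 * c                 # wheelbase * path curvature
net = lambda v, c: 0.1 * c * (v / 30.0) ** 2   # pretend network output
print(steering_command(30.0, 0.05, 0.2, bicycle, net))
```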

And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.

The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:

Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.

Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.

In other words, while pulling nearly a G and hitting 95 mph, the self-driving Audi deviated from its ideal racing line by less than 40 centimeters — about 16 inches — on average. The human driver showed much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.

“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”

Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.

This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.

The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would be different than front-wheel on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.

The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.


Inspired by spiders and wasps, these tiny drones pull 40x their own weight


If we want drones to do our dirty work for us, they’re going to need to get pretty good at hauling stuff around. But due to the pesky yet unavoidable restraints of physics, it’s hard for them to muster the forces necessary to do so while airborne — so these drones brace themselves against the ground to get the requisite torque.

The drones, created by engineers at Stanford and Switzerland’s EPFL, were inspired by wasps and spiders that need to move prey from place to place but can’t actually lift it, so they drag it instead. Grippy feet and strong threads or jaws let them pull objects many times their weight along the ground, just as you might slide a dresser along rather than pick it up and put it down again. So I guess it could also just have been inspired by that.

Whatever the inspiration, these “FlyCroTugs” (a combination of flying, micro and tug presumably) act like ordinary tiny drones while in the air, able to move freely about and land wherever they need to. But they’re equipped with three critical components: an anchor to attach to objects, a winch to pull on that anchor and sticky feet to provide sure grip while doing so.

“By combining the aerodynamic forces of our vehicle and the interactive forces generated by the attachment mechanisms, we were able to come up with something that is very mobile, very strong and very small,” said Stanford grad student Matthew Estrada, lead author of the paper published in Science Robotics.

The idea is that one or several of these ~100-gram drones could attach their anchors to something they need to move, be it a lever or a piece of trash. Then they take off and land nearby, spooling out thread as they do so. Once they’re back on terra firma they activate their winches, pulling the object along the ground — or up over obstacles that would have been impossible to navigate with tiny wheels or feet.

Using this technique — assuming they can get a solid grip on whatever surface they land on — the drones are capable of moving objects 40 times their weight — for a 100-gram drone like that shown, that would be about 4 kilograms, or nearly 9 pounds. Not quickly, but that may not always be a necessity. What if a handful of these things flew around the house when you were gone, picking up bits of trash or moving mail into piles? They would have hours to do it.
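The arithmetic behind that advantage is simple: hovering, a micro-drone's horizontal pulling force is capped by its spare thrust, roughly on the order of its own weight, while anchored it is capped by foot adhesion and the winch instead. A back-of-envelope comparison — the thrust assumption is illustrative, the 40x figure is the researchers':

```python
MASS_KG = 0.1    # a ~100-gram FlyCroTug
G = 9.81

# Hovering: pulling force limited by spare thrust, assumed here to be
# roughly the drone's own weight (an illustrative assumption)
aerial_pull_n = MASS_KG * G          # ≈ 1 N

# Anchored to the ground: limited by adhesion and the winch, which the
# researchers report at up to 40x body weight
anchored_pull_n = 40 * MASS_KG * G   # ≈ 39 N — the weight of a 4 kg load

print(f"{aerial_pull_n:.1f} N airborne vs {anchored_pull_n:.1f} N anchored")
```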

As you can see in the video below, they can even team up to do things like open doors.

“People tend to think of drones as machines that fly and observe the world,” said co-author of the paper, EPFL’s Dario Floreano, in a news release. “But flying insects do many other things, such as walking, climbing, grasping and building. Social insects can even work together and combine their strength. Through our research, we show that small drones are capable of anchoring themselves to surfaces around them and cooperating with fellow drones. This enables them to perform tasks typically assigned to humanoid robots or much larger machines.”

Unless you’re prepared to wait for humanoid robots to take on tasks like this (and it may be a decade or two), you may have to settle for drone swarms in the meantime.


VR optics could help old folks keep the world in focus


The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, offering only a small “viewport” through which to view the world. There are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.

The whole process of checking the gaze, the depth of the selected object and the adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happening, but redirecting and refocusing one’s gaze takes perhaps three or four times as long — so the adjustment will be complete by the time the user’s eyes would normally be at rest again.
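The decision step itself reduces to a little optics: required lens power in diopters is the reciprocal of the viewing distance in meters, and the glasses only need to supply whatever the eye can no longer contribute. The sketch below is a guess at that logic — the interface, the near-point model and the numbers are all assumptions, not the prototype's actual code:

```python
def lens_adjustment(gaze_xy, depth_map_m, near_point_m=0.5, base_rx=0.0):
    """Extra lens power (diopters) for whatever the user is fixating.

    depth_map_m: hypothetical mapping from image coordinates to distance
    in meters; near_point_m: the closest distance this user can focus
    unaided (presbyopia pushes it farther out)."""
    d = depth_map_m[gaze_xy]          # distance of the fixated object
    if d >= near_point_m:
        return base_rx                # the eye can handle it alone
    # Supply the diopters the eye is missing: 1/d needed, 1/near_point available
    return base_rx + (1.0 / d - 1.0 / near_point_m)

# Newspaper at ~0.36 m (14 inches), user can't focus closer than 0.5 m:
print(lens_adjustment((120, 80), {(120, 80): 0.36}))   # ≈ +0.78 diopters
```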

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and to check for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method, and despite its early stage it’s highly promising. We can expect to hear more from them when the full paper is published.


Autonomous cars could peep around corners via bouncing laser


Autonomous cars gather up tons of data about the world around them, but even the best computer vision systems can’t see through brick and mortar. By carefully monitoring the reflected light of a laser bouncing off a nearby surface, however, they might be able to see around corners — that’s the idea behind recently published research from Stanford engineers.


Research heralds better and bidirectional brain-computer interfaces


A pair of studies, one from Stanford and another from the University of Geneva, exemplify the speed with which brain-computer interfaces are advancing. And while you won’t be using one instead of a mouse and keyboard any time soon, even in its nascent form the tech may prove transformative for the disabled.
