University of Washington

Watch a laser-powered RoboFly flap its tiny wings


Making something fly involves a lot of trade-offs. Bigger craft can hold more fuel or batteries, but grow too big and the lift required becomes prohibitive. Smaller craft need less lift to get airborne, but may not be able to carry a battery with enough energy to do so. Insect-sized drones have run into exactly that problem in the past, but now this RoboFly is taking its first flaps into the air… all thanks to the power of lasers.

We’ve seen bug-sized flying bots before, like the RoboBee, but that bot has wires attached to it that provide power. Batteries on board would weigh it down too much, so researchers have in the past focused on demonstrating that flight is even possible at that scale.

But what if you could provide power externally without wires? That’s the idea behind the University of Washington’s RoboFly, a sort of spiritual successor to the RoboBee that gets its power from a laser trained on an attached photovoltaic cell.

“It was the most efficient way to quickly transmit a lot of power to RoboFly without adding much weight,” said co-author of the paper describing the bot, Shyam Gollakota. He’s obviously very concerned with power efficiency — last month he and his colleagues published a way of transmitting video with 99 percent less power than usual.

There’s more than enough power in the laser to drive the robot’s wings; an integrated circuit adjusts it to the correct voltage, and a microcontroller sends that power to the wings depending on what they need to do. Here’s how it works:

“To make the wings flap forward swiftly, it sends a series of pulses in rapid succession and then slows the pulsing down as you get near the top of the wave. And then it does this in reverse to make the wings flap smoothly in the other direction,” explained lead author Johannes James.
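To make that concrete, here’s a minimal sketch, in Python, of one way such a pulse schedule could be generated: pulses land at equal increments of wing position along a sinusoidal stroke, so they arrive in rapid succession mid-stroke and spread out near the reversals. The flapping frequency, pulse counts and the equal-displacement scheme are illustrative assumptions, not details from the paper.

```python
import math

def pulse_schedule(flap_hz=170.0, pulses_per_half_stroke=20, strokes=3):
    """Timestamps (seconds) of drive pulses over a few wing strokes.

    Hypothetical scheme: fire one pulse per equal increment of wing
    position along a sinusoidal stroke. The wing moves fastest mid-stroke
    and slowest at the reversals, so equal position steps give pulses in
    rapid succession that slow down near the top of the wave, then the
    pattern mirrors for the return stroke.
    """
    period = 1.0 / flap_hz
    half = period / 2.0
    n = pulses_per_half_stroke
    times = []
    for s in range(strokes):
        t0 = s * period
        for k in range(n):
            # Time at which a cosine stroke reaches the k-th equal
            # position step (acos maps position back to stroke phase).
            t = (half / math.pi) * math.acos(1.0 - 2.0 * (k + 0.5) / n)
            times.append(t0 + t)         # forward half-stroke
            times.append(t0 + half + t)  # mirrored return half-stroke
    return sorted(times)

if __name__ == "__main__":
    ts = pulse_schedule()
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    print(f"{len(ts)} pulses; gaps {min(gaps) * 1e6:.0f} to {max(gaps) * 1e6:.0f} us")
```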

At present the bot just takes off, travels almost no distance and lands, but even that much proves the concept of a wirelessly powered robot insect, which was by no means a given. The next steps are to improve onboard telemetry so it can control itself, and to build a steered laser that can follow the little bug’s movements and continuously beam power in its direction.

The team is headed to Australia next week to present the RoboFly at the International Conference on Robotics and Automation in Brisbane.


Who’s a good AI? Dog-based data creates a canine machine learning system


We’ve trained machine learning systems to identify objects, navigate streets and recognize facial expressions, but as difficult as those tasks are, they don’t come close to the level of sophistication required to simulate, for example, a dog. Well, this project aims to do just that, in a very limited way, of course. By observing the behavior of A Very Good Girl, this AI learned the rudiments of how to act like a dog.

It’s a collaboration between the University of Washington and the Allen Institute for AI, and the resulting paper will be presented at CVPR in June.

Why do this? Well, although much work has been done to simulate the sub-tasks of perception like identifying an object and picking it up, little has been done in terms of “understanding visual data to the extent that an agent can take actions and perform tasks in the visual world.” In other words, act not as the eye, but as the thing controlling the eye.

And why dogs? Because they’re intelligent agents of sufficient complexity, “yet their goals and motivations are often unknown a priori.” In other words, dogs are clearly smart, but we have no idea what they’re thinking.

As an initial foray into this line of research, the team wanted to see if by monitoring the dog closely and mapping its movements and actions to the environment it sees, they could create a system that accurately predicted those movements.

In order to do so, they loaded up a Malamute named Kelp M. Redmon with a basic suite of sensors: a GoPro camera on Kelp’s head, six inertial measurement units (on the legs, tail and trunk) to tell where everything is, a microphone and an Arduino that ties the data together.

They recorded many hours of activities — walking in various environments, fetching things, playing at a dog park, eating — syncing the dog’s movements to what it saw. The result is the Dataset of Ego-Centric Actions in a Dog Environment, or DECADE, which they used to train a new AI agent.
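The article doesn’t say how the streams were synchronized, but conceptually each video frame must be paired with the sensor readings closest to it in time. Here’s a minimal sketch of that kind of nearest-timestamp alignment, with hypothetical data structures:

```python
import bisect

def align_streams(frame_times, imu_samples):
    """Pair each video frame with the IMU reading nearest in time.

    frame_times : sorted list of frame timestamps, in seconds
    imu_samples : sorted, non-empty list of (timestamp, reading) tuples
    Returns one (frame_time, reading) pair per frame.
    """
    imu_times = [t for t, _ in imu_samples]
    aligned = []
    for ft in frame_times:
        i = bisect.bisect_left(imu_times, ft)
        # The nearest reading is one of the two neighbors of the
        # insertion point; clamp the candidates to the valid range.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_times)]
        best = min(candidates, key=lambda j: abs(imu_times[j] - ft))
        aligned.append((ft, imu_samples[best][1]))
    return aligned
```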

This agent, given certain sensory input — say a view of a room or street, or a ball flying past it — was to predict what a dog would do in that situation. Not to any serious level of detail, of course — but even just figuring out how to move its body and to where is a pretty major task.
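A common shape for this kind of sequence-prediction model is a convolutional frame encoder feeding a recurrent network. The sketch below, in PyTorch, shows that shape; the toy encoder, layer sizes, discretized motion bins and the six output heads (one per IMU-tracked body part) are assumptions for illustration, not the paper’s exact architecture.

```python
import torch
import torch.nn as nn

class DogActionPredictor(nn.Module):
    """Sketch: encode each video frame with a small CNN, run the frame
    embeddings through an LSTM, and classify the next movement of each
    of six tracked body parts into a discrete motion bin."""

    def __init__(self, n_joints=6, n_motion_bins=8, embed=256, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(            # toy frame encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed), nn.ReLU(),
        )
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        # One classification head per body part (legs, tail, trunk).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_motion_bins) for _ in range(n_joints)]
        )

    def forward(self, frames):                   # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        last = out[:, -1]                        # state after the last frame
        # Predicted motion-bin logits for each body part at the next step.
        return [head(last) for head in self.heads]
```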

“It learns how to move the joints to walk, learns how to avoid obstacles when walking or running,” explained Hessam Bagherinezhad, one of the researchers, in an email. “It learns to run for the squirrels, follow the owner, track the flying dog toys (when playing fetch). These are some of the basic AI tasks in both computer vision and robotics that we’ve been trying to solve by collecting separate data for each task (e.g. motion planning, walkable surface, object detection, object tracking, person recognition).”

That can produce some rather complex data: For example, the dog model must know, just as the dog itself does, where it can walk when it needs to get from here to there. It can’t walk on trees, or cars, or (depending on the house) couches. So the model learns that as well, and this can be deployed separately as a computer vision model for finding out where a pet (or small legged robot) can get to in a given image.
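As a hypothetical sketch of how that separate deployment might look: freeze a feature extractor trained on the dog data and attach a small decoder that scores each coarse image cell for walkability. This assumes an encoder that returns a spatial (B, C, H', W') feature map; it is not the paper’s actual model.

```python
import torch
import torch.nn as nn

class WalkabilityHead(nn.Module):
    """Sketch: a frozen pretrained encoder plus a small decoder that
    emits one walkability score per coarse image cell (trainable with
    labels derived from where the dog actually walked)."""

    def __init__(self, encoder: nn.Module, feat_channels: int = 64):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # keep pretrained features fixed
            p.requires_grad = False
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),              # one walkability logit per cell
        )

    def forward(self, image: torch.Tensor):   # image: (B, 3, H, W)
        feats = self.encoder(image)           # assumed (B, C, H', W') map
        return torch.sigmoid(self.decoder(feats))  # walkability in [0, 1]
```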

This was just an initial experiment, the researchers say, one that succeeded but had limited results. Follow-up work might bring in more senses (smell is an obvious one) or see how a model produced from one dog (or many) generalizes to other dogs. They conclude: “We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world.”


Student projects leapfrog governments and industry in ‘Data Science for Social Good’ program


Big data is hardly new at this point, yet it has wrought nowhere near its potential effects on the many companies and institutions insulated by inertia and red tape. A summer program at the University of Washington called Data Science for Social Good shows that fresh eyes and good code can accomplish more in 10 weeks than some organizations have in as many years.


Devices could recycle radio waves instead of transmitting them with new ‘interscatter’ technique


[Image: UW grad student Vikram Iyer holds a contact lens antenna that could use interscatter to communicate.]

If we’re ever to have things like smart contact lenses and permanent brain implants, one of the things we need to figure out is the power problem. Those devices need energy for collecting, processing and especially transmitting data, but that last one might not be a problem anymore, thanks to a new technique called interscatter communication.


One professor’s quest to 3D scan every fish in the sea


If you were wondering what a mottled sculpin looks like, there are plenty of pictures available online. But while they may satisfy a curious tidepooler, the discerning ichthyologist demands more. That’s why a professor at the University of Washington is getting full 3D scans of every fish in the sea (every species, anyway).


Students demonstrate their HoloLens apps after a quarter of VR and AR design


It’s just about impossible to get your hands on Microsoft’s impressive mixed-reality HoloLens platform these days, unless you’re a computer science student at the University of Washington. Then you get to play with them whenever you want.


SpiroCall measures lung health over any phone — no app necessary


Sometimes you see an application of technology that’s so innovative and helpful you can’t believe it exists in this age of narcissistic and short-sighted startups. SpiroCall, a service that lets anyone in the world call a toll-free number and have their lung health evaluated over the phone, seems too good to be real, but it’s real, and it’s good.
