IBM’s Verifier inspects (and verifies) diamonds, pills and materials at the micron level

It’s not enough in this day and age that we have to deal with fake news; we also have to deal with fake prescription drugs, fake luxury goods, and fake Renaissance-era paintings. Sometimes all at once! IBM’s Verifier is a gadget and platform made (naturally) to instantly verify that something is what it claims to be, by inspecting it at a microscopic level.

Essentially you stick a little thing on your phone’s camera, open the app, and put the sensor against what you’re trying to verify, be it a generic antidepressant or an ore sample. By combining microscopy, spectroscopy, and a little bit of AI, the Verifier compares what it sees to a known version of the item and tells you whether they’re the same.

The key component in this process is an “optical element” that sits in front of the camera (it can be anything that takes a decent image) amounting to a specialized hyper-macro lens. It allows the camera to detect features as small as a micron — for comparison, a human hair is usually a few dozen microns wide.

At the micron level there are patterns and optical characteristics that aren’t visible to the human eye, like precisely which wavelengths of light an object reflects. The quality of a weave, the number of flaws in a gem, the mixture of metals in an alloy… all stuff you or I would miss, but that a machine learning system trained on such examples will pick out instantly.

For instance, a counterfeit pill may be orange and smooth and imprinted just like a real one to the naked eye, but it will likely appear totally different at the micro level: textures and structures with a pattern quite distinct from the real thing, not to mention a spectral signature that’s probably way off. There’s also no reason the system can’t be used on things like expensive wines or oils, contaminated water, currency, and plenty of other items.

IBM was eager to highlight the AI element, which is trained on the various patterns and differentiates between them, though as far as I can tell it’s a pretty straightforward classification task. I’m more impressed by the lens they put together, which can resolve at the micron level with very little distortion and without excluding or skewing the colors too much. It even works on multiple phones; you don’t have to have this or that model.
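
To make the classification step concrete, here is a minimal sketch of how a reference-matching check might work, assuming each capture is reduced to a feature vector of micro-texture statistics plus a spectral signature. The feature values, the match threshold and the nearest-neighbor comparison are illustrative assumptions, not IBM’s actual pipeline.

```python
# Illustrative sketch only: compares a captured feature vector (micro-texture
# statistics + spectral signature) against known references. The features,
# threshold and nearest-neighbor matching are assumptions, not IBM's pipeline.
import numpy as np

REFERENCES = {
    # hypothetical per-item reference vectors, e.g. averaged over genuine samples
    "genuine_pill_batch_A": np.array([0.82, 0.11, 0.43, 0.95]),
    "genuine_pill_batch_B": np.array([0.79, 0.14, 0.40, 0.97]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(capture: np.ndarray, threshold: float = 0.98) -> tuple[str, bool]:
    """Return the closest reference and whether it clears the match threshold."""
    best_name, best_score = max(
        ((name, cosine(capture, ref)) for name, ref in REFERENCES.items()),
        key=lambda pair: pair[1],
    )
    return best_name, best_score >= threshold

# Example: a capture whose micro-features drift from every reference fails.
print(verify(np.array([0.60, 0.35, 0.20, 0.70])))
```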

The first application IBM is announcing for its Verifier is in the diamond trade, which is of course known for fetishizing the stones and their uniqueness, and for establishing elaborate supply chains to ensure product is carefully controlled. The Verifier will be used as an aid for grading stones, not on its own but as a tool for human checkers; it’s a partnership with the Gemological Institute of America, which will test integrating the tool into its own workflow.

By imaging a stone from several angles, the system can also record the diamond’s individual identity, so that its provenance and trail through the industry can be followed over the years. Here IBM imagines blockchain will be useful, which is possible but not exactly a given.

It’ll be a while before you can have one of your own, but here’s hoping this type of tech becomes popular enough that you can check the quality or makeup of something at least without having to visit some lab.

Watch a laser-powered RoboFly flap its tiny wings

Making something fly involves a lot of trade-offs. Bigger stuff can hold more fuel or batteries, but too big and the lift required is too much. Small stuff takes less lift to fly but might not hold a battery with enough energy to do so. Insect-sized drones have had that problem in the past — but now this RoboFly is taking its first flaps into the air… all thanks to the power of lasers.

We’ve seen bug-sized flying bots before, like the RoboBee, but that one has wires attached to it that provide power. Batteries on board would weigh it down too much, so researchers have focused in the past on demonstrating that flight is possible in the first place at that scale.

But what if you could provide power externally without wires? That’s the idea behind the University of Washington’s RoboFly, a sort of spiritual successor to the RoboBee that gets its power from a laser trained on an attached photovoltaic cell.

“It was the most efficient way to quickly transmit a lot of power to RoboFly without adding much weight,” said co-author of the paper describing the bot, Shyam Gollakota. He’s obviously very concerned with power efficiency — last month he and his colleagues published a way of transmitting video with 99 percent less power than usual.

There’s more than enough power in the laser to drive the robot’s wings; it gets adjusted to the correct voltage by an integrated circuit, and a microcontroller sends that power to the wings depending on what they need to do. Here it goes:

“To make the wings flap forward swiftly, it sends a series of pulses in rapid succession and then slows the pulsing down as you get near the top of the wave. And then it does this in reverse to make the wings flap smoothly in the other direction,” explained lead author Johannes James.
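
As a rough illustration of that pulse-shaping idea, here is a minimal sketch that generates a flapping schedule: pulses come in rapid succession at the start of a stroke and slow near the top of the wave, then the pattern reverses for the return stroke. The timing values are invented for the example and are not RoboFly’s real parameters.

```python
# Illustrative sketch of the pulse shaping described above. All timing values
# are invented for the example; they are not RoboFly's actual parameters.

def stroke_pulse_gaps(num_pulses: int = 10, min_gap_ms: float = 1.0,
                      max_gap_ms: float = 5.0) -> list[float]:
    """Gaps between pulses for one wing stroke: rapid at first, slowing down
    as the wing nears the top of the wave."""
    step = (max_gap_ms - min_gap_ms) / (num_pulses - 1)
    return [min_gap_ms + i * step for i in range(num_pulses)]

def flap_cycle() -> list[float]:
    """One full flap: a forward stroke, then the same schedule reversed to
    bring the wings smoothly back the other way."""
    forward = stroke_pulse_gaps()
    return forward + forward[::-1]

print(flap_cycle())
```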

At present the bot just takes off, travels almost no distance and lands, but that’s enough to prove the concept of a wirelessly powered robot insect, which is no small thing. The next steps are to improve onboard telemetry so it can control itself, and to build a steered laser that can follow the little bug’s movements and continuously beam power in its direction.

The team is headed to Australia next week to present the RoboFly at the International Conference on Robotics and Automation in Brisbane.

First CubeSats to travel the solar system snap ‘Pale Blue Dot’ homage

The InSight launch earlier this month had a couple of stowaways: a pair of tiny CubeSats that are already the farthest such tiny satellites have ever been from Earth — by a long shot. And one of them got a chance to snap a picture of their home planet as an homage to the Voyager mission’s famous “Pale Blue Dot.” It’s hardly as amazing a shot as the original, but it’s still cool.

The CubeSats, named MarCO-A and B, are an experiment to test the suitability of pint-size craft for exploration of the solar system; previously, CubeSats had only ever been deployed into Earth orbit.

That changed on May 5, when the InSight mission took off, with the MarCO twins detaching on a similar trajectory to the geology-focused Mars lander. It wasn’t long before they went farther than any CubeSat has gone before.

A few days after launch, MarCO-A and B were about a million kilometers (roughly 620,000 miles) from Earth, and it was time for each to unfold its high-gain antenna. A fisheye camera attached to the chassis kept an eye on the process and took a picture to send back home, informing mission control that all was well.

But as a bonus (though not by accident — very few accidents happen on missions like this), Earth and the moon were in full view as MarCO-B took its antenna selfie. Here’s an annotated version of the one above:

“Consider it our homage to Voyager,” said JPL’s Andy Klesh in a news release. “CubeSats have never gone this far into space before, so it’s a big milestone. Both our CubeSats are healthy and functioning properly. We’re looking forward to seeing them travel even farther.”

So far it’s only good news and validation of the idea that cheap CubeSats could potentially be launched by the dozen to undertake minor science missions at a fraction of the cost of something like InSight.

Don’t expect any more snapshots from these guys, though. A JPL representative told me the cameras were really only included to make sure the antenna deployed properly. Really any pictures of Mars or other planets probably wouldn’t be worth looking at twice — these are utility cameras with fisheye lenses, not the special instruments that orbiters use to get those great planetary shots.

The MarCOs will pass by Mars at the same time that InSight is making its landing, and depending on how things go, they may even be able to pass on a little useful info to mission control while it happens. Tune in on November 26 for that!

NASA’s InSight Mars lander will gaze (and drill) into the depths of the Red Planet

NASA’s latest mission to Mars, InSight, is set to launch early Saturday morning in pursuit of a number of historic firsts in space travel and planetology. The lander’s instruments will probe the surface of the planet and monitor its seismic activity with unprecedented precision, while a pair of diminutive CubeSats riding shotgun will test the viability of tiny spacecraft for interplanetary travel.

Saturday at 4:05 AM Pacific is the first launch opportunity, but if weather forbids it, they’ll just try again soon after — the chances of clouds sticking around all the way until June 8, when the launch window closes, are slim to none.

InSight isn’t just a pretty name they chose; it stands for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, at least after massaging the acronym a bit. Its array of instruments will teach us about the Martian interior, granting us insight (see what they did there?) into the past and present of Mars and the other rocky planets in the solar system, including Earth.

Bruce Banerdt, principal investigator for the mission at NASA’s Jet Propulsion Laboratory, has been pushing for this mission for more than two decades, after practically a lifetime working at the place.

“This is the only job I’ve ever had in my life other than working in the tire shop during the summertime,” he said in a recent NASA podcast. He’s worked on plenty of other missions, of course, but his dedication to this one has clearly paid off. It was actually originally scheduled to launch in 2016, but some trouble with an instrument meant they had to wait until the next launch window — now.

InSight is a lander in the style of Phoenix, about the size of a small car, and shot towards Mars faster than a speeding bullet. The launch is a first in itself: NASA has never launched an interplanetary mission from the West Coast, but conditions aligned in this case, making California’s Vandenberg Air Force Base the best option. It doesn’t even require a gravity assist to get where it’s going.

Did you know? I’ll be the 1st spacecraft to travel from the West Coast of the U.S. to another planet. My rocket can do that—we’ve got the power. 🚀
More on launch: https://t.co/DZ8GsDTfGc pic.twitter.com/VOWiMPek5x

— NASAInSight (@NASAInSight) May 2, 2018

“Instead of having to go to Florida and using the Earth’s rotation to help slingshot us into orbit… We can blast our way straight out,” Banerdt said in the same podcast. “Plus we get to launch in a way that is gonna be visible to maybe 10 million people in Southern California because this rocket’s gonna go right by LA, right by San Diego. And if people are willing to get up at four o’clock in the morning, they should see a pretty cool light show that day.”

The Atlas V will take it up to orbit and the Centaur will give it its push towards Mars, after which it will cruise for six months or so, arriving late in the Martian afternoon on November 26 (Earth calendar).

Its landing will be as exciting (and terrifying) as Phoenix’s and many others. When it hits the Martian atmosphere, InSight will be going more than 13,000 MPH. It’ll slow down first using the atmosphere itself, losing 90 percent of its velocity as friction against a new, reinforced heat shield. A parachute takes off another 90 percent, but it’ll still be going more than 100 MPH, which would make for an uncomfortable landing. So a couple thousand feet up it will transition to landing jets that will let it touch down at a stately 5.4 MPH at the desired location and orientation.
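
For a sense of how those figures stack up, here is the back-of-the-envelope arithmetic implied above; the percentages are the ones quoted, and the intermediate speeds simply follow from multiplying them out.

```python
# Back-of-the-envelope check of the deceleration figures quoted above.
entry_mph = 13_000                          # speed hitting the Martian atmosphere
after_heat_shield = entry_mph * 0.10        # ~90% lost to atmospheric friction
after_parachute = after_heat_shield * 0.10  # parachute sheds another ~90%
print(after_heat_shield)                    # 1300.0 mph
print(after_parachute)                      # 130.0 mph, still "more than 100 MPH"
# The landing jets then bring it down to the stately 5.4 mph touchdown.
```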

After the dust has settled (literally) and the lander has confirmed everything is in working order, it will deploy its circular, fanlike solar arrays and get to work.

Robot arms and self-hammering robomoles

InSight’s mission is to get into the geology of Mars with more detail and depth than ever before. To that end it is packing gear for three major experiments.

SEIS is a collection of six seismic sensors (making the name a tidy bilingual, bidirectional pun) that will sit on the ground under what looks like a tiny Kingdome and monitor the slightest movement of the ground underneath. Whether tiny high-frequency vibrations or longer-period oscillations, all should be detected.

“Seismology is the method that we’ve used to gain almost everything we know, all the basic information about the interior of the Earth, and we also used it back during the Apollo era to understand and to measure sort of the properties of the inside of the moon,” Banerdt said. “And so, we want to apply the same techniques but use the waves that are generated by Mars quakes, by meteorite impacts to probe deep into the interior of Mars all the way down to its core.”

The heat flow and physical properties probe is an interesting one. It will monitor the temperature of the planet below the surface continually for the duration of the mission, but in order to do so it first has to dig its way down. For that purpose it’s equipped with what the team calls a “self-hammering mechanical mole.” Pretty self-explanatory, right?

The “mole” is sort of like a hollow, inch-thick, 16-inch-long nail that will use a spring-loaded tungsten block inside itself to drive itself into the rock. It’s estimated that it will take somewhere between 5,000 and 20,000 strikes to get deep enough to escape the daily and seasonal temperature changes at the surface.

Lastly there’s the Rotation and Interior Structure Experiment, which actually doesn’t need a giant nail, a tiny Kingdome or anything like that. The experiment involves tracking the position of InSight with extreme precision as Mars rotates, using its radio connection with Earth. It can be located to within about four inches, which when you think about it is pretty unbelievable to begin with. The way that position varies may indicate a wobble in the planet’s rotation and consequently shed light on its internal composition. Combined with data from similar experiments in the ’70s and ’90s, it should let planetologists determine how molten the core is.

“In some ways, InSight is like a scientific time machine that will bring back information about the earliest stages of Mars’ formation 4.5 billion years ago,” said Banerdt in an earlier news release. “It will help us learn how rocky bodies form, including Earth, its moon, and even planets in other solar systems.”

In another space first, InSight has a robotic arm that will not just do things like grab rocks to look at, but will lift items from its own inventory and deploy them into its workspace. Its little fingers will grip handles on top of each deployable instrument just like a human might. Well, maybe a little differently, but the principle is the same. At nearly 8 feet long, it has a bit more reach than the average astronaut.

Cubes riding shotgun

One of the MarCO CubeSats.

InSight is definitely the main payload, but it’s not the only one. Launching on the same rocket are two CubeSats, known collectively as Mars Cube One, or MarCO. These “briefcase-size” guys will separate from the rocket around the same time as InSight, but take slightly different trajectories. They don’t have the control to adjust their motion and enter an orbit, so they’ll just zoom by Mars right as InSight is landing.

CubeSats launch all the time, though, right? Sure, into Earth orbit. This will be the first attempt to send CubeSats to another planet. If successful, there’s no limit to what could be accomplished, assuming you don’t need to pack anything bigger than a breadbox.

The spacecraft aren’t carrying any super-important experiments; there are two in case one fails, and both are only equipped with UHF antennas to send and receive data, and a couple of low-resolution visible-light cameras. The experiment here is really the CubeSats themselves and this launch technique. If they make it to Mars, they might be able to help send InSight’s signal home, and if they keep operating beyond that, it’s just icing on the cake.

You can follow along with InSight’s launch here; there’s also the traditional anthropomorphized Twitter account. We’ll post a link to the live stream as soon as it goes up.

Smart dresser helps dementia sufferers put their clothes on right

It goes without saying that getting dressed is one of the most critical steps in our daily routine. But long practice has made it second nature, and people suffering from dementia may lose that familiarity, making dressing a difficult and frustrating process. This smart dresser from NYU is meant to help them through the process while reducing the load on overworked caregivers.

It may seem that replacing responsive human help with a robotic dresser is a bit insensitive. But there are rarely enough caregivers to help everyone in a timely manner at, say, a nursing care facility, and the residents themselves might very well prefer the privacy and independence conferred by such a solution.

“Our goal is to provide assistance for people with dementia to help them age in place more gracefully, while ideally giving the caregiver a break as the person dresses – with the assurance that the system will alert them when the dressing process is completed or prompt them if intervention is needed,” explained the project’s leader, Winslow Burleson, in an NYU news release.

DRESS, as the team calls the device, is essentially a five-drawer dresser with a tablet on top that serves as both display and camera, monitoring and guiding the user through the dressing process.

There are lots of things that can go wrong when you’re putting on your clothes, and really only one way it can go right: shirts go on right side out and trousers forwards, socks on both feet, and so on. That simplifies the problem for DRESS, which looks for coded tags attached to the clothes to check that each item is on correctly and in the right order, so no one attempts to put on their shoes before their trousers. Lights on each drawer signal the next item of clothing to don.
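
Here is a minimal sketch of how such an ordering check could work, assuming each garment’s tag reports an item type and an orientation flag; the tag format, the sequence and the prompts are illustrative assumptions, not the actual DRESS logic.

```python
# Illustrative sketch of a dressing-order check. The tag format and the
# expected sequence are assumptions for the example, not the real DRESS system.

EXPECTED_ORDER = ["shirt", "trousers", "socks", "shoes"]

def check_step(detected_item: str, oriented_correctly: bool,
               already_donned: list[str]) -> str:
    """Return a prompt for the user, or an alert for the caregiver."""
    expected = EXPECTED_ORDER[len(already_donned)]
    if detected_item != expected:
        return f"alert: expected {expected}, saw {detected_item}"
    if not oriented_correctly:
        return f"prompt: {detected_item} appears inside out or backwards"
    already_donned.append(detected_item)
    if len(already_donned) == len(EXPECTED_ORDER):
        return "done: notify caregiver that dressing is complete"
    return f"ok: next, open the {EXPECTED_ORDER[len(already_donned)]} drawer"

donned: list[str] = []
print(check_step("shirt", True, donned))   # ok: next, open the trousers drawer
print(check_step("shoes", True, donned))   # out of order -> caregiver alert
```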

If there’s any problem — the person can’t figure something out, can’t find the right drawer or gets distracted, for instance — the caregiver is alerted and will come help. But if all goes right, the person will have dressed themselves all on their own, something that might not have been possible before.

DRESS is just a prototype right now, a proof of concept to demonstrate its utility. The team is looking into improving the vision system, standardizing clothing folding and enlarging or otherwise changing the coded tags on each item.

This soft robotic arm is straight out of Big Hero 6 (it’s even from Disney)

The charming robot at the heart of Disney’s Big Hero 6, Baymax, isn’t exactly realistic, but its puffy bod is an (admittedly aspirational) example of the growing field of soft robotics. And now Disney itself has produced a soft robot arm that seems like it could be a prototype from the movie.

Created by Disney Research roboticists, the arm seems clearly inspired by Baymax, from the overstuffed style and delicate sausage fingers to the internal projector that can show status or information to nearby people.

“Where physical human-robot interaction is expected, robots should be compliant and reactive to avoid human injury and hardware damage,” the researchers write in the paper describing the system. “Our goal is the realization of a robot arm and hand system which can physically interact with humans and gently manipulate objects.”

The mechanical parts of the arm are ordinary enough — it has an elbow and wrist and can move around the way many other robot arms do, using the same servos and such.

But around the joints are what look like big pillows, which the researchers call “force sensing modules.” They’re filled with air and can detect pressure on them. This has the dual effect of protecting the servos from humans and vice versa, while also allowing natural tactile interactions.

“Distributing individual modules over the various links of a robot provides contact force sensing over a large area of the robot and allows for the implementation of spatially aware, engaging physical human-robot interactions,” they write. “The independent sensing areas also allow a human to communicate with the robot or guide its motions through touch.”

Like hugging, as one of the researchers demonstrates:

Presumably in this case the robot (assuming the rest of the robot were attached) would understand that it is being hugged, and reciprocate or otherwise respond.

The fingers are also soft and filled with air; they’re created in a 3D printer that can lay down both rigid and flexible materials. Pressure sensors within each inflatable finger let the robot know whether, for example, one fingertip is pressing too hard or bearing all the weight, signaling it to adjust its grip.
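
Here is a minimal sketch of the kind of grip-balancing logic that per-finger pressure sensing enables; the pressure limit, gain and proportional adjustment are assumptions for illustration, not Disney Research’s controller.

```python
# Illustrative grip-balancing loop. Thresholds and gains are invented for the
# example; this is not Disney Research's actual controller.

MAX_SAFE_PRESSURE = 30.0   # hypothetical per-fingertip limit, in kPa
GAIN = 0.05                # how aggressively to rebalance

def adjust_grip(pressures: list[float], commands: list[float]) -> list[float]:
    """Ease off fingers pressing too hard or bearing a disproportionate load,
    and tighten the others slightly to keep hold of the object."""
    mean_p = sum(pressures) / len(pressures)
    new_commands = []
    for p, cmd in zip(pressures, commands):
        if p > MAX_SAFE_PRESSURE or p > 1.5 * mean_p:
            new_commands.append(max(0.0, cmd - GAIN * (p - mean_p)))
        else:
            new_commands.append(cmd + 0.5 * GAIN * (mean_p - p))
    return new_commands

# One fingertip bears most of the weight; its command is eased off.
print(adjust_grip([40.0, 10.0, 12.0], [0.8, 0.8, 0.8]))
```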

This is still very much a prototype; the sensors can’t detect the direction of a force yet, and the materials and construction are, by design, not airtight, meaning the modules have to be continuously pumped full of air. But it still shows what they want it to show: that a traditional “hard” robot can be retrofitted into a soft one with a bit of ingenuity. We’re still a long way from Baymax, but it’s more science than fiction now.

Technique to beam HD video with 99 percent less power could sharpen the eyes of smart homes

Everyone seems to be insisting on installing cameras all over their homes these days, which seems incongruous with the ongoing privacy crisis — but that’s a post for another time. Today, we’re talking about enabling those cameras to send high-definition video signals wirelessly without killing their little batteries. A new technique makes beaming video out more than 99 percent more efficient, possibly making batteries unnecessary altogether.

Cameras found in smart homes or wearables need to transmit HD video, but it takes a lot of power to process that video and then transmit the encoded data over Wi-Fi. Small devices leave little room for batteries, and they’ll have to be recharged frequently if they’re constantly streaming. Who’s got time for that?

The idea behind this new system, created by a University of Washington team led by prolific researcher Shyam Gollakota, isn’t fundamentally different from some others that are out there right now. Devices with low data rates, like a digital thermometer or motion sensor, can use something called backscatter to send a low-power signal consisting of a couple of bytes.

Backscatter is a way of sending a signal that requires very little power, because the device transmitting the data isn’t the one generating the signal. A signal is sent out from one source, say a router or phone, and another antenna essentially reflects that signal, modifying it as it does so. By having it blink on and off, you could indicate 1s and 0s, for instance.

UW’s system attaches the camera’s output directly to the antenna, so the brightness of a pixel correlates directly with the length of the signal reflected. A short pulse means a dark pixel, a longer one is lighter, and the longest length indicates white.

Some clever manipulation of the video data by the team reduced the number of pulses necessary to send a full video frame, from sharing some data between pixels to using a “zigzag” scan pattern (left to right, then right to left). To get color, each pixel needs to have its color channels sent in succession, but this too can be optimized.
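
Here is a minimal sketch of that encoding idea: pixel brightness maps to pulse length, and rows are traversed in a zigzag so spatially adjacent pixels stay adjacent in the stream. The pulse-length range and the tiny frame are assumptions for illustration, not the published design.

```python
# Illustrative sketch of brightness-to-pulse-length encoding with a zigzag row
# scan. Pulse lengths and frame layout are invented, not the paper's parameters.

def pixel_to_pulse_us(brightness: int, min_us: float = 1.0,
                      max_us: float = 10.0) -> float:
    """Map an 8-bit brightness to a reflection pulse length: short = dark,
    long = light."""
    return min_us + (brightness / 255.0) * (max_us - min_us)

def zigzag_scan(frame: list[list[int]]) -> list[float]:
    """Encode a grayscale frame row by row, alternating direction each row."""
    pulses = []
    for y, row in enumerate(frame):
        ordered = row if y % 2 == 0 else row[::-1]
        pulses.extend(pixel_to_pulse_us(px) for px in ordered)
    return pulses

frame = [[0, 128, 255],
         [255, 128, 0]]
print(zigzag_scan(frame))
```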

Assembly and rendering of the video is accomplished on the receiving end, for example on a phone or monitor, where power is more plentiful.

In the end, a full-color HD signal at 60FPS can be sent with less than a watt of power, and a more modest but still very useful signal — say, 720p at 10FPS — can be sent for under 80 microwatts. That’s a huge reduction in power draw, mainly achieved by eliminating the entire analog-to-digital converter and on-chip compression. At those levels, you can essentially pull all the power you need straight out of the air.

They put together a demonstration device with off-the-shelf components, though without custom chips it won’t reach those microwatt power levels; still, the technique works as described. The prototype helped them determine what type of sensor and chip package would be necessary in a dedicated device.

A frame sent during one of the tests. This transmission was going at about 10FPS.

Of course, it would be a bad idea to just blast video frames into the ether without any encryption; luckily, the way the data is coded and transmitted can easily be modified to be meaningless to an observer. Essentially you’d just add an interfering signal known to both devices before transmission, and the receiver can subtract it.
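
A minimal sketch of that masking idea, assuming the two devices share a seed for a pseudo-random offset sequence; this illustrates the principle only and is not the scheme described in the paper.

```python
# Illustrative masking of pulse lengths with a shared pseudo-random offset.
# The shared-seed scheme is an assumption for the example, not the paper's.
import random

def mask(pulses: list[float], seed: int) -> list[float]:
    rng = random.Random(seed)
    return [p + rng.uniform(0.0, 10.0) for p in pulses]

def unmask(masked: list[float], seed: int) -> list[float]:
    rng = random.Random(seed)
    return [m - rng.uniform(0.0, 10.0) for m in masked]

shared_seed = 42                     # known only to transmitter and receiver
sent = mask([1.0, 5.5, 10.0], shared_seed)
print(sent)                          # looks like noise to an eavesdropper
print(unmask(sent, shared_seed))     # receiver recovers the original pulses
```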

Video is the first application the team thought of, but there’s no reason their technique for efficient, quick backscatter transmission couldn’t be used for non-video data.

The tech is already licensed to Jeeva Wireless, a startup founded by UW researchers (including Gollakota) a while back that’s already working on commercializing another low-power wireless device. You can read the details about the new system in their paper, presented last week at the Symposium on Networked Systems Design and Implementation.

Watch SpaceX launch NASA’s new planet-hunting satellite here

It’s almost time for SpaceX to launch NASA’s TESS, a space telescope that will search for exoplanets across nearly the entire night sky. The launch has been delayed more than once already: originally scheduled for March 20, it slipped to April 16 (Monday), then some minor issues pushed it to today, at 3:51 PM Pacific time to be precise. You can watch the launch live below.

TESS, which stands for Transiting Exoplanet Survey Satellite, is basically a giant wide-angle camera (four of them, actually) that will snap pictures of the night sky from a wide, eccentric, never-before-tried orbit.

The technique it will use is fundamentally the same as that employed by NASA’s long-running and highly successful Kepler mission. When a distant planet passes between us and its star, it causes a momentary decrease in that star’s brightness. TESS will monitor thousands of stars simultaneously for such “transits,” watching a single section of sky for a month straight before moving on to another.
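
Here is a minimal sketch of how a transit shows up in a light curve: the star’s measured brightness dips by a small fraction while the planet crosses, then recovers. The dip depth, duration and threshold detector are purely illustrative, not how the actual survey pipeline works.

```python
# Illustrative transit detection: flag dips in a star's brightness below a
# baseline. Depth, duration and threshold are invented for the example.

def find_transits(light_curve: list[float], baseline: float = 1.0,
                  dip_threshold: float = 0.995) -> list[int]:
    """Return indices where measured flux falls below the dip threshold."""
    return [i for i, flux in enumerate(light_curve)
            if flux < baseline * dip_threshold]

# A toy light curve: flat at 1.0, with a ~1% dip while a planet transits.
curve = [1.0] * 5 + [0.99] * 3 + [1.0] * 5
print(find_transits(curve))   # [5, 6, 7]
```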

Within two years, it will have imaged 85 percent of the sky, hundreds of times the area Kepler observed, and on completely different stars: brighter ones that should yield more data.

TESS, which is about the size of a small car, will launch on top of a SpaceX Falcon 9 rocket. SpaceX will attempt to recover the first stage of the rocket by having it land on a drone ship, and the nose cone will, hopefully, get a gentle parachute-assisted splashdown in the Atlantic, where it too can be retrieved.

The feed below should go live 15 minutes before launch, at about 3:35 PM Pacific.

Google’s ‘Semantic Experiences’ let you play word games with its AI

Google does a great deal of research into natural language processing and synthesis, but not every project has to be a new Assistant feature or voice improvement. The company has a little fun now and then, when the master AI permits it, and today it has posted a few web experiments that let you engage with its word-association systems in a playful way.

First is an interesting way of searching through Google Books, that fabulous database so rarely mentioned these days. Instead of just searching for text or title verbatim, you can ask questions, like “Why was Napoleon exiled?” or “What is the nature of consciousness?”

It returns passages from books that, based on their language alone, are closely associated with your question. And while the results are hit and miss, the matching is nicely flexible: sentences answering my questions surfaced even when they didn’t sit right next to the obvious keywords or address the question directly.

I found, however, that it’s not a very intuitive way to interact with a body of knowledge, at least for me. When I ask a question, I generally want to receive an answer, not an assortment of quotes that may or may not bear on it. So while I can’t really picture using this regularly, it’s an interesting way to demonstrate the flexibility of the semantic engine at work here. And it may very well expose you to some new authors, though the 100,000 books included in the database are something of a mixed bag.

The second project Google highlights is a game it calls Semantris, though I must say it’s rather too simple to deserve the “-tris” moniker. You’re given a list of words, with one in particular highlighted. You type the word you most associate with that one, and the words reorder with, as Google’s AI understands it, the closest matches to your word at the bottom. If you move the target word to the bottom, it blows up a few words and adds some more.
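
As a rough illustration of the ranking mechanic, here is a minimal sketch that reorders a word list by cosine similarity between embedding vectors; the toy embeddings are made up and stand in for whatever learned encoder Google actually uses.

```python
# Illustrative word-association ranking via embedding similarity. The toy
# 3-dimensional embeddings are invented; Google uses a learned encoder.
import math

EMBEDDINGS = {
    "boat":  [0.9, 0.1, 0.0],
    "water": [0.8, 0.2, 0.1],
    "desk":  [0.0, 0.9, 0.3],
    "lamp":  [0.1, 0.8, 0.4],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

def rank(board: list[str], typed_word: str) -> list[str]:
    """Reorder the board so the words closest to the typed word sink to the
    bottom of the list, as in Semantris."""
    return sorted(board, key=lambda w: cosine(EMBEDDINGS[w], EMBEDDINGS[typed_word]))

print(rank(["boat", "desk", "lamp"], "water"))   # 'boat' ends up at the bottom
```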

It’s a nice little time waster, but I couldn’t help but feel I was basically just a guinea pig providing testing and training for Google’s word association agent. It was also pretty easy — I didn’t feel much of an achievement for associating “water” with “boat” — but maybe it gets harder as it goes on. I’ve asked Google if our responses are feeding into the AI’s training data.

For the coders and machine learning enthusiasts among you, Google has also provided some pre-trained TensorFlow modules, and of course documented their work in a couple of papers linked in the blog post.

Who’s a good AI? Dog-based data creates a canine machine learning system

We’ve trained machine learning systems to identify objects, navigate streets and recognize facial expressions, but as difficult as they may be, they don’t even touch the level of sophistication required to simulate, for example, a dog. Well, this project aims to do just that — in a very limited way, of course. By observing the behavior of A Very Good Girl, this AI learned the rudiments of how to act like a dog.

It’s a collaboration between the University of Washington and the Allen Institute for AI, and the resulting paper will be presented at CVPR in June.

Why do this? Well, although much work has been done to simulate the sub-tasks of perception like identifying an object and picking it up, little has been done in terms of “understanding visual data to the extent that an agent can take actions and perform tasks in the visual world.” In other words, act not as the eye, but as the thing controlling the eye.

And why dogs? Because they’re intelligent agents of sufficient complexity, “yet their goals and motivations are often unknown a priori.” In other words, dogs are clearly smart, but we have no idea what they’re thinking.

As an initial foray into this line of research, the team wanted to see if by monitoring the dog closely and mapping its movements and actions to the environment it sees, they could create a system that accurately predicted those movements.

In order to do so, they loaded up a Malamute named Kelp M. Redmon with a basic suite of sensors. There’s a GoPro camera on Kelp’s head, six inertial measurement units (on the legs, tail and trunk) to tell where everything is, a microphone and an Arduino that tied the data together.

They recorded many hours of activities — walking in various environments, fetching things, playing at a dog park, eating — syncing the dog’s movements to what it saw. The result is the Dataset of Ego-Centric Actions in a Dog Environment, or DECADE, which they used to train a new AI agent.

This agent, given certain sensory input — say a view of a room or street, or a ball flying past it — was to predict what a dog would do in that situation. Not to any serious level of detail, of course — but even just figuring out how to move its body and to where is a pretty major task.

“It learns how to move the joints to walk, learns how to avoid obstacles when walking or running,” explained Hessam Bagherinezhad, one of the researchers, in an email. “It learns to run for the squirrels, follow the owner, track the flying dog toys (when playing fetch). These are some of the basic AI tasks in both computer vision and robotics that we’ve been trying to solve by collecting separate data for each task (e.g. motion planning, walkable surface, object detection, object tracking, person recognition).”
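
To give a sense of how such a prediction task can be framed, here is a minimal sketch: visual features from the current frame go in, the dog’s next joint movements come out, and a model is fit on the recorded pairs. The random arrays and the plain least-squares model are stand-ins; the paper’s actual approach is a learned network trained on the DECADE recordings.

```python
# Illustrative framing of the "predict what the dog does next" task: map visual
# features at time t to joint movements at t+1. The random data and the
# least-squares model are stand-ins for the paper's network and DECADE data.
import numpy as np

rng = np.random.default_rng(0)
frame_features = rng.normal(size=(500, 32))   # e.g. pooled image features per frame
joint_deltas = rng.normal(size=(500, 12))     # e.g. IMU-derived joint movements

# Fit a linear predictor W minimizing ||frame_features @ W - joint_deltas||^2.
W, *_ = np.linalg.lstsq(frame_features, joint_deltas, rcond=None)

new_frame = rng.normal(size=(1, 32))
predicted_move = new_frame @ W                # predicted next joint movements
print(predicted_move.shape)                   # (1, 12)
```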

That can produce some rather complex data: For example, the dog model must know, just as the dog itself does, where it can walk when it needs to get from here to there. It can’t walk on trees, or cars, or (depending on the house) couches. So the model learns that as well, and this can be deployed separately as a computer vision model for finding out where a pet (or small legged robot) can get to in a given image.

This was just an initial experiment, the researchers say, with success but limited results. Others may consider bringing in more senses (smell is an obvious one) or seeing how a model produced from one dog (or many) generalizes to other dogs. They conclude: “We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world.”
