robotics

This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode, the positions of the viewer's eyebrows and eyelids, and the position of their head, are mirrored by SEER. It's not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked, it managed a rather good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
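To make that concrete, here is a minimal sketch of the kind of mapping an imitative mode like this performs: normalized face measurements (brow raise, eyelid opening, head pose) get turned into smoothed servo angles. This is not Todo's code; the read_face_landmarks() input and the servo ranges are invented stand-ins for whatever tracker and actuators the real robot uses.

```python
import time

# Hypothetical landmark reader: returns normalized values in [0, 1] for the
# nearest face (0.5 = neutral). A real system would compute these with a
# face-tracking library; these numbers are stand-ins.
def read_face_landmarks():
    return {"brow_raise": 0.7, "eye_open": 0.6, "head_yaw": 0.45, "head_pitch": 0.55}

def to_servo_angle(norm_value, lo_deg, hi_deg):
    """Map a normalized landmark value onto a servo's mechanical range."""
    norm_value = min(max(norm_value, 0.0), 1.0)
    return lo_deg + norm_value * (hi_deg - lo_deg)

def smooth(prev, target, alpha=0.2):
    """Low-pass filter to suppress the jitter that noisy face data causes."""
    return prev + alpha * (target - prev)

state = {"brow": 90.0, "lid": 90.0, "yaw": 90.0, "pitch": 90.0}
for _ in range(3):  # one iteration per camera frame in a real loop
    face = read_face_landmarks()
    state["brow"] = smooth(state["brow"], to_servo_angle(face["brow_raise"], 70, 110))
    state["lid"] = smooth(state["lid"], to_servo_angle(face["eye_open"], 60, 120))
    state["yaw"] = smooth(state["yaw"], to_servo_angle(face["head_yaw"], 45, 135))
    state["pitch"] = smooth(state["pitch"], to_servo_angle(face["head_pitch"], 60, 120))
    print({k: round(v, 1) for k, v in state.items()})
    time.sleep(0.03)  # roughly a 30 Hz update
```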

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.
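The eye-contact behavior can be thought of as an even simpler mapping: find the nearest face in the camera image and convert its pixel position into pan/tilt angles for the eyes. The sketch below assumes a made-up camera resolution and field of view; it only illustrates the geometry, not SEER's actual implementation.

```python
# Assumed camera parameters (stand-ins): resolution in pixels and the
# horizontal/vertical fields of view in degrees.
IMG_W, IMG_H = 640, 480
FOV_H, FOV_V = 60.0, 45.0

def gaze_angles(face_x_px, face_y_px):
    """Convert the pixel position of the nearest face into pan/tilt angles
    (degrees) that would point the eyes straight at it."""
    pan = (face_x_px / IMG_W - 0.5) * FOV_H
    tilt = (0.5 - face_y_px / IMG_H) * FOV_V
    return pan, tilt

# Example: a face detected slightly right of and above the image centre.
print(gaze_angles(420, 180))  # -> roughly (9.4, 5.6) degrees
```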

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

This bipedal robot has a flying head

Making a bipedal robot is hard. You have to maintain exquisite balance at all times and, even with the amazing things Atlas can do, there is still a chance that your crazy robot will fall over and bop its electronic head. But what if that head is a quadcopter?

Researchers at the University of Tokyo have done just that with their wild Aerial-Biped. The robot isn't truly bipedal; instead, it's designed to give the appearance of bipedal walking without the tricky problem of actually balancing on two legs. Think of the legs as a fun bit of puppetry that mimics walking but doesn't really walk.

“The goal is to develop a robot that has the ability to display the appearance of bipedal walking with dynamic mobility, and to provide a new visual experience. The robot enables walking motion with very slender legs like those of a flamingo without impairing dynamic mobility. This approach enables casual users to choreograph biped robot walking without expertise. In addition, it is much cheaper compared to a conventional bipedal walking robot,” the team told IEEE.

The robot is similar to the bizarre-looking Ballu, a blimp robot with a floating head and spindly legs. The new robot learned how to walk convincingly through machine learning, a feat that gives it a realistic gait even though it is really an aerial system. It’s definitely a clever little project and could be interesting at a theme park or in an environment where a massive bipedal robot falling over on someone might be discouraged.
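The team says the gait was learned, so the real mapping from body motion to leg motion comes out of a machine learning model. As a rough illustration of what that mapping does, here is a hand-written toy version in which a phase variable, advanced by the quadcopter's forward speed, drives hip and knee angles; every constant in it is an assumption, not a value from the project.

```python
import math

def leg_pose(phase, speed):
    """Toy walking-appearance generator: hip and knee angles (degrees) for
    one leg as a function of gait phase [0, 1) and forward speed (m/s).
    The opposite leg uses phase + 0.5."""
    swing = min(speed / 0.5, 1.0)  # scale amplitude with speed (assumed limit)
    hip = 20.0 * swing * math.sin(2 * math.pi * phase)
    knee = 30.0 * swing * max(0.0, math.sin(2 * math.pi * phase))  # bend only in swing
    return hip, knee

phase, dt, speed = 0.0, 0.02, 0.3   # 50 Hz update, 0.3 m/s forward drift (assumed)
stride_length = 0.4                  # metres per gait cycle (assumed)
for _ in range(5):
    left = leg_pose(phase, speed)
    right = leg_pose((phase + 0.5) % 1.0, speed)
    print(f"phase={phase:.2f} left={left} right={right}")
    phase = (phase + speed * dt / stride_length) % 1.0
```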

This happy robot helps kids with autism

A little bot named QTrobot from LuxAI could be the link between therapists, parents, and autistic children. The robot, which features an LCD face and robotic arms, allows kids who are overwhelmed by human contact to become more comfortable in a therapeutic setting.

The project comes from LuxAI, a spin-off of the University of Luxembourg. They will present their findings at the RO-MAN 2018 conference at the end of this month.

“The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” co-founder Aida Nazarikhorram told IEEE. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”

The robot reduces anxiety in autistic children and the researchers saw many behaviors – hand flapping, for example – slow down with the robot in the mix.

Interestingly, the robot is a better choice for children than an app or tablet. Because the robot is "embodied," the researchers found that it draws attention and improves learning, especially when compared to a standard iPad/educational-app pairing. In other words, children play with tablets but work with robots.

The robot is entirely self-contained and easily programmable. It can run for hours at a time and includes a 3D camera and full processor.

The researchers found that the robot doesn’t become the focus of the therapy but instead helps the therapist connect with the patient. This, obviously, is an excellent outcome for an excellent (and cute) little piece of technology.

Analysis backs claim drones were used to attack Venezuela’s president

Analysis of open source information carried out by the investigative website Bellingcat suggests drones that had been repurposed as flying bombs were indeed used in an attack on the president of Venezuela this weekend.

The Venezuelan government claimed three days ago that an attempt had been made to assassinate President Nicolás Maduro using two drones loaded with explosives. The president had been giving a speech which was being broadcast live on television when the incident occurred.

Initial video from a state-owned television network showed the reaction of Maduro, those around him and a parade of soldiers at the event to what appeared to be two blasts somewhere off camera. But the footage did not include shots of any drones or explosions.

The AP also reported that firefighters at the scene had cast doubt on the drone attack claim — suggesting there had instead been a gas explosion in a nearby flat.

Since then more footage has emerged, including videos purporting to show a drone exploding and a drone tumbling alongside a building.

Video evidence of the second drone, which exploded in the air without causing collateral damage #Sucesos Courtesy video pic.twitter.com/ipWR2sbYvW

— Caracas News 24 🌐 (@CaracasNews24) August 5, 2018

Bellingcat has carried out an analysis of publicly available information related to the attack, syncing timings from the state broadcast of Maduro's speech and using frame-by-frame analysis, photos and satellite imagery of Caracas to pinpoint where the additional footage was shot, in order to determine whether the drone attack claim stands up.

The Venezuelan government has claimed the drones used were DJI Matrice 600s, each carrying approximately 1kg of C4 plastic explosive and, when detonated, capable of causing damage at a radius of around 50 meters.

The DJI Matrice 600 is a commercial model normally used for industrial work, with a U.S. price tag of around $5,000 apiece; 1kg of plastic explosive is available commercially (for demolition purposes) at a cost of around $30, suggesting the attack could have cost little over $10k to carry out.
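For what it's worth, the "little over $10k" figure follows directly from those reported prices:

```python
# Back-of-the-envelope cost using the reported figures.
drone_unit_cost = 5_000   # approximate U.S. price of a DJI Matrice 600, USD
explosive_cost = 30       # approximate commercial price of 1 kg of plastic explosive, USD
num_drones = 2

print(num_drones * (drone_unit_cost + explosive_cost))  # 10060
```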

Bellingcat says its analysis supports the government’s claim that the drone model used was a DJI Matrice 600, noting that the drones involved in the event each had six rotors. It also points to a photo of drone wreckage which appears to show the distinctive silver rotor tip of the model, although it also notes the drones appear to have had their legs removed.

Venezuela’s interior minister, Nestor Reverol, also claimed the government thwarted the attack using “special techniques and [radio] signal inhibitors”, which “disoriented” the drone that detonated closest to the presidential stand — a capability Bellingcat notes the Venezuelan security services are reported to have.

The second drone was said by Reverol to have “lost control” and crashed into a nearby building.

Bellingcat says it is possible to geolocate the video of the falling drone to the same location as the fire in the apartment that firefighters had claimed was caused by a gas canister explosion. It adds that images taken of this location during the fire show a hole in the wall of the apartment in the vicinity of where the drone would have crashed.

“It is a very likely possibility that the downed drone subsequently detonated, creating the hole in the wall of this apartment, igniting a fire, and causing the sound of the second explosion which can be heard in Video 2 [of the state TV broadcast of Maduro’s speech],” it further suggests.

Here’s its conclusion:

From the open sources of information available, it appears that an attack took place using two DBIEDs while Maduro was giving a speech. Both the drones appear visually similar to DJI Matrice 600s, with at least one displaying features that are consistent with this model. These drones appear to have been loaded with explosive and flown towards the parade.

The first drone detonated somewhere above or near the parade, the most likely cause of the casualties announced by the Venezuelan government and pictured on social media. The second drone crashed and exploded approximately 14 seconds later and 400 meters away from the stage, and is the most likely cause of the fire which the Venezuelan firefighters described.

It also considers the claim of attribution by a group on social media calling itself "Soldados de Franelas" (aka 'T-Shirt Soldiers', a reference to protestors wrapping a t-shirt around their heads to cover their faces and protect their identities). Bellingcat suggests it's not clear from the group's Twitter messages that they are "unequivocally claiming responsibility for the event", owing to the use of passive language and to a claim that the drones were shot down by government snipers — which it says "does not appear to be supported by the open source information available".

NASA’s Open Source Rover lets you build your own planetary exploration platform

Got some spare time this weekend? Why not build yourself a working rover from plans provided by NASA? The spaceniks at the Jet Propulsion Laboratory have all the plans, code, and materials for you to peruse and use — just make sure you’ve got $2,500 and a bit of engineering know-how. This thing isn’t made out of Lincoln Logs.

The story is this: after Curiosity landed on Mars, JPL wanted to create something a little smaller and less complex that it could use for educational purposes. ROV-E, as they called this new rover, traveled with JPL staff throughout the country.

Unsurprisingly, one of the questions most often asked was whether a class or group could build one of their own. The answer, unfortunately, was no: though far less expensive and complex than a real Mars rover, ROV-E was still too expensive and complex to be a class project. So JPL engineers decided to build one that wasn't.

The result is the JPL Open Source Rover, a set of plans that mimics the key components of Curiosity but is simpler and uses off-the-shelf components.

“I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others,” said JPL’s Tom Soderstrom in a post announcing the OSR. “We wanted to give back to the community and lower the barrier of entry by giving hands on experience to the next generation of scientists, engineers, and programmers.”

The OSR uses a Curiosity-like "rocker-bogie" suspension with corner steering and a pivoting differential, allowing movement over rough terrain, and its brain is a Raspberry Pi. You can find all the parts in the usual supply catalogs and hardware stores, but you'll also need a set of basic tools: a bandsaw to cut metal, a drill press (probably a good idea), a soldering iron, snips and wrenches, and so on.
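To give a flavor of what that Raspberry Pi has to work out, here is a hedged sketch of corner-steering geometry for a rover turning about a point beside it: each steerable wheel is angled so its axis passes through the turn centre, and outer wheels spin faster than inner ones. The chassis dimensions are placeholders rather than the OSR's real measurements, and the OSR's non-steering middle wheels are left out for brevity.

```python
import math

# Placeholder chassis dimensions (metres); the real OSR values will differ.
HALF_WHEELBASE = 0.25   # chassis centre to front/rear axles
HALF_TRACK = 0.20       # chassis centreline to the wheels

def corner_steering(turn_radius):
    """Steering angles (deg) and relative speed factors for the four corner
    wheels of a rover turning about a point turn_radius metres to its side
    (positive = left turn, nonzero)."""
    angles, speeds = {}, {}
    wheels = {
        "front_left":  ( HALF_WHEELBASE,  HALF_TRACK),
        "front_right": ( HALF_WHEELBASE, -HALF_TRACK),
        "rear_left":   (-HALF_WHEELBASE,  HALF_TRACK),
        "rear_right":  (-HALF_WHEELBASE, -HALF_TRACK),
    }
    for name, (x, y) in wheels.items():
        # Point each wheel along a circle centred at (0, turn_radius) so all
        # wheel axes pass through the same turn centre.
        angles[name] = math.degrees(math.atan2(x, turn_radius - y))
        # Outer wheels travel a longer arc, so they must spin faster.
        speeds[name] = math.hypot(x, turn_radius - y) / abs(turn_radius)
    return angles, speeds

angles, speeds = corner_steering(1.0)   # a 1 m left turn
for wheel in angles:
    print(f"{wheel:12s} angle={angles[wheel]:6.1f} deg  speed x{speeds[wheel]:.2f}")
```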

“In our experience, this project takes no less than 200 person-hours to build, and depending on the familiarity and skill level of those involved could be significantly more,” the project’s creators write on the GitHub page.

So basically, unless you're literally rocket scientists, expect double that, although JPL notes that it did work with schools to adjust the building process and instructions.

There’s flexibility built into the plans, too. So you can load custom apps, connect payloads and sensors to the brain, and modify the mechanics however you’d like. It’s open source, after all. Make it your own.

“We released this rover as a base model. We hope to see the community contribute improvements and additions, and we’re really excited to see what the community will add to it,” said project manager Mik Cox. “I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others.”

OpenAI’s robotic hand doesn’t need humans to teach it human behaviors

Gripping something with your hand is one of the first things you learn to do as an infant, but it's far from a simple task, and it only gets more complex and variable as you grow up. This complexity makes it difficult for machines to teach themselves to do, but researchers at the Elon Musk and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but also developed these behaviors all on its own.

Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn't, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.

Yet for a human, picking up an apple isn't so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. Furthermore, you can't just train a bot to do what a human does — you'd have to provide millions of examples to adequately show what a human would do with thousands of given objects.

The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.

The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in hand — but remember, when it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn't have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)

In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
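A sketch of how that kind of domain randomization is typically wired into a training loop is below. The parameter ranges and the run_episode() stand-in are illustrative assumptions, not OpenAI's actual values or simulator.

```python
import random

def sample_randomized_params():
    """Draw one set of simulator parameters. The ranges are illustrative,
    not the ones OpenAI actually used."""
    return {
        "fingertip_friction": random.uniform(0.5, 1.5),
        "object_mass_kg":     random.uniform(0.03, 0.10),
        "object_size_scale":  random.uniform(0.95, 1.05),
        "light_intensity":    random.uniform(0.3, 1.0),
        "camera_jitter_px":   random.uniform(0.0, 3.0),
        "action_delay_ms":    random.uniform(0.0, 40.0),
    }

def run_episode(params):
    """Stand-in for a simulated manipulation episode; a real system would
    step a physics simulator configured with these parameters and return
    the reward earned by the current policy."""
    return random.random()

# Each episode sees slightly different "physics", so the learned policy
# cannot overfit to any single simulated world -- which is what helps it
# transfer to the real hand.
for episode in range(5):
    params = sample_randomized_params()
    reward = run_episode(params)
    print(f"episode {episode}: friction={params['fingertip_friction']:.2f} reward={reward:.2f}")
```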

They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.

The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with the thumb and a single finger while using the rest to spin it to the desired orientation.

What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.

This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.

As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.

PSA: Drone flight restrictions are in force in the UK from today

Consumers using drones in the UK have new safety restrictions they must obey starting today, with a change to the law prohibiting drones from being flown above 400ft or within 1km of an airport boundary.

Anyone caught flouting the new restrictions could be charged with recklessly or negligently acting in a manner likely to endanger an aircraft or a person in an aircraft — which carries a penalty of up to five years in prison or an unlimited fine, or both.
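Expressed as a quick compliance check, the two new limits look something like the sketch below; the thresholds come from the rules above, and everything else is illustrative.

```python
FT_PER_M = 3.28084
MAX_ALTITUDE_FT = 400           # new UK ceiling for drone flights
MIN_AIRPORT_DISTANCE_M = 1_000  # no flying within 1 km of an airport boundary

def flight_is_permitted(altitude_m, distance_to_airport_boundary_m):
    """Return (allowed, reasons) for a planned flight, checking only the two
    restrictions introduced by this amendment."""
    reasons = []
    if altitude_m * FT_PER_M > MAX_ALTITUDE_FT:
        reasons.append(f"altitude {altitude_m * FT_PER_M:.0f} ft exceeds {MAX_ALTITUDE_FT} ft")
    if distance_to_airport_boundary_m < MIN_AIRPORT_DISTANCE_M:
        reasons.append(f"only {distance_to_airport_boundary_m} m from an airport boundary")
    return (not reasons), reasons

print(flight_is_permitted(100, 2_500))  # (True, [])
print(flight_is_permitted(150, 600))    # (False, [...]) -- breaks both limits
```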

The safety restrictions were announced by the government in May, and have been brought in via an amendment to the 2016 Air Navigation Order.

They’re a stop-gap because the government has also been working on a full drone bill — which was originally slated for Spring but has been delayed.

However, the height and airport flight restrictions for drones were pushed forward, given the clear safety risks — after a year-on-year increase in reports of drone incidents involving aircraft.

The Civil Aviation Authority has today published research to coincide with the new laws, saying it’s found widespread support among the public for safety regulations for drones.

Commenting in a statement, the regulator’s assistant director Jonathan Nicholson said: “Drones are here to stay, not only as a recreational pastime, but as a vital tool in many industries — from agriculture to blue-light services — so increasing public trust through safe drone flying is crucial.”

“As recreational drone use becomes increasingly widespread across the UK it is heartening to see that awareness of the Dronecode has also continued to rise — a clear sign that most drone users take their responsibility seriously and are a credit to the community,” he added, referring to the (informal) set of rules developed by the body to promote safe use of consumer drones — ahead of the government legislating.

Additional measures the government has confirmed it will legislate for — announced last summer — include a requirement for owners of drones weighing 250 grams or more to register with the CAA, and for drone pilots to take an online safety test. The CAA says these additional requirements will be enforced from November 30, 2019 — with more information on the registration scheme set to follow next year.

For now, though, UK drone owners just need to make sure they’re not flying too high or too close to airports.

Earlier this month it emerged that the government is also considering age restrictions on drone use, though it remains to be seen whether those proposals will make it into the future drone bill.

SmartArm’s AI-powered prosthesis takes the prize at Microsoft’s Imagine Cup

A pair of Canadian students making a simple, inexpensive prosthetic arm have taken home the grand prize at Microsoft’s Imagine Cup, a global startup competition the company holds yearly. SmartArm will receive $85,000, a mentoring session with CEO Satya Nadella, and some other Microsoft goodies. But they were far from the only worthy team from the dozens that came to Redmond to compete.

The Imagine Cup is an event I personally look forward to, because it consists entirely of smart young students, usually engineers and designers themselves (not yet "serial entrepreneurs"), who are often aiming to solve real-world problems.

In the semi-finals I attended, I saw a pair of young women from Pakistan looking to reduce stillbirth rates with a new pregnancy monitor, an automated eye-checking device that can be deployed anywhere and used by anyone, and an autonomous monitor for water tanks in drought-stricken areas. When I was their age, I was living at my mom’s house, getting really good at Mario Kart for SNES and working as a preschool teacher.

Even Nadella bowed before their ambitions in his appearance on stage at the final event this morning.

“Last night I was thinking, ‘What advice can I give people who have accomplished so much at such a young age?’ And I said, I should go back to when I was your age and doing great things. Then I realized…I definitely wouldn’t have made these finals.”

That got a laugh, but (with apologies to Nadella) it’s probably true. Students today have unbelievable resources available to them and as many of the teams demonstrated, they’re making excellent use of those resources.

Congratulations to Team smartARM from #Canada, champion of today’s #ImagineCup! Watch the live show on demand at https://t.co/BLxnJ9FGxJ 🏆 pic.twitter.com/86itWke2du

— Microsoft Imagine (@MSFTImagine) July 25, 2018

SmartArm in particular combines a clever approach with state-of-the-art tech in a way that's so simple it's almost ridiculous.

The issue they saw as needing a new approach is prosthetic arms, which, as they pointed out, are often either non-functional (think a plastic arm or a simple flexion-based gripper) or highly expensive (a mechanical arm might cost tens of thousands of dollars). Why can't one be both functional and affordable?

Their solution is an extremely interesting and timely one: a relatively simple, actuated, 3D-printed forearm and hand with its own vision system built in. A camera built into the palm captures an image of the item the user aims to pick up, and quickly classifies it — an apple, a key ring, a pen — and selects the correct grip for that object.

The user activates the grip by flexing their upper arm muscles, an action that’s detected by a Myo-like muscle sensor (possibly actually a Myo, but I couldn’t tell from the demo). It sends the signal to the arm to activate the hand movement, and the fingers move accordingly.
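Put together, the pipeline the team describes (classify what the palm camera sees, look up a grip, trigger it on a muscle signal) could be sketched roughly like this. The grip presets, classifier stand-in and EMG threshold are all invented for illustration, not SmartArm's actual values.

```python
# Illustrative grip presets per recognized object class: finger-closure
# fractions (0 = open, 1 = fully closed) for [thumb, index, middle, ring, pinky].
GRIP_PRESETS = {
    "apple":    [0.6, 0.6, 0.6, 0.6, 0.6],   # whole-hand power grasp
    "key_ring": [0.9, 0.9, 0.2, 0.2, 0.2],   # thumb-index pinch
    "pen":      [0.8, 0.8, 0.7, 0.3, 0.3],   # tripod-style grip
}

def classify_palm_image(image):
    """Stand-in for the palm camera's classifier; the real arm runs a trained
    vision model here."""
    return "apple"

def emg_active(sample, threshold=0.4):
    """True when the upper-arm muscle signal (normalized 0..1) crosses the
    assumed trigger threshold."""
    return sample > threshold

def control_step(image, emg_sample, hand_state):
    target_class = classify_palm_image(image)
    if emg_active(emg_sample) and not hand_state["closed"]:
        hand_state.update(closed=True, fingers=GRIP_PRESETS[target_class])
    elif not emg_active(emg_sample) and hand_state["closed"]:
        hand_state.update(closed=False, fingers=[0.0] * 5)  # release; camera exposed again
    return hand_state

state = {"closed": False, "fingers": [0.0] * 5}
for emg in (0.1, 0.7, 0.7, 0.2):  # user relaxes, flexes, holds, relaxes
    state = control_step(image=None, emg_sample=emg, hand_state=state)
    print(state)
```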

It’s still extremely limited — you likely can’t twist a doorknob with it, or reliably grip a knife or fork, and so on. But for many everyday tasks it could still be useful. And the idea of putting the camera in the palm is a high-risk, high-reward one. It is, of course, blocked when you pick up the item, but what does it need to see during that time? You deactivate the grip to put the item down and the camera is exposed again to watch for the next task.

Bear in mind this is not meant as some kind of serious universal hand replacement. But it provides smart, simple functionality for people who might otherwise have had to use a pincer arm or the like. And according to the team, it should cost less than $100. How that's possible, arm sensor included, is unclear to me, but I'm not the one who built a bionic arm, so I'm going to defer to them on this. Even if they miss that target by 50 percent, it would still be a huge bargain, honestly.

There’s an optional subscription that would allow the arm to improve itself over time as it learns more about your habits and objects you encounter regularly — this would also conceivably be used to improve other SmartArms as well.

As for how it looks — rather robotic — the team defended it based on their own feedback from amputees: “They’d rather be asked, ‘Hey, where did you get that arm?’ than ‘What happened to your arm?’” But a more realistic-looking set of fingers is also under development.

The team said they were originally looking for venture funding but ended up getting a grant instead; they’ve got interest from a number of Canadian and American institutions already, and winning the Imagine Cup will almost certainly propel them to greater prominence in the field.

My own questions would be on durability, washing and the kinds of things that really need to be tested in real-world scenarios. What if the camera lens gets dirty or scratched? Will there be color options for people who don’t want to have white “skin” on their arm? What’s the support model? What about insurance?

SmartArm takes the grand prize, but the runners up and some category winners get a bunch of good stuff too. I plan to get in touch with SmartArm and several other teams from the competition to find out more and hear about their progress. I was really quite impressed not just with the engineering prowess but the humanitarianism and thoughtfulness on display this year. Nadella summed it up best:

“One of the things that I always think about is this competition in some sense ups the game, right?” he said at the finals. “People from all over the world are thinking about how do I use technology, how do I learn new concepts, but then more importantly, how do I solve some of these unmet, unarticulated needs? The impact that you all can have is just enormous, the opportunity is enormous. But I also believe there is an amazing sense of responsibility, or a need for responsibility, that we all have to collectively exercise given the opportunity we have been given.”

This smart prosthetic ankle adjusts to rough terrain

Prosthetic limbs are getting better and more personalized, but useful as they are, they’re still a far cry from the real thing. This new prosthetic ankle is a little closer than others, though: it moves on its own, adapting to its user’s gait and the surface on which it lands.

Your ankle does a lot of work when you walk: lifting your toe out of the way so you don’t scuff it on the ground, controlling the tilt of your foot to minimize the shock when it lands or as you adjust your weight, all while conforming to bumps and other irregularities it encounters. Few prostheses attempt to replicate these motions, meaning all that work is done in a more basic way, like the bending of a spring or compression of padding.

But this prototype ankle from Michael Goldfarb, a mechanical engineering professor at Vanderbilt, goes much further than passive shock absorption. Inside the joint are a motor and actuator, controlled by a chip that senses and classifies motion and determines how each step should look.

“This device first and foremost adapts to what’s around it,” Goldfarb said in a video documenting the prosthesis.

“You can walk up slopes, down slopes, up stairs and down stairs, and the device figures out what you’re doing and functions the way it should,” he added in a news release from the university.

When it senses that the foot has lifted up for a step, it can lift the toe to keep it clear, also exposing the heel so that when the limb comes down, it can roll into the next step. And by reading the pressure both from above (indicating how the person is using that foot) and below (indicating the slope and irregularities of the surface), it can make that step feel much more like a natural one.
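As a rough sketch of that sense-and-adapt loop: read the load coming down from the socket and the load under the foot, treat a lightly loaded foot as being in swing (lift the toe), and in stance bias the ankle toward the measured slope. None of the numbers below come from Goldfarb's controller; they are placeholders.

```python
def ankle_command(load_from_socket_n, load_under_foot_n, ground_slope_deg):
    """Return a target ankle angle in degrees (positive = toes up).

    load_from_socket_n: force from the residual limb pressing down (N)
    load_under_foot_n:  ground reaction force under the foot (N)
    ground_slope_deg:   slope estimated from the pressure distribution (deg)
    All thresholds and gains below are placeholders.
    """
    SWING_THRESHOLD_N = 50.0
    if load_under_foot_n < SWING_THRESHOLD_N:
        return 10.0  # swing phase: lift the toe so it doesn't scuff
    # Stance phase: conform to the ground and soften loading in proportion
    # to how hard the user is pressing into the socket.
    damping = min(load_from_socket_n / 800.0, 1.0) * 5.0
    return ground_slope_deg - damping

# A few example moments: mid-swing, level-ground stance, walking uphill.
print(ankle_command(100, 10, 0))    # 10.0  (toe lifted)
print(ankle_command(600, 700, 0))   # -3.75 (slight give under load)
print(ankle_command(600, 700, 8))   # 4.25  (angled up to match the slope)
```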

One veteran of many prostheses, Mike Sasser, tested the device and had good things to say: “I’ve tried hydraulic ankles that had no sort of microprocessors, and they’ve been clunky, heavy and unforgiving for an active person. This isn’t that.”

Right now the device is still very lab-bound, and it runs on wired power — not exactly convenient if someone wants to go for a walk. But if the joint works as designed, as it certainly seems to, then powering it is a secondary issue. The plan is to commercialize the prosthesis in the next couple of years once all that is figured out. You can learn a bit more about Goldfarb’s research at the Center for Intelligent Mechatronics.

New system connects your mind to a machine to help stop mistakes

How do you tell your robot not to do something that could be catastrophic? You could give it a verbal or programmatic command, or you could have it watch your brain for signs of distress and stop itself. That's what researchers at MIT's robotics research lab have done with a system that is wired to your brain and tells robots how to do their job.

The initial system is fairly simple. A scalp EEG and EMG system is connected to a Baxter work robot and lets a human wave or gesture when the robot is doing something that it shouldn’t be doing. For example, the robot could regularly do a task – drilling holes, for example – but when it approaches an unfamiliar scenario the human can gesture at the task that should be done.

“By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures along with their snap decisions about whether something is going wrong,” said PhD candidate Joseph DelPreto. “This helps make communicating with a robot more like communicating with another person.”

Because the system uses nuances like gestures and emotional reactions, you can train robots to interact with humans with disabilities, and even prevent accidents by catching concern or alarm before it is communicated verbally. This lets workers stop a robot before it damages something, and even helps the robot understand slight changes to its tasks before it begins.

In their tests the team trained Baxter to drill holes in an airplane fuselage. The task changed occasionally and a human standing nearby was able to gesture to the robot to change position before it drilled, essentially training it to do new tasks in the midst of its current task. Further, there was no actual programming involved on the human’s part, just a suggestion that the robot move the drill left or right on the fuselage. The most important thing? Humans don’t have to think in a special way or train themselves to interact with the machine.
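Conceptually, the supervisory loop looks something like the sketch below: watch for an error-related EEG signal, and when one appears, apply whatever left/right correction the EMG gesture decoder reports. Both classifiers here are random stand-ins; the real system runs trained EEG and EMG models.

```python
import random

def detect_error_potential(eeg_window):
    """Stand-in for an EEG classifier that flags error-related potentials,
    i.e. the person reacting as if the robot chose wrongly."""
    return random.random() < 0.3

def decode_gesture(emg_window):
    """Stand-in for an EMG gesture decoder: 'left', 'right', or None."""
    return random.choice(["left", "right", None])

drill_position = 5  # index of the hole the robot intends to drill
for step in range(5):
    if detect_error_potential(eeg_window=None):
        gesture = decode_gesture(emg_window=None)
        if gesture == "left":
            drill_position -= 1
        elif gesture == "right":
            drill_position += 1
        print(f"step {step}: human flagged an error, corrected to hole {drill_position}")
    else:
        print(f"step {step}: drilling hole {drill_position}")
```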

“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” said DelPreto. “The machine adapts to you, and not the other way around.”

The team will present their findings at the Robotics: Science and Systems (RSS) conference.
