
Sense Photonics flashes onto the lidar scene with a new approach and $26M


Lidar is a critical part of many autonomous cars and robotic systems, but the technology is also evolving quickly. A new company called Sense Photonics just emerged from stealth mode today with a $26M A round, touting a whole new approach that allows for an ultra-wide field of view and (literally) flexible installation.

Still in the prototype phase, but clearly compelling enough to attract eight figures of investment, Sense Photonics’ lidar doesn’t look dramatically different from others at first. The changes are both under the hood and, in a way, on both sides of it.

Early popular lidar systems, like those from Velodyne, use a spinning module that emits and detects infrared laser pulses, finding the range of the surroundings by measuring the light’s time of flight. Subsequent ones have replaced the spinning unit with something less mechanical, like a DLP-type mirror or even metamaterials-based beam steering.
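
To make the time-of-flight idea concrete, here is a minimal sketch (not any vendor’s actual firmware) of the arithmetic involved: the pulse travels to the target and back, so the range is half the round-trip time multiplied by the speed of light.

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def range_from_time_of_flight(round_trip_seconds):
        """One-way distance to the target, given the pulse's round-trip time.

        The pulse covers the distance twice (out and back), so the range
        is half the total path length traveled at the speed of light.
        """
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    # A pulse that comes back after about 667 nanoseconds indicates a
    # surface roughly 100 meters away.
    print(range_from_time_of_flight(667e-9))  # ~100.0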

All these systems are “scanning” systems in that they sweep a beam, column, or spot of light across the scene in some structured fashion — faster than we can perceive, but still piece by piece. Few companies, however, have managed to implement what’s called “flash” lidar, which illuminates the whole scene with one giant, well, flash.

That’s what Sense has created, and it claims to have avoided the usual shortcomings of such systems — namely limited resolution and range. Not only that, but by separating the laser emitting part and the sensor that measures the pulses, Sense’s lidar could be simpler to install without redesigning the whole car around it.

I talked with CEO and co-founder Scott Burroughs, a veteran engineer of laser systems, about what makes Sense’s lidar a different animal from the competition.

“It starts with the laser emitter,” he said. “We have some secret sauce that lets us build a massive array of lasers — literally thousands and thousands, spread apart for better thermal performance and eye safety.”

These tiny laser elements are stuck on a flexible backing, meaning the array can be curved — providing a vastly improved field of view. Lidar units (except for the 360-degree ones) tend to be around 120 degrees horizontally, since that’s what you can reliably get from a sensor and emitter on a flat plane, and perhaps 50 or 60 degrees vertically.

“We can go as high as 90 degrees for vert, which I think is unprecedented, and as high as 180 degrees for horizontal,” said Burroughs proudly. “And that’s something auto makers we’ve talked to have been very excited about.”

Here it is worth mentioning that lidar systems have also begun to bifurcate into long-range, forward-facing lidar (like those from Luminar and Lumotive) for detecting things like obstacles or people 200 meters down the road, and more short-range, wider-field lidar for more immediate situational awareness — a dog behind the vehicle as it backs up, or a car pulling out of a parking spot just a few meters away. Sense’s devices are very much geared toward the second use case.

These are just prototype units, but they work and you can see they’re more than just renders.

Particularly because of the second interesting innovation they’ve included: the sensor, normally part and parcel with the lidar unit, can exist totally separately from the emitter, and is little more than a specialized camera. That means that while the emitter can be integrated into a curved surface like the headlight assembly, the tiny detectors can be stuck in places where there are already traditional cameras: side mirrors, bumpers, and so on.

The camera-like architecture is more than convenient for placement; it also fundamentally affects the way the system reconstructs the image of its surroundings. Because the sensor they use is so close to an ordinary RGB camera’s, images from the former can be matched to the latter very easily.

The depth data and traditional camera image correspond pixel-to-pixel right out of the system.

Most lidars output a 3D point cloud, the result of the beam finding millions of points with different ranges. This is a very different form of “image” than a traditional camera, and it can take some work to convert or compare the depths and shapes of a point cloud to a 2D RGB image. Sense’s unit not only outputs a 2D depth map natively, but that data can be synced with a twin camera so the visible light image matches pixel for pixel to the depth map. It saves on computing time and therefore on delay — always a good thing for autonomous platforms.
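
As a rough illustration of why that pixel-for-pixel correspondence saves work, here is a hedged sketch of fusing an RGB frame with a co-registered depth map simply by stacking them, with no point-cloud projection step. The array shapes and function name are invented for the example and are not Sense’s API.

    import numpy as np

    def fuse_rgb_and_depth(rgb, depth):
        """Stack a pixel-aligned depth map onto an RGB frame as a fourth channel.

        rgb:   (H, W, 3) uint8 image from the visible-light camera
        depth: (H, W) float32 range map from the lidar, same resolution
        """
        if rgb.shape[:2] != depth.shape:
            raise ValueError("depth map and RGB frame must be pixel-aligned")
        return np.dstack([rgb.astype(np.float32), depth])

    # Illustrative frames: a 480x640 image and a matching depth map in meters.
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)
    depth = np.full((480, 640), 5.0, dtype=np.float32)
    print(fuse_rgb_and_depth(rgb, depth).shape)  # (480, 640, 4)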

Sense Photonics’ unit also can output a point cloud, as you see here.

The benefits of Sense’s system are manifest, but of course right now the company is still working on getting the first units to production. To that end it has of course raised the $26 million A round, “co-led by Acadia Woods and Congruent Ventures, with participation from a number of other investors, including Prelude Ventures, Samsung Ventures and Shell Ventures,” as the press release puts it.

Cash on hand is always good. But it has also partnered with Infineon and others, including an unnamed tier-1 automotive company, which is no doubt helping shape the first commercial Sense Photonics product. The details will have to wait until later this year when that offering solidifies, and production should start a few months after that — no hard timeline yet, but expect this all before the end of the year.

“We are very appreciative of this strong vote of investor confidence in our team and our technology,” Burroughs said in the press release. “The demand we’ve encountered – even while operating in stealth mode – has been extraordinary.”


Europe publishes common drone rules, giving operators a year to prepare


Europe has today published common rules for the use of drones. The European Union Aviation Safety Agency (EASA) says the regulations, which will apply universally across the region, are intended to help drone operators of all stripes have a clear understanding of what is and is not allowed.

Having a common set of rules also means drones can be operated across European borders without worrying about differences in regulations.

“Once drone operators have received an authorisation in the state of registration, they are allowed to freely circulate in the European Union. This means that they can operate their drones seamlessly when travelling across the EU or when developing a business involving drones around Europe,” writes EASA in a blog post.

Although published today and due to come into force within 20 days, the common rules won’t yet apply — with Member States getting another year, until June 2020, to prepare to implement the requirements.

Key among them is that starting from June 2020 the majority of drone operators will need to register themselves before using a drone, either where they reside or have their main place of business.

Some additional requirements have later deadlines as countries gradually switch over to the new regime.

The pan-EU framework creates three categories of operation for drones — ‘open’ (for low-risk craft of up to 25kg), ‘specific’ (where drones will require authorization to be flown) or ‘certified’ (the highest risk category, such as operating delivery or passenger drones, or flying over large gatherings of people) — each with its own set of regulations.

The rules also include privacy provisions, such as a requirement that owners of drones with sensors that could capture personal data should be registered to operate the craft (with an exception for toy drones).

The common rules will replace national regulations that may have already been implemented by individual EU countries, although member states will retain the ability to set their own no-fly zones — such as those covering sensitive installations or facilities and gatherings of people — with the regulation setting out the “possibility for Member States to lay down national rules to make subject to certain conditions the operations of unmanned aircraft for reasons falling outside the scope of this Regulation, including environmental protection, public security or protection of privacy and personal data in accordance with the Union law”.

The harmonization of drone rules is likely to be welcomed by operators in Europe who currently face having to do a lot of due diligence ahead of deciding whether or not to pack a drone in their suitcase before heading to another EU country.

EASA also suggests the common rules will reduce the likelihood of another major disruption — such as the unidentified drone sightings that grounded flights at Gatwick Airport just before Christmas and stranded thousands of travellers — given the registration requirement, and a stipulation that new drones must be individually identifiable to make it easier to trace their owners.

“The new rules include technical as well as operational requirements for drones,” it writes. “On one hand they define the capabilities a drone must have to be flown safely. For instance, new drones will have to be individually identifiable, allowing the authorities to trace a particular drone if necessary. This will help to better prevent events similar to the ones which happened in 2018 at Gatwick and Heathrow airports. On the other hand the rules cover each operation type, from those not requiring prior authorisation, to those involving certified aircraft and operators, as well as minimum remote pilot training requirements.

“Europe will be the first region in the world to have a comprehensive set of rules ensuring safe, secure and sustainable operations of drones both, for commercial and leisure activities. Common rules will help foster investment, innovation and growth in this promising sector,” adds Patrick Ky, EASA’s executive director, in a statement.


Maker Faire halts operations and lays off all staff


Financial troubles have forced Maker Media, the company behind crafting publication MAKE: magazine as well as the science and art festival Maker Faire, to lay off its entire staff of 22 and pause all operations. TechCrunch was tipped off to Maker Media’s unfortunate situation which was then confirmed by the company’s founder and CEO Dale Dougherty.

For 15 years, MAKE: guided adults and children through step-by-step do-it-yourself crafting and science projects, and it was central to the maker movement. Since 2006, Maker Faire’s 200 owned and licensed events per year in over 40 countries let attendees wander amidst giant, inspiring art and engineering installations.

“Maker Media Inc ceased operations this week and let go of all of its employees — about 22 employees,” Dougherty tells TechCrunch. “I started this 15 years ago and it’s always been a struggle as a business to make this work. Print publishing is not a great business for anybody, but it works… barely. Events are hard… there was a drop-off in corporate sponsorship.” Microsoft and Autodesk failed to sponsor this year’s flagship Bay Area Maker Faire.

But Dougherty is still desperately trying to resuscitate the company in some capacity, if only to keep MAKE:’s online archive running and continue allowing third-party organizers to license the Maker Faire name to throw affiliated events. Rather than declare bankruptcy, Maker Media is working through an alternative Assignment for the Benefit of Creditors process.

“We’re trying to keep the servers running,” Dougherty tells me. “I hope to be able to get control of the assets of the company and restart it. We’re not necessarily going to do everything we did in the past but I’m committed to keeping the print magazine going and the Maker Faire licensing program.” The fate of those hopes will depend on negotiations with banks and financiers over the next few weeks. For now the sites remain online.

The CEO says staffers understood the challenges facing the company following layoffs in 2016, and then at least eight more employees being let go in March, according to the SF Chronicle. They’ve been paid their owed wages and PTO, but did not receive any severance or two weeks’ notice.

“It started as a venture-backed company but we realized it wasn’t a venture-backed opportunity,” Dougherty admits, as his company had raised $10 million from Obvious Ventures, Raine Ventures, and Floodgate. “The company wasn’t that interesting to its investors anymore. It was failing as a business but not as a mission. Should it be a non-profit or something like that? Some of our best successes for instance are in education.”

The situation is especially sad because the public was still enthusiastic about Maker Media’s products. Dougherty said that despite rain, Maker Faire’s big Bay Area event last week met its ticket sales target, and 1.45 million people attended its events in 2016. MAKE: magazine had 125,000 paid subscribers and the company had racked up over one million YouTube subscribers. But high production costs in expensive cities and a proliferation of free DIY project content online had strained Maker Media.

“It works for people but it doesn’t necessarily work as a business today, at least under my oversight,” Dougherty concluded. For now the company is stuck in limbo.

Regardless of the outcome of revival efforts, Maker Media has helped inspire a generation of engineers and artists, brought families together around crafting, and given shape to a culture of tinkerers. The memory of its events and weekends spent building will live on as inspiration for tomorrow’s inventors.


Teams autonomously mapping the depths take home millions in Ocean Discovery Xprize


There’s a whole lot of ocean on this planet, and we don’t have much of an idea what’s at the bottom of most of it. That could change with the craft and techniques created during the Ocean Discovery Xprize, which had teams competing to map the sea floor quickly, precisely and autonomously. The winner just took home $4 million.

A map of the ocean would be valuable in and of itself, of course, but any technology used to do so could be applied in many other ways, and who knows what potential biological or medical discoveries hide in some nook or cranny a few thousand fathoms below the surface?

The prize, sponsored by Shell, started back in 2015. The goal was, ultimately, to create a system that could map hundreds of square kilometers of the sea floor at a five-meter resolution in less than a day — oh, and everything has to fit in a shipping container. For reference, existing methods do nothing like this, and are tremendously costly.

But as is usually the case with this type of competition, the difficulty did not discourage the competitors — it only spurred them on. Since 2015, then, the teams have been working on their systems and traveling all over the world to test them.

Originally the teams were to test in Puerto Rico, but after the devastating hurricane season of 2017, the whole operation was moved to the Greek coast. Ultimately after the finalists were selected, they deployed their craft in the waters off Kalamata and told them to get mapping.

Team GEBCO’s surface vehicle

“It was a very arduous and audacious challenge,” said Jyotika Virmani, who led the program. “The test itself was 24 hours, so they had to stay up, then immediately following that was 48 hours of data processing, after which they had to give us the data. It takes more traditional companies about two weeks or so to process data for a map once they have the raw data — we’re pushing for real time.”

This wasn’t a test in a lab bath or pool. This was the ocean, and the ocean is a dangerous place. But amazingly there were no disasters.

“Nothing was damaged, nothing imploded,” she said. “We ran into weather issues, of course. And we did lose one piece of technology that was subsequently found by a Greek fisherman a few days later… but that’s another story.”

At the start of the competition, Virmani said, there was feedback from the entrants that the autonomous piece of the task was simply not going to be possible. But the last few years have proven it to be so, given that the winning team not only met but exceeded the requirements of the task.

“The winning team mapped more than 250 square kilometers in 24 hours, at the minimum of five meters resolution, but around 140 was more than five meters,” Virmani told me. “It was all unmanned: An unmanned surface vehicle that took the submersible out, then recovered it at sea, unmanned again, and brought it back to port. They had such great control over it — they were able to change its path and its programming throughout that 24 hours as they needed to.” (It should be noted that unmanned does not necessarily mean totally hands-off — the teams were permitted a certain amount of agency in adjusting or fixing the craft’s software or route.)

A five-meter resolution, if you can’t quite picture it, would produce a map of a city that showed buildings and streets clearly, but is too coarse to catch, say, cars or street signs. When you’re trying to map two-thirds of the globe, though, this resolution is more than enough — and infinitely better than the nothing we currently have. (Unsurprisingly, it’s also certainly enough for an oil company like Shell to prospect new deep-sea resources.)
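
For a sense of scale, here is a quick back-of-the-envelope count, using only the figures above, of how many depth samples a 5-meter grid over the winning team’s 250 square kilometers implies.

    # Back-of-the-envelope: cells in a 5 m grid covering 250 square kilometers.
    area_m2 = 250 * 1_000_000       # 250 km^2 expressed in square meters
    cell_m2 = 5 * 5                 # one 5 m x 5 m grid cell
    print(area_m2 // cell_m2)       # 10,000,000 depth cells in 24 hours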

The winning team was GEBCO, composed of veteran hydrographers — ocean mapping experts, you know. In addition to the highly successful unmanned craft (Sea-Kit, already cruising the English Channel for other purposes), the team did a lot of work on the data-processing side, creating a cloud-based solution that helped them turn the maps around quickly. (That may also prove to be a marketable service in the future.) They were awarded $4 million, in addition to their cash for being selected as a finalist.

The runner-up was Kuroshio, which had great resolution but was unable to map the full 250 km² due to weather problems. They snagged a million.

A bonus prize for having the submersible track a chemical signal to its source didn’t exactly have a winner, but the teams’ entries were so impressive that the judges decided to split the million between the Tampa Deep Sea Xplorers and Ocean Quest, which amazingly enough is made up mostly of middle-schoolers. The latter gets $800,000, which should help pay for a few new tools in the shop there.

Lastly, a $200,000 innovation prize was given to Team Tao out of the U.K., which had a very different style to its submersible that impressed the judges. While most of the competitors opted for a craft that went “lawnmower-style” above the sea floor at a given depth, Tao’s craft dropped down like a plumb bob, pinging the depths as it went down and back up before moving to a new spot. This provides a lot of other opportunities for important oceanographic testing, Virmani noted.

Having concluded the prize, the organization has just a couple more tricks up its sleeve. GEBCO, which stands for General Bathymetric Chart of the Oceans, is partnering with The Nippon Foundation on Seabed 2030, an effort to map the entire sea floor over the next decade and provide that data to the world for free.

And the program is also — why not? — releasing an anthology of short sci-fi stories inspired by the idea of mapping the ocean. “A lot of our current technology is from the science fiction of the past,” said Virmani. “So we told the authors, imagine we now have a high-resolution map of the sea floor, what are the next steps in ocean tech and where do we go?” The resulting 19 stories, written from all 7 continents (yes, one from Antarctica), will be available June 7.


Ekasbo’s Matebot may be the cutest cat robot yet created


If Shrek saw Matebot, no amount of sad-eyes could win him back to Puss in Boots’ side. Created by Shenzhen-based robotics company Ekasbo, Matebot looks like a black and white cartoon cat and responds to your touch by wiggling its ears, changing the expression in its big LED eyes and tilting its head.

Ekasbo's Matebot in a sad mood

Built with voice recognition, infrared technology and seven moving parts, the Matebot is designed to serve as an interactive companion, including for people who can’t keep pets, creator Zhang Meng told TechCrunch at Computex in Taiwan.

Met Matebot the cat robot today! pic.twitter.com/jJaa5EhKC8

— Computex Shu (@CatherineShu) May 30, 2019

The Matebot is controlled with a smartphone app and can be integrated with Android voice control systems. Its price starts at about 4,999 yen, or about US$45.


This robot learns its two-handed moves from human dexterity


If robots are really to help us out around the house or care for our injured and elderly, they’re going to want two hands… at least. But using two hands is harder than we make it look — so this robotic control system learns from humans before attempting to do the same.

The idea behind the research, from the University of Wisconsin-Madison, isn’t to build a two-handed robot from scratch, but simply to create a system that understands and executes the same type of manipulations that we humans do without thinking about them.

For instance, when you need to open a jar, you grip it with one hand and move it into position, then tighten that grip as the other hand takes hold of the lid and twists or pops it off. There’s so much going on in this elementary two-handed action that it would be hopeless to ask a robot to do it autonomously right now. But that robot could still have a general idea of why this type of manipulation is done on this occasion, and do what it can to pursue it.

The researchers first had humans wearing motion capture equipment perform a variety of simulated everyday tasks, like stacking cups, opening containers and pouring out the contents, and picking up items with other things balanced on top. All this data — where the hands go, how they interact and so on — was chewed up and ruminated on by a machine learning system, which found that people tended to do one of four things with their hands:

  • Self-handover: This is where you pick up an object and put it in the other hand so it’s easier to put it where it’s going, or to free up the first hand to do something else.
  • One hand fixed: An object is held steady by one hand providing a strong, rigid grip, while the other performs an operation on it like removing a lid or stirring the contents.
  • Fixed offset: Both hands work together to pick something up and rotate or move it.
  • One hand seeking: Not actually a two-handed action, but the principle of deliberately keeping one hand out of action while the other finds the object required or performs its own task.

The robot put this knowledge to work not in doing the actions itself — again, these are extremely complex motions that current AIs are incapable of executing — but in its interpretations of movements made by a human controller.

You would think that when a person is remotely controlling a robot, it would just mirror the person’s movements exactly. And in the tests, the robot does exactly that, to provide a baseline of how these tasks go without any knowledge of “bimanual actions” — and many of them turn out to be simply impossible.

Think of the jar-opening example. We know that when we’re opening the jar, we have to hold one side steady with a stronger grip and may even have to push back with the jar hand against the movement of the opening hand. If you tried to do this remotely with robotic arms, that information is not present any more, and the one hand will likely knock the jar out of the grip of the other, or fail to grip it properly because the other isn’t helping out.

The system created by the researchers recognizes when one of the four actions above is happening, and takes measures to make sure that they’re a success. That means, for instance, being aware of the pressures exerted on each arm by the other when they pick up a bucket together. Or providing extra rigidity to the arm holding an object while the other interacts with the lid. Even when only one hand is being used (“seeking”), the system knows that it can deprioritize the movements of the unused hand and dedicate more resources (be it body movements or computational power) to the working hand.
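
To make that more concrete, here is a heavily simplified, hypothetical sketch of how a teleoperation layer could classify which of the four actions the operator is performing from tracked hand positions and velocities. The thresholds, names and rules below are invented for illustration; the published system learns this structure from motion-capture data rather than using hand-written rules.

    import numpy as np

    # Hypothetical thresholds; the real system learns these from data.
    STILL_SPEED = 0.02        # m/s below which a hand counts as "fixed"
    HANDOVER_DISTANCE = 0.10  # m between hands that suggests a handover

    def classify_bimanual_mode(left_pos, left_vel, right_pos, right_vel):
        """Guess which bimanual mode the operator's hands are in."""
        left_speed = np.linalg.norm(left_vel)
        right_speed = np.linalg.norm(right_vel)
        separation = np.linalg.norm(np.subtract(left_pos, right_pos))

        if separation < HANDOVER_DISTANCE:
            return "self-handover"
        if left_speed < STILL_SPEED <= right_speed:
            return "one hand fixed"      # stiffen the still arm's grip
        if right_speed < STILL_SPEED <= left_speed:
            return "one hand fixed"
        if left_speed >= STILL_SPEED and right_speed >= STILL_SPEED:
            return "fixed offset"        # both hands move the object together
        return "one hand seeking"        # deprioritize the idle hand

    # Jar example: the left hand holds still while the right hand twists.
    print(classify_bimanual_mode([0.3, 0.0, 0.9], [0.0, 0.0, 0.0],
                                 [0.3, 0.2, 1.0], [0.1, 0.0, 0.0]))
    # -> "one hand fixed"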

In videos of demonstrations, it seems clear that this knowledge greatly improves the success rate of the attempts by remote operators to perform a set of tasks meant to simulate preparing a breakfast: cracking (fake) eggs, stirring and shifting things, picking up a tray with glasses on it and keeping it level.

Of course this is all still being done by a human, more or less — but the human’s actions are being augmented and re-interpreted into something more than simple mechanical reproduction.

Doing these tasks autonomously is a long way off, but research like this forms the foundation for that work. Before a robot can attempt to move like a human, it has to understand not just how humans move, but why they do certain things in certain circumstances and, furthermore, what important processes may be hidden from obvious observation — things like planning the hand’s route, choosing a grip location and so on.

The Madison team was led by Daniel Rakita; their paper describing the system is published in the journal Science Robotics.


You can do it, robot! Watch the beefy, 4-legged HyQReal pull a plane


It’s not really clear just yet what all these powerful, agile quadrupedal robots people are working on are going to do, exactly, but even so it never gets old watching them do their thing. The latest is an Italian model called HyQReal, which demonstrates its aspiration to win strongman competitions, among other things, by pulling an airplane behind it.

The video is the debut for HyQReal, which is the successor to HyQ, a much smaller model created years ago by the Italian Institute of Technology, and its close relations. Clearly the market, such as it is, has advanced since then, and discerning customers now want the robot equivalent of a corn-fed linebacker.

That’s certainly how HyQReal seems to be positioned; in its video, the camera lingers lovingly on its bulky titanium haunches and thick camera cage. Its low-slung body recalls a bulldog rather than a cheetah or sprightly prey animal. You may think twice before kicking this one.

The robot was presented today at the International Conference on Robotics and Automation, where in a workshop (documented by IEEE Spectrum) the team described HyQReal’s many bulkinesses.

It’s about four feet long and three feet high, and weighs 130 kilograms (around 287 pounds), of which the battery accounts for 15 — enough for about two hours of duty. It’s resistant to dust and water exposure and should be able to get itself up should it fall or tip over. The robot was created in collaboration with Moog, which created special high-powered hydraulics for the purpose.

It sounds good on paper, and the robot clearly has the torque needed to pull a small passenger airplane, as you can see in the video. But that’s not really what robots like this are for — they need to demonstrate versatility and robustness under a variety of circumstances, and the smarts to navigate a human-centric world and provide useful services.

Right now HyQReal is basically still a test bed — it needs to have all kinds of work done to make sure it will stand up under conditions that robots like Spot Mini have already aced. And engineering things like arm or cargo attachments is far from trivial. All the same it’s exciting to see competition in a space that, just a few years back, seemed totally new (and creepy).


Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself


Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically the Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, but keep costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved, but by sampling the forces on the legs 8,000 times per second and responding as quickly, the motors can act like virtual springs.
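
The virtual-spring trick is essentially impedance control: measure how far the leg has been pushed and how fast it is moving, then command a motor force that mimics a spring and damper. The sketch below illustrates that one idea; the gains, names and numbers are made up for the example and are not Doggo’s actual firmware.

    STIFFNESS = 400.0   # N/m, virtual spring constant (illustrative)
    DAMPING = 8.0       # N*s/m, virtual damper (illustrative)
    LOOP_HZ = 8000      # the legs are sampled 8,000 times per second

    def virtual_spring_force(deflection_m, velocity_ms):
        """Force the motor should emulate, as if a real spring-damper were fitted."""
        return -STIFFNESS * deflection_m - DAMPING * velocity_ms

    # One tick of the (hypothetical) 8 kHz loop: the leg is compressed 1 cm
    # and rebounding at 5 cm/s, so the motor pushes back out, lightly damped.
    print(virtual_spring_force(0.01, -0.05))  # -3.6 N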

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be both improving on the capabilities of Doggo by collaborating with the university’s Robotic Exploration Lab, and also working on a similar robot but twice the size — Woofer.


Why is Facebook doing robotics research?


It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often carry over to, or open new areas of inquiry in, the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy,” the hexapod robot

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
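
Here is a hedged sketch of what “rewarded for moving forward” can look like in practice. The weights and function name are invented and this is not Facebook’s training code, but the shape of the idea is the same: reward progress, lightly penalize wasted effort, and say nothing about how the legs should move.

    # Illustrative reward shaping for learning to walk from scratch.
    FORWARD_WEIGHT = 1.0    # value of each meter of forward progress
    ENERGY_WEIGHT = 0.001   # small penalty on actuator effort

    def locomotion_reward(x_before, x_after, joint_torques):
        """Reward for one control step: distance gained minus an energy cost."""
        forward_progress = x_after - x_before
        energy_cost = sum(t * t for t in joint_torques)
        return FORWARD_WEIGHT * forward_progress - ENERGY_WEIGHT * energy_cost

    # A step that moves the body 3 cm forward with modest joint torques.
    print(locomotion_reward(0.00, 0.03, [0.5, -0.2, 0.1, 0.0, 0.3, -0.4]))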

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the auto-didactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
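
One simple way to express that, sketched abstractly rather than as Facebook’s implementation, is to add a bonus to the task reward whenever an action reduces the system’s uncertainty (say, the variance of its estimate of an object’s pose). The numbers and names below are illustrative only.

    CURIOSITY_WEIGHT = 0.5  # how much a unit of reduced uncertainty is worth

    def shaped_reward(task_reward, uncertainty_before, uncertainty_after):
        """Task reward plus a bonus proportional to the uncertainty removed."""
        information_gain = max(0.0, uncertainty_before - uncertainty_after)
        return task_reward + CURIOSITY_WEIGHT * information_gain

    # Twisting the camera costs a little time (small negative task reward)
    # but halves the pose uncertainty, so the curious action comes out ahead.
    print(shaped_reward(-0.05, 0.4, 0.2))  # 0.05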

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot, left these tasks to be accomplished “just in time,” they will produce CPU usage spikes, visible latency in the image and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing it all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
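
A minimal sketch of that “present the touch data visually” step, with the sensor shape and scaling invented for the example: rasterize a grid of pressure readings into a single-channel intensity image, after which the same kind of model used on camera frames can look for patterns in it.

    import numpy as np

    def pressure_grid_to_image(pressures):
        """Normalize a tactile sensor's 2D pressure grid into an 8-bit 'image'."""
        lo, hi = pressures.min(), pressures.max()
        scaled = (pressures - lo) / (hi - lo + 1e-9)   # map readings to 0..1
        return (scaled * 255).astype(np.uint8)         # single-channel image

    # A made-up 16x16 tactile pad, pressed harder in the middle than at the edges.
    x, y = np.meshgrid(np.linspace(-1, 1, 16), np.linspace(-1, 1, 16))
    touch = np.exp(-(x ** 2 + y ** 2))
    print(pressure_grid_to_image(touch).shape)  # (16, 16), ready for a vision model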

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.


This clever transforming robot flies and rolls on its rotating arms


There’s great potential in using both drones and ground-based robots for situations like disaster response, but generally these platforms either fly or creep along the ground. Not the “Flying STAR,” which does both quite well, and through a mechanism so clever and simple you’ll wish you’d thought of it.

Conceived by researchers at Ben-Gurion University in Israel, the “flying sprawl-tuned autonomous robot” is based on the elementary observation that both rotors and wheels spin. So why shouldn’t a vehicle have both?

Well, there are lots of good reasons why it’s difficult to create such a hybrid, but the team, led by David Zarrouk, overcame them with the help of today’s high-powered, lightweight drone components. The result is a robot that can easily fly when it needs to, then land softly and, by tilting the rotor arms downwards, direct that same motive force into four wheels.

Of course you could have a drone that simply has a couple of wheels on the bottom that let it roll along. But this improves on that idea in several ways. In the first place, it’s mechanically more efficient because the same motor drives the rotors and wheels at the same time — though when rolling, the RPMs are of course considerably lower. But the rotating arms also give the robot a flexible stance, large wheelbase and high clearance that make it much more capable on rough terrain.

You can watch FSTAR fly, roll, transform, flatten and so on in the video prepared for presentation at the IEEE International Conference on Robotics and Automation in Montreal.

The ability to roll along at up to 8 feet per second using comparatively little energy, while also being able to leap over obstacles, scale stairs or simply ascend and fly to a new location, gives FSTAR considerable adaptability.

“We plan to develop larger and smaller versions to expand this family of sprawling robots for different applications, as well as algorithms that will help exploit speed and cost of transport for these flying/driving robots,” said Zarrouk in a press release.

Obviously at present this is a mere prototype, and will need further work to bring it to a state where it could be useful for rescue teams, commercial operations and the military.
