science

Mars Rover Curiosity is switching brains so it can fix itself

Posted by | Gadgets, jpl, mars rover, NASA, robotics, science, Space, TC | No Comments

When you send something to space, it’s good to have redundancy. Sometimes you want to send two whole duplicate spacecraft just in case — as was the case with Voyager — but sometimes it’s good enough to have two of each critical component. Mars Rover Curiosity is no exception, and it is now in the process of switching from one main “brain” to the other so it can do digital surgery on the first.

Curiosity landed on Mars with two central computing systems, Side-A and Side-B (not left brain and right brain — that would invite too much silliness). They’re perfect duplicates of each other, or were — it was something of a bumpy ride, after all, and cosmic radiation may flip a bit here and there.

The team was thankful to have made these preparations when, on sol 200 in February of 2013 (we’re almost to sol 2,200 now), the Side-A computer experienced a glitch that ended up taking the whole rover offline. The solution was to swap over to Side-B, which was up and running shortly afterwards and sending back diagnostic data about its twin.

Having run for several years with no issues, Side-B is now, however, having its own problems. Since September 15 it has been unable to record mission data, and it doesn’t appear to be a problem that the computer can solve itself. Fortunately, in the intervening period, Side-A has been fixed up to working condition — though it has a bit less memory than it used to, since some corrupted sectors had to be quarantined.

“We spent the last week checking out Side A and preparing it for the swap,” said Steven Lee, deputy project manager of the Curiosity program at JPL, in a mission status report. “We are operating on Side A starting today, but it could take us time to fully understand the root cause of the issue and devise workarounds for the memory on Side B. It’s certainly possible to run the mission on the Side-A computer if we really need to. But our plan is to switch back to Side B as soon as we can fix the problem to utilize its larger memory size.”

No timeline just yet for how that will happen, but the team is confident that they’ll have things back on track soon. The mission isn’t in jeopardy — but this is a good example of how a well-designed system of redundancies can add years to the life of space hardware.


How aerial lidar illuminated a Mayan megalopolis

Posted by | Gadgets, History, Lidar, science, TC | No Comments

Archaeology may not be the most likely place to find the latest in technology — AI and robots are of dubious utility in the painstaking fieldwork involved — but lidar has proven transformative. The latest accomplishment using laser-based imaging maps thousands of square kilometers of an ancient Mayan city once millions strong, but the researchers make it clear that there’s no technological substitute for experience and a good eye.

The Pacunam Lidar Initiative began two years ago, bringing together a group of scholars and local authorities to undertake the largest-yet survey of a protected and long-studied region in Guatemala. Some 2,144 square kilometers of the Maya Biosphere Reserve in Petén were scanned, inclusive of and around areas known to be settled, developed or otherwise of importance.

Preliminary imagery and data illustrating the success of the project were announced earlier this year, but the researchers have now performed their actual analyses on the data, and the resulting paper summarizing their wide-ranging results has been published in the journal Science.

The areas covered by the initiative spread over perhaps a fifth of the country.

“We’ve never been able to see an ancient landscape at this scale all at once. We’ve never had a data set like this. But in February really we hadn’t done any analysis, really, in a quantitative sense,” co-author Francisco Estrada-Belli, of Tulane University, told me. He worked on the project with numerous others, including Tulane colleague Marcello Canuto. “Basically we announced we had found a huge urban sprawl, that we had found agricultural features on a grand scale. After another nine months of work we were able to quantify all that and to get some numerical confirmations for the impressions we’d gotten.”

“It’s nice to be able to confirm all our claims,” he said. “They may have seemed exaggerated to some.”

The lidar data was collected not by self-driving cars, which seem to be the only vehicles bearing lidar we ever hear about, nor even by drones, but by traditional airplane. That may sound cumbersome, but the distances and landscapes involved permitted nothing else.

“A drone would never have worked — it could never have covered that area,” Estrada-Belli explained. “In our case it was actually a twin-engine plane flown down from Texas.”

The plane made dozens of passes over a given area, a chosen “polygon” perhaps 30 kilometers long and 20 wide. Mounted underneath was “a Teledyne Optech Titan MultiWave multichannel, multi-spectral, narrow-pulse width lidar system,” which pretty much says it all: this is a heavy-duty instrument, the size of a refrigerator. But you need that kind of system to pierce the canopy and image the underlying landscape.

The many overlapping passes were then collated and calibrated into a single digital landscape of remarkable detail.

“It identified features that I had walked over — a hundred times!” he laughed. “Like a major causeway, I walked over it, but it was so subtle, and it was covered by huge vegetation, underbrush, trees, you know, jungle — I’m sure that in another 20 years I wouldn’t have noticed it.”

But these structures don’t identify themselves. There’s no computer labeling system that looks at the 3D model and says, “this is a pyramid, this is a wall,” and so on. That’s a job that only archaeologists can do.

“It actually begins with manipulating the surface data,” Estrada-Belli said. “We get these surface models of the natural landscape; each pixel in the image is basically the elevation. Then we do a series of filters to simulate light being projected on it from various angles to enhance the relief, and we combine these visualizations with transparencies and different ways of sharpening or enhancing them. After all this process, basically looking at the computer screen for a long time, then we can start digitizing it.”
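That relief-enhancement step is standard hillshading: simulate a light source at a chosen angle over the elevation model and see what pops out. Below is a minimal sketch of the idea in Python with NumPy, assuming the surface model is simply a 2D array of per-pixel elevations; it illustrates the technique, not the Pacunam team’s actual pipeline.

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Simulated illumination of an elevation model from one light angle.

    dem: 2D array of elevations, one value per pixel (as described above).
    Returns values in [0, 1]; higher means more strongly lit.
    """
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass azimuth -> math angle
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dem, cellsize)        # terrain slope components
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# Blending several light angles helps subtle features (like that causeway) stand out.
# dem = np.load("surface_model.npy")   # hypothetical input file
# composite = np.mean([hillshade(dem, a) for a in (315, 45, 135, 225)], axis=0)
```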

“The first step is to visually identify features. Of course, pyramids are easy, but there are subtler features that, even once you identify them, it’s hard to figure out what they are.”

The lidar imagery revealed, for example, lots of low linear features that could be man-made or natural. It’s not always easy to tell the difference, but context and existing scholarship fill in the gaps.

“Then we proceeded to digitize all these features… there were 61,000 structures, and everything had to be done manually,” Estrada-Belli said — in case you were wondering why it took nine months. “There’s really no automation because the digitizing has to be done based on experience. We looked into AI, and we hope that maybe in the near future we’ll be able to apply that, but for now an experienced archaeologist’s eye can discern the features better than a computer.”

You can see the density of the annotations on the maps. It should be noted that many of these features had by this point been verified by field expeditions. By consulting existing maps and getting ground truth in person, they had made sure that these weren’t phantom structures or wishful thinking. “We’re confident that they’re all there,” he told me.

“Next is the quantitative step,” he continued. “You measure the length and the areas and you put it all together, and you start analyzing them like you’d analyze any other data set: the structure density of some area, the size of urban sprawl or agricultural fields. Finally we even figured out a way to quantify the potential production of agriculture.”
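To give a sense of the scale of that bookkeeping, a back-of-the-envelope version of one such number, using only figures quoted in this article, looks like this (a sketch, not the paper’s methodology):

```python
# Rough density estimate from the figures quoted in this article.
structures = 61_000      # features digitized by hand
surveyed_km2 = 2_144     # lidar coverage of the Maya Biosphere Reserve

density = structures / surveyed_km2
print(f"{density:.1f} structures per square kilometer")  # roughly 28.5
```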

This is the point where the imagery starts to go from point cloud to academic study. After all, it’s well known that the Maya had a large city in this area; it’s been intensely studied for decades. But the Pacunam (which stands for Patrimonio Cultural y Natural Maya) study was meant to advance beyond the traditional methods employed previously.

“It’s a huge data set. It’s a huge cross-section of the Maya lowlands,” Estrada-Belli said. “Big data is the buzzword now, right? You truly can see things that you would never see if you only looked at one site at a time. We could never have put together these grand patterns without lidar.”

“For example, in my area, I was able to map 47 square kilometers over the course of 15 years,” he said, slightly wistfully. “And in two weeks the lidar produced 308 square kilometers, to a level of detail that I could never match.”

As a result the paper includes all kinds of new theories and conclusions, from population and economy estimates, to cultural and engineering knowledge, to the timing and nature of conflicts with neighbors.

The resulting report doesn’t just advance the knowledge of Mayan culture and technology, but the science of archaeology itself. It’s iterative, of course, like everything else — Estrada-Belli noted that they were inspired by work done by colleagues in Belize and Cambodia; their contribution, however, exemplifies new approaches to handling large areas and large data sets.

The more experiments and fieldwork there are, the more established these methods will become, and the more widely they will be accepted and replicated. Already they have proven themselves invaluable, and this study is perhaps the best example of lidar’s potential in the field.

“We simply would not have seen these massive fortifications. Even on the ground, many of their details remain unclear. Lidar makes most human-made features clear, coherent, understandable,” explained co-author Stephen Houston, of Brown University, in an email. “AI and pattern recognition may help to refine the detection of features, and drones may, we hope, bring down the cost of this technology.”

“These technologies are important not only for discovery, but also for conservation,” co-author Thomas Garrison of Ithaca College pointed out in an email. “3D scanning of monuments and artifacts provides detailed records and also allows for the creation of replicas via 3D printing.”

Lidar imagery can also show the extent of looting, he wrote, and help cultural authorities guard against it by being aware of relics and sites before the looters are.

The researchers are already planning a second, even larger set of flyovers, founded on the success of the first experiment. Perhaps by the time the initial physical work is done the trendier tools of the last few years will make themselves applicable.

“I doubt the airplanes are going to get less expensive but the instruments will be more powerful,” Estrada-Belli suggested. “The other line is the development of artificial intelligence that can speed up the project; at least it can rule out areas, so we don’t waste any time, and we can zero in on the areas with the greatest potential.”

He’s also excited by the idea of putting the data online so citizen archaeologists can help pore over it. “Maybe they don’t have the same experience we do, but like artificial intelligence they can certainly generate a lot of good data in a short time,” he said.

But as his colleagues point out, even years of this kind of work are necessarily preliminary.

“We have to emphasize: it’s a first step, leading to innumerable ideas to test. Dozens of doctoral dissertations,” wrote Houston. “Yet there must always be excavation to look under the surface and to extract clear dates from the ruins.”

“Like many disciplines in the social sciences and humanities, archaeology is embracing digital technologies. Lidar is just one example,” wrote Garrison. “At the same time, we need to be conscious of issues in digital archiving (particularly the problem of obsolete file formatting) and be sure to use technology as a complement to, and not a replacement for, methods of documentation that have proven tried and true for over a century.”

The researchers’ paper was published today in Science; you can learn about their conclusions (which are of more interest to the archaeologists and anthropologists among our readers) there, and follow other work being undertaken by the Fundación Pacunam at its website.


‘Jackrabbot 2’ takes to the sidewalks to learn how humans navigate politely

Posted by | artificial intelligence, Gadgets, robotics, science, TC | No Comments

Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.

“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”
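Jackrabbot’s behaviors are learned from data rather than hand-coded, but the flavor of those politeness rules is easy to sketch as a cost term a path planner might minimize. The radius, weights and function names below are invented for illustration; they are not the Stanford team’s model.

```python
import math

COMFORT_RADIUS_M = 1.2   # assumed personal-space radius, purely illustrative

def politeness_cost(waypoint, people, conversing_pairs):
    """Penalty for a candidate waypoint that crowds pedestrians or cuts
    between two people who are talking to each other."""
    x, y = waypoint
    cost = 0.0
    for px, py in people:
        d = math.hypot(x - px, y - py)
        if d < COMFORT_RADIUS_M:
            cost += (COMFORT_RADIUS_M - d) ** 2          # crowding penalty
    for (ax, ay), (bx, by) in conversing_pairs:
        # Distance from the waypoint to the line segment joining the pair.
        seg2 = (bx - ax) ** 2 + (by - ay) ** 2
        t = 0.0 if seg2 == 0 else max(0.0, min(1.0,
            ((x - ax) * (bx - ax) + (y - ay) * (by - ay)) / seg2))
        dx, dy = x - (ax + t * (bx - ax)), y - (ay + t * (by - ay))
        if math.hypot(dx, dy) < COMFORT_RADIUS_M:
            cost += 5.0                                   # don't interrupt a conversation
    return cost
```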

Of course there are practical applications pertaining to last-mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.

The first robot was put to work in 2016 and has been busy building a model of how humans (well, mostly undergrads) walk around safely, avoid one another while taking efficient paths, and signal what they’re doing the whole time. But technology has advanced so quickly that a new iteration was called for.

The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle

The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360 degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360 degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.

Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”

This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.

The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.

Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”

Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.


VR optics could help old folks keep the world in focus

Posted by | accessibility, disability, Gadgets, hardware, Health, science, siggraph, stanford, Stanford University, TC, Wearables | No Comments

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. And there are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
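Put together, the control loop is conceptually simple. The sketch below uses a made-up near-focus limit and function name to show the decision the glasses make each frame; the researchers’ actual control software is not published in this article.

```python
NEAR_LIMIT_M = 0.5   # assumed: this wearer can't focus closer than about 20 inches

def extra_lens_power(gaze_px, depth_map_m, baseline_diopters=0.0):
    """How much optical power the tunable lenses should add right now.

    gaze_px: (x, y) pixel the eye tracker reports the user is looking at.
    depth_map_m: per-pixel distances from the depth sensor, in meters.
    """
    target_m = depth_map_m[gaze_px[1]][gaze_px[0]]   # distance to the gazed-at object
    if target_m >= NEAR_LIMIT_M:
        return baseline_diopters                     # the eyes can handle it unaided
    # Supply the accommodation the eye can no longer provide.
    return baseline_diopters + (1.0 / target_m - 1.0 / NEAR_LIMIT_M)

# Example: a newspaper 14 inches (~0.36 m) away works out to roughly +0.8 diopters extra.
```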

The whole process of checking the gaze, depth of the selected object and adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happening, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and, despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.


NASA’s Parker Solar Probe launches tonight to ‘touch the sun’

Posted by | artificial intelligence, Gadgets, Government, hardware, NASA, parker solar probe, science, Space, TC | No Comments

NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:33 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.

If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.

This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s advanced a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly. (Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or gases in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.

It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. All together it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.

Go on – it’s quite cool.

The car-sized Parker will orbit the sun and constantly rotate itself so the heat shield is facing inward and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.

And such instruments! There are three major experiments or instrument sets on the probe.

WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally are seeing these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation and other nuisances will produce an amazingly clear picture.

SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “The Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, they can sort them by type and energy.
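That “series of charged windows” relies on the same basic trick as a retarding-potential analyzer: a particle only gets past a grid if its kinetic energy per unit charge beats the voltage on that grid. The toy sketch below illustrates the principle; it is not SWEAP flight code, and the particle values are invented.

```python
def passes_grid(energy_ev, charge_e, grid_voltage):
    """A charged particle clears a retarding grid only if its kinetic energy
    per unit charge (in volts) exceeds the grid's potential."""
    return energy_ev / charge_e > grid_voltage

# Sweep the grid voltage and see what still gets through; the drop-off points
# trace out the energy distribution of each particle species.
particles = [
    {"species": "proton", "charge_e": 1, "energy_ev": 800.0},
    {"species": "alpha",  "charge_e": 2, "energy_ev": 3000.0},
]
for volts in (100, 500, 1000, 2000):
    admitted = [p["species"] for p in particles
                if passes_grid(p["energy_ev"], p["charge_e"], volts)]
    print(f"grid at {volts} V admits: {admitted}")
```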

FIELDS is another that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they need to in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.

They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.

Even then, they’ll get so hot that the team needed to implement the first-ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.

The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost, but “almost like doing a little handbrake turn,” as one official described it. The maneuver slows the probe down and sends it closer to the sun — and it’ll do that seven more times, each time bringing it closer and closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface — that’s 95 percent of the way from the Earth to the sun.

On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun even a slight miscalculation results in the reduction of the probe to a cinder, so the team has imbued it with more than the usual autonomy.

It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.

The mission is scheduled to last seven years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.

The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.


This 3D-printed camp stove is extra-efficient and wind-resistant

Posted by | 3d printing, Camping, ETH Zurich, ETHZ, food, Gadgets, hardware, Outdoors, science | No Comments

I love camping, but there’s always an awkward period, after you’ve left the tent but before you’ve created coffee, during which I hate camping. It’s hard not to watch the pot not boil and not want to just go back to bed, but since the warm air escaped when I opened the tent, it’s pointless! Anyway, the Swiss figured out a great way to boil water faster, and I want one of these sweet stoves now.

The PeakBoil stove comes from design students at ETH Zurich, who have clearly faced the same problems I have. But since they actually camp in inclement weather, they also have to deal with wind blowing out the feeble flame of an ordinary gas burner.

Their attempt to improve on the design takes the controversial step of essentially installing a stovepipe inside the vessel and heating it from the inside out rather than from the bottom up. This approach has been used in lots of other situations to heat water, but it’s the first time I’ve seen it in a camp stove.

By carefully configuring the gas nozzles and adding ripples to the wall of the heat pipe, PeakBoil “increases the contact area between the flame and the jug,” explained doctoral student and project leader Julian Ferchow in an ETH Zurich news release.

“That, plus the fact that the wall is very thin, makes heat transfer to the contents of the jug ideal,” added his colleague Patrick Beutler.

Keeping the flames isolated inside the chimney behind baffles minimizes wind interference, and saves you from having to burn extra gas just to keep the flame alive.

The design was created using a selective laser melting or sintering process, in which metal powder is melted in a pattern much like a 3D printer lays down heated plastic. It’s really just another form of additive manufacturing, and it gave the students “a huge amount of design freedom…with metal casting, for instance, we could never achieve channels that are as thin as the ones inside our gas burner,” Ferchow said.

Of course, the design means it’s pretty much only usable for boiling water (you wouldn’t want to balance a pan on top of it), but that’s such a common and specific use case that many campers already have a stove dedicated to the purpose.

The team is looking to further improve the design and also find an industry partner with which to take it to market. MSR, GSI, REI… I’m looking at you. Together we can make my mornings bearable.


NASA’s Open Source Rover lets you build your own planetary exploration platform

Posted by | DIY, Education, Gadgets, Government, jpl, mars rover, NASA, robotics, science, Space | No Comments

Got some spare time this weekend? Why not build yourself a working rover from plans provided by NASA? The spaceniks at the Jet Propulsion Laboratory have all the plans, code, and materials for you to peruse and use — just make sure you’ve got $2,500 and a bit of engineering know-how. This thing isn’t made out of Lincoln Logs.

The story is this: after Curiosity landed on Mars, JPL wanted to create something a little smaller and less complex that it could use for educational purposes. ROV-E, as they called this new rover, traveled with JPL staff throughout the country.

Unsurprisingly, among the many questions asked was often whether a class or group could build one of their own. The answer, unfortunately, was no: though far less expensive and complex than a real Mars rover, ROV-E was still too expensive and complex to be a class project. So JPL engineers decided to build one that wasn’t.

The result is the JPL Open Source Rover, a set of plans that mimic the key components of Curiosity but are simpler and use off-the-shelf components.

“We wanted to give back to the community and lower the barrier of entry by giving hands on experience to the next generation of scientists, engineers, and programmers,” said JPL’s Tom Soderstrom in a post announcing the OSR.

The OSR uses Curiosity-like “Rocker-Bogie” suspension, corner steering and a pivoting differential, allowing movement over rough terrain, and its brain is a Raspberry Pi. You can find all the parts in the usual supply catalogs and hardware stores, but you’ll also need a set of basic tools: a bandsaw to cut metal, a drill press (probably a good idea), a soldering iron, snips and wrenches, and so on.

“In our experience, this project takes no less than 200 person-hours to build, and depending on the familiarity and skill level of those involved could be significantly more,” the project’s creators write on the GitHub page.

So basically, unless you’re literally rocket scientists, expect double that, although JPL notes that it did work with schools to adjust the building process and instructions.

There’s flexibility built into the plans, too. So you can load custom apps, connect payloads and sensors to the brain, and modify the mechanics however you’d like. It’s open source, after all. Make it your own.
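Since the brain is a Raspberry Pi, “connect payloads and sensors” can be as simple as wiring something to the GPIO header and polling it. The snippet below is a hypothetical add-on, not part of JPL’s published rover code; the pin number and sensor are made up.

```python
# Poll a hypothetical digital obstacle sensor wired to the rover's Raspberry Pi.
import time
import RPi.GPIO as GPIO

SENSOR_PIN = 17  # made-up BCM pin number for the add-on sensor

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

try:
    while True:
        print("obstacle!" if GPIO.input(SENSOR_PIN) else "clear")
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```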

“We released this rover as a base model. We hope to see the community contribute improvements and additions, and we’re really excited to see what the community will add to it,” said project manager Mik Cox. “I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others.”


OpenAI’s robotic hand doesn’t need humans to teach it human behaviors

Posted by | artificial intelligence, Gadgets, OpenAI, robotics, science | No Comments

Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and only gets more complex and variable as you grow up. This complexity makes it a difficult skill for machines to teach themselves, but researchers at Elon Musk- and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.

Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.

Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. And furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.

The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.

The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in-hand — but remember, when it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)

In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
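OpenAI’s training code isn’t reproduced in this article, but the randomization idea itself is simple to sketch: before every simulated episode, resample the physical and visual parameters so the policy can’t latch onto any single version of the world. The parameter names and ranges below are illustrative guesses, not OpenAI’s actual values.

```python
import random

def randomize_episode():
    """Sample a fresh set of simulation parameters for one training episode."""
    return {
        "fingertip_friction": random.uniform(0.5, 1.5),   # relative to nominal
        "object_mass_kg": random.uniform(0.03, 0.30),
        "object_size_scale": random.uniform(0.95, 1.05),
        "camera_jitter_deg": random.uniform(-3.0, 3.0),
        "light_intensity": random.uniform(0.4, 1.6),
        "surface_rgb": tuple(random.random() for _ in range(3)),
    }

# Outline of the loop: every episode sees a slightly different "reality", so the
# policy that survives all of them transfers to the single real robot hand.
# for episode in range(num_episodes):
#     sim.reset(**randomize_episode())    # hypothetical simulator API
#     collect_rollout_and_update_policy(sim)
```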

They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.

The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and single finger while using the rest to spin to the desired orientation.

What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.

This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.

As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.


NASA’s 3D-printed Mars Habitat competition doles out prizes to concept habs

Posted by | 3d printing, Gadgets, Government, hardware, mars, NASA, science, Space | No Comments

A multi-year NASA contest to design a 3D-printable Mars habitat using on-planet materials has just hit another milestone — and a handful of teams have taken home some cold, hard cash. This more laid-back phase had contestants designing their proposed habitat using architectural tools, with the five winners set to build scale models next year.

Technically this is the first phase of the third phase — the (actual) second phase took place last year and teams took home quite a bit of money.

The teams had to put together realistic 3D models of their proposed habitats, and not just in Blender or something. They used Building Information Modeling software that would require these things to be functional structures designed down to a particular level of detail — so you can’t just have 2D walls made of “material TBD,” and you have to take into account thickness from pressure sealing, air filtering elements, heating, etc.
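As a rough illustration of what “a particular level of detail” means in practice, each wall in such a model carries structured attributes rather than a flat outline. The fields below are invented for illustration and are not the contest’s actual BIM schema or any team’s specification.

```python
from dataclasses import dataclass

@dataclass
class WallAssembly:
    """Toy stand-in for the attributes a habitat wall element has to carry."""
    material: str                 # e.g. a printable regolith-based concrete
    thickness_m: float            # structural thickness plus pressure margin
    pressure_rating_kpa: float    # must hold a breathable internal atmosphere
    insulation_r_value: float
    embedded_systems: tuple       # air filtration, heating loops, wiring runs

example_wall = WallAssembly(
    material="regolith-polymer composite",   # illustrative only
    thickness_m=0.30,
    pressure_rating_kpa=101.3,               # roughly one Earth atmosphere
    insulation_r_value=30.0,
    embedded_systems=("air filtration duct", "heating loop"),
)
```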

The habitats had to have at least a thousand square feet of space, enough for four people to live for a year, along with room for the machinery and paraphernalia associated with, you know, living on Mars. They must be largely assembled autonomously, at least enough that humans can occupy them as soon as they land. They were judged on completeness, layout, 3D-printing viability and aesthetics.

So although the images you see here look rather sci-fi, keep in mind they were also designed using industrial tools and vetted by experts with “a broad range of experience from Disney to NASA.” These are going to Mars, not onto a paperback cover. And they’ll have to be built in miniature for real next year, so they’d better be realistic.

The five winning designs embody a variety of approaches. Honestly all these videos are worth a watch; you’ll probably learn something cool, and they really give an idea of how much thought goes into these designs.

Zopherus has the whole print taking place inside the body of a large lander, which brings its own high-strength printing mix to reinforce the “Martian concrete” that will make up the bulk of the structure. When it’s done printing and embedding the pre-built items like airlocks, it lifts itself up, moves over a few feet, and does it again, creating a series of small rooms. (They took first place and essentially tied the next team for take-home cash, a little under $21K.)

AI SpaceFactory focuses on the basic shape of the vertical cylinder as both the most efficient use of space and also one of the most suitable for printing. They go deep on the accommodations for thermal expansion and insulation, but also have thought deeply about how to make the space safe, functional, and interesting. This one is definitely my favorite.

Kahn-Yates has a striking design, with a printed structural layer giving way to a high-strength plastic layer that lets the light in. Their design is extremely spacious but in my eyes not very efficiently allocated. Who’s going to bring apple trees to Mars? Why have a spiral staircase with such a huge footprint? Still, if they could pull it off, this would allow for a lot of breathing room, something that will surely be of great value during a year or multi-year stay on the planet.

SEArch+/Apis Cor has carefully considered the positioning and shape of its design to maximize light and minimize radiation exposure. There are two independent pressurized areas — everyone likes redundancy — and it’s built using a sloped site, which may expand the possible locations. It looks a little claustrophobic, though.

Northwestern University has a design that aims for simplicity of construction: an inflatable vessel provides the base for the printer to create a simple dome with reinforcing cross-beams. This practical approach no doubt won them points, and the inside, while not exactly roomy, is also practical in its layout. As AI SpaceFactory pointed out, a dome isn’t really the best shape (lots of wasted space) but it is easy and strong. A couple of these connected at the ends wouldn’t be so bad.

The teams split a total of $100K for this phase, and are now moving on to the hard part: actually building these things. In spring of 2019 they’ll be expected to have a working custom 3D printer that can create a 1:3 scale model of their habitat. It’s difficult to say who will have the worst time of it, but I’m thinking Kahn-Yates (that holey structure will be a pain to print) and SEArch+/Apis (slope, complex eaves and structures).

The purse for the real-world construction is an eye-popping $2 million, so you can bet the competition will be fierce. In the meantime, seriously, watch those videos above, they’re really interesting.


SpaceX lands Falcon 9 booster on Just Read The Instructions drone ship

Posted by | booster, falcon, Falcon 9, Gadgets, science, Space, spacecraft, spaceflight, SpaceX, transport | No Comments

SpaceX confirmed on Twitter this morning that it recovered the booster from the latest Falcon 9 launch. Shortly after launching from Vandenberg Air Force Base in Southern California at 7:39 AM ET this morning, the booster stage landed on the Just Read The Instructions drone ship. The company will now try to catch the rocket’s fairing with a giant net attached to the ship Mr. Steven.

Despite challenging weather conditions, Falcon 9 first stage booster landed on Just Read the Instructions.

— SpaceX (@SpaceX) July 25, 2018

SpaceX has become more adept at landing its booster rockets, but it’s still a spectacle every time it happens. This landing was extra special, as the winds were gusting around the time of the launch.

The rocket company has so far been less successful with catching the payload shrouds. SpaceX’s high-speed recovery boat Mr. Steven took to the seas this time around with a larger net in the hopes of recovering the fairings. Reusing as much as possible is critical to SpaceX’s mission to lower the cost of space flight.

Today’s launch was SpaceX’s seventh mission for its client Iridium, which contracted with SpaceX to launch 75 satellites into orbit. According to SpaceX, today’s payload of Iridium satellites has so far deployed without issue. SpaceX is contracted for one more launch with Iridium.

This was SpaceX’s 14th launch of 2018.
