This robot crawls along wind turbine blades looking for invisible flaws

Wind turbines are a great source of clean power, but their apparent simplicity — just a big thing that spins — belies complex systems that wear down like any other and can fail with disastrous consequences. Sandia National Labs researchers have created a robot that can autonomously inspect the enormous blades of turbines, helping keep our green power infrastructure in good shape.

The enormous towers that collect energy from wind currents are often only in our view for a few minutes as we drive past. But they must stand for years through inclement weather, temperature extremes, and naturally — being the tallest things around — lightning strikes. Combine that with normal wear and tear and it’s clear these things need to be inspected regularly.

But such inspections can be both difficult and superficial. The blades themselves are among the largest single objects manufactured on the planet, and they’re often installed in distant or inaccessible areas, like the many you see offshore.

“A blade is subject to lightning, hail, rain, humidity and other forces while running through a billion load cycles during its lifetime, but you can’t just land it in a hangar for maintenance,” explained Sandia’s Joshua Paquette in a news release. In other words, not only do crews have to go to the turbines to inspect them, but they often have to do those inspections in place — on structures hundreds of feet tall and potentially in dangerous locations.

Using a crane is one option, but the blade can also be oriented downwards so an inspector can rappel along its length. Even then, the inspection may be no more than eyeballing the surface.

“In these visual inspections, you only see surface damage. Often though, by the time you can see a crack on the outside of a blade, the damage is already quite severe,” said Paquette.

Obviously better and deeper inspections are needed, and that’s what the team decided to work on, with partners International Climbing Machines and Dophitech. The result is this crawling robot, which can move along a blade slowly but surely, documenting it both visually and using ultrasonic imaging.

A visual inspection will see cracks or scuffs on the surface, but the ultrasonics penetrate deep into the blades, making them capable of detecting damage to interior layers well before it’s visible outside. And it can do it largely autonomously, moving a bit like a lawnmower: side to side, bottom to top.

Of course at this point it does it quite slowly and requires human oversight, but that’s because it’s fresh out of the lab. In the near future teams could carry around a few of these things, attach one to each blade, and come back a few hours or days later to find problem areas marked for closer inspection or scanning. Perhaps a crawler robot could even live onboard the turbine and scurry out to check each blade on a regular basis.

Another approach the researchers took was drones — a natural enough solution, since the versatile fliers have been pressed into service for inspection of many other structures that are dangerous for humans to get around: bridges, monuments, and so on.

These drones would be equipped with high-resolution cameras and infrared sensors that detect the heat signatures in the blade. The idea is that as warmth from sunlight diffuses through the material of the blade, it will do so irregularly in spots where damage below the surface has changed its thermal properties.

As automation of these systems improves, the opportunities open up: A quick pass by a drone could let crews know whether any particular tower needs closer inspection, then trigger the live-aboard crawler to take a closer look. Meanwhile the humans are on their way, arriving to a better picture of what needs to be done, and no need to risk life and limb just to take a look.

Crowdfunded spacecraft LightSail 2 prepares to go sailing on sunlight

Among the many spacecraft and satellites ascending to space on Monday’s Falcon Heavy launch, the Planetary Society’s LightSail 2 may be the most interesting. If all goes well, a week from launch it will be moving through space — slowly, but surely — on nothing more than the force exerted on it by sunlight.

LightSail 2 doesn’t have solar-powered engines, or use solar energy or heat for some secondary purpose; it will literally be propelled by the physical force of photons hitting its immense shiny sail. Not solar wind, mind you — that’s a different thing altogether.

It’s an idea, explained Planetary Society CEO and acknowledged Science Guy Bill Nye in a press call ahead of the launch, that goes back centuries.

“It really goes back to the 1600s,” he said; Kepler deduced that a force from the sun must cause comet tails and other effects, and “he speculated that brave people would one day sail the void.”

So they might, as more recent astronomers and engineers have pondered the possibility more seriously.

“I was introduced to this in the 1970s, in the disco era. I was in Carl Sagan’s astronomy class… wow, 42 years ago, and he talked about solar sailing,” Nye recalled. “I joined the Planetary Society when it was formed in 1980, and we’ve been talking about solar sails around here ever since then. It’s really a romantic notion that has tremendous practical applications; there are just a few missions that solar sails are absolutely ideal for.”

Those would primarily be long-term, medium-orbit missions where a craft needs to stay in an Earth-like orbit, but still get a little distance away from the home planet — or, in the future, long-distance missions where slow and steady acceleration from the sun or a laser would be more practical than another propulsion method.

Mission profile

The eagle-eyed among you may have spotted the “2” in the name of the mission. LightSail 2 is indeed the second of its type; the first launched in 2015, but was not planned to be anything more than a test deployment that would burn up after a week or so.

That mission had some hiccups, with the sail not deploying to its full extent and a computer glitch compromising communications with the craft. It was not meant to fly via solar sailing, and did not.

“We sent the CubeSat up, we checked out the radio, the communications, the overall electronics, and we deployed the sail and we got a picture of that deployed sail in space,” said COO Jennifer Vaughn. “That was purely a deployment test; no solar sailing took place.”

The spacecraft itself, minus the sail, of course.

But it paved the way for its successor, which will attempt this fantastical form of transportation. Other craft have done so, most notably JAXA’s IKAROS mission to Venus, which was quite a bit larger — though as LightSail 2’s creators pointed out, not nearly as efficient as their craft — and had a very different mission.

The brand new spacecraft, loaded into a 3U CubeSat enclosure — that’s about the size of a loaf of bread — is piggybacking on an Air Force payload going up to an altitude of about 720 kilometers. There it will detach and float freely for a week to get away from the rest of the payloads being released.

Once it’s safely on its own, it will fire out from its carrier craft and begin to unfurl the sail. From that loaf-sized package will emerge an expanse of reflective Mylar with an area of 32 square meters — about the size of a boxing ring.

Inside the spacecraft’s body is also what’s called a reaction wheel, which can be spun up or slowed down in order to impart the opposite force on the craft, causing it to change its attitude in space. By this method LightSail 2 will continually orient itself so that the photons striking it propel it in the desired direction, nudging it into the desired orbit.
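To make the reaction wheel idea concrete, here is a minimal sketch (in Python) of the momentum exchange involved. The inertia figures are invented for illustration and are not LightSail 2’s actual specifications.

    import math

    # Toy model of a reaction wheel: angular momentum is conserved, so spinning the
    # wheel one way rotates the spacecraft body the other way. The inertia values
    # below are illustrative assumptions, not real LightSail 2 numbers.

    WHEEL_INERTIA = 2e-5   # kg*m^2, assumed small momentum wheel
    BODY_INERTIA = 0.05    # kg*m^2, assumed CubeSat body plus sail, about one axis

    def body_rate_change(wheel_rpm_change: float) -> float:
        """Degrees per second the body picks up when the wheel changes speed."""
        wheel_rad_s = wheel_rpm_change * 2 * math.pi / 60.0
        body_rad_s = -WHEEL_INERTIA * wheel_rad_s / BODY_INERTIA  # momentum swap
        return math.degrees(body_rad_s)

    # Spinning the wheel up by 1,000 rpm slews the body a couple of degrees per
    # second in the opposite direction, which is plenty to swing the sail toward
    # or away from the sun over the course of an orbit.
    print(f"{body_rate_change(1000):+.1f} deg/s")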

1 HP (housefly power) engine

The thrust produced, the team explained, is very small — as you might expect. Photons have no mass, but they do (somehow) have momentum. Not a lot, to be sure, but it’s greater than zero, and that’s what counts.

“In terms of the amount of force that solar pressure is going to exert on us, it’s on the micronewton level,” said LightSail project manager Dave Spencer. “It’s very tiny compared to chemical propulsion, very small even compared to electric propulsion. But the key for solar sailing is that it’s always there.”

“I have many numbers that I love,” cut in Nye, and detailed one of them: “It’s nine micronewtons per square meter. So if you have 32 square meters you get about a hundred micronewtons. It doesn’t sound like much, but as Dave points out, it’s continuous. Once a rocket engine stops, when it runs out of fuel, it’s done. But a solar sail gets a continuous push day and night. Wait…” (He then argued with himself about whether it would experience night — it will, as you see in the image below.)

Bruce Betts, chief scientist for LightSail, chimed in as well, to make the numbers a bit more relatable: “The total force on the sail is approximately equal to the weight of a house fly on your hand on Earth.”

Yet keep that housefly’s worth of force pushing, second after second, for hours at a time, and pretty soon you have built up a considerable amount of speed. This mission is meant to find out whether we can harness that force.
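As a rough sanity check on those numbers, here is a back-of-the-envelope sketch; the irradiance figure and the roughly 5 kg spacecraft mass are assumptions, and an ideal sun-facing sail comes out at a few hundred micronewtons, with real-world reflectivity and pointing bringing the effective figure down toward the values quoted above.

    # Idealized solar-sail arithmetic. Irradiance and spacecraft mass are assumed
    # values; real thrust depends on reflectivity, sail orientation and the time
    # spent in Earth's shadow.

    SOLAR_IRRADIANCE = 1361.0   # W/m^2 near Earth (assumed)
    SPEED_OF_LIGHT = 2.998e8    # m/s
    SAIL_AREA = 32.0            # m^2, from the article
    CRAFT_MASS = 5.0            # kg, assumed for a 3U CubeSat plus sail hardware

    pressure = 2 * SOLAR_IRRADIANCE / SPEED_OF_LIGHT  # ~9 micronewtons per square meter
    force = pressure * SAIL_AREA                      # a few hundred micronewtons at best
    accel = force / CRAFT_MASS                        # m/s^2
    delta_v_per_day = accel * 86_400                  # one day of uninterrupted pushing

    print(f"pressure: {pressure * 1e6:.1f} uN/m^2")
    print(f"force:    {force * 1e6:.0f} uN")
    print(f"delta-v:  {delta_v_per_day:.1f} m/s per day")

Even under those generous assumptions the gain is only a few meters per second per day, which is why the continuous, fuel-free nature of the push is the whole point.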

“We’re very excited about this launch,” said Nye, “because we’re going to get to a high enough altitude to get away from the atmosphere, far enough that we’re really gonna be able to build orbital energy and take some, I hope, inspiring pictures.”

Second craft, same (mostly) as the last

The LightSail going up this week has some improvements over the last one, though overall it’s largely the same — and a relatively simple, inexpensive craft at that, the team noted. Crowdfunding and donations over the last decade have provided quite a bit of cash to pursue this project, but it still is only a small fraction of what NASA might have spent on a similar mission, Spencer pointed out.

“This mission is going to be much more robust than the previous LightSail 1, but as we said previously, it’s done by a small team,” he said. “We’ve had a very small budget relative to our NASA counterparts, probably 1/20th of the budget that a similar NASA mission would have. It’s a low-cost spacecraft.”

Annotated image of LightSail 2, courtesy of Planetary Society.

But the improvements are specifically meant to address the main problems encountered by LightSail 2’s predecessor.

Firstly, the computer inside has been upgraded to be more robust (though not radiation-hardened) and given the ability to sense faults and reboot if necessary — they won’t have to wait, as they did for LightSail 1, for a random cosmic ray to strike the computer and cause a “natural reboot.” (Yes, really.)

The deployment of the sail itself has also been improved. The previous one only extended to about 90% of its full width and couldn’t be adjusted after the fact. Subsequent tests have been done, Betts told me, to determine exactly how many revolutions the motor must make to extend the sail to 100%. Not only that, but they have put markings on the extending booms or rods that will help double-check how deployment has gone.

“We also have the capability on orbit, if it looks like it’s not fully extended, we can extend it a little bit more,” he said.

Once it’s all out there, it’s uncharted territory. No one has attempted this kind of mission before; even IKAROS had a totally different flight profile. The team is hoping their sensors and software are up to the task — and it should be clear whether that’s the case within a few hours of unfurling the sail.

It’s still mainly an experiment, of course, and what the team learns from it will go into any future LightSail mission they attempt; they’ll also share it with the spaceflight community and others attempting to sail on sunlight.

“We all know each other and we all share information,” said Nye. “And it really is — I’ve said it as much as I can — it’s really exciting to be flying this thing at last. It’s almost 2020 and we’ve been talking about it for, well, for 40 years. It’s very, very cool.”

LightSail 2 will launch aboard a SpaceX Falcon Heavy no sooner than June 24th. Keep an eye on the site for the latest news and a link to the live stream when it’s almost time for takeoff.

Tripping grad students over and over for science (and better prosthetic limbs)

Prosthetic limbs are getting better, but not as quickly as you’d think. They’re not as smart as our real limbs, which (directed by the brain) do things like automatically stretch out to catch ourselves when we fall. This particular “stumble reflex” was the subject of an interesting study at Vanderbilt that required its subjects to fall down… a lot.

The problem the team is aiming to help alleviate is simply that users of prosthetic limbs fall, as you might guess, more than most, and when they do fall, it can be very difficult to recover, because an artificial leg — especially for above-the-knee amputations — doesn’t react the same way a natural leg would.

The idea, explained lead researcher and mechanical engineering Professor Michael Goldfarb, is to determine what exactly goes into a stumble response and how to recreate that artificially.

“An individual who stumbles will perform different actions depending on various factors, not all of which are well known. The response changes, because the strategy that is most likely to prevent a fall is highly dependent on the ‘initial conditions’ at the time of stumble,” he told TechCrunch in an email. “We are hoping to construct a model of which factors determine the nature of the stumble response, so when a stumble occurs, we can use the various sensors on a robotic prosthetic leg to artificially reconstruct the reflex in order to provide a response that is effective and consistent with the biological reflex loop.”

The experimental setup looked like this. Subjects were put on a treadmill and told to walk forward normally; a special pair of goggles prevented them from looking down, arrows on a display kept them going straight, and a simple mental task (count backwards by sevens) kept their brain occupied.

Meanwhile an “obstacle delivery apparatus” bided its time, waiting for the best opportunity to slip a literal stumbling block onto the treadmill for the person to trip over.

When this happened, the person inevitably stumbled, though a harness prevented them from actually falling and hurting themselves. But as they stumbled, their movements were captured minutely by a motion capture rig.

After 196 stumbling blocks and 190 stumbles, the researchers had collected a great deal of data on how exactly people move to recover from a stumble. Where do their knees go relative to their ankles? How do they angle their feet? How much force is taken up by the other foot?

Exactly how this data would be integrated with a prosthesis is highly dependent on the nature of the artificial limb and the conditions of the person using it. But having this data, and perhaps feeding it to a machine learning model, will help expose patterns that can be used to inform emergency prosthetic movements.

It could also be used for robotics: “The model could be used directly to program reflexes in a biped,” said Goldfarb. Those human-like motions we see robots undertaking could be even more human when directly based on the original. There’s no rush there — they might be a little too human already.

The research describing the system and the data set, which they’re releasing for free to anyone who’d like to use it, appeared in the Journal of NeuroEngineering and Rehabilitation.

NASA’s X-59 supersonic jet will have a 4K TV instead of a forward window

NASA’s X-59 QueSST experimental quiet supersonic aircraft will have a cockpit like no other — featuring a big 4K screen where you’d normally have a front window. Why? Because this is one weird-looking plane.

The X-59, which is being developed by Lockheed Martin on a $247 million budget, is meant to go significantly faster than sound without producing a sonic boom, or indeed any noise “louder than a car door closing,” at least to observers on the ground.

Naturally in order to do this the craft has to be as aerodynamic as possible, which precludes the cockpit bump often found in fighter jets. In fact, the design can’t even have the pilot up front with a big window, because it would likely be far too narrow. Check out these lines:

The cockpit is more like a section taken out of the plane just over the leading edge of the rather small and exotically shaped wings. So while the view out the sides will be lovely, the view forward would be nothing but nose.

To fix that, the plane will be equipped with several displays: the lower ones are much like what you might expect on a modern aircraft, while the top one is a 4K monitor that’s part of what’s called the eXternal Visibility System, or XVS. It shows imagery stitched together from two cameras on the craft’s exterior, combined with high-definition terrain data loaded up ahead of time.

It’s not quite the real thing, but pilots spend a lot of time in simulators (as you can see here), so they’ll be used to it. And the real world is right outside the other windows if they need a reality check.

Lockheed and NASA’s plane is currently in the construction phase, though no doubt some parts are still being designed, as well. The program has committed to a 2021 flight date, an ambitious goal considering this is the first such experimental, or X-plane, the agency has developed in some 30 years. If successful, it could be the precursor to other quiet supersonic craft and could bring back supersonic overland flight in the future.

That’s if Boom doesn’t beat them to it.

KickSat-2 project launches 105 cracker-sized satellites

Move over, Starlink. SpaceX’s global internet play might have caught the world’s attention with its 60-satellite launch last month, but little did we know that it had already been upstaged — at least in terms of sheer numbers. The KickSat-2 project put 105 tiny “femtosats” into space at once months earlier, the culmination of a years-long project begun by a grad student.

KickSat-2 was the second attempt by Zac Manchester, now a professor at Stanford, to test what he believes is an important piece of the coming new space economy: ultra-tiny satellites.

Sure, the four-inch CubeSat standard is small… and craft like Swarm Technologies’ SpaceBEEs are even smaller. But the satellites tested by Manchester are tiny. We’re talking Triscuit size here — perhaps Wheat Thin, or even Cheez-It.

The KickSat project started back in 2011, when Manchester and his colleagues ran a Kickstarter to raise funds for about 300 “Sprite” satellites that would be launched to space and deployed on behalf of backers. It was a success, but unfortunately a glitch after launch caused the satellites to burn up before they could be deployed. Manchester was undeterred and the project continued.

He worked with Cornell University and NASA Ames to redesign the setup, and as part of that he and collaborator Andy Filo collected a prize for their clever 3D-printed deployment mechanism. The Sprites themselves are relatively simple things: essentially an unshielded bit of PCB with a solar panel, antennas and electronics on board to send and receive signals.

The “mothership” launched in November to the ISS, where it sat for several months awaiting an opportunity to be deployed. That opportunity came on March 17: all 105 Sprites were sprung out into low Earth orbit, where they began communicating with each other and (just barely) with ground stations.

Deployment would have looked like this… kind of. Probably a little slower.

This isn’t the start of a semi-permanent thousands-strong constellation, though — the satellites all burned up a few days later, as planned.

“This was mostly a test of deployment and communication systems for the Sprites,” Manchester explained in an email to TechCrunch. The satellites were testing two different signals: “Specially designed CDMA signals that enable hundreds of Sprites to simultaneously communicate with a single ground station at very long range and with very low power,” and “simpler signals for short-range networking between Sprites in orbit.”
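That first signal type is the gist of code-division multiple access: every transmitter spreads its bits with its own code, and a receiver correlating against one code picks that sender out of the combined signal. Here is a minimal toy sketch of the idea; the codes and bit patterns are made up and have nothing to do with the Sprites’ actual waveform design.

    import numpy as np

    # Two "Sprites" transmit at the same time; the ground station separates them by
    # correlating against each one's spreading code. Codes here are orthogonal
    # Walsh-style sequences chosen purely for illustration.

    CODE_A = np.array([1, -1, 1, -1, 1, -1, 1, -1])
    CODE_B = np.array([1, 1, -1, -1, 1, 1, -1, -1])

    def spread(bits, code):
        """Map bits {0,1} to {-1,+1} symbols and multiply by the spreading code."""
        symbols = np.repeat(2 * np.array(bits) - 1, len(code))
        return symbols * np.tile(code, len(bits))

    def despread(signal, code, n_bits):
        """Correlate the combined signal against one code to recover that sender."""
        chips = signal.reshape(n_bits, len(code))
        return (chips @ code > 0).astype(int)

    bits_a, bits_b = [1, 0, 1], [0, 0, 1]
    on_air = spread(bits_a, CODE_A) + spread(bits_b, CODE_B)  # both transmit at once

    print(despread(on_air, CODE_A, 3))  # -> [1 0 1]
    print(despread(on_air, CODE_B, 3))  # -> [0 0 1]

The spreading also buys processing gain, which is part of how such low-power transmitters can be heard at long range at all.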

The Cygnus spacecraft with the KickSat-2 CubeSat attached — it’s the little gold thing right by where the docking arm is attached.

This proof of concept is an important one — it seems logical and practical to pack dozens or hundreds of these things into future missions, where they can be released into controlled trajectories providing sensing or communications relay capabilities to other spacecraft. And, of course, as we’ve already seen, the smaller and cheaper the spacecraft, the easier it is for people to access space for any reason: scientific, economic or just for the heck of it.

“We’ve shown that it’s possible for swarms of cheap, tiny satellites to one day carry out tasks now done by larger, costlier satellites, making it affordable for just about anyone to put instruments or experiments into orbit,” Manchester said in a Stanford news release. With launch costs dropping, it might not be long before you’ll be able to take ownership of a Sprite of your own.

Teams autonomously mapping the depths take home millions in Ocean Discovery Xprize

There’s a whole lot of ocean on this planet, and we don’t have much of an idea what’s at the bottom of most of it. That could change with the craft and techniques created during the Ocean Discovery Xprize, which had teams competing to map the sea floor quickly, precisely and autonomously. The winner just took home $4 million.

A map of the ocean would be valuable in and of itself, of course, but any technology used to do so could be applied in many other ways, and who knows what potential biological or medical discoveries hide in some nook or cranny a few thousand fathoms below the surface?

The prize, sponsored by Shell, started back in 2015. The goal was, ultimately, to create a system that could map hundreds of square kilometers of the sea floor at a five-meter resolution in less than a day — oh, and everything has to fit in a shipping container. For reference, existing methods do nothing like this, and are tremendously costly.

But as is usually the case with this type of competition, the difficulty did not discourage the competitors — it only spurred them on. Since 2015, then, the teams have been working on their systems and traveling all over the world to test them.

Originally the teams were to test in Puerto Rico, but after the devastating hurricane season of 2017, the whole operation was moved to the Greek coast. Ultimately after the finalists were selected, they deployed their craft in the waters off Kalamata and told them to get mapping.

Team GEBCO’s surface vehicle

“It was a very arduous and audacious challenge,” said Jyotika Virmani, who led the program. “The test itself was 24 hours, so they had to stay up, then immediately following that was 48 hours of data processing, after which they had to give us the data. It takes more traditional companies about two weeks or so to process data for a map once they have the raw data — we’re pushing for real time.”

This wasn’t a test in a lab bath or pool. This was the ocean, and the ocean is a dangerous place. But amazingly there were no disasters.

“Nothing was damaged, nothing imploded,” she said. “We ran into weather issues, of course. And we did lose one piece of technology that was subsequently found by a Greek fisherman a few days later… but that’s another story.”

At the start of the competition, Virmani said, there was feedback from the entrants that the autonomous piece of the task was simply not going to be possible. But the last few years have proven it to be so, given that the winning team not only met but exceeded the requirements of the task.

“The winning team mapped more than 250 square kilometers in 24 hours, at the minimum of five meters resolution, but around 140 was more than five meters,” Virmani told me. “It was all unmanned: An unmanned surface vehicle that took the submersible out, then recovered it at sea, unmanned again, and brought it back to port. They had such great control over it — they were able to change its path and its programming throughout that 24 hours as they needed to.” (It should be noted that unmanned does not necessarily mean totally hands-off — the teams were permitted a certain amount of agency in adjusting or fixing the craft’s software or route.)

A five-meter resolution, if you can’t quite picture it, would produce a map of a city that shows buildings and streets clearly but is too coarse to catch, say, cars or street signs. When you’re trying to map two-thirds of the globe, though, this resolution is more than enough — and infinitely better than the nothing we currently have. (Unsurprisingly, it’s also certainly enough for an oil company like Shell to prospect new deep-sea resources.)

The winning team was GEBCO, composed of veteran hydrographers — ocean mapping experts, you know. In addition to the highly successful unmanned craft (Sea-Kit, already cruising the English Channel for other purposes), the team did a lot of work on the data-processing side, creating a cloud-based solution that helped them turn the maps around quickly. (That may also prove to be a marketable service in the future.) They were awarded $4 million, in addition to their cash for being selected as a finalist.

The runner-up was Kuroshio, which had great resolution but was unable to map the full 250 square kilometers due to weather problems. They snagged a million.

A bonus prize for having the submersible track a chemical signal to its source didn’t exactly have a winner, but the teams’ entries were so impressive that the judges decided to split the million between the Tampa Deep Sea Xplorers and Ocean Quest, which amazingly enough is made up mostly of middle-schoolers. The latter gets $800,000, which should help pay for a few new tools in the shop there.

Lastly, a $200,000 innovation prize was given to Team Tao out of the U.K., which had a very different style to its submersible that impressed the judges. While most of the competitors opted for a craft that went “lawnmower-style” above the sea floor at a given depth, Tao’s craft dropped down like a plumb bob, pinging the depths as it went down and back up before moving to a new spot. This provides a lot of other opportunities for important oceanographic testing, Virmani noted.

Having concluded the prize, the organization has just a couple more tricks up its sleeve. GEBCO, which stands for General Bathymetric Chart of the Oceans, is partnering with The Nippon Foundation on Seabed 2030, an effort to map the entire sea floor over the next decade and provide that data to the world for free.

And the program is also — why not? — releasing an anthology of short sci-fi stories inspired by the idea of mapping the ocean. “A lot of our current technology is from the science fiction of the past,” said Virmani. “So we told the authors, imagine we now have a high-resolution map of the sea floor, what are the next steps in ocean tech and where do we go?” The resulting 19 stories, written from all 7 continents (yes, one from Antarctica), will be available June 7.

This robot learns its two-handed moves from human dexterity

If robots are really to help us out around the house or care for our injured and elderly, they’re going to want two hands… at least. But using two hands is harder than we make it look — so this robotic control system learns from humans before attempting to do the same.

The idea behind the research, from the University of Wisconsin-Madison, isn’t to build a two-handed robot from scratch, but simply to create a system that understands and executes the same type of manipulations that we humans do without thinking about them.

For instance, when you need to open a jar, you grip it with one hand and move it into position, then tighten that grip as the other hand takes hold of the lid and twists or pops it off. There’s so much going on in this elementary two-handed action that it would be hopeless to ask a robot to do it autonomously right now. But that robot could still have a general idea of why this type of manipulation is done on this occasion, and do what it can to pursue it.

The researchers first had humans wearing motion capture equipment perform a variety of simulated everyday tasks, like stacking cups, opening containers and pouring out the contents, and picking up items with other things balanced on top. All this data — where the hands go, how they interact and so on — was chewed up and ruminated on by a machine learning system, which found that people tended to do one of four things with their hands:

  • Self-handover: This is where you pick up an object and put it in the other hand so it’s easier to put it where it’s going, or to free up the first hand to do something else.
  • One hand fixed: An object is held steady by one hand providing a strong, rigid grip, while the other performs an operation on it like removing a lid or stirring the contents.
  • Fixed offset: Both hands work together to pick something up and rotate or move it.
  • One hand seeking: Not actually a two-handed action, but the principle of deliberately keeping one hand out of action while the other finds the object required or performs its own task.

The robot put this knowledge to work not in doing the actions itself — again, these are extremely complex motions that current AIs are incapable of executing — but in its interpretations of movements made by a human controller.

You would think that when a person is remotely controlling a robot, it would just mirror the person’s movements exactly. In the tests, the robot does exactly that, as a baseline showing how it performs without any knowledge of these “bimanual actions,” and many of those actions turn out to be simply impossible that way.

Think of the jar-opening example. We know that when we’re opening the jar, we have to hold one side steady with a stronger grip and may even have to push back with the jar hand against the movement of the opening hand. If you tried to do this remotely with robotic arms, that information is not present any more, and the one hand will likely knock the jar out of the grip of the other, or fail to grip it properly because the other isn’t helping out.

The system created by the researchers recognizes when one of the four actions above is happening, and takes measures to make sure that they’re a success. That means, for instance, being aware of the pressures exerted on each arm by the other when they pick up a bucket together. Or providing extra rigidity to the arm holding an object while the other interacts with the lid. Even when only one hand is being used (“seeking”), the system knows that it can deprioritize the movements of the unused hand and dedicate more resources (be it body movements or computational power) to the working hand.

In videos of demonstrations, it seems clear that this knowledge greatly improves the success rate of the attempts by remote operators to perform a set of tasks meant to simulate preparing a breakfast: cracking (fake) eggs, stirring and shifting things, picking up a tray with glasses on it and keeping it level.

Of course this is all still being done by a human, more or less — but the human’s actions are being augmented and re-interpreted into something more than simple mechanical reproduction.

Doing these tasks autonomously is a long way off, but research like this forms the foundation for that work. Before a robot can attempt to move like a human, it has to understand not just how humans move, but why they do certain things in certain circumstances and, furthermore, what important processes may be hidden from obvious observation — things like planning the hand’s route, choosing a grip location and so on.

The Madison team was led by Daniel Rakita; their paper describing the system is published in the journal Science Robotics.

Delane Parnell’s plan to conquer amateur esports

Most of the buzz about esports focuses on high-profile professional teams and audiences watching live streams of those professionals.

What gets ignored is the entire base of amateurs wanting to compete in esports below the professional tier. This is like talking about the NBA and the value of its sponsorships and broadcast rights as if that is the entirety of the basketball market in the US.

Los Angeles-based PlayVS (pronounced “play versus”) wants to become the dominant platform for amateur esports, starting at the high school level. The company raised $46 million last year—its first year operating—with the vision that owning the infrastructure for competitions and expanding it to encompass other social elements of gaming can make it the largest gaming company in the world.

I recently sat down with Founder & CEO Delane Parnell to talk about his company’s formation and growth strategy. Below is the transcript of our conversation (edited for length and clarity):

Founding PlayVS

Eric P: You have a fascinating background as a serial entrepreneur while you were a teenager.

Delane P.: I grew up on the west side of Detroit and started working at the cell phone store of a family friend when I was 13. When I turned 16 or so, I joined two guys in opening our own Metro PCS franchise. And then two additional franchises. And I was on the founding team of a car rental company called Executive Rental Car.

Eric P: And this segued into tech startups after meeting Jon Triest from Ludlow Ventures?

Delane P: He got me a ticket to the Launch conference in SF, and that experience inspired me to start a Fireside Chat series in Detroit that brought in people like Brian Wong from Kiip and Alexis Ohanian from Reddit to speak. Starting at 21, I worked at a venture capital firm called IncWell based in Birmingham, Michigan then joined a startup called Rocket Fiber.

We were focused on internet infrastructure – this is 2015-ish – and I was appointed to lead our strategy in esports. So I met with many of the publishers, ancillary startups, tournament organizers, and OG players and team owners. Through the process, I became passionate about esports and ended up leaving Rocket Fiber to start a Call of Duty team that I quickly sold to TSM.

Eric P: What then drove you to found PlayVS? Did it seem like an obvious opportunity or did it take you a while to figure it out?

Delane P.: What esports means is playing video games competitively bound to governance and a competitive ruleset. As a player, what that experience means is you play on a team, in a position, with a coach, in a season that culminates in some sort of championship.

Bidding for this like-new Enigma Machine starts at $200,000

If you’re feeling flush this week, then perhaps instead of buying a second Bugatti you might consider picking up this lightly used Enigma Machine. These devices, the scourge of the Allies in World War II, are rarely for sale to begin with — and one in such good shape that was actually used in the war is practically unheard of.

The Enigma saga is a fascinating one, though far too long to repeat here — let it suffice to say that these machines created a code that was close to unbreakable, allowing the Nazis to communicate securely and reliably even with the Allies listening in. But a team of mathematicians and other experts at Bletchley Park in Britain, the most famous of them Alan Turing, managed to crack the Enigma’s code, helping turn the tide of the war. (If you’re interested, a good biography of Turing will of course tell you more, and Simon Singh’s The Code Book tells the story well as part of the history of cryptography.)

The risk of exposure should a machine be captured by the Allies meant that German troops were instructed to destroy their Enigma rather than let it be taken. And at the end of the war, Winston Churchill ordered that any surviving Enigmas be destroyed, but many escaped into the hands of private collectors like the person who got this one. It is thought that only a few hundred remain extant, though as with other such infamous artifacts a precise estimate is impossible.

This machine, however, passed through the fires of World War II and survived not only intact but with its original rotors — the interchangeable parts that step with each keypress to scramble the text — and only one of its interior light bulbs out. The battery’s shot, but that’s to be expected after so long in storage. If you’re waiting for an Enigma in better condition, expect to be waiting a long time.
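For a feel of why the rotors matter, here is a drastically simplified toy in Python: a single, arbitrary rotor that steps after every keypress, so the same plaintext letter usually encrypts differently each time. The real Enigma used multiple interchangeable rotors plus a reflector and a plugboard; nothing below reflects its actual wiring.

    import string

    # Toy single-rotor substitution. The permutation is arbitrary (keyboard order),
    # chosen only to show the stepping idea, not to imitate a real Enigma rotor.

    ROTOR = "QWERTYUIOPASDFGHJKLZXCVBNM"
    ALPHABET = string.ascii_uppercase

    def encipher(text: str, start_position: int = 0) -> str:
        out = []
        pos = start_position
        for ch in text.upper():
            if ch not in ALPHABET:
                continue
            # offset the input by the rotor position, substitute, then undo the offset
            idx = (ALPHABET.index(ch) + pos) % 26
            sub = ROTOR[idx]
            out.append(ALPHABET[(ALPHABET.index(sub) - pos) % 26])
            pos = (pos + 1) % 26  # the rotor steps after every keypress
        return "".join(out)

    print(encipher("ATTACK AT DAWN"))  # repeated letters usually come out different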

Naturally this would be of inestimable value to a deep-pocketed collector of such things (let us hope in good taste) or a museum of war or cryptography. The secrets of the Enigma are long since revealed (even replicated in a pocket watch), but the original machines are marvels of ingenuity that may still yield discoveries and provoke wonder.

Bidding for this Enigma starts at $200,000 on Thursday at Nate D Sanders Auctions. That’s some 10 times what another machine went for 10 years ago, so you can see they’re not getting any less expensive (this one is in better condition, admittedly) — and it seems likely it will fetch far more than the minimum.

You can do it, robot! Watch the beefy, 4-legged HyQReal pull a plane

It’s not really clear just yet what all these powerful, agile quadrupedal robots people are working on are actually going to do, but even so it never gets old watching them do their thing. The latest is an Italian model called HyQReal, which demonstrates its strongman aspirations, among other things, by pulling an airplane behind it.

The video is the debut for HyQReal, which is the successor to HyQ, a much smaller model created years ago by the Italian Institute of Technology, and its close relations. Clearly the market, such as it is, has advanced since then, and discerning customers now want the robot equivalent of a corn-fed linebacker.

That’s certainly how HyQReal seems to be positioned; in its video, the camera lingers lovingly on its bulky titanium haunches and thick camera cage. Its low-slung body recalls a bulldog rather than a cheetah or sprightly prey animal. You may think twice before kicking this one.

The robot was presented today at the International Conference on Robotics and Automation, where in a workshop (documented by IEEE Spectrum) the team described HyQReal’s many bulkinesses.

It’s about four feet long and three feet high, and weighs 130 kilograms (around 287 pounds), of which the battery accounts for 15 — enough for about two hours of duty. It’s resistant to dust and water exposure and should be able to get itself up should it fall or tip over. The robot was created in collaboration with Moog, which created special high-powered hydraulics for the purpose.

It sounds good on paper, and the robot clearly has the torque needed to pull a small passenger airplane, as you can see in the video. But that’s not really what robots like this are for — they need to demonstrate versatility and robustness under a variety of circumstances, and the smarts to navigate a human-centric world and provide useful services.

Right now HyQReal is basically still a test bed — it needs to have all kinds of work done to make sure it will stand up under conditions that robots like Spot Mini have already aced. And engineering things like arm or cargo attachments is far from trivial. All the same it’s exciting to see competition in a space that, just a few years back, seemed totally new (and creepy).
