
Deploy the space harpoon


Watch out, starwhales. There’s a new weapon for the interstellar dwellers whom you threaten with your planet-crushing gigaflippers, undergoing testing as we speak. This small-scale version may only be good for removing dangerous orbital debris, but in time it will pierce your hypercarbon hides and irredeemable sun-hearts.

Literally a space harpoon. (Credit: Airbus)

However, it would be irresponsible of me to speculate beyond what is possible today with the technology, so let a summary of the harpoon’s present capabilities suffice.

The space harpoon is part of the RemoveDEBRIS project, a multi-organization European effort to create and test methods of reducing space debris. There are thousands of little pieces of who knows what clogging up our orbital neighborhood, ranging in size from microscopic to potentially catastrophic.

There are as many ways to take down these rogue items as there are sizes and shapes of space junk; perhaps it’s enough to use a laser to edge a small piece down toward orbital decay, but larger items require more hands-on solutions. And seemingly all nautical in origin: RemoveDEBRIS has a net, a sail and a harpoon. No cannon?

You can see how the three items are meant to operate here:

The harpoon is meant for larger targets, for example full-size satellites that have malfunctioned and are drifting from their orbit. A simple mass driver could knock them toward the Earth, but capturing them and guiding their descent offers far more control.

While an ordinary harpoon would simply be hurled by the likes of Queequeg or Daggoo, in space it’s a bit different. Sadly it’s impractical to suit up a harpooner for EVA missions. So the whole thing has to be automated. Fortunately the organization is also testing computer vision systems that can identify and track targets. From there it’s just a matter of firing the harpoon at it and reeling it in, which is what the satellite demonstrated today.

This Airbus-designed little item is much like a toggling harpoon, which has a piece that flips out once it pierces the target. Obviously it’s a single-use device, but it’s not particularly large and several could be deployed on different interception orbits at once. Once reeled in, a drag sail (seen in the video above) could be deployed to hasten reentry. The whole thing could be done with little or no propellant, which greatly simplifies operation.

Obviously it’s not yet a threat to the starwhales. But we’ll get there. We’ll get those monsters good one day.

Powered by WPeMatico

The Opportunity Mars rover’s greatest shots and discoveries


Opportunity’s mission is complete, and the rover that was supposed to last 90 days closes the book on 15 years of exploration. It’s sad, but it’s also a great time to look back on the mission and see some of its greatest hits. Here are 25 images showing where it came from, where it went, and what it discovered on its marathon-length journey.


DARPA wants smart bandages for wounded warriors


Nowhere is prompt and effective medical treatment more important than on the battlefield, where injuries are severe and conditions dangerous. DARPA thinks that outcomes can be improved by the use of intelligent bandages and other systems that predict and automatically react to the patient’s needs.

Ordinary cuts and scrapes just need a bit of shelter and time, and your amazing immune system takes care of things. But soldiers receive far graver wounds, and under complex conditions that hinder healing in unpredictable ways.

DARPA’s Bioelectronics for Tissue Regeneration program, or BETR, will help fund new treatments and devices that “closely track the progress of the wound and then stimulate healing processes in real time to optimize tissue repair and regeneration.”

“Wounds are living environments and the conditions change quickly as cells and tissues communicate and attempt to repair,” said Paul Sheehan, BETR program manager, in a DARPA news release. “An ideal treatment would sense, process, and respond to these changes in the wound state and intervene to correct and speed recovery. For example, we anticipate interventions that modulate immune response, recruit necessary cell types to the wound, or direct how stem cells differentiate to expedite healing.”

It’s not hard to imagine what these interventions might comprise. Smart watches are capable of monitoring several vital signs already, and in fact have alerted users to such things as heart-rate irregularities. A smart bandage would use any signal it can collect — “optical, biochemical, bioelectronic, or mechanical” — to monitor the patient and either recommend or automatically adjust treatment.

A simple example might be a wound that the bandage detects from certain chemical signals is becoming infected with a given kind of bacteria. It can then administer the correct antibiotic in the correct dose and stop when necessary rather than wait for a prescription. Or if the bandage detects shearing force and then an increase in heart rate, it’s likely the patient has been moved and is in pain — out come the painkillers. Of course, all this information would be relayed to the caregiver.
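As a toy illustration of that closed loop, here is a sketch that maps hypothetical sensor readings to treatment actions. The signal names, thresholds and doses are all invented for illustration; nothing here reflects an actual BETR specification:

```python
# Toy closed-loop bandage logic: map invented wound-sensor readings to
# actions. Signal names, thresholds and doses are hypothetical.

def bandage_response(readings):
    """Return a list of (action, detail) tuples for one sensor sample."""
    actions = []
    # A rising bacterial marker suggests infection: dose an antibiotic.
    if readings.get("bacterial_marker", 0.0) > 0.7:
        actions.append(("antibiotic", "administer standard dose"))
    # Shear force plus an elevated heart rate suggests the patient was
    # moved and is in pain: release an analgesic.
    if readings.get("shear_force", 0.0) > 0.5 and readings.get("heart_rate", 0) > 110:
        actions.append(("analgesic", "release painkiller"))
    # Everything is always relayed to the caregiver.
    actions.append(("telemetry", "report readings to caregiver"))
    return actions

sample = {"bacterial_marker": 0.9, "shear_force": 0.6, "heart_rate": 120}
print(bandage_response(sample))
```

The real system would of course replace these hard-coded thresholds with learned models over noisy biological signals, as the program description suggests.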

This system may require some degree of artificial intelligence, although of course it would have to be pretty limited. But biological signals can be noisy and machine learning is a powerful tool for sorting through that kind of data.

BETR is a four-year program, during which DARPA hopes that it can spur innovation in the space and create a “closed-loop, adaptive system” that improves outcomes significantly. There’s a further ask to have a system that addresses osseointegration surgery for prosthetics fitting — a sad necessity for many serious injuries incurred during combat.

One hopes that the technology will trickle down, of course, but let’s not get ahead of ourselves. It’s all largely theoretical for now, though it seems more than possible that the pieces could come together well ahead of the deadline.


Dandelion Energy, the Alphabet X spin out, raises another $16M led by GV and Comcast


As tech companies continue their race to control the smart home, a promising energy startup has raised a round of funding from traditional tech and strategic investors for a geothermal solution to heat and cool houses. Dandelion Energy, a spin-out from Alphabet X, has raised $16 million in a Series A round of funding, with strategic investor Comcast Ventures leading the round, along with GV, the investment arm of Alphabet formerly known as Google Ventures.

Lennar Corporation, the home-building giant, is also coming in as an investor, as are previous backers NEA, Collaborative Fund, Ground Up and Zhenfund, as well as other unnamed investors. Notably, Lennar once worked with Apple but is now collaborating with Amazon on smart homes.

As a side note, Dandelion’s investment is a timely reminder of how central “new home” startups are right now in smart home plays. Amazon just yesterday announced one more big move in its own connected home strategy with the acquisition of mesh Wi-Fi startup Eero, which helps extend the range and quality of Wi-Fi coverage in a property.

This is the second funding round for Dandelion in the space of a year, after the company raised a seed round of $4.5 million in March 2018, a mark of how demand for its services has grown and of the capital now needed to scale. In the past year, it has accrued a waitlist of “thousands” of homeowners requesting its services across America, where millions of homeowners are estimated to heat their homes with fossil fuels, accounting for 11 percent of all carbon emissions.

The company is based out of New York, and for now New York is the only state where its services are offered. The funding may help change that. It will be used in part for R&D, but also to hire more people, open new warehouses for its equipment and supplies and for business development.

Dandelion is not disclosing its valuation, but in its last round the company had a modest post-money valuation of $15 million, according to PitchBook. It has now raised $23 million in total since spinning out from Alphabet X, the company’s moonshot lab, in May 2017.

The premise of Dandelion’s business is that it provides a source of heating and cooling for homes that takes people away from consuming traditional, energy grid-based services — which represent significant costs, both in terms of financial and environmental impact. If you calculate usage over a period of years, Dandelion claims that it can cut a household’s energy bills in half while also being significantly more friendly for the environment compared to conventional systems that use gas and fossil fuels.

While there have been a number of efforts over the years to tap geothermal currents to provide home heating and cooling, many of the solutions up to now have been challenging to put in place, with services typically using wide drills and digging wells at depths of more than 1,000 feet.

“These machines are unnecessarily large and slow for installing a system that needs only a few 4” diameter holes at depths of a few hundred feet,” Kathy Hannun, co-founder and CEO of Dandelion, has said in the past. “So we decided to try to design a better drill that could reduce the time, mess and hassle of installing these pipes, which could in turn reduce the final cost of a system to homeowners.”

The smaller scale of what Dandelion builds also means that the company can do an installation in one day.

While a pared-down approach means a lower set of costs (half the price of traditional geothermal systems) and quicker installation, that doesn’t mean that upfront costs are non-existent. Dandelion installations run between $20,000 and $25,000, although homeowners can subsequently rack up savings of $35,000 over 20 years. (Hannun noted that today about 50 percent of customers choose to finance the installation, which removes the upfront cost and spreads it out across monthly payments.)
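A quick back-of-the-envelope check of those figures, assuming the article's numbers at face value (simple payback, no discounting or financing costs):

```python
# Sanity check of the article's figures: a $20,000-$25,000 installation
# against a claimed $35,000 of savings over 20 years. Simple payback only;
# no discounting, inflation or financing costs.

def net_savings(install_cost, total_savings=35_000, years=20):
    annual = total_savings / years          # average yearly saving
    payback_years = install_cost / annual   # simple payback period
    return round(payback_years, 1), total_savings - install_cost

for cost in (20_000, 25_000):
    years, net = net_savings(cost)
    print(f"${cost:,} install: ~{years} yr payback, ${net:,} net over 20 yr")
```

So even at the high end of the install range, the homeowner nets about $10,000 over two decades by these numbers.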

This is also where Lennar comes in. The company is in the business of building homes, and it has been investing in particular in the idea of building the next generation of homes by incorporating better connectivity, more services — and potentially alternative energy sources — from the ground up.

“We’re incredibly excited to invest in Dandelion Energy,” said Eric Feder, managing general partner for Lennar Ventures, in a statement. “The possibility of incorporating geothermal heating & cooling systems in our new homes is something we’ve explored for years, but the math never made sense. Dandelion Energy is finally making geothermal affordable and we look forward to the possibility of including it in the homes Lennar builds.”

The fact that Comcast is among the investors in Dandelion is a notable development.

The company has been acquiring, and taking strategic stakes in, a number of connected-home businesses as it builds its own connected home offering, where it not only brings broadband and entertainment to your TV and home computers, but also provides the tools to link up other connected devices to that network and control them from a centralised point.

Dandelion is “off grid” in its approach to providing home energy. You might think it doesn’t make sense for a company that invests in and peddles services and devices connected to a centralised (equally electricity-consuming) internet to endorse a company trying to build an alternative, but it actually does.

For starters, Dandelion may be tapping geothermal energy but its pump uses electricity and sensors to monitor and moderate its performance.

“Dandelion’s heat pump is a connected device with 60 sensors that monitor the performance and ensures that the home owner is proactively warned if there are any issues,” Hannun said in an interview. “This paves the way to operate it in a smart way. It’s aligned with the connected home.” In other words, this positions Dandelion as one more device and system that could be integrated into Comcast’s connected home solution.

Aside from this, viewed in terms of the segment of customers that Comcast is targeting, it’s selling a bundle of connected home services to a demographic of users who are not afraid of using (and buying) new and alternative technology to do things a different way from how their parents did it. Dandelion may not be “connected,” but even its approach to disconnecting will appeal to a person who may already be thinking of ways of reducing his or her carbon footprint and energy bills (especially since they may be consuming vast amounts of electricity to run their connected homes).

“The home heating and cooling industry has been constrained by lack of innovation and high-costs,” said Sam Landman, managing director of Comcast Ventures, in a statement. “The team at Dandelion and their modern approach to implementing geothermal technology is transforming the industry and giving consumers a convenient, safe, and cost-effective way to heat and cool their homes while reducing carbon emissions.”

Landman and Shaun Maguire, a partner at GV, will both be joining Dandelion’s board with this round.

“In a short amount of time, Dandelion has already proven to be an effective and affordable alternative for home heating and cooling, leveraging best-in-class geothermal technology,” said Maguire, in a statement. “Driven by an exceptional leadership team, including CEO Kathy Hannun, Dandelion Energy is poised to have a meaningful impact on adoption of geothermal energy solutions among homeowners.”


This light-powered 3D printer materializes objects all at once


3D printing has changed the way people approach hardware design, but most printers share a basic limitation: they essentially build objects layer by layer, generally from the bottom up. This new system from UC Berkeley, however, builds them all at once, more or less, by projecting a video through a jar of light-sensitive resin.

The device, which its creators call the replicator (but shouldn’t, because that’s a MakerBot trademark), is mechanically quite simple. It’s hard to explain it better than Berkeley’s Hayden Taylor, who led the research:

Basically, you’ve got an off-the-shelf video projector, which I literally brought in from home, and then you plug it into a laptop and use it to project a series of computed images, while a motor turns a cylinder that has a 3D-printing resin in it.

Obviously there are a lot of subtleties to it — how you formulate the resin, and, above all, how you compute the images that are going to be projected, but the barrier to creating a very simple version of this tool is not that high.

Using light to print isn’t new — many devices out there use lasers or other forms of emitted light to cause material to harden in desired patterns. But they still do things one thin layer at a time. Researchers did demonstrate a “holographic” printing method a bit like this using intersecting beams of light, but it’s much more complex. (In fact, Berkeley worked with Lawrence Livermore on this project.)

In Taylor’s device, the object to be recreated is scanned first in such a way that it can be divided into slices, a bit like a CT scanner — which is in fact the technology that sparked the team’s imagination in the first place.

By projecting light into the resin as it revolves, the material for the entire object is resolved more or less at once, or at least over a series of brief revolutions rather than hundreds or thousands of individual drawing movements.
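The CT analogy can be made concrete with the forward problem: computing 1-D projections of a 2-D slice at a series of rotation angles (a crude discrete Radon transform). The actual printer solves the harder inverse problem, computing the video frames whose accumulated light dose cures the target shape, but the sketch below shows the underlying idea. All of it is illustrative, not the team's code:

```python
import numpy as np

# Forward CT-style projection of a 2-D slice: rotate the image by each angle
# (nearest-neighbour, about the centre) and sum along columns.

def project(slice2d, angle_deg):
    """Return the 1-D projection of slice2d after rotating by angle_deg."""
    h, w = slice2d.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-rotate each output pixel back into the source image.
    sx = np.cos(a) * (xs - cx) + np.sin(a) * (ys - cy) + cx
    sy = -np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy
    sxi, syi = np.rint(sx).astype(int), np.rint(sy).astype(int)
    valid = (0 <= sxi) & (sxi < w) & (0 <= syi) & (syi < h)
    rotated = np.where(valid, slice2d[syi.clip(0, h - 1), sxi.clip(0, w - 1)], 0)
    return rotated.sum(axis=0)

# A 5x5 slice with a single solid voxel in the middle.
target = np.zeros((5, 5))
target[2, 2] = 1.0
sinogram = np.stack([project(target, a) for a in range(0, 180, 45)])
print(sinogram.shape)  # one projection row per angle
```

Stacking the projections over all angles yields the sinogram; the printer effectively plays that sinogram back as light while the vial revolves.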

This has a number of benefits besides speed. Objects come out smooth — if a bit crude in this prototype stage — and they can have features and cavities that other 3D printers struggle to create. The resin can even cure around an existing object, as they demonstrate by manifesting a handle around a screwdriver shaft.

Naturally, different materials and colors can be swapped in, and the uncured resin is totally reusable. It’ll be some time before it can be used at scale or at the level of precision traditional printers now achieve, but the advantages are compelling enough that it will almost certainly be pursued in parallel with other techniques.

The paper describing the new technique was published this week in the journal Science.


Let’s save the bees with machine learning


Machine learning and all its related forms of “AI” are being used to work on just about every problem under the sun, but even so, stemming the alarming decline of the bee population still seems out of left field. In fact it’s a great application for the technology and may help both bees and beekeepers keep hives healthy.

The latest threat to our precious honeybees is the Varroa mite, a parasite that infests hives and sucks the blood from both bees and their young. While it rarely kills a bee outright, it can weaken it and cause young to be born similarly weak or deformed. Over time this can lead to colony collapse.

The worst part is that unless you’re looking closely, you might not even see the mites — being mites, they’re tiny: a millimeter or so across. So infestations often go on for some time without being discovered.

Beekeepers, caring folk at heart obviously, want to avoid this. But the solution has been to put a flat surface beneath a hive and pull it out every few days, inspecting all the waste, dirt and other hive junk for the tiny bodies of the mites. It’s painstaking and time-consuming work, and of course if you miss a few, you might think the infestation is getting better instead of worse.

Machine learning to the rescue!

As I’ve had occasion to mention about a billion times before this, one of the things machine learning models are really good at is sorting through noisy data, like a surface covered in random tiny shapes, and finding targets, like the shape of a dead Varroa mite.

Students at the École Polytechnique Fédérale de Lausanne in Switzerland created an image recognition agent called ApiZoom trained on images of mites that can sort through a photo and identify any visible mite bodies in seconds. All the beekeeper needs to do is take a regular smartphone photo and upload it to the EPFL system.
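ApiZoom itself is a trained image-recognition model; as a much simpler stand-in, here is a toy detector that finds dark, mite-sized blobs on a light board image by thresholding and counting connected components. The sizes and thresholds are invented, and this is not how the EPFL system works internally, only an illustration of the "small targets in noisy data" task:

```python
import numpy as np

# Toy mite counter: threshold dark pixels, then flood-fill connected
# components (4-connectivity) and keep those in a mite-like size range.
# All thresholds and sizes are invented for illustration.

def count_blobs(img, dark_thresh=0.3, min_px=2, max_px=20):
    """Count connected dark regions whose pixel count is in [min_px, max_px]."""
    mask = img < dark_thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, size = [(y, x)], 0
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    size += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if min_px <= size <= max_px:
                    blobs += 1
    return blobs

board = np.ones((10, 10))    # a light, empty board...
board[2:4, 2:4] = 0.1        # ...with two dark, mite-sized specks
board[7:9, 7] = 0.1
print(count_blobs(board))
```

A learned model earns its keep precisely where this toy fails: real board photos are full of wax, dirt and debris that simple thresholding cannot tell apart from mites.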

The project started back in 2017, and since then the model has been trained with tens of thousands of images and achieved a success rate of detection of about 90 percent, which the project’s Alain Bugnon told me is about at parity with humans. The plan now is to distribute the app as widely as possible.

“We envisage two phases: a web solution, then a smartphone solution. These two solutions allow to estimate the rate of infestation of a hive, but if the application is used on a large scale, of a region,” Bugnon said. “By collecting automatic and comprehensive data, it is not impossible to make new findings about a region or atypical practices of a beekeeper, and also possible mutations of the Varroa mites.”

That kind of systematic data collection would be a major help for coordinating infestation response at a national level. ApiZoom is being spun out as a separate company by Bugnon; hopefully this will help get the software to beekeepers as soon as possible. The bees will thank them later.


Don’t worry, this rocket-launching Chinese robo-boat is strictly for science


It seems inevitable that the high seas will eventually play host to a sort of proxy war as automated vessels clash over territory for the algae farms we’ll soon need to feed the growing population. But this rocket-launching robo-boat is a peacetime vessel concerned only with global weather patterns.

The craft is what’s called an unmanned semi-submersible vehicle, or USSV, and it functions as a mobile science base — and now, a rocket launch platform. For meteorological sounding rockets, of course, nothing scary.

It solves a problem we’ve seen addressed by other seagoing robots like the Saildrone: that the ocean is very big, and very dangerous — so monitoring it properly is equally big and dangerous. You can’t have a crew out in the middle of nowhere all the time, even if it would be critical to understanding the formation of a typhoon or the like. But you can have a fleet of robotic ships systematically moving around the ocean.

In fact this is already done in a variety of ways and by numerous countries and organizations, but much of the data collection is both passive and limited in range. A solar-powered buoy drifting on the currents is a great resource, but you can’t exactly steer it, and it’s limited to sampling the water around it. And weather balloons are nice, too, if you don’t mind flying them out to where they need to be first.

A robotic boat, on the other hand, can go where you need it and deploy instruments in a variety of ways, dropping or projecting them deep into the water or, in the case of China’s new USSV, firing them 20,000 feet into the air.

“Launched from a long-duration unmanned semi-submersible vehicle, with strong mobility and large coverage of the sea area, rocketsonde can be used under severe sea conditions and will be more economical and applicable in the future,” said Jun Li, a researcher at the Chinese Academy of Sciences, in a news release.

The 24-foot craft, which has completed a handful of near-land cruises in Bohai Bay, was announced in a newly published paper. You may wonder what “semi-submersible” means. Essentially they put as much of the craft as possible under the water, with only instruments, hatches and other necessary items aboveboard. That minimizes the effect of rough weather on the craft — but it is still self-righting in case it capsizes in major wave action.

The USSV’s early travels

It runs on a diesel engine, so it’s not exactly the latest tech there, but for a large craft going long distances, solar is still a bit difficult to manage. The diesel on board will last it about 10 days and take it around 3,000 km, or 1,800 miles.

The rocketsondes are essentially small rockets that shoot up to a set altitude and then drop a “driftsonde,” a sensor package attached to a balloon, parachute or some other descent-slowing method. The craft can carry up to 48 of these, meaning it could launch one every few hours for its entire 10-day cruise duration.
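A quick sanity check of the cadence and range implied by the article's numbers:

```python
# Sanity checks on the stated figures: 48 rocketsondes over a 10-day cruise,
# and 3,000 km of range on one tank of diesel.

sondes, cruise_days, range_km = 48, 10, 3000
hours_between = cruise_days * 24 / sondes        # launch cadence
avg_speed_kmh = range_km / (cruise_days * 24)    # implied average speed
print(f"one launch every {hours_between} hours at ~{avg_speed_kmh} km/h")
```

So "one every few hours" works out to a sounding every five hours at a leisurely average of about 12.5 km/h.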

The researchers’ findings were published in the journal Advances in Atmospheric Sciences. This is just a prototype, but its success suggests we can expect a few more at the very least to be built and deployed. I’ve asked Li a few questions about the craft and will update this post if I hear back.


StarCraft II-playing AI AlphaStar takes out pros undefeated


Losing to the computer in StarCraft has been a tradition of mine since the first game came out in 1998. Of course, the built-in “AI” is trivial for serious players to beat, and for years researchers have attempted to replicate human strategy and skill in the latest version of the game. They’ve just made a huge leap with AlphaStar, which recently beat two leading pros 5-0.

The new system was created by DeepMind, and in many ways it’s very unlike what you might call a “traditional” StarCraft AI. The computer opponents you can select in the game are really pretty dumb — they have basic built-in strategies, and know in general how to attack and defend and how to progress down the tech tree. But they lack everything that makes a human player strong: adaptability, improvisation and imagination.

AlphaStar is different. It learned from watching humans play at first, but soon honed its skills by playing against facets of itself.

The first iterations watched replays of games to learn the basics of “micro” (i.e. controlling units effectively) and “macro” (i.e. game economy and long-term goals) strategy. With this knowledge it was able to beat the in-game computer opponents on their hardest setting 95 percent of the time. But as any pro will tell you, that’s child’s play. So the real work started here.

Hundreds of agents were spawned and pitted against each other.

Because StarCraft is such a complex game, it would be silly to think that there’s a single optimal strategy that works in all situations. So the machine learning agent was essentially split into hundreds of versions of itself, each given a slightly different task or strategy. One might attempt to achieve air superiority at all costs; another to focus on teching up; another to try various “cheese” attempts like worker rushes and the like. Some were even given strong agents as targets, caring about nothing else but beating an already successful strategy.

This family of agents fought and fought for hundreds of years of in-game time (undertaken in parallel, of course). Over time the various agents learned (and of course reported back) various stratagems, from simple things such as how to scatter units under an area-of-effect attack to complex multi-pronged offenses. Putting them all together produced the highly robust AlphaStar agent, with some 200 years of gameplay under its belt.
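The league idea can be caricatured in a few lines: a population of agents with different strategies plays many matches against each other while ratings track which approaches dominate. Real AlphaStar agents learn and change between games; this sketch, with invented fixed "strength" numbers and Elo-style bookkeeping, only shows the tournament structure:

```python
import random

# Toy self-play league: fixed-strength "strategies" play round-robin matches
# and Elo-style ratings track the winners. Strategy names and strengths are
# invented; real league agents update their policies, not just their ratings.

def update_elo(r_a, r_b, a_won, k=16):
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * ((1.0 if a_won else 0.0) - expected_a)
    return r_a + delta, r_b - delta

random.seed(0)
strengths = {"air_rush": 0.7, "tech_up": 0.5, "worker_cheese": 0.3}
ratings = {name: 1200.0 for name in strengths}

for _ in range(500):                      # many rounds of league play
    a, b = random.sample(list(strengths), 2)
    p_a_wins = strengths[a] / (strengths[a] + strengths[b])
    ratings[a], ratings[b] = update_elo(ratings[a], ratings[b],
                                        random.random() < p_a_wins)

print(max(ratings, key=ratings.get))      # the dominant strategy rises
```

In the real league the "exploiter" agents targeting a single strong opponent play the role of stress tests, forcing the main agents to patch weaknesses rather than overfit to one winning line.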

Most StarCraft II pros are well under 200 years old, so that’s a bit of an unfair advantage. There’s also the fact that AlphaStar, in its original incarnation anyway, has two other major benefits.

First, it gets its information directly from the game engine, rather than having to observe the game screen — so it knows instantly that a unit is down to 20 HP without having to click on it. Second, it can (though it doesn’t always) perform far more “actions per minute” than a human, because it isn’t limited by fleshy hands and banks of buttons. APM is just one measure among many that determines the outcome of a match, but it can’t hurt to be able to command a guy 20 times in a second rather than two or three.

It’s worth noting here that AIs for micro control have existed for years, having demonstrated their prowess in the original StarCraft. It’s incredibly useful to be able to perfectly cycle out units in a firefight so none takes lethal damage, or to perfectly time movements so no attacker is idle, but the truth is good strategy beats good tactics pretty much every time. A good player can counter the perfect micro of an AI and take that valuable tool out of play.

AlphaStar was matched up against two pro players, MaNa and TLO of the highly competitive Team Liquid. It beat them both handily, and the pros seemed excited rather than depressed by the machine learning system’s skill. Here’s game 2 against MaNa:

In comments after the game series, MaNa said:

I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected. I’ve realised how much my gameplay relies on forcing mistakes and being able to exploit human reactions, so this has put the game in a whole new light for me. We’re all excited to see what comes next.

And TLO, who actually is a Zerg main but gamely played Protoss for the experiment:

I was surprised by how strong the agent was. AlphaStar takes well-known strategies and turns them on their head. The agent demonstrated strategies I hadn’t thought of before, which means there may still be new ways of playing the game that we haven’t fully explored yet.

You can get the replays of the matches here.

AlphaStar is inarguably a strong player, but there are some important caveats here. First, when they handicapped the agent by making it play like a human, in that it had to move the camera around, could only click on visible units, had a human-like delay on perception and so on, it was far less strong and in fact was beaten by MaNa. But that version, which perhaps may become the benchmark rather than its untethered cousin, is still under development, so for that and other reasons it was never going to be as strong.

AlphaStar only plays Protoss, and the most successful versions of it relied on very micro-heavy units.

Most importantly, though, AlphaStar is still an extreme specialist. It only plays Protoss versus Protoss — probably has no idea what a Zerg looks like — with a single opponent, on a single map. As anyone who has played the game can tell you, the map and the races produce all kinds of variations, which massively complicate gameplay and strategy. In essence, AlphaStar is playing only a tiny fraction of the game — though admittedly many players also specialize like this.

That said, the groundwork of designing a self-training agent is the hard part — the actual training is a matter of time and computing power. If it’s 1v1v1 on Bloodbath maybe it’s stalker/zealot time, while if it’s 2v2 on a big map with lots of elevation, out come the air units. (Is it obvious I’m not up on my SC2 strats?)

The project continues and AlphaStar will grow stronger, naturally, but the team at DeepMind thinks that some of the basics of the system, for instance how it efficiently visualizes the rest of the game as a result of every move it makes, could be applied in many other areas where AIs must repeatedly make decisions that affect a complex and long-term series of outcomes.


Autonomous subs spend a year cruising under Antarctic ice


The freezing waters underneath Antarctic ice shelves and the underside of the ice itself are of great interest to scientists… but who wants to go down there? Leave it to the robots. They won’t complain! And indeed, a pair of autonomous subs have been nosing around the ice for a full year now, producing data unlike any other expedition ever has.

The mission began way back in 2017, with a grant from the late Paul Allen. With climate change affecting sea ice around the world, precise measurements and study of these frozen climes is more important than ever. And fortunately, robotic exploration technology had reached a point where long-term missions under and around ice shelves were possible.

The project would use a proven autonomous seagoing vehicle called the Seaglider, which has been around for some time but had been redesigned to perform long-term operations in these dark, sealed-over environments. One of the craft’s co-creators, UW’s Chris Lee, said of the mission at the time: “This is a high-risk, proof-of-concept test of using robotic technology in a very risky marine environment.”

The risks seem to have paid off, as an update on the project shows. The modified craft have traveled hundreds of miles during a year straight of autonomous operation.

It’s not easy to stick around for a long time on the Antarctic coast for a lot of reasons. But leaving robots behind to work while you go relax elsewhere for a month or two is definitely doable.

“This is the first time we’ve been able to maintain a persistent presence over the span of an entire year,” Lee said in a UW news release today. “Gliders were able to navigate at will to survey the cavity interior… This is the first time any of the modern, long-endurance platforms have made sustained measurements under an ice shelf.”

You can see the paths of the robotic platforms below as they scout around near the edge of the ice and then dive under in trips of increasing length and complexity:

They navigate in the dark by monitoring their position with regard to a pair of underwater acoustic beacons fixed in place by cables. The blue dots are floats that go along with the natural currents to travel long distances on little or no power. Both are equipped with sensors to monitor the shape of the ice above, the temperature of the water, and other interesting data points.
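The geometry of that beacon-based fix is classic circle intersection: a range to each of two fixed beacons puts the glider on one of two mirror-image points, and in practice dead reckoning picks the right one. A sketch with invented coordinates (the actual navigation stack is of course far more involved):

```python
import math

# 2-D position fix from acoustic ranges to two fixed beacons: intersect two
# circles. Two beacons leave a mirror ambiguity across the baseline, which
# dead reckoning would resolve. All coordinates are invented.

def fix_position(b1, b2, r1, r2):
    """Return the two candidate (x, y) fixes from ranges r1, r2 to b1, b2."""
    (x1, y1), (x2, y2) = b1, b2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)       # distance from b1 along baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))      # offset perpendicular to baseline
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = -h * (y2 - y1) / d, h * (x2 - x1) / d
    return (mx + ox, my + oy), (mx - ox, my - oy)

beacons = ((0.0, 0.0), (1000.0, 0.0))          # two moored beacons, 1 km apart
truth = (400.0, 300.0)                         # the glider's true position
ranges = tuple(math.hypot(truth[0] - bx, truth[1] - by) for bx, by in beacons)
print(fix_position(*beacons, *ranges))          # truth plus its mirror image
```

Acoustic ranging itself comes from travel time of the beacon pings, so range error grows with uncertainty in the local speed of sound, one reason the gliders also carry temperature sensors.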

It isn’t the first robotic expedition under the ice shelves by a long shot, but it’s definitely the longest term and potentially the most fruitful. The Seagliders are smaller, lighter, and better equipped for long-term missions. One went 87 miles in a single trip!

The mission continues, and two of the three initial Seagliders are still operational and ready to continue their work.


Watch Blue Origin’s 10th New Shepard mission launch a science-loaded capsule to space


Blue Origin, the rocket company founded by Amazon’s Jeff Bezos, is about to undertake the 10th launch of its New Shepard launch vehicle, with its capsule chock full of experiments. The launch, which was originally scheduled for a month ago but delayed for various reasons, will take place tomorrow at 6:50 AM Pacific time.

New Shepard is a sub-orbital space-visiting platform, not a satellite-launching one. But it uses a very traditional method of reaching the edge of space compared with Virgin Galactic’s rather involved mothership-spaceship combo, which scraped that edge in its fourth test launch last month.

The rocket shoots straight up, as rockets do, accelerates to around Mach 3 (nowhere near escape velocity; a suborbital hop doesn’t need it), then pops its capsule off the top just before the Kármán line that officially, if somewhat arbitrarily, delineates space from Earth’s atmosphere. The capsule, after exhausting its upward momentum, gently floats back to the surface under a parachute.
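A rough vacuum-ballistics check shows why a suborbital hop is nothing like escaping Earth: coasting from engine cutoff up to the ~100 km Kármán line takes on the order of 1 km/s, versus about 11.2 km/s of escape velocity. The cutoff altitude below is an assumed round number, and drag and the variation of gravity with altitude are ignored:

```python
import math

# How fast must the capsule be moving at engine cutoff to coast up to the
# Karman line? Simple energy balance v = sqrt(2 g h), constant g, no drag.
# The 40 km cutoff altitude is an assumed round number for illustration.

g = 9.81                      # m/s^2
cutoff_alt = 40_000.0         # assumed engine-cutoff altitude, m
karman = 100_000.0            # Karman line, m
coast_speed = math.sqrt(2 * g * (karman - cutoff_alt))
escape = 11_200.0             # m/s, Earth escape velocity, for comparison
print(round(coast_speed), "m/s to coast up vs", escape, "m/s to escape Earth")
```

Roughly a kilometer per second of coast speed, about a tenth of escape velocity, which is why the same booster can land and fly again days later.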

That’s the plan for Wednesday’s launch, which you can watch live here starting half an hour or so before T-0. But instead of taking a dummy load or “Mannequin Skywalker,” as the company calls its human stand-in during tests of the crew capsule, mission 10 has a whole collection of experiments on board.

There are nine experiments total, all flying through NASA’s Flight Opportunities program. They’re detailed here. Most have already been up in other vehicles or even a Blue Origin one, but obviously repetition and iteration are important to their development.

“The opportunity to re-fly our payload is helping us not only validate and compare data for different flight profiles, but also test modifications and upgrades,” said NASA’s Kathryn Hurlbert, who heads up the Suborbital Flight Experiment Monitor-2 project at Johnson Space Center.

More Flight Opportunities spots will be available on future NASA-sponsored launches, so if your lab has an experiment it would like to test on a sub-orbital rocket, get at the administrators as soon as the shutdown ends.
