Flexible stick-on sensors could wirelessly monitor your sweat and pulse

As people strive ever harder to minutely quantify every action they do, the sensors that monitor those actions are growing lighter and less invasive. Two prototype sensors from crosstown rivals Stanford and Berkeley stick right to the skin and provide a wealth of physiological data.

Stanford’s stretchy wireless “BodyNet” isn’t just flexible in order to survive being worn on the shifting surface of the body; that flexing is where its data comes from.

The sensor is made of metallic ink laid on top of a flexible material like that in an adhesive bandage. But unlike phones and smartwatches, which use tiny accelerometers or optical tricks to track the body, this system relies on how it is itself stretched and compressed. These movements cause tiny changes in how electricity passes through the ink, changes that are relayed to a processor nearby.

Naturally if one is placed on a joint, as some of these electronic stickers were, it can report back whether and how much that joint has been flexed. But the system is sensitive enough that it can also detect the slight changes the skin experiences during each heartbeat, or the broader changes that accompany breathing.
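Turning that raw resistance trace into a pulse reading can be surprisingly simple. Here is a minimal sketch; the sampling rate, threshold, and synthetic waveform are illustrative assumptions, not values from the Stanford paper:

```python
import math

def count_peaks(samples, threshold):
    """Count rising-edge threshold crossings (one per heartbeat)."""
    peaks = 0
    above = False
    for value in samples:
        if value > threshold and not above:
            peaks += 1
            above = True
        elif value <= threshold:
            above = False
    return peaks

def bpm(samples, sample_rate_hz, threshold):
    """Convert a peak count over the trace duration into beats per minute."""
    duration_min = len(samples) / sample_rate_hz / 60
    return count_peaks(samples, threshold) / duration_min

# Synthetic 10-second trace at 50 Hz: a small resistance bump every 0.8 s.
trace = [0.2 * max(0.0, math.sin(2 * math.pi * t / 40)) for t in range(500)]
```

Real skin signals are far noisier, so a production system would add filtering, but the threshold-crossing core is the same idea.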

The problem comes when you have to get that signal off the skin. Using a wire is annoying and definitely very ’90s. But antennas don’t work well when they’re flexed in weird directions: efficiency drops off a cliff, and there’s very little power to begin with, since the skin sensor is powered by harvesting RFID signals, a technique that yields very little voltage.

The BodyNet sticker and its receiver.

The second part of their work, then, and the part that is clearly most in need of further improvement and miniaturization, is the receiver, which collects and re-transmits the sensor’s signal to a phone or other device. Although they managed to create a unit that’s light enough to be clipped to clothes, it’s still not the kind of thing you’d want to wear to the gym.

The good news is that’s an engineering and design limitation, not a theoretical one, so with a couple of years of work and progress on the electronics front, they could have a much more attractive system.

“We think one day it will be possible to create a full-body skin-sensor array to collect physiological data without interfering with a person’s normal behavior,” Stanford professor Zhenan Bao said in a news release.

Over at Cal is a project in a similar domain that’s working to get from prototype to production. Researchers there have been working on a sweat monitor for a few years that could detect a number of physiological factors.

The sweat sensor worn on the forehead.

Normally you’d just collect sweat every 15 minutes or so and analyze each batch separately. But that doesn’t really give you very good temporal resolution — what if you want to know how the sweat changes minute by minute or less? By putting the sweat collection and analysis systems together right on the skin, you can do just that.

While the sensor has been in the works for a while, it’s only recently that the team has started moving toward user testing at scale to see what exactly sweat measurements have to offer.

“The goal of the project is not just to make the sensors but start to do many subject studies and see what sweat tells us — I always say ‘decoding’ sweat composition. For that we need sensors that are reliable, reproducible, and that we can fabricate to scale so that we can put multiple sensors in different spots of the body and put them on many subjects,” explained Ali Javey, Berkeley professor and head of the project.

As anyone who’s working in hardware will tell you, going from a hand-built prototype to a mass-produced model is a huge challenge. So the Berkeley team tapped their Finnish friends at VTT Technical Research Center, who make a specialty of roll-to-roll printing.

For flat, relatively simple electronics, roll-to-roll is a great technique, essentially printing the sensors right onto a flexible plastic substrate that can then simply be cut to size. This way they can make hundreds or thousands of the sensors quickly and cheaply, making them much simpler to deploy at arbitrary scales.

These are far from the only flexible or skin-mounted electronics projects out there, but it’s clear that we’re approaching the point when they begin to leave the lab and head out to hospitals, gyms and homes.

The paper describing Stanford’s flexible sensor appeared this week in the journal Nature Electronics, while Berkeley’s sweat tracker was in Science Advances.

Powered by WPeMatico

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer product goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements listed below, and we’ll consider your startup:

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, which is immediately followed by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.

Cryptographic ICE Cube tests orbital cybersecurity protocols aboard the ISS

Encryption in space can be tricky. Even if you do everything right, a cosmic ray might come along and flip a bit, sabotaging the whole secure protocol. So if you can’t radiation-harden the computer, what can you do? European Space Agency researchers are testing solutions right now in an experiment running on board the ISS.

Cosmic radiation flipping bits may sound like a rare occurrence, and in a way it is. But satellites and spacecraft are out there for a long time and it only takes one such incident to potentially scuttle a whole mission. What can you do if you’re locked out of your own satellite? At that point it’s pretty much space junk. Just wait for it to burn up.

Larger, more expensive missions like GPS satellites and interplanetary craft use special hardened computers that are carefully proofed against cosmic rays and other things that go bump in the endless night out there. But these bespoke solutions are expensive and often bulky and heavy; if you’re trying to minimize costs and space to launch a constellation or student project, hardening isn’t always an option.

“We’re testing two related approaches to the encryption problem for non rad-hardened systems,” explained ESA’s Lukas Armborst in a news release. To keep costs down and hardware recognizable, the team is using a Raspberry Pi Zero board, one of the simplest and lowest-cost full-fledged computers you can buy these days. It’s mostly unmodified, just coated to meet ISS safety requirements.

It’s the heart of the Cryptography International Commercial Experiments Cube, or Cryptographic ICE Cube, or CryptIC. The first option they’re pursuing is a relatively traditional software one: hard-coded backup keys. If a bit gets flipped and the current encryption key is no longer valid, they can switch to one of those.

“This needs to be done in a secure and reliable way, to restore the secure link very quickly,” said Armborst. It relies on “a secondary fall-back base key, which is wired into the hardware so it cannot be compromised. However, this hardware solution can only be done for a limited number of keys, reducing flexibility.”

If you’re expecting one failure per year and a five-year mission, you could put 20 keys and be done with it. But for longer missions or higher exposures, you might want something more robust. That’s the other option, an “experimental hardware reconfiguration approach.”
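A minimal sketch of that fallback scheme, with invented key values and a SHA-256 checksum standing in for whatever integrity check CryptIC actually uses:

```python
import hashlib

BACKUP_KEYS = [b"key-%02d" % i for i in range(20)]            # wired-in keys
KEY_DIGESTS = [hashlib.sha256(k).digest() for k in BACKUP_KEYS]

def select_key(stored_keys):
    """Return the first stored key whose digest still checks out."""
    for key, digest in zip(stored_keys, KEY_DIGESTS):
        if hashlib.sha256(key).digest() == digest:
            return key
    raise RuntimeError("all backup keys corrupted")

# Simulate a radiation upset flipping one bit in the active key:
keys = list(BACKUP_KEYS)
keys[0] = bytes([keys[0][0] ^ 0x01]) + keys[0][1:]
```

After the flip, `select_key` skips the corrupted key and falls back to the next wired-in one, restoring a working secure link.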

“A number of microprocessor cores are inside CryptIC as customizable, field-programmable gate arrays, rather than fixed computer chips,” Armborst explained. “These cores are redundant copies of the same functionality. Accordingly, if one core fails then another can step in, while the faulty core reloads its configuration, thereby repairing itself.”

In other words, the encryption software would be running in parallel with itself and one part would be ready to take over and serve as a template for repairs should another core fail due to radiation interference.
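In software terms, that amounts to majority voting across redundant workers. A toy sketch, with Python values standing in for FPGA core outputs (names and numbers are illustrative):

```python
from collections import Counter

def majority_vote(outputs):
    """Return the consensus output and the indices of dissenting cores."""
    winner, _ = Counter(outputs).most_common(1)[0]
    faulty = [i for i, out in enumerate(outputs) if out != winner]
    return winner, faulty

# Three redundant "cores" computing the same value; core 1 suffered a bit flip.
outputs = [0xBEEF, 0xBEF7, 0xBEEF]
result, to_reload = majority_vote(outputs)
```

The dissenting index tells the supervisor which core to reload from its known-good configuration while the others carry on.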

A CERN-developed radiation dosimeter is flying inside the enclosure as well, measuring the exposure the device has over the next year of operation. And a set of flash memory units are sitting inside to see which is the most reliable in orbital conditions. Like many experiments on the ISS, this one has many purposes. The encryption tests are set to begin shortly and we’ll know next summer how the two methods fared.

Voyant Photonics raises $4.3M to fit lidar on the head of a pin

Lidar is a critical method by which robots and autonomous vehicles sense the world around them, but the lasers and sensors generally take up a considerable amount of space. Not so with Voyant Photonics, which has created a lidar system that you really could conceivably balance on the head of a pin.

Before getting into the science, it’s worth noting why this is important. Lidar is most often used as a way for a car to sense things at a medium distance — far away, radar can outperform it, and up close, ultrasonics and other methods are more compact. But from a few feet to a couple hundred feet out, lidar is very useful.

Unfortunately, even the most compact lidar solutions today are still, roughly, the size of a hand, and the ones ready for use in production vehicles are still larger. A very small lidar unit that could be hidden on every corner of a car, or even inside the cabin, could provide rich positional data about everything in and around the car with little power and no need to disrupt the existing lines and design. (And that’s not getting into the many, many other industries that could use this.)

Lidar began with the idea of, essentially, a single laser being swept across a scene multiple times per second, its reflection carefully measured to track the distances of objects. But mechanically steered lasers are bulky, slow and prone to failure, so newer companies are attempting other techniques, like illuminating the whole scene at once (flash lidar) or steering the beam with complex electronic surfaces (metamaterials) instead.

One discipline that seems primed to join in the fun is silicon photonics, which is essentially the manipulation of light on a chip for various purposes — for instance, to replace electricity in logic gates to provide ultra-fast, low-heat processing. Voyant, however, has pioneered a technique to apply silicon photonics to lidar.

In the past, attempts in chip-based photonics to send out a coherent laser-like beam from a surface of lightguides (elements used to steer light around or emit it) have been limited by a narrow field of view and low power, because the light tends to interfere with itself at close quarters.

Voyant’s version of these “optical phased arrays” sidesteps that problem by carefully altering the phase of the light traveling through the chip. The result is a strong beam of non-visible light that can be played over a wide swathe of the environment at high speed with no moving parts at all — yet it emerges from a chip dwarfed by a fingertip.
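The underlying physics is the textbook phased-array relation, not Voyant's proprietary design: a uniform phase step Δφ between emitters spaced d apart steers the beam to θ = arcsin(Δφ·λ / (2π·d)). A quick sketch:

```python
import math

def steering_angle_deg(delta_phi_rad, wavelength_m, spacing_m):
    """Beam angle for a uniform phase step across evenly spaced emitters."""
    s = delta_phi_rad * wavelength_m / (2 * math.pi * spacing_m)
    return math.degrees(math.asin(s))

# Half-wavelength spacing at 1550 nm, a common band for chip photonics:
wl = 1550e-9
```

With no phase step the beam fires straight ahead; a step of π/2 per emitter at half-wavelength spacing steers it to 30 degrees. Sweeping the phase electronically sweeps the beam, with no moving parts.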

The lidar chip, dwarfed by a fingertip.

“This is an enabling technology because it’s so small,” said Voyant co-founder Steven Miller. “We’re talking cubic centimeter volumes. There’s a lot of electronics that can’t accommodate a lidar the size of a softball — think about drones and things that are weight-sensitive, or robotics, where it needs to be on the tip of its arm.”

Lest you think this is just a couple yahoos who think they’ve one-upped years of research, Miller and co-founder Chris Phare came out of the Lipson Nanophotonics Group at Columbia University.

“This lab basically invented silicon photonics,” said Phare. “We’re all deeply ingrained with the physics and devices-level stuff. So we were able to step back and look at lidar, and see what we needed to fix and make better to make this a reality.”

The advances they’ve made frankly lie outside my area of expertise, so I won’t attempt to characterize them too closely, except that it solves the interference issues and uses a frequency modulated continuous wave technique, which lets it measure velocity as well as distance (Blackmore does this as well). At any rate, their unique approach to moving and emitting light from the chip lets them create a device that is not only compact, but combines transmitter and receiver in one piece, and has good performance — not just good for its size, they claim, but good.

“It’s a misconception that small lidars need to be low-performance,” explained Phare. “The silicon photonic architecture we use lets us build a very sensitive receiver on-chip that would be difficult to assemble in traditional optics. So we’re able to fit a high-performance lidar into that tiny package without any additional or exotic components. We think we can achieve specs comparable to lidars out there, but just make them that much smaller.”

The chip-based lidar in its test bed.

It’s even able to be manufactured in a normal fashion like other photonics chips. That’s a huge plus when you’re trying to move from research to product development.

With this first round of funding, the team plans to expand and get this tech out of the lab and into the hands of engineers and developers. The exact specs, dimensions, power requirements and so on are all very different depending on the application and industry, so Voyant can make decisions based on feedback from people in other fields.

In addition to automotive (“It’s such a big application that no one can make lidar and not look at that space,” Miller said), the team is in talks with numerous potential partners.

Although being at this stage while others are raising nine-figure rounds might seem daunting, Voyant has the advantage that it has created something totally different from what’s out there, a product that can safely exist alongside popular big lidars from companies like Innoviz and Luminar.

“We’re definitely talking to big players in a lot of these places, drones and robotics, perhaps augmented reality. We’re trying to suss out exactly where this is most interesting to people,” said Phare. “We see the evolution here being something like bringing room-size computers down to chips.”

The $4.3 million raised by Voyant comes from Contour Venture Partners, LDV Capital and DARPA, which naturally would be interested in something like this.

These robo-ants can work together in swarms to navigate tricky terrain

While the agility of a Spot or Atlas robot is something to behold, there’s a special merit reserved for tiny, simple robots that work not as a versatile individual but as an adaptable group. These “tribots” are built on the model of ants, and like them can work together to overcome obstacles with teamwork.

Developed by EPFL and Osaka University, tribots are tiny, light and simple, moving more like inchworms than ants, but able to fling themselves up and forward if necessary. The bots themselves and the system they make up are modeled on trap-jaw ants, which alternate between crawling and jumping, and work (as do most other ants) in fluid roles like explorer, worker and leader. Each robot is not itself very intelligent, but they are controlled as a collective that deploys their abilities intelligently.

In this case a team of tribots might be expected to get from one end of a piece of complex terrain to another. An explorer could move ahead, sensing obstacles and relaying their locations and dimensions to the rest of the team. The leader can then assign worker units to head over to try to push the obstacles out of the way. If that doesn’t work, an explorer can try hopping over it — and if successful, it can relay its telemetry to the others so they can do the same thing.
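That decision flow can be sketched as a tiny planner. The role split, thresholds, and action names here are invented for illustration and are not from the EPFL/Osaka control scheme:

```python
def traverse(obstacle_heights_cm, push_limit_cm, jump_limit_cm):
    """Explorer senses each obstacle; leader picks push, jump, or stop."""
    plan = []
    for height in obstacle_heights_cm:
        if height <= push_limit_cm:
            plan.append("push")      # workers shove it out of the way
        elif height <= jump_limit_cm:
            plan.append("jump")      # reuse the explorer's hop telemetry
        else:
            plan.append("blocked")   # no known maneuver clears it
            break
    return plan

plan = traverse([1, 4, 2], push_limit_cm=2, jump_limit_cm=5)
```

The point is that no single bot needs the whole plan; the leader composes it from what the explorer relays back.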

Fly, tribot, fly!

It’s all done quite slowly at this point — you’ll notice that in the video, much of the action is happening at 16x speed. But rapidity isn’t the idea here; similar to Squishy Robotics’ creations, it’s more about adaptability and simplicity of deployment.

The little bots weigh only 10 grams each, and are easily mass-produced, as they’re basically PCBs with some mechanical bits and grip points attached — “a quasi-two-dimensional metamaterial sandwich,” according to the paper. If they only cost (say) a buck each, you could drop dozens or hundreds on a target area and over an hour or two they could characterize it, take measurements and look for radiation or heat hot spots, and so on.

If they moved a little faster, the same logic and a modified design could let a set of robots emerge in a kitchen or dining room to find and collect crumbs or scoot plates into place. (Ray Bradbury called them “electric mice” or something in “There Will Come Soft Rains,” one of my favorite stories of his. I’m always on the lookout for them.)

Swarm-based bots have the advantage of not failing catastrophically when something goes wrong — when a robot fails, the collective persists, and it can be replaced as easily as a part.

“Since they can be manufactured and deployed in large numbers, having some ‘casualties’ would not affect the success of the mission,” noted EPFL’s Jamie Paik, who co-designed the robots. “With their unique collective intelligence, our tiny robots can demonstrate better adaptability to unknown environments; therefore, for certain missions, they would outperform larger, more powerful robots.”

It raises the question, in fact, of whether the sub-robots themselves constitute a sort of uber-robot. (This is more of a philosophical question, raised first in the case of the Constructicons and Devastator. Transformers was ahead of its time in many ways.)

The robots are still in prototype form, but even as they are, they constitute a major advance over other “collective” type robot systems. The team documents their advances in a paper published in the journal Nature.

AI smokes 5 poker champs at a time in no-limit Hold’em with ‘relentless consistency’

The machines have proven their superiority in one-on-one games like chess, Go and even poker — but in complex multiplayer versions of the card game, humans have retained their edge… until now. An evolution of the last AI agent to flummox poker pros individually is now decisively beating them in a championship-style six-person game.

As documented in a paper published in the journal Science today, the CMU/Facebook collaboration they call Pluribus reliably beats five professional poker players in the same game, or one pro pitted against five independent copies of itself. It’s a major leap forward in capability for the machines, and, amazingly, it is also far more efficient than previous agents.

One-on-one poker is a weird game, and not a simple one, but its zero-sum nature (whatever you lose, the other player gets) makes it susceptible to certain strategies in which a computer able to calculate far enough ahead can put itself at an advantage. But add four more players into the mix and things get real complex, real fast.

With six players, the possibilities for hands, bets, and possible outcomes are so numerous that it is effectively impossible to account for all of them, especially in a minute or less. It’d be like trying to exhaustively document every grain of sand on a beach between waves.

Yet over 10,000 hands played with champions, Pluribus managed to win money at a steady rate, exposing no weaknesses or habits that its opponents could take advantage of. What’s the secret? Consistent randomness.

Even computers have regrets

Pluribus was trained, like many game-playing AI agents these days, not by studying how humans play but by playing against itself. At the beginning this is probably like watching kids, or for that matter me, play poker — constant mistakes, but at least the AI and the kids learn from them.

The training program used something called Monte Carlo counterfactual regret minimization. Sounds like when you have whiskey for breakfast after losing your shirt at the casino, and in a way it is — machine learning style.

Regret minimization just means that when the system would finish a hand (against itself, remember), it would then play that hand out again in different ways, exploring what might have happened had it checked here instead of raised, folded instead of called, and so on. (Since it didn’t really happen, it’s counterfactual.)

A Monte Carlo tree is a way of organizing and evaluating lots of possibilities, akin to climbing a tree of them branch by branch and noting the quality of each leaf you find, then picking the best one once you think you’ve climbed enough.

If you do it ahead of time (this is done in chess, for instance) you’re looking for the best move to choose from. But if you combine it with the regret function, you’re looking through a catalog of possible ways the game could have gone and observing which would have had the best outcome.

So Monte Carlo counterfactual regret minimization is just a way of systematically investigating what might have happened if the computer had acted differently, and adjusting its model of how to play accordingly.
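A toy version of one regret-matching step, the core update inside counterfactual regret minimization (the three-action situation and payoffs below are invented for illustration):

```python
ACTIONS = ["fold", "call", "raise"]

def update_regrets(regrets, counterfactual_payoffs, chosen):
    """Accumulate regret: what each alternative would have earned,
    minus what the chosen action actually earned."""
    earned = counterfactual_payoffs[chosen]
    for i, alt in enumerate(counterfactual_payoffs):
        regrets[i] += alt - earned
    return regrets

def strategy_from_regrets(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1 / len(regrets)] * len(regrets)   # uniform when no signal
    return [p / total for p in positive]

# Replaying one finished hand: raising would have won 2, calling 1, folding 0,
# and the engine actually called (index 1).
regrets = update_regrets([0.0, 0.0, 0.0], [0.0, 1.0, 2.0], chosen=1)
```

Run over billions of self-played hands, accumulated regrets like these are what shape the blueprint strategy.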

The game originally played out as you see on the left, with a loss. But the engine explores other avenues where it might have done better.

Of course the number of games is nigh-infinite if you want to consider what would happen if you had bet $101 rather than $100, or you would have won that big hand if you’d had an eight kicker instead of a seven. Therein also lies nigh-infinite regret, the kind that keeps you in bed in your hotel room until past lunch.

The truth is these minor changes matter so seldom that the possibility can basically be ignored entirely. It will never really matter that you bet an extra buck — so any bet between, say, $70 and $130 can be considered exactly the same by the computer. Same with cards — whether the jack is a heart or a spade doesn’t matter except in very specific (and usually obvious) situations, so 99.999 percent of the time the hands can be considered equivalent.

This “abstraction” of gameplay sequences and “bucketing” of possibilities greatly reduces the possibilities Pluribus has to consider. It also helps keep the calculation load low; Pluribus was trained on a relatively ordinary 64-core server rack over about a week, while other models might take processor-years in high-power clusters. It even runs on an (admittedly beefy) rig with two CPUs and 128 gigs of RAM.
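Action abstraction of this kind can be sketched in a few lines; the bucket edges below are illustrative, not Pluribus's actual abstraction:

```python
BUCKET_EDGES = [0, 50, 150, 400, 1000, 10_000]   # chip thresholds

def bucket(bet):
    """Map a raw bet size to a coarse bucket index."""
    for i in range(len(BUCKET_EDGES) - 1):
        if BUCKET_EDGES[i] <= bet < BUCKET_EDGES[i + 1]:
            return i
    return len(BUCKET_EDGES) - 1   # anything at or above the top edge

# A $70 bet and a $130 bet land in the same bucket, so the solver
# treats them identically; a $500 bet does not.
```

Collapsing thousands of possible bet sizes into a handful of buckets is what keeps the game tree small enough to search.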

Random like a fox

The training produces what the team calls a “blueprint” for how to play that’s fundamentally strong and would probably beat plenty of players. But a weakness of AI models is that they develop tendencies that can be detected and exploited.

Facebook’s writeup of Pluribus provides the example of two computers playing rock-paper-scissors. One picks randomly while the other always picks rock. Theoretically they’d both win the same number of games. But if the computer tried the all-rock strategy on a human, it would start losing with a quickness and never stop.
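That rock-paper-scissors point is easy to verify in code. Here is a sketch pitting an all-rock bot against a simple best-response opponent, with +1 for a win, 0 for a tie, and -1 for a loss:

```python
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def payoff(mine, theirs):
    """+1 for a win, 0 for a tie, -1 for a loss."""
    if mine == theirs:
        return 0
    return 1 if BEATS[theirs] == mine else -1

def exploit(history):
    """Best response to the opponent's most frequent past move."""
    if not history:
        return "rock"
    favorite = max(set(history), key=history.count)
    return BEATS[favorite]

# All-rock bot vs. the exploiter over 100 rounds: one opening tie,
# then 99 straight losses.
history, score = [], 0
for _ in range(100):
    move = "rock"
    score += payoff(move, exploit(history))
    history.append(move)
```

A uniformly random bot, by contrast, gives the exploiter nothing to latch onto and breaks even in expectation, which is exactly why Pluribus randomizes.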

As a simple example in poker, maybe a particular series of bets always makes the computer go all in regardless of its hand. If a player can spot that series, they can take the computer to town any time they like. Finding and preventing ruts like these is important to creating a game-playing agent that can beat resourceful and observant humans.

To do this Pluribus does a couple things. First, it has modified versions of its blueprint to put into play should the game lean towards folding, calling, or raising. Different strategies for different games mean it’s less predictable, and it can switch in a minute should the bet patterns change and the hand go from a calling to a bluffing one.

It also engages in a short but comprehensive introspective search looking at how it would play if it had every other hand, from a big nothing up to a straight flush, and how it would bet. It then picks its bet in the context of all those, careful to do so in such a way that it doesn’t point to any one in particular. Given the same hand and same play again, Pluribus wouldn’t choose the same bet, but rather vary it to remain unpredictable.

These strategies contribute to the “consistent randomness” I alluded to earlier, which was part of the model’s ability to slowly but reliably beat some of the best players in the world.

The human’s lament

There are too many hands to point to a particular one or ten that indicate the power Pluribus was bringing to bear on the game. Poker is a game of skill, luck, and determination, and one where winners emerge only after dozens or hundreds of hands.

And here it must be said that the experimental setup is not entirely reflective of an ordinary 6-person poker game. Unlike a real game, chip counts are not maintained as an ongoing total — for every hand, each player was given 10,000 chips to use as they pleased, and win or lose they were given 10,000 in the next hand as well.

The interface used to play poker with Pluribus. Fancy!

Obviously this rather limits the long-term strategies possible, and indeed “the bot was not looking for weaknesses in its opponents that it could exploit,” said Facebook AI research scientist Noam Brown. Truly Pluribus was living in the moment the way few humans can.

But simply because it was not basing its play on long-term observations of opponents’ individual habits or styles does not mean that its strategy was shallow. On the contrary, it is arguably more impressive, and casts the game in a different light, that a winning strategy exists that does not rely on behavioral cues or exploitation of individual weaknesses.

The pros who had their lunch money taken by the implacable Pluribus were good sports, however. They praised the system’s high-level play, its validation of existing techniques, and its inventive use of new ones. Here’s a selection of laments from the fallen humans:

I was one of the earliest players to test the bot so I got to see its earlier versions. The bot went from being a beatable mediocre player to competing with the best players in the world in a few weeks. Its major strength is its ability to use mixed strategies. That’s the same thing that humans try to do. It’s a matter of execution for humans — to do this in a perfectly random way and to do so consistently. It was also satisfying to see that a lot of the strategies the bot employs are things that we do already in poker at the highest level. To have your strategies more or less confirmed as correct by a supercomputer is a good feeling. -Darren Elias

It was incredibly fascinating getting to play against the poker bot and seeing some of the strategies it chose. There were several plays that humans simply are not making at all, especially relating to its bet sizing. -Michael ‘Gags’ Gagliano

Whenever playing the bot, I feel like I pick up something new to incorporate into my game. As humans I think we tend to oversimplify the game for ourselves, making strategies easier to adopt and remember. The bot doesn’t take any of these short cuts and has an immensely complicated/balanced game tree for every decision. -Jimmy Chou

In a game that will, more often than not, reward you when you exhibit mental discipline, focus, and consistency, and certainly punish you when you lack any of the three, competing for hours on end against an AI bot that obviously doesn’t have to worry about these shortcomings is a grueling task. The technicalities and deep intricacies of the AI bot’s poker ability was remarkable, but what I underestimated was its most transparent strength – its relentless consistency. -Sean Ruane

Beating humans at poker is just the start. As good a player as it is, Pluribus is more importantly a demonstration that an AI agent can achieve superhuman performance at something as complicated as 6-player poker.

“Many real-world interactions, such as financial markets, auctions, and traffic navigation, can similarly be modeled as multi-agent interactions with limited communication and collusion among participants,” writes Facebook in its blog.

Yes, and war.

This solar array expands itself at the right temperature

Wouldn’t it be nice to have a solar panel that’s only there when the sun shines on it? That’s the idea behind this research project, which uses shape-shifting materials to make a solar panel grow from a compressed state to an expanded one with nothing more than a change in temperature.

The flower-like prototype device is made of what’s called a “shape-memory polymer,” a material that can be deformed into one shape when cool and, when heated, attempts to return to its original configuration. In this case the cool form is a compressed disc, and the warm one is a much wider disc.

The transition (demonstrated here in warm water for simplicity) takes less than a minute. It’s guided by a network of hinged joints, the structure of which was inspired by the children’s toy known as a Hoberman sphere, which changes from a small, spiky ball to a larger spherical one when thrown.

The cooled-down material would stay rigid during, say, deployment on a satellite. Then, when the satellite moves into sunlight, the mechanism would bloom into the full-sized array, no power necessary. That would potentially save space on a satellite that can’t quite fit a battery or spare solar array to kick-start a larger one.

For now the transformation is one-way; the larger disc must be manually folded back into the smaller configuration — but one can imagine how once powered up, a separate mechanism could accomplish that, stowing itself away until the next chance to absorb some sunlight appears.

Don’t expect to see this on any spacecraft next year, but it’s definitely a cool (and warm) idea that could prove more than a little useful for small satellites and the like in the future. And who knows? Maybe you’ll have a garden of these little blooming arrays on your roof before that.

The research, from Caltech and ETHZ, is documented in the journal Physical Review Applied.

GPS on the Moon? NASA’s working on it

Posted by | Gadgets, Government, gps, hardware, NASA, science, Space | No Comments

If you’re driving your car from Portland to Merced, you probably rely on GPS to see where you are. But what if you’re driving your Moon rover from Oceanus Procellarum to the Sea of Tranquility? Actually, GPS should be fine — if this NASA research pans out.

Knowing exactly where you are in space, relative to other bodies anyway, is definitely a non-trivial problem. Fortunately the stars are fixed and by triangulating with them and other known landmarks, a spacecraft can figure out its location quite precisely.

But that’s so much work! Here on Earth we gave that up years ago, and now rely (perhaps too much) on GPS to tell us where we are within a few meters.

By creating our own fixed stars — satellites in precisely known medium Earth orbits — constantly emitting timed signals, we made it possible for our devices to quickly sample those signals and immediately locate themselves.
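The core idea, stripped of orbital mechanics, is trilateration: if you know your distances to a few beacons at known positions, your own position falls out of simple algebra. Here is a minimal 2D sketch; real GPS works in 3D and also solves for the receiver’s clock error, and the beacon positions and ranges below are invented for illustration:

```python
# Toy 2D trilateration: recover a receiver's position from ranges
# to beacons at known positions. Assumes perfect (noise-free) ranges.
import math

def trilaterate(beacons, ranges):
    # Linearize by subtracting the first range equation from the others:
    # each difference of squared ranges yields a linear equation in (x, y).
    (x1, y1), r1 = beacons[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # Solve the resulting 2x2 linear system by Cramer's rule.
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y

beacons = [(0, 0), (10, 0), (0, 10)]
truth = (3.0, 4.0)
ranges = [math.dist(truth, bcn) for bcn in beacons]
print(trilaterate(beacons, ranges))  # ≈ (3.0, 4.0)
```

With three beacons the 2D position is fully determined; GPS needs a fourth satellite for the same reason, since the receiver’s clock offset is an extra unknown.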

That sure would be handy on the Moon, but a quarter of a million miles makes a lot of difference to a system that relies on ultra-precise timing and signal measurement. Yet there’s nothing theoretically barring GPS signals from being measured out there — and in fact, NASA has already done it at nearly half that distance with the MMS mission a few years ago.

“NASA has been pushing high-altitude GPS technology for years,” said MMS system architect Luke Winternitz in a NASA news release. “GPS around the Moon is the next frontier.”

Astronauts can’t just take their phones up there, of course. Our devices are calibrated for catching and calculating signals from satellites known to be in orbit above us and within a certain range of distances. The time for the signal to reach us from orbit is a fraction of a second, while on or near the Moon it would take perhaps a full second and a half. That may not sound like much, but it fundamentally affects how the receiving and processing systems have to be built.
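The timing gap is easy to sanity-check from the distances alone. A back-of-the-envelope sketch, using round figures for GPS altitude and the mean Earth–Moon distance rather than any mission’s actual geometry:

```python
# Rough one-way radio propagation delays (illustrative round numbers).
C = 299_792_458              # speed of light, m/s

GPS_ORBIT_ALT_M = 20_200e3   # GPS satellites orbit roughly 20,200 km up
MOON_DISTANCE_M = 384_400e3  # mean Earth-Moon distance

def travel_time_s(distance_m: float) -> float:
    """One-way signal propagation delay over a given distance."""
    return distance_m / C

print(f"GPS orbit to ground: {travel_time_s(GPS_ORBIT_ALT_M):.3f} s")  # ~0.067 s
print(f"Earth to the Moon:   {travel_time_s(MOON_DISTANCE_M):.3f} s")  # ~1.282 s
```

A signal from a GPS satellite on the far side of Earth relative to the Moon travels even farther, pushing the delay toward the second and a half the article mentions.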

That’s precisely what the team at NASA Goddard has been working on addressing with a new navigation computer that uses a special high-gain antenna, a super-precise clock and other improvements over the earlier NavCube space GPS system and, of course, the terrestrial ones we all have in our phones.

The idea is to use GPS instead of relying on NASA’s network of ground and satellite measurement systems, which must exchange data with the spacecraft and eat up valuable bandwidth and power. Freeing up those systems would let them work on other missions and allow more of the GPS-capable satellite’s communications to be dedicated to science and other high-priority transmissions.

The team hopes to complete the lunar NavCube hardware by the end of the year and then find a flight to the Moon on which to test it as soon as possible. Fortunately, with Artemis gaining traction, it looks as if there will be no shortage of those.

NASA picks a dozen science and tech projects to bring to the surface of the Moon

Posted by | Artemis, astrobotic, Gadgets, Government, hardware, NASA, robotics, science, Space | No Comments

With the Artemis mission scheduled to put boots on lunar regolith as soon as 2024, NASA has a lot of launching to do — and you can be sure none of those launches will go to waste. The agency just announced 12 new science and technology projects to send to the Moon’s surface, including a new rover.

The 12 projects are being sent up as part of the Commercial Lunar Payload Services program, which is — as NASA Administrator Jim Bridenstine has emphasized strongly — part of an intentional increase in reliance on private companies. If a company already has a component, rover or craft ready to go that meets a program’s requirements, why should NASA build one from scratch at great cost?

In this case, the selected projects cover a wide range of origins and intentions. Some are repurposed or spare parts from other missions, like the Lunar Surface Electromagnetics Experiment. LuSEE draws on hardware related to the Parker Solar Probe and the STEREO/Waves instrument, plus pieces from MAVEN, re-engineered to make observations and measurements on the Moon.

Others are quite new. Astrobotic, which was also recently awarded an $80 million contract to develop its Peregrine lunar lander, will now also be putting together a rover, which it calls MoonRanger (no relation to the NES game). This little bot will autonomously traverse the landscape within half a mile or so of its base and map it in 3D.

The new funding from NASA amounts to $5.6 million, which isn’t a lot to develop a lunar rover from scratch — no doubt Astrobotic is using its own funds and working with its partner, Carnegie Mellon University, to make sure the rover isn’t a bargain-bin device. With veteran rover engineer Red Whittaker on board, it should be a good one.

“MoonRanger offers a means to accomplish far-ranging science of significance, and will exhibit an enabling capability on missions to the Moon for NASA and the commercial sector. The autonomy techniques demonstrated by MoonRanger will enable new kinds of exploration missions that will ultimately herald in a new era on the Moon,” said Whittaker in an Astrobotic news release.

The Moon isn’t so far away that controlling a rover directly from Earth is next to impossible, as it is on Mars, but if the rover can go from here to there without someone in Houston twiddling a joystick, why shouldn’t it?

To be clear, this is different from the upcoming CubeRover project and others that are floating around in Astrobotic and Whittaker’s figurative orbits.

“MoonRanger is a 13 kg microwave-sized rover with advanced autonomous capabilities,” Astrobotic’s Mike Provenzano told me. “The CubeRover is a 2 kg shoebox-sized rover developed for light payloads and geared for affordable science and exploration activities.”

Both have flight contracts: CubeRover is scheduled to go up on the first Peregrine mission in 2021, while MoonRanger’s ride is TBD.

Another NASA selection is the Planetary Science Institute’s Heimdall, a new camera system that will point downward during the lander’s descent and collect super-high-resolution imagery of the regolith before, during and after landing.

“The camera system will return the highest resolution images of the undisturbed lunar surface yet obtained, which is important for understanding regolith properties. We will be able to essentially video the landing in high resolution for the first time, so we can understand how the plume behaves – how far it spreads, how long particles are lofted. This information is crucial for the safety of future landings,” said the project’s R. Aileen Yingst in a PSI release.

The regolith is naturally the subject of much curiosity, since if we’re to establish a semi-permanent presence on the Moon we’ll have to deal with it one way or another. So projects like Honeybee’s PlanetVac, which can suck up and test materials right at landing, or the Regolith Adherence Characterization, which will see how the stuff sticks to various materials, will be invaluable.

RadSat-G deployed from the ISS for its year-long mission to test radiation tolerance on its computer systems

Several projects are continuations of existing projects that are great fits for lunar missions. For example, the lunar surface is constantly being bombarded with all kinds of radiation, since the Moon lacks any kind of atmosphere. That’s not a problem for machinery like wheels or even solar cells, but for computers, radiation can be highly destructive. So Brock LaMere’s work in radiation-tolerant computers will be highly relevant to landers, rovers and payloads.

LaMere’s work has already been tested in space via the Nanoracks facility aboard the International Space Station, and the new NASA funding will allow it to be tested on the lunar surface. If we’re going to be sending computers up there that people’s lives will depend on, we’d better be completely sure they aren’t going to crash because of a random radiation event.

The rest of the projects are characterized here, with varying degrees of detail. No doubt we’ll learn more soon as the funding disbursed by NASA over the next year or so helps flesh them out.

Team studies drone strikes on airplanes by firing them into a wall at 500 MPH

Posted by | drones, fraunhofer, Gadgets, hardware, robotics, science, UAVs | No Comments

Bird strikes are a very real danger to planes in flight, and consequently aircraft are required to undergo bird strike testing — but what about drones? With UAV interference at airports on the rise, drone strike testing may soon be likewise mandatory, and if it’s anything like what these German researchers are doing, it’ll involve shooting the craft out of air cannons at high speed.

The work being done at Fraunhofer EMI in Freiburg is meant to establish some basic parameters for how these things ought to be tested.

Bird strikes, for example, are tested by firing a frozen chicken or turkey out of an air cannon. It’s not pretty, but it has to be done. Even so, a frozen bird isn’t a very good analogue for a drone.

“From a mechanical point of view, drones behave differently to birds and also weigh considerably more. It is therefore uncertain whether an aircraft that has been successfully tested against bird strike would also survive a collision with a drone,” explained Fraunhofer’s Sebastian Schopferer in a news release.

The team chose to load an air cannon with drone batteries and engines, since those make up most of any given UAV’s mass. The propellers and arms on which they’re mounted are generally pretty light and will break easily — compared with a battery weighing the better part of a kilogram, they won’t add much to the damage.

The remains of a drone engine and battery after being propelled into the plate on the left at hundreds of miles per hour

The drone components were fired at speeds from 250 to 570 miles per hour (115 to 255 meters per second, in the researchers’ units) at aluminum plates up to 8 millimeters thick. Unsurprisingly, there was “substantial deformation” of the plates and the wingless drones were “completely destroyed.” Said destruction was recorded by a high-speed camera, though unfortunately the footage was not made available.
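Those mile-per-hour figures are straightforward unit conversions of the researchers’ metric numbers, and they hint at why the plates fare so badly: even a sub-kilogram battery carries tens of kilojoules at these speeds. A quick sanity check, where the 0.8 kg battery mass is our own assumption rather than a Fraunhofer figure:

```python
# Sanity-check the speed conversion and estimate impact energy.
MPH_PER_MPS = 2.23694  # 1 m/s = 2.23694 mph

def mps_to_mph(v_mps: float) -> float:
    return v_mps * MPH_PER_MPS

def kinetic_energy_j(mass_kg: float, v_mps: float) -> float:
    return 0.5 * mass_kg * v_mps ** 2

for v in (115, 255):
    print(f"{v} m/s = {mps_to_mph(v):.0f} mph")
# 115 m/s ≈ 257 mph, 255 m/s ≈ 570 mph

# A battery weighing "the better part of a kilogram" (assume 0.8 kg):
print(f"{kinetic_energy_j(0.8, 255) / 1000:.1f} kJ at top speed")  # ≈ 26.0 kJ
```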

It’s necessary to do a variety of tests to determine what’s practical and what’s unnecessary or irrelevant — why spend the extra time and money firing the drones at 570 mph when 500 does the same level of damage? Does including the arms and propellers make a difference? At what speed is the plate in danger of being pierced, necessitating additional protective measures? And so on. A new rig is being constructed that will allow acceleration (and deceleration) of larger UAVs.

With enough testing the team hopes that not only could such things be standardized, but simulations could be built that would allow engineers to virtually test different surfaces or materials without a costly and explosive test rig.
