Apple acquires talking Barbie voicetech startup PullString

Apple has just bought up the talent it needs to make talking toys a part of Siri, HomePod and its broader voice strategy. Apple has acquired PullString, also known as ToyTalk, according to Axios’ Dan Primack and Ina Fried. TechCrunch has received confirmation of the acquisition from sources with knowledge of the deal. The startup makes voice experience design tools, the artificial intelligence that powers those experiences, and talking toys like Hello Barbie and Thomas the Tank Engine in partnership with Mattel. Founded in 2011 by former Pixar executives, PullString went on to raise $44 million.

Apple’s Siri is seen as lagging far behind Amazon Alexa and Google Assistant, not only in voice recognition and utility, but also in terms of developer ecosystem. Google and Amazon have built platforms to distribute Skills from tons of voice app makers, including storytelling, quizzes, and other games for kids. If Apple wants to take a real shot at becoming the center of your connected living room with Siri and HomePod, it will need to play nice with the children who spend their time there. Buying PullString could jumpstart Apple’s in-house catalog of speech-activated toys for kids as well as beef up its tools for voice developers.

PullString did catch some flak for being a “child surveillance device” back in 2015, but countered by detailing the security built into its Hello Barbie product and saying it’d never been hacked to steal children’s voice recordings or other sensitive info. Privacy norms have changed since then, with so many people readily buying always-listening Echos and Google Homes.

In 2016 it rebranded as PullString with a focus on developer tools that allow for visually mapping out conversations and publishing finished products to the Google and Amazon platforms. Given SiriKit’s complexity and lack of features, PullString’s Converse platform could pave the way for a lot more developers to jump into building voice products for Apple’s devices.
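PullString hasn’t published Converse’s internals here, but the kind of visually mapped conversation the article describes boils down to a branching tree of prompts and expected replies. Below is a minimal, hypothetical Python sketch of what such a structure might look like once exported; the node names, prompts and keyword matching are invented for illustration and are not PullString’s actual format.

```python
# Hypothetical sketch of a branching voice conversation, loosely modeled on
# what a visual conversation-mapping tool might export. Node names, prompts,
# and the matching logic are illustrative, not PullString's actual format.

CONVERSATION = {
    "start": {
        "prompt": "Hi! Do you want to hear a story or play a quiz?",
        "branches": {"story": "tell_story", "quiz": "ask_question"},
    },
    "tell_story": {
        "prompt": "Once upon a time, a robot learned to talk...",
        "branches": {},
    },
    "ask_question": {
        "prompt": "What color do you get when you mix blue and yellow?",
        "branches": {"green": "correct", "purple": "try_again"},
    },
    "correct": {"prompt": "That's right, green!", "branches": {}},
    "try_again": {"prompt": "Not quite. Give it another try!", "branches": {}},
}


def respond(node_id, user_utterance):
    """Return the next node id by matching keywords in the user's reply."""
    node = CONVERSATION[node_id]
    for keyword, next_node in node["branches"].items():
        if keyword in user_utterance.lower():
            return next_node
    return node_id  # no match: stay put and re-prompt


if __name__ == "__main__":
    current = "start"
    print(CONVERSATION[current]["prompt"])
    current = respond(current, "Let's do the quiz")
    print(CONVERSATION[current]["prompt"])
```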

We’ve reached out to Apple and PullString for more details about whether PullString and ToyTalk’s products will remain available.

The startup raised its cash from investors including Khosla Ventures, CRV, Greylock, First Round, and True Ventures; its last raise was a 2016 Series D that PitchBook says valued the startup at $160 million. While the voicetech space has since exploded, it can still be difficult for voice experience developers to earn money without accompanying physical products, and many enterprises still aren’t sure what to build with tools like those offered by PullString. That might have led the startup to see a brighter future with Apple, strengthening one of the most ubiquitous though also most detested voice assistants.

DARPA wants smart bandages for wounded warriors

Nowhere is prompt and effective medical treatment more important than on the battlefield, where injuries are severe and conditions dangerous. DARPA thinks that outcomes can be improved by the use of intelligent bandages and other systems that predict and automatically react to the patient’s needs.

Ordinary cuts and scrapes just need a bit of shelter and time, and your amazing immune system takes care of things. But soldiers receive far graver wounds, and under complex conditions that hinder healing in unpredictable ways.

DARPA’s Bioelectronics for Tissue Regeneration program, or BETR, will help fund new treatments and devices that “closely track the progress of the wound and then stimulate healing processes in real time to optimize tissue repair and regeneration.”

“Wounds are living environments and the conditions change quickly as cells and tissues communicate and attempt to repair,” said Paul Sheehan, BETR program manager, in a DARPA news release. “An ideal treatment would sense, process, and respond to these changes in the wound state and intervene to correct and speed recovery. For example, we anticipate interventions that modulate immune response, recruit necessary cell types to the wound, or direct how stem cells differentiate to expedite healing.”

It’s not hard to imagine what these interventions might comprise. Smart watches are capable of monitoring several vital signs already, and in fact have alerted users to such things as heart-rate irregularities. A smart bandage would use any signal it can collect — “optical, biochemical, bioelectronic, or mechanical” — to monitor the patient and either recommend or automatically adjust treatment.

A simple example might be a wound that the bandage detects from certain chemical signals is becoming infected with a given kind of bacteria. It can then administer the correct antibiotic in the correct dose and stop when necessary rather than wait for a prescription. Or if the bandage detects shearing force and then an increase in heart rate, it’s likely the patient has been moved and is in pain — out come the painkillers. Of course, all this information would be relayed to the caregiver.
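To make the closed-loop idea concrete, here is a minimal sketch of the kind of rule-based decision step the examples above imply. The sensor names, thresholds and interventions are placeholders invented for illustration; nothing here reflects an actual BETR design.

```python
# Minimal sketch of the closed-loop logic described above. Sensor names,
# thresholds, and interventions are hypothetical placeholders, not part of
# any real BETR system.
from dataclasses import dataclass


@dataclass
class WoundReading:
    bacterial_marker: float   # biochemical signal, arbitrary units
    shear_force: float        # mechanical signal, newtons
    heart_rate: int           # beats per minute


def decide_interventions(reading):
    """Map sensor readings to treatment actions and caregiver alerts."""
    actions = []
    if reading.bacterial_marker > 0.8:
        actions.append("administer antibiotic dose")
    if reading.shear_force > 5.0 and reading.heart_rate > 110:
        actions.append("administer analgesic dose")
    actions.append("relay readings to caregiver")
    return actions


print(decide_interventions(WoundReading(0.9, 6.2, 118)))
# ['administer antibiotic dose', 'administer analgesic dose', 'relay readings to caregiver']
```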

This system may require some degree of artificial intelligence, although of course it would have to be pretty limited. But biological signals can be noisy and machine learning is a powerful tool for sorting through that kind of data.

BETR is a four-year program, during which DARPA hopes that it can spur innovation in the space and create a “closed-loop, adaptive system” that improves outcomes significantly. There’s a further ask for a system that addresses osseointegration surgery for prosthetic fitting — a sad necessity for many serious injuries incurred during combat.

One hopes that the technology will trickle down, of course, but let’s not get ahead of ourselves. It’s all largely theoretical for now, though it seems more than possible that the pieces could come together well ahead of the deadline.

Robin’s robotic mowers now have a patented doggie door just for them

Back in 2016 we had Robin up onstage demonstrating the possibility of a robotic mower as a service rather than just something you buy. They’re still going strong, and just introduced and patented what seems in retrospect a pretty obvious idea: an automatic door for the mower to go through fences between front and back yards.

It’s pretty common, after all, to have a back yard isolated from the front lawn by a wood or chain link fence so dogs and kids can roam freely with only light supervision. And if you’re lucky enough to have a robot mower, it can be a pain to carry it from one side to the other. Isn’t the whole point of the thing that you don’t have to pick it up or interact with it in any way?

The solution Justin Crandall and his team at Robin came up with is simple and straightforward: an automatic mower-size door that opens only to let it through.

“In Texas over 90 percent of homes have a fenced in backyard, and even in places like Charlotte and Cleveland it’s roughly 25-30 percent, so technology like this is critical to adoption,” Crandall told me. “We generally dock the robots in the backyard for security. When it’s time to mow the front yard, the robots drive to the door we place in the fence. As it approaches the door, the robot drives over a sensor we place in the ground. That sensor unlocks the door to allow the mower access.”

Simple, right? It uses a magnetometer rather than a wireless or IR sensor, since those introduced the possibility of false positives. And it costs around $100-$150, easily less than a second robot or base, and probably pays for itself in goodwill around the third or fourth time you realize you didn’t have to carry your robot around.
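Robin hasn’t published its firmware, but the trigger described above reduces to a simple loop: watch the buried magnetometer for a field spike, unlock the latch, wait, relock. A hedged sketch, with the thresholds, timing and hardware interfaces all assumed:

```python
# A sketch of the gate logic as described: a buried magnetometer detects the
# mower passing overhead and unlocks the door for a short window. Field
# thresholds, timings, and the hardware interface are assumptions.
import time


class FenceDoor:
    UNLOCK_SECONDS = 30
    FIELD_THRESHOLD_UT = 150.0  # microtesla spike expected as the mower passes over

    def __init__(self, magnetometer, latch):
        self.magnetometer = magnetometer  # object with .read_microtesla()
        self.latch = latch                # object with .unlock() / .lock()

    def poll(self):
        if self.magnetometer.read_microtesla() > self.FIELD_THRESHOLD_UT:
            self.latch.unlock()
            time.sleep(self.UNLOCK_SECONDS)  # give the mower time to pass through
            self.latch.lock()
```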

It’s patented, but rivals (like iRobot, which recently introduced its own mower) could certainly build one if it was sufficiently different.

Robin has expanded to several states and a handful of franchises (its plan from the start) and maintains that its all-inclusive robot-as-a-service method is better than going out and buying one for yourself. Got a big yard and no teenage kids who can mow it for you? See if Robin’s available in your area.

Let’s save the bees with machine learning

Machine learning and all its related forms of “AI” are being used to work on just about every problem under the sun, but even so, stemming the alarming decline of the bee population still seems out of left field. In fact it’s a great application for the technology and may help both bees and beekeepers keep hives healthy.

The latest threat to our precious honeybees is the Varroa mite, a parasite that infests hives and sucks the blood from both bees and their young. While it rarely kills a bee outright, it can weaken it and cause young to be born similarly weak or deformed. Over time this can lead to colony collapse.

The worst part is that unless you’re looking closely, you might not even see the mites — being mites, they’re tiny: a millimeter or so across. So infestations often go on for some time without being discovered.

Beekeepers, caring folk at heart obviously, want to avoid this. But the solution has been to put a flat surface beneath a hive and pull it out every few days, inspecting all the waste, dirt and other hive junk for the tiny bodies of the mites. It’s painstaking and time-consuming work, and of course if you miss a few, you might think the infestation is getting better instead of worse.

Machine learning to the rescue!

As I’ve had occasion to mention about a billion times before this, one of the things machine learning models are really good at is sorting through noisy data, like a surface covered in random tiny shapes, and finding targets, like the shape of a dead Varroa mite.

Students at the École Polytechnique Fédérale de Lausanne in Switzerland created an image recognition agent called ApiZoom trained on images of mites that can sort through a photo and identify any visible mite bodies in seconds. All the beekeeper needs to do is take a regular smartphone photo and upload it to the EPFL system.

The project started back in 2017, and since then the model has been trained with tens of thousands of images and achieved a detection success rate of about 90 percent, which the project’s Alain Bugnon told me is roughly at parity with humans. The plan now is to distribute the app as widely as possible.
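ApiZoom’s code isn’t public, but a detection pass like the one described can be sketched generically: tile the uploaded photo, run each tile through a trained “mite / not mite” classifier and count the positives. The snippet below assumes a hypothetical saved Keras model and made-up tile sizes; it is illustrative only.

```python
# Generic sketch of the kind of detection pass described above, not ApiZoom's
# actual code. Assumes a small Keras classifier ("mite / not mite") has already
# been trained and saved as mite_classifier.h5; tile size and threshold are guesses.
import numpy as np
import tensorflow as tf
from PIL import Image

TILE = 64  # pixels per square tile fed to the classifier


def count_mites(photo_path, model_path="mite_classifier.h5"):
    model = tf.keras.models.load_model(model_path)
    img = np.asarray(Image.open(photo_path).convert("RGB"), dtype=np.float32) / 255.0
    h, w = img.shape[0], img.shape[1]
    tiles = []
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            tiles.append(img[y:y + TILE, x:x + TILE])
    scores = model.predict(np.stack(tiles), verbose=0)  # assumed: one "mite" probability per tile
    return int((scores[:, 0] > 0.5).sum())


# Usage (hypothetical files):
# print(count_mites("hive_board.jpg"))
```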

“We envisage two phases: a web solution, then a smartphone solution. These two solutions allow to estimate the rate of infestation of a hive, but if the application is used on a large scale, of a region,” Bugnon said. “By collecting automatic and comprehensive data, it is not impossible to make new findings about a region or atypical practices of a beekeeper, and also possible mutations of the Varroa mites.”

That kind of systematic data collection would be a major help for coordinating infestation response at a national level. ApiZoom is being spun out as a separate company by Bugnon; hopefully this will help get the software to beekeepers as soon as possible. The bees will thank them later.

Don’t worry, this rocket-launching Chinese robo-boat is strictly for science

It seems inevitable that the high seas will eventually play host to a sort of proxy war as automated vessels clash over territory for the algae farms we’ll soon need to feed the growing population. But this rocket-launching robo-boat is a peacetime vessel concerned only with global weather patterns.

The craft is what’s called an unmanned semi-submersible vehicle, or USSV, and it functions as a mobile science base — and now, a rocket launch platform. For meteorological sounding rockets, of course, nothing scary.

It solves a problem we’ve seen addressed by other seagoing robots like the Saildrone: that the ocean is very big, and very dangerous — so monitoring it properly is equally big and dangerous. You can’t have a crew out in the middle of nowhere all the time, even if it would be critical to understanding the formation of a typhoon or the like. But you can have a fleet of robotic ships systematically moving around the ocean.

In fact this is already done in a variety of ways and by numerous countries and organizations, but much of the data collection is both passive and limited in range. A solar-powered buoy drifting on the currents is a great resource, but you can’t exactly steer it, and it’s limited to sampling the water around it. And weather balloons are nice, too, if you don’t mind flying them out to where they need to be first.

A robotic boat, on the other hand, can go where you need it and deploy instruments in a variety of ways, dropping or projecting them deep into the water or, in the case of China’s new USSV, firing them 20,000 feet into the air.

“Launched from a long-duration unmanned semi-submersible vehicle, with strong mobility and large coverage of the sea area, rocketsonde can be used under severe sea conditions and will be more economical and applicable in the future,” said Jun Li, a researcher at the Chinese Academy of Sciences, in a news release.

The 24-foot craft, which has completed a handful of near-land cruises in Bohai Bay, was described in a newly published paper. You may wonder what “semi-submersible” means. Essentially, as much of the craft as possible sits under the water, with only instruments, hatches and other necessary items above the surface. That minimizes the effect of rough weather on the craft — but it is still self-righting in case it capsizes in major wave action.

The USSV’s early travels

It runs on a diesel engine, so it’s not exactly the latest tech there, but for a large craft going long distances, solar is still a bit difficult to manage. The diesel on board will last it about 10 days and take it around 3,000 km, or 1,800 miles.

The rocketsondes are essentially small rockets that shoot up to a set altitude and then drop a “driftsonde,” a sensor package attached to a balloon, parachute or some other descent-slowing method. The craft can carry up to 48 of these, meaning it could launch one every few hours for its entire 10-day cruise duration.
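A quick back-of-envelope check of that cadence, using only the figures quoted above:

```python
# Back-of-envelope cadence implied by the figures above: 48 rocketsondes
# spread evenly over a 10-day cruise.
sondes = 48
cruise_hours = 10 * 24

hours_between_launches = cruise_hours / sondes
print(f"One launch every {hours_between_launches:.1f} hours")  # One launch every 5.0 hours
```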

The researchers’ findings were published in the journal Advances in Atmospheric Sciences. This is just a prototype, but its success suggests we can expect a few more at the very least to be built and deployed. I’ve asked Li a few questions about the craft and will update this post if I hear back.

StarCraft II-playing AI AlphaStar takes out pros undefeated

Losing to the computer in StarCraft has been a tradition of mine since the first game came out in 1998. Of course, the built-in “AI” is trivial for serious players to beat, and for years researchers have attempted to replicate human strategy and skill in the latest version of the game. They’ve just made a huge leap with AlphaStar, which recently beat two leading pros 5-0.

The new system was created by DeepMind, and in many ways it’s very unlike what you might call a “traditional” StarCraft AI. The computer opponents you can select in the game are really pretty dumb — they have basic built-in strategies, and know in general how to attack and defend and how to progress down the tech tree. But they lack everything that makes a human player strong: adaptability, improvisation and imagination.

AlphaStar is different. It learned from watching humans play at first, but soon honed its skills by playing against facets of itself.

The first iterations watched replays of games to learn the basics of “micro” (i.e. controlling units effectively) and “macro” (i.e. game economy and long-term goals) strategy. With this knowledge it was able to beat the in-game computer opponents on their hardest setting 95 percent of the time. But as any pro will tell you, that’s child’s play. So the real work started here.

Hundreds of agents were spawned and pitted against each other.

Because StarCraft is such a complex game, it would be silly to think that there’s a single optimal strategy that works in all situations. So the machine learning agent was essentially split into hundreds of versions of itself, each given a slightly different task or strategy. One might attempt to achieve air superiority at all costs; another to focus on teching up; another to try various “cheese” attempts like worker rushes and the like. Some were even given strong agents as targets, caring about nothing else but beating an already successful strategy.

This family of agents fought and fought for hundreds of years of in-game time (undertaken in parallel, of course). Over time the various agents learned (and of course reported back) various stratagems, from simple things such as how to scatter units under an area-of-effect attack to complex multi-pronged offenses. Putting them all together produced the highly robust AlphaStar agent, with some 200 years of gameplay under its belt.
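DeepMind’s actual training setup is far more elaborate, but the league idea can be sketched schematically: spawn agents with different objectives, have them play each other and nudge each one based on results. Everything below (the Agent class, the match simulation, the Elo-style update) is a stand-in for illustration, not AlphaStar’s real training loop.

```python
# Highly schematic sketch of league-style self-play as described above.
import random


class Agent:
    def __init__(self, objective):
        self.objective = objective  # e.g. "air_superiority", "worker_rush", "beat_agent_7"
        self.rating = 1000.0

    def update_from_match(self, won):
        # Placeholder for a reinforcement-learning update; here just an Elo-like nudge.
        self.rating += 16.0 if won else -16.0


def play_match(a, b):
    # Stand-in for a full StarCraft II game; higher-rated agents win more often.
    diff = max(min(b.rating - a.rating, 2000.0), -2000.0)
    p_a_wins = 1.0 / (1.0 + 10 ** (diff / 400.0))
    return a if random.random() < p_a_wins else b


def train_league(objectives, rounds=10_000):
    league = [Agent(obj) for obj in objectives]
    for _ in range(rounds):
        a, b = random.sample(league, 2)
        winner = play_match(a, b)
        for agent in (a, b):
            agent.update_from_match(agent is winner)
    return max(league, key=lambda ag: ag.rating)  # the best agent informs the final player


best = train_league(["air_superiority", "fast_tech", "worker_rush", "macro_economy"])
print(best.objective, round(best.rating))
```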

Most StarCraft II pros are well younger than 200, so that’s a bit of an unfair advantage. There’s also the fact that AlphaStar, in its original incarnation anyway, has two other major benefits.

First, it gets its information directly from the game engine, rather than having to observe the game screen — so it knows instantly that a unit is down to 20 HP without having to click on it. Second, it can (though it doesn’t always) perform far more “actions per minute” than a human, because it isn’t limited by fleshy hands and banks of buttons. APM is just one measure among many that determines the outcome of a match, but it can’t hurt to be able to command a guy 20 times in a second rather than two or three.

It’s worth noting here that AIs for micro control have existed for years, having demonstrated their prowess in the original StarCraft. It’s incredibly useful to be able to perfectly cycle out units in a firefight so none takes lethal damage, or to perfectly time movements so no attacker is idle, but the truth is good strategy beats good tactics pretty much every time. A good player can counter the perfect micro of an AI and take that valuable tool out of play.

AlphaStar was matched up against two pro players, MaNa and TLO of the highly competitive Team Liquid. It beat them both handily, and the pros seemed excited rather than depressed by the machine learning system’s skill. Here’s game 2 against MaNa:

In comments after the game series, MaNa said:

I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected. I’ve realised how much my gameplay relies on forcing mistakes and being able to exploit human reactions, so this has put the game in a whole new light for me. We’re all excited to see what comes next.

And TLO, who actually is a Zerg main but gamely played Protoss for the experiment:

I was surprised by how strong the agent was. AlphaStar takes well-known strategies and turns them on their head. The agent demonstrated strategies I hadn’t thought of before, which means there may still be new ways of playing the game that we haven’t fully explored yet.

You can get the replays of the matches here.

AlphaStar is inarguably a strong player, but there are some important caveats here. First, when the researchers handicapped the agent to make it play more like a human (it had to move the camera around, could only click on visible units and had a human-like delay on perception), it was far less strong, and in fact was beaten by MaNa. But that version, which may become the benchmark rather than its untethered cousin, is still under development, so for that and other reasons it was never going to be as strong.

AlphaStar only plays Protoss, and the most successful versions of itself used very micro-heavy units.

Most importantly, though, AlphaStar is still an extreme specialist. It only plays Protoss versus Protoss — probably has no idea what a Zerg looks like — with a single opponent, on a single map. As anyone who has played the game can tell you, the map and the races produce all kinds of variations, which massively complicate gameplay and strategy. In essence, AlphaStar is playing only a tiny fraction of the game — though admittedly many players also specialize like this.

That said, the groundwork of designing a self-training agent is the hard part — the actual training is a matter of time and computing power. If it’s 1v1v1 on Bloodbath maybe it’s stalker/zealot time, while if it’s 2v2 on a big map with lots of elevation, out come the air units. (Is it obvious I’m not up on my SC2 strats?)

The project continues and AlphaStar will grow stronger, naturally, but the team at DeepMind thinks that some of the basics of the system, for instance how it efficiently visualizes the rest of the game as a result of every move it makes, could be applied in many other areas where AIs must repeatedly make decisions that affect a complex and long-term series of outcomes.

Autonomous subs spend a year cruising under Antarctic ice

The freezing waters underneath Antarctic ice shelves and the underside of the ice itself are of great interest to scientists… but who wants to go down there? Leave it to the robots. They won’t complain! And indeed, a pair of autonomous subs have been nosing around the ice for a full year now, producing data unlike any other expedition ever has.

The mission began way back in 2017, with a grant from the late Paul Allen. With climate change affecting sea ice around the world, precise measurements and study of these frozen climes is more important than ever. And fortunately, robotic exploration technology had reached a point where long-term missions under and around ice shelves were possible.

The project would use a proven autonomous seagoing vehicle called the Seaglider, which has been around for some time but had been redesigned to perform long-term operations in these dark, sealed-over environments. One of the craft’s co-creators, UW’s Chris Lee, said of the mission at the time: “This is a high-risk, proof-of-concept test of using robotic technology in a very risky marine environment.”

The risks seem to have paid off, as an update on the project shows. The modified craft have traveled hundreds of miles during a year straight of autonomous operation.

It’s not easy to stick around for a long time on the Antarctic coast for a lot of reasons. But leaving robots behind to work while you go relax elsewhere for a month or two is definitely doable.

“This is the first time we’ve been able to maintain a persistent presence over the span of an entire year,” Lee said in a UW news release today. “Gliders were able to navigate at will to survey the cavity interior… This is the first time any of the modern, long-endurance platforms have made sustained measurements under an ice shelf.”

You can see the paths of the robotic platforms below as they scout around near the edge of the ice and then dive under in trips of increasing length and complexity:

They navigate in the dark by monitoring their position with regard to a pair of underwater acoustic beacons fixed in place by cables. The blue dots are floats that go along with the natural currents to travel long distances on little or no power. Both are equipped with sensors to monitor the shape of the ice above, the temperature of the water, and other interesting data points.
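The exact navigation filter isn’t detailed, but with ranges to two fixed beacons the basic geometry is circle intersection, which yields two candidate fixes that dead reckoning can disambiguate. A hedged sketch with made-up beacon positions and ranges:

```python
# Sketch of 2D positioning from ranges to two fixed acoustic beacons (circle
# intersection). The real gliders' navigation filter is surely more involved;
# beacon coordinates and ranges here are invented.
import math


def locate(b1, b2, r1, r2):
    """Return the two candidate (x, y) fixes; dead reckoning picks between them."""
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    d = math.hypot(dx, dy)
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = b1[0] + a * dx / d, b1[1] + a * dy / d
    return (mx + h * dy / d, my - h * dx / d), (mx - h * dy / d, my + h * dx / d)


# Beacons 5 km apart; ranges derived from acoustic travel time.
print(locate((0.0, 0.0), (5000.0, 0.0), 3000.0, 4000.0))
# ((1800.0, -2400.0), (1800.0, 2400.0))
```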

It isn’t the first robotic expedition under the ice shelves by a long shot, but it’s definitely the longest term and potentially the most fruitful. The Seagliders are smaller, lighter, and better equipped for long-term missions. One went 87 miles in a single trip!

The mission continues, and two of the three initial Seagliders are still operational and ready to continue their work.

Robots learn to grab and scramble with new levels of agility

Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists acts as a rudimentary decision-making process, classifying objects as able to be grabbed either by an ordinary pincer grip or with a suction cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell what exactly Dex-Net 4.0 is basing its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”
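That arbitration can be sketched abstractly: score candidate grasps with one model per tool and execute the highest-scoring one. The scoring functions below are random stand-ins for Dex-Net’s trained networks, included only to show the selection logic.

```python
# Minimal sketch of the two-network arbitration described above: score candidate
# suction and gripper grasps on a depth image and execute whichever scores
# highest. The quality functions stand in for the trained networks.
import random


def suction_quality(depth_image, candidate):   # stand-in for the suction network
    return random.random()


def gripper_quality(depth_image, candidate):   # stand-in for the parallel-jaw network
    return random.random()


def choose_grasp(depth_image, suction_candidates, gripper_candidates):
    scored = [("suction", c, suction_quality(depth_image, c)) for c in suction_candidates]
    scored += [("gripper", c, gripper_quality(depth_image, c)) for c in gripper_candidates]
    tool, candidate, score = max(scored, key=lambda t: t[2])
    return tool, candidate, score


tool, grasp, score = choose_grasp(depth_image=None,
                                  suction_candidates=[(120, 85), (200, 40)],
                                  gripper_candidates=[((90, 30), (95, 60))])
print(f"Use the {tool} at {grasp} (score {score:.2f})")
```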

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate that’s what the researchers did here, and not only did they arrive at a faster trot for the bot (above), but taught it an amazing new trick: getting up from a fall. Any fall. Watch this:

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
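The general recipe (perturb a controller, score it in simulation, keep what works) looks roughly like a simple evolution-strategy loop. The sketch below uses a toy fitness function in place of a physics simulator and is not ETH Zurich’s actual method.

```python
# Schematic evolution-strategy loop for the "try thousands of behaviors in
# simulation, keep what works" recipe. The fitness function is a placeholder
# for a physics simulation scoring speed or successful recovery from a fall.
import numpy as np

rng = np.random.default_rng(0)


def simulate_fitness(policy_params):
    # Placeholder: a real setup would roll out the policy in a physics simulator
    # and score e.g. forward speed or whether the robot got back on its feet.
    target = np.linspace(-1, 1, policy_params.size)
    return -float(np.sum((policy_params - target) ** 2))


params = rng.normal(size=32)           # controller parameters
for generation in range(200):
    noise = rng.normal(size=(64, 32))  # 64 candidate perturbations per generation
    scores = np.array([simulate_fitness(params + 0.1 * n) for n in noise])
    ranks = (scores - scores.mean()) / (scores.std() + 1e-8)
    params += 0.02 * (ranks @ noise) / 64   # move toward better-scoring behaviors

print("best simulated fitness:", round(simulate_fitness(params), 3))
```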

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re given this on a sheet of paper:

As a human with a brain, you take this paper for instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you choose to decide the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps towards being able to connect abstract representations like the above with the real world, a task that demands a significant amount of something like machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.
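As a toy illustration of that mapping step, one could bind colored symbols in a schematic to similarly colored objects in a scene and emit the moves the arrows imply. Every data structure below is invented for illustration; Vicarious’ visual cognitive computer is learned and far more general than this hand-coded example.

```python
# Toy illustration of the mapping step described above: bind colored symbols in
# a schematic to similarly colored objects in the scene, then emit the moves the
# arrows imply. All data here is made up.
schematic = [
    {"symbol": "circle", "color": "red", "arrow": "left"},
    {"symbol": "circle", "color": "green", "arrow": "right"},
]

scene_objects = [
    {"id": 1, "color": "red", "position": (0.40, 0.50)},
    {"id": 2, "color": "green", "position": (0.45, 0.50)},
    {"id": 3, "color": "green", "position": (0.50, 0.55)},
]

MOVES = {"left": (-0.3, 0.0), "right": (0.3, 0.0)}

commands = []
for rule in schematic:
    for obj in scene_objects:
        if obj["color"] == rule["color"]:
            dx, dy = MOVES[rule["arrow"]]
            x, y = obj["position"]
            commands.append((obj["id"], (x + dx, y + dy)))

print(commands)  # object 1 moves left; objects 2 and 3 move right
```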

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

Wrest control from a snooping smart speaker with this teachable ‘parasite’

What do you get when you put one internet-connected device on top of another? A little more control than you’d otherwise have, in the case of Alias, the “teachable ‘parasite’” — a smart speaker-topping IoT project made by two designers, Bjørn Karmann and Tore Knudsen.

The Raspberry Pi-powered, fungus-inspired blob’s mission is to whisper sweet nonsense into Amazon Alexa’s (or Google Home’s) always-on ear so it can’t accidentally snoop on your home.

Project Alias from Bjørn Karmann on Vimeo.

Alias will only stop feeding noise into its host’s speakers when it hears its own wake command — which can be whatever you like.

The middleman IoT device has its own local neural network, allowing its owner to christen it with a name (or sound) of their choosing via a training interface in a companion app.

The open-source TensorFlow library was used for building the name training component.

So instead of having to say “Alexa” or “Ok Google” to talk to a commercial smart speaker — and thus being stuck parroting a big tech brand name in your own home, not to mention being saddled with a device that’s always vulnerable to vocal pranks (and worse: accidental wiretapping) — you get to control what the wake word is, thereby taking back a modicum of control over a natively privacy-hostile technology.

This means you could rename Alexa “Bezosallseeingeye,” or refer to your Google Home as “Carelesswhispers.” Whatever floats your boat.

Once Alias hears its custom wake command it will stop feeding noise into the host speaker — enabling the underlying smart assistant to hear and respond to commands as normal.
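The control loop that behavior implies can be sketched as follows; the detector, noise player and trigger objects below are stand-ins rather than the project’s actual API (the real source code is on GitHub).

```python
# Sketch of the control loop Alias appears to implement: mask the host
# speaker's microphones with noise until a locally trained wake-word model
# fires, then pause so the command can get through. The interfaces below
# are stand-ins, not the project's actual API.
import time


def alias_loop(wake_detector, noise_player, assistant_trigger, quiet_seconds=8):
    """wake_detector.heard_wake_word() -> bool, from the local neural network;
    noise_player.start()/.stop() drive the small speakers aimed at the host's mics;
    assistant_trigger.play() optionally speaks the real 'Alexa' / 'OK Google'."""
    noise_player.start()
    while True:
        if wake_detector.heard_wake_word():
            noise_player.stop()
            assistant_trigger.play()        # hand the user's command to the assistant
            time.sleep(quiet_seconds)       # stay quiet while the assistant listens
            noise_player.start()
        time.sleep(0.05)
```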

“We looked at how cordyceps fungus and viruses can appropriate and control insects to fulfill their own agendas and were inspired to create our own parasite for smart home systems,” explain Karmann and Knudsen in a write-up of the project here. “Therefore we started Project Alias to demonstrate how maker-culture can be used to redefine our relationship with smart home technologies, by delegating more power from the designers to the end users of the products.”

Alias offers a glimpse of a richly creative custom future for IoT, as the means of producing custom but still powerful connected technology products becomes more affordable and accessible.

And so also perhaps a partial answer to IoT’s privacy problem, for those who don’t want to abstain entirely. (Albeit, on the security front, more custom and controllable IoT does increase the hackable surface area — so that’s another element to bear in mind; more custom controls for greater privacy does not necessarily mesh with robust device security.)

If you’re hankering after your own Alexa-disrupting blob-topper, the pair have uploaded a build guide to Instructables and put the source code on GitHub. So fill yer boots.

Project Alias is of course not a solution to the underlying tracking problem of smart assistants — which harvest insights gleaned from voice commands to further flesh out interest profiles of users, including for ad targeting purposes.

That would require either proper privacy regulation or, er, a new kind of software virus that infiltrates the host system and prevents it from accessing user data. And — unlike this creative physical IoT add-on — that kind of tech would not be at all legal.

Daily Crunch: The age of quantum computing is here

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here:

1. IBM unveils its first commercial quantum computer

The 20-qubit system combines the quantum and classical computing parts it takes to use a machine like this for research and business applications into a single package. While it’s worth stressing that the 20-qubit machine is nowhere near powerful enough for most commercial applications, IBM sees this as the first step towards tackling problems that are too complex for classical systems.

2. Apple’s trillion-dollar market cap was always a false idol

Nothing grows forever, not even Apple. Back in August we splashed headlines across the globe glorifying Apple’s brief stint as the world’s first $1 trillion company, but in the end it didn’t matter. Fast-forward four months and Apple has lost more than a third of its stock value, and last week the company lost $75 billion in market cap in a single day.

3. GitHub Free users now get unlimited private repositories

Starting today, free GitHub users get unlimited private repositories with up to three collaborators. Previously, free users’ code had to be public unless they paid for the service.

4. Uber’s IPO may not be as eye-popping as we expected

Uber’s public debut later this year is undoubtedly the most anticipated IPO of 2019, but the company’s lofty valuation (valued by some as high as $120 billion) has some investors feeling uneasy.

5. Amazon is getting more serious about Alexa in the car with Telenav deal

Amazon has announced a new partnership with Telenav, a Santa Clara-based provider of connected car services. The collaboration will play a huge role in expanding Amazon’s ability to give drivers relevant information and furthers the company’s mission to bake Alexa into every aspect of your life.

6. I used VR in a car going 90 mph and didn’t get sick

The future of in-vehicle entertainment could be VR. Audi announced at CES that it’s rolling out a new company called Holoride to bring adaptive VR entertainment to cars. The secret sauce here is matching VR content to the slight movements of the vehicle to help those who often get motion sickness.

7. Verizon and T-Mobile call out AT&T over fake 5G labels

Nothing like some CES drama to start your day. AT&T recently rolled out a shady marketing campaign that labels its 4G networks as 5G, and rivals Verizon and T-Mobile are having none of it.
