artificial intelligence

At last, a camera app that automatically removes all people from your photos


As a misanthrope living in a vibrant city, I’m never short of things to complain about, and the problem of people crowding into my photos, whatever I happen to shoot, is a persistent one. That won’t be an issue anymore with Bye Bye Camera, an app that simply removes any humans from the photos you take. Finally!

It’s an art project, though a practical one (art can be practical!), by Do Something Good. The collective, in particular the artist damjanski, has worked on a variety of playful takes on the digital era, such as a CAPTCHA that excludes humans, and setting up a dialogue between two Google conversational agents.

The new app, damjanski told Artnome, is “an app for the post-human era… The app takes out the vanity of any selfie and also the person.” Fortunately, it leaves dogs intact.

Of course it’s all done in a self-conscious, arty way — are humans necessary? What defines one? What will the world be like without us? You can ponder those questions or not; fortunately, the app doesn’t require it of you.

Bye Bye Camera works using some of the AI tools that are already out there for the taking in the world of research. It uses YOLO (You Only Look Once), a very efficient object classifier that can quickly denote the outline of a person, and then a separate tool that performs what Adobe has called “context-aware fill.” Between the two of them a person is reliably — if a bit crudely — deleted from any picture you take and credibly filled in by background.
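To make that pipeline concrete, here is a minimal sketch of the detect-then-inpaint idea. It is not the app’s actual code: it swaps in torchvision’s Mask R-CNN for YOLO (because it returns per-person masks, which are convenient for filling) and uses OpenCV’s inpainting as a crude stand-in for content-aware fill, and the file names are placeholders.

```python
# Minimal sketch of a detect-then-inpaint pipeline (not Bye Bye Camera's code).
# Mask R-CNN stands in for YOLO because it returns per-person masks, and
# cv2.inpaint is a rough stand-in for Adobe-style content-aware fill.
import cv2
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

PERSON_CLASS_ID = 1  # COCO label for "person"

def remove_people(image_bgr: np.ndarray, score_threshold: float = 0.7) -> np.ndarray:
    model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model([to_tensor(rgb)])[0]

    # Union of all confident "person" masks becomes the region to fill in.
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for label, score, m in zip(pred["labels"], pred["scores"], pred["masks"]):
        if label.item() == PERSON_CLASS_ID and score.item() >= score_threshold:
            mask |= (m[0].numpy() > 0.5).astype(np.uint8)

    # Dilate a little so shadows and edges around each person get replaced too.
    mask = cv2.dilate(mask * 255, np.ones((15, 15), np.uint8))
    return cv2.inpaint(image_bgr, mask, inpaintRadius=7, flags=cv2.INPAINT_TELEA)

if __name__ == "__main__":
    result = remove_people(cv2.imread("street_photo.jpg"))  # placeholder file name
    cv2.imwrite("street_photo_no_people.jpg", result)
```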

It’s a fun project (though the results are a mixed bag) and it speaks not only to the issues it supposedly raises about the nature of humanity, but also the accessibility of tools under the broad category of “AI” and what they can and should be used for.

You can download Bye Bye Camera for $3 on the iOS App Store.


Is your product’s AI annoying people?

James Glasnapp
Contributor

James Glasnapp is a senior UX researcher at PARC.

Artificial intelligence is allowing us all to consider surprising new ways to simplify the lives of our customers. As a product developer, your central focus is always on the customer. But new problems can arise when the specific solution under development helps one customer while alienating others.

We tend to think of AI as an incredible dream assistant to our lives and business operations, but that’s not always the case. Designers of new AI services should consider in what ways, and for whom, these services might be annoying, burdensome or problematic, and whether that affects the direct customer or others who are intertwined with the customer. When an AI service that makes a task easier for our customers ends up making things more difficult for others, that outcome can ultimately cause real harm to our brand perception.

Let’s consider a personal example from my own use of Amy and Andrew Ingram, the AI assistants provided by x.ai, which help schedule meetings for up to four people. This service solves the very relatable problem of scheduling meetings over email, at least for the person who is trying to do the scheduling.

After all, who doesn’t want a personal assistant to whom they can simply say, “Amy, please find a time next week to meet with Tom, Mary, Anushya and Shiveesh.” That way, you don’t have to arrange a meeting room, send the email and go back and forth managing everyone’s replies. My own experience showed that while Amy made it easier for me to find a good time to meet with my four colleagues, it soon became a headache for those other four people. They resented me for it after being bombarded by countless emails trying to find a mutually agreeable time and place for everyone involved.

Automotive designers are another group that’s incorporating all kinds of new AI systems to enhance the driving experience. For instance, Tesla recently updated its autopilot software to allow a car to change lanes automatically when it sees fit, presumably when the system interprets that the next lane’s traffic is going faster.

In concept, this idea seems advantageous to the driver who can make a safe entrance into faster traffic, while relieving any cognitive burden of having to change lanes manually. Furthermore, by allowing the Tesla system to change lanes, it takes away the desire to play Speed Racer or edge toward competitiveness that one may feel on the highway.

However, drivers in other lanes who are forced to react to the Tesla autopilot may be annoyed if the Tesla jerks, slows down or behaves outside the normal realm of what people expect on the freeway. They will be even more annoyed if they are driving very fast and the autopilot fails to register their speed when it decides to make the lane change. We can all relate to driving 75 mph in the fast lane, only to have someone suddenly pull in front of us at 70 as if they were clueless that the lane was moving at 75.

On two-lane highways that are not busy, the Tesla software might work reasonably well. However, in my experience of driving around the congested freeways of the Bay Area, the system performed horribly whenever I changed crowded lanes, and I knew that it was angering other drivers most of the time. Even without knowing those irate drivers personally, I care enough about driving etiquette to change lanes politely without getting the finger for doing so.

Post Intelligence robot

Another example from the internet world involves Google Duplex, a clever feature for Android phone users that allows AI to make restaurant reservations. From the consumer point of view, having an automated system to make a dinner reservation on one’s behalf sounds excellent. It is advantageous to the person making the reservation because, theoretically, it will save the burden of calling when the restaurant is open and the hassle of dealing with busy signals and callbacks.

However, this tool is also potentially problematic for the restaurant worker who answers the phone. Even though the system may introduce itself as artificial, the burden shifts to the restaurant employee to adapt and master a new and more limited interaction to achieve the same goal — making a simple reservation.

On the one hand, Duplex is bringing customers to the restaurant, but on the other hand, the system is narrowing the scope of interaction between the restaurant and its customer. The restaurant may have other tables on different days, or it may be able to squeeze you in if you leave early, but the system might not handle exceptions like this. Even the idea of an AI bot bothering the host who answers the phone doesn’t seem quite right.

As you think about making the lives of your customers easier, consider how the assistance you are dreaming about might be more of a nightmare for everyone else associated with your primary customer. If anyone connected to your AI product may be having a negative experience, explore that experience further to determine whether there is a better way to delight your customer without angering their neighbors.

From a user-experience perspective, developing a customer journey map can be a helpful way to explore the actions, thoughts and emotional experiences of your primary customer or “buyer persona.” Identify the touchpoints in which your system interacts with innocent bystanders who are not your direct customers. For those people unaware of your product, explore their interaction with your buyer persona, specifically their emotional experience.

An aspirational goal should be to delight this adjacent group of people enough that they move toward being prospects and, eventually, becoming your customers as well. You can also use participant ethnography, a research method built on observing people as they interact with processes and the product, to analyze the innocent bystander in relation to your product.

A guiding design inspiration for this research could be, “How can our AI system behave in such a way that everyone who might come into contact with our product is enchanted and wants to know more?”

That’s just human intelligence, and it’s not artificial.


Krisp’s smart noise-cancelling gets official release and pricing


Background noise on calls could be a thing of the past if Krisp has anything to do with it. The app, now available on Windows and Macs after a long beta, uses machine learning to silence the bustle of a home, shared office or coffee shop so that your voice, and the voices of others, come through clearly.

I first encountered Krisp in prototype form when we were visiting UC Berkeley’s Skydeck accelerator, which ended up plugging $500,000 into the startup alongside a $1.5 million round from Sierra Ventures and Shanda Group.

Like so many apps and services these days, Krisp uses machine learning. But unlike many of them, it uses the technology in a fairly straightforward, easily understandable way.

The machine learning model the company has created is trained to recognize the voice of a person talking into a microphone. By definition pretty much everything else is just noise — so the model just sort of subtracts it from the waveform, leaving your audio clean even if there’s a middle school soccer team invading the cafe where you’re running the call from.
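The general shape of that kind of mask-based noise removal can be sketched in a few lines. This is an illustration of the technique, not Krisp’s proprietary model: predict_voice_mask below is a placeholder for the trained network that scores how much of each time-frequency cell is voice.

```python
# Sketch of mask-based noise suppression (the general technique, not Krisp's
# model). A learned model scores every STFT time-frequency cell for "voice";
# everything else is attenuated before the signal is resynthesized.
import numpy as np
from scipy.signal import stft, istft

def predict_voice_mask(magnitude: np.ndarray) -> np.ndarray:
    """Placeholder for the learned model: returns values in [0, 1] per cell.
    Here, a crude energy-based stand-in."""
    threshold = np.percentile(magnitude, 75)
    return (magnitude > threshold).astype(float)

def denoise(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    freqs, times, spec = stft(samples, fs=sample_rate, nperseg=512)
    mask = predict_voice_mask(np.abs(spec))
    _, cleaned = istft(spec * mask, fs=sample_rate, nperseg=512)
    return cleaned
```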

It can also mute sound coming the other direction — that is, the noise on your friend’s side. So if they’re in a noisy street and you’re safe at home, you can apply the smart noise reduction to them as well.

Because it changes the audio signal before it gets to any apps or services, it’s compatible with pretty much everything: Skype, Messenger, Slack, whatever. You could even use it to record podcasts when there’s a leaf blower outside. A mobile version is on the way for release later this year.

It works — I’ve tested it, as have thousands of other users during the beta. But now comes the moment of truth: will anyone pay for it?

The new, official release of the app will let you mute the noise you hear on the line — that is, the noise coming from the microphones of people you talk to — for free, forever. But clearing the noise on your own line, like the baby crying next to you, after a two-week trial period, will cost you $20 per month, or $120 per year, or as low as $5 per month for group licenses. You can collect free time by referring people to the app, but eventually you’ll probably have to shell out.

Not that there’s anything wrong with that: A straightforward pay-as-you-go business model is refreshing in an age of intrusive data collection, pushy “freemium” platforms and services that lack any way to make money whatsoever.


This year’s Computex was a wild ride with dueling chip releases, new laptops and 467 startups


After a relatively quiet show last year, Computex picked up the pace this year, with dueling chip launches by rivals AMD and Intel and a slew of laptop releases from Asus, Qualcomm, Nvidia, Lenovo and other companies.

Founded in 1981, the trade show, which took place last week from May 28 to June 1, is one of the ICT industry’s largest gatherings of OEMs and ODMs. In recent years, the show’s purview has widened, thanks to efforts by its organizers, the Taiwan External Trade Development Council and Taipei Computer Association, to attract two groups: high-end computer customers, such as hardcore gamers, and startups looking for investors and business partners. This makes for a larger, more diverse and livelier show. Computex’s organizers said this year’s event attracted 42,000 international visitors, a new record.

Though the worldwide PC market continues to see slow growth, demand for high-performance computers is still being driven by gamers and the popularity of esports and live-streaming sites like Twitch. Computex, with its large, elaborate booths run by brands like Asus’ Republic of Gamers, is a popular destination for many gamers (the show is open to the public, with tickets costing NT$200, or about $6.40), and began hosting esports competitions a few years ago.

People visit the Asus stand during Computex at the Nangang exhibition centre in Taipei on May 28, 2019. (Photo: Chris Stowers/AFP/Getty Images)

The timing of the show, formally known as the Taipei International Information Technology Show, at the end of May or beginning of June each year, also gives companies a chance to debut products they teased at CES or preview releases for other shows later in the year, including E3 and IFA.

One difference between Computex now and ten (or maybe even just five) years ago is that the increasing accessibility of high-end PCs means many customers keep a close eye on major announcements by companies like AMD, Intel and Nvidia, not only to see when more powerful processors will be available but also because of potential pricing wars. For example, many gamers hope competition from new graphics processing units from AMD will force Nvidia to bring down prices on its popular but expensive GPUs.

The Battle of the Chips

The biggest news at this year’s Computex was the intense rivalry between AMD and Intel, whose keynote presentations came after a very different twelve months for the two competitors.


Teams autonomously mapping the depths take home millions in Ocean Discovery Xprize


There’s a whole lot of ocean on this planet, and we don’t have much of an idea what’s at the bottom of most of it. That could change with the craft and techniques created during the Ocean Discovery Xprize, which had teams competing to map the sea floor quickly, precisely and autonomously. The winner just took home $4 million.

A map of the ocean would be valuable in and of itself, of course, but any technology used to do so could be applied in many other ways, and who knows what potential biological or medical discoveries hide in some nook or cranny a few thousand fathoms below the surface?

The prize, sponsored by Shell, started back in 2015. The goal was, ultimately, to create a system that could map hundreds of square kilometers of the sea floor at a five-meter resolution in less than a day — oh, and everything has to fit in a shipping container. For reference, existing methods do nothing like this, and are tremendously costly.

But as is usually the case with this type of competition, the difficulty did not discourage the competitors — it only spurred them on. Since 2015, then, the teams have been working on their systems and traveling all over the world to test them.

Originally the teams were to test in Puerto Rico, but after the devastating hurricane season of 2017, the whole operation was moved to the Greek coast. Ultimately after the finalists were selected, they deployed their craft in the waters off Kalamata and told them to get mapping.

Team GEBCO’s surface vehicle

“It was a very arduous and audacious challenge,” said Jyotika Virmani, who led the program. “The test itself was 24 hours, so they had to stay up, then immediately following that was 48 hours of data processing after which they had to give us the data. It takes more trad companies about 2 weeks or so to process data for a map once they have the raw data — we’re pushing for real time.”

This wasn’t a test in a lab bath or pool. This was the ocean, and the ocean is a dangerous place. But amazingly there were no disasters.

“Nothing was damaged, nothing imploded,” she said. “We ran into weather issues, of course. And we did lose one piece of technology that was subsequently found by a Greek fisherman a few days later… but that’s another story.”

At the start of the competition, Virmani said, there was feedback from the entrants that the autonomous piece of the task was simply not going to be possible. But the last few years have proven otherwise, given that the winning team not only met but exceeded the requirements of the task.

“The winning team mapped more than 250 square kilometers in 24 hours, at the minimum of five meters resolution, but around 140 was more than five meters,” Virmani told me. “It was all unmanned: An unmanned surface vehicle that took the submersible out, then recovered it at sea, unmanned again, and brought it back to port. They had such great control over it — they were able to change its path and its programming throughout that 24 hours as they needed to.” (It should be noted that unmanned does not necessarily mean totally hands-off — the teams were permitted a certain amount of agency in adjusting or fixing the craft’s software or route.)

A five-meter resolution, if you can’t quite picture it, would produce a map of a city that shows buildings and streets clearly but is too coarse to catch, say, cars or street signs. When you’re trying to map two-thirds of the globe, though, this resolution is more than enough — and infinitely better than the nothing we currently have. (Unsurprisingly, it’s also certainly enough for an oil company like Shell to prospect new deep-sea resources.)

The winning team was GEBCO, composed of veteran hydrographers — ocean mapping experts, you know. In addition to the highly successful unmanned craft (Sea-Kit, already cruising the English Channel for other purposes), the team did a lot of work on the data-processing side, creating a cloud-based solution that helped them turn the maps around quickly. (That may also prove to be a marketable service in the future.) They were awarded $4 million, in addition to their cash for being selected as a finalist.

The runner-up was Kuroshio, which had great resolution but was unable to map the full 250 km² due to weather problems. They snagged a million.

A bonus prize for having the submersible track a chemical signal to its source didn’t exactly have a winner, but the teams’ entries were so impressive that the judges decided to split the million between the Tampa Deep Sea Xplorers and Ocean Quest, which amazingly enough is made up mostly of middle-schoolers. The latter gets $800,000, which should help pay for a few new tools in the shop there.

Lastly, a $200,000 innovation prize was given to Team Tao out of the U.K., whose submersible took a very different approach that impressed the judges. While most of the competitors opted for a craft that went “lawnmower-style” above the sea floor at a given depth, Tao’s craft dropped down like a plumb bob, pinging the depths as it went down and back up before moving to a new spot. This provides a lot of other opportunities for important oceanographic testing, Virmani noted.

Having concluded the prize, the organization has just a couple more tricks up its sleeve. GEBCO, which stands for General Bathymetric Chart of the Oceans, is partnering with The Nippon Foundation on Seabed 2030, an effort to map the entire sea floor over the next decade and provide that data to the world for free.

And the program is also — why not? — releasing an anthology of short sci-fi stories inspired by the idea of mapping the ocean. “A lot of our current technology is from the science fiction of the past,” said Virmani. “So we told the authors, imagine we now have a high-resolution map of the sea floor, what are the next steps in ocean tech and where do we go?” The resulting 19 stories, written from all 7 continents (yes, one from Antarctica), will be available June 7.


This robot learns its two-handed moves from human dexterity


If robots are really to help us out around the house or care for our injured and elderly, they’re going to want two hands… at least. But using two hands is harder than we make it look — so this robotic control system learns from humans before attempting to do the same.

The idea behind the research, from the University of Wisconsin-Madison, isn’t to build a two-handed robot from scratch, but simply to create a system that understands and executes the same type of manipulations that we humans do without thinking about them.

For instance, when you need to open a jar, you grip it with one hand and move it into position, then tighten that grip as the other hand takes hold of the lid and twists or pops it off. There’s so much going on in this elementary two-handed action that it would be hopeless to ask a robot to do it autonomously right now. But that robot could still have a general idea of why this type of manipulation is done on this occasion, and do what it can to pursue it.

The researchers first had humans wearing motion capture equipment perform a variety of simulated everyday tasks, like stacking cups, opening containers and pouring out the contents, and picking up items with other things balanced on top. All this data — where the hands go, how they interact and so on — was chewed up and ruminated on by a machine learning system, which found that people tended to do one of four things with their hands:

  • Self-handover: This is where you pick up an object and put it in the other hand so it’s easier to put it where it’s going, or to free up the first hand to do something else.
  • One hand fixed: An object is held steady by one hand providing a strong, rigid grip, while the other performs an operation on it like removing a lid or stirring the contents.
  • Fixed offset: Both hands work together to pick something up and rotate or move it.
  • One hand seeking: Not actually a two-handed action, but the principle of deliberately keeping one hand out of action while the other finds the object required or performs its own task.
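The article doesn’t detail the team’s actual model, so the following is only a hedged sketch of how labeled motion-capture windows might be mapped onto those four categories: invented summary features for each window feed a generic classifier.

```python
# Illustrative sketch only: the features and classifier are stand-ins, not the
# Wisconsin team's pipeline. Each short motion-capture window is summarized
# and classified into one of the four bimanual action categories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ACTIONS = ["self_handover", "one_hand_fixed", "fixed_offset", "one_hand_seeking"]

def window_features(left_wrist_xyz: np.ndarray, right_wrist_xyz: np.ndarray) -> np.ndarray:
    """Summarize one window (T x 3 wrist trajectories) as: mean inter-hand
    distance, its variance, and each hand's average speed."""
    dist = np.linalg.norm(left_wrist_xyz - right_wrist_xyz, axis=1)
    speed_l = np.linalg.norm(np.diff(left_wrist_xyz, axis=0), axis=1).mean()
    speed_r = np.linalg.norm(np.diff(right_wrist_xyz, axis=0), axis=1).mean()
    return np.array([dist.mean(), dist.var(), speed_l, speed_r])

def train(windows, labels):
    """windows: list of (left, right) trajectory pairs; labels: indices into ACTIONS."""
    X = np.stack([window_features(l, r) for l, r in windows])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```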

The robot put this knowledge to work not in doing the actions itself — again, these are extremely complex motions that current AIs are incapable of executing — but in its interpretations of movements made by a human controller.

You would think that when a person is remotely controlling a robot, it would just mirror the person’s movements exactly. In the tests, the robot does exactly that, providing a baseline that shows how, without knowledge of these “bimanual actions,” many of them are simply impossible.

Think of the jar-opening example. We know that when we’re opening the jar, we have to hold one side steady with a stronger grip and may even have to push back with the jar hand against the movement of the opening hand. If you tried to do this remotely with robotic arms, that information is not present any more, and the one hand will likely knock the jar out of the grip of the other, or fail to grip it properly because the other isn’t helping out.

The system created by the researchers recognizes when one of the four actions above is happening, and takes measures to make sure that they’re a success. That means, for instance, being aware of the pressures exerted on each arm by the other when they pick up a bucket together. Or providing extra rigidity to the arm holding an object while the other interacts with the lid. Even when only one hand is being used (“seeking”), the system knows that it can deprioritize the movements of the unused hand and dedicate more resources (be it body movements or computational power) to the working hand.

In videos of demonstrations, it seems clear that this knowledge greatly improves the success rate of the attempts by remote operators to perform a set of tasks meant to simulate preparing a breakfast: cracking (fake) eggs, stirring and shifting things, picking up a tray with glasses on it and keeping it level.

Of course this is all still being done by a human, more or less — but the human’s actions are being augmented and re-interpreted into something more than simple mechanical reproduction.

Doing these tasks autonomously is a long way off, but research like this forms the foundation for that work. Before a robot can attempt to move like a human, it has to understand not just how humans move, but why they do certain things in certain circumstances and, furthermore, what important processes may be hidden from obvious observation — things like planning the hand’s route, choosing a grip location and so on.

The Madison team was led by Daniel Rakita; their paper describing the system is published in the journal Science Robotics.


This is one smart device that every urban home could use


Living in a dense urban environment brings many startup-fuelled conveniences, be it near instant delivery of food — or pretty much whatever else you fancy — to a whole range of wheels that can be hopped on (or into) to whisk you around at the tap of an app.

But the biggest problem afflicting city dwellers is not some minor inconvenience. It’s bad, poor, terrible, horrible, unhealthy air. And there’s no app to fix that.

Nor can hardware solve this problem. But smart hardware can at least help.

For about a month I’ve been road-testing a wi-fi connected air purifier made by the Swedish company Blueair. It uses a HEPA filtration system combined with integrated air quality sensors to provide real-time in-app feedback that can reassure you or alert you to unseen problems.

Flip to the bottom of this article for a speed take, or continue reading for the full review of the Blueair Classic 480i with dual filters to reduce dust, smoke and pollen.

Review

If you’re even vaguely environmentally aware, it’s fascinating and not a little horrifying to see how variable the air quality is inside your home. Everyday stuff like cooking, cleaning and changing the sheets can cause drastic swings in PM 2.5 and tVOC levels. The former covers very small particles such as fine dust, smoke, odours and mite feces; the latter, total volatile organic compounds, refers to hundreds of different gases emitted by certain solids and liquids — including stuff humans breathe out, but also harmful VOCs like formaldehyde.

What you learn from smart hardware can be not just informative but instructive. For instance I’ve switched to a less dusty cat litter after seeing how quickly the machine’s fan stepped up a gear after clearing the litter tray. I also have a new depth of understanding of quite how much pollution finds its way into my apartment when the upstairs neighbour is having a rooftop BBQ. Which makes it doubly offensive I wasn’t invited.

Though, I must admit, I’ve yet to figure out a diplomatic way to convince him to rethink his regular cook-out sessions. Again, some problems can’t be fixed by apps. Meanwhile city life means we’re all, to a greater or lesser degree, adding to the collectively polluted atmosphere. Changing that requires new politics.

You cannot hermetically seal your home against outdoor air pollution. It wouldn’t make for a healthy environment either. Indoor spaces must be properly ventilated. Adequate ventilation is also of course necessary to control moisture levels to prevent other nasty issues like mould. And using this device I’ve watched as opening a window almost instantly reduced tVOC levels.

Pretty much every city resident is affected by air pollution, to some degree. And it’s a heck of a lot harder to switch your home than change your brand of cat litter. But even on that far less fixable front, having an air quality sensor indoors can be really useful — to help you figure out the best (and worst) times to air out the house. I certainly won’t be opening the balcony doors on a busy Saturday afternoon any time soon, for example.

Blueair sells a range of air purifiers. The model I’ve been testing, the Blueair Classic 480i, is large enough to filter a room of up to 40 m². It includes filters capable of catching both particulate matter and traffic fumes (aka its “SmokeStop” filter). The latter was important for me, given I live near a pretty busy road. But the model can be bought with just a particle filter if you prefer. The dual filtration model I’m testing is priced at €725 for EU buyers.

Point number one is that if you’re serious about improving indoor air quality, the size of an air purifier really does matter. You need a device with a fan that’s powerful enough to cycle all the air in the room in a reasonable timeframe. (Blueair promises five air changes per hour for this model, in a correctly sized room.)

So while smaller air-filter devices might look cute, if a desktop unit is all the space you can stretch to, you’d probably be better off getting a few pot plants.

Blueair’s hardware has software in the mix too, of course. The companion Blueair Friend app serves up the real-time feedback on air quality both indoors and out, the latter via a third-party service whose provider can vary depending on your location. Where I live in Europe it’s powered by BreezoMeter.

This is a handy addition for getting the bigger picture. If you find you have stubbornly bad air quality levels indoors and really can’t figure out why, most often a quick tab switch will confirm local pollution levels are indeed awful right now. It’s likely not just you but the whole neighbourhood suffering.

Dirty cities 

From Asia to America the burning of fossil fuels has consequences for air quality and health that are usually especially pronounced in dense urban environments where humans increasingly live. More than half the world’s population now lives in urban areas — with the UN predicting this will grow to around 70% by 2050.

In Europe, this is already true for more than 70% of the population, which makes air pollution a major concern in many of the region’s cities.

Growing awareness of the problem is beginning to lead to policy interventions — such as London’s ultra low emission charging zone and car free Sundays one day a month in Paris’ city center. But EU citizens are still, all too often, stuck sucking in unhealthy air.

London’s toxic air is an invisible killer.

We launched the world’s first Ultra Low Emission Zone to cut air pollution. Since then, there have been on average 9400 fewer polluting vehicles on our streets every day. #LetLondonBreathe #ULEZ pic.twitter.com/0mYcIGi1xP

— Mayor of London (@MayorofLondon) May 23, 2019

 

Last year six EU nations, including the UK, France and Germany, were referred to the highest court in Europe for failing to tackle air pollution — including illegally high levels of nitrogen dioxide produced by diesel-powered vehicles.

Around one in eight EU citizens who live in an urban area is exposed to air pollutant levels that exceed one or more of the region’s air quality standards, according to a briefing note published by the European Environment Agency (EEA) last year.

It also said up to 96% of EU urban citizens are exposed to levels of one or more air pollutants deemed damaging to health when measured against the World Health Organization’s more stringent guidelines.

There are multiple and sometimes interlinked factors impacting air quality in urban environments. Traffic fumes are a very big one, but changes in meteorological conditions due to climate change are also expected to increase concentrations of certain air pollutants. Emissions from wildfires, another problem exacerbated by the drought conditions linked to climate change, can also degrade air quality in nearby cities.

Action to tackle climate change continues to lag far behind what’s needed to put a check on global warming. Even as far too little is still being done in most urban regions to reduce vehicular emissions at a local level.

In short, this problem isn’t going away anytime soon — and all too often air quality is still getting worse.

At the same time health risks from air pollution are omnipresent and can be especially dangerous for children. A landmark global study of the impact of traffic fumes on childhood asthma, published recently in the Lancet, estimates that four million children develop the condition every year primarily as a result of nitrogen dioxide air pollution emitted by vehicles.

The majority (64%) of these new cases were found to occur in urban centres — increasing to 90% when factoring in surrounding suburban areas.

The study also found that damage caused by air pollution is not limited to the most highly polluted cities in China and India. “Many high-income countries have high NO2 exposures, especially those in North America, western Europe, and Asia Pacific,” it notes.

The long and short of all this is that cities the world over are going to need to get radically great at managing air quality — especially traffic emissions — and fast. But, in the meanwhile, city dwellers who can’t or don’t want to quit the bright lights are stuck breathing dirty air. So it’s easy to imagine consumer demand growing for in-home devices that can sense and filter pollutants as urbanites try to find ways to balance living in a city with reducing their exposure to the bad stuff.

Cleaner air

That’s not to say that any commercial air purifier will be able to provide a complete fix. The overarching problem of air pollution is far too big and bad for that. A true fix would demand radical policy interventions, such as removing all polluting vehicles from urban living spaces. (And there’s precious little sign of anything so radical on the horizon.)

But at least at an individual home level, a large air purifier with decent filtration technology should reduce your exposure to pollution in the place you likely spend the most time.

If, as the Blueair Classic 480i model does, the filtration device also includes embedded sensors to give real-time feedback on air quality it can further help you manage pollution risk — by providing data so you can better understand the risks in and around your home and make better decisions about, for instance, when to open a window.

“Air quality does always change,” admits Blueair’s chief product officer, Jonas Holst, when we chat. “We cannot promise to our consumers that you will always have super, super, clean air. But we can promise to consumers that you will always have a lot cleaner air by having our product — because it depends on what happens around you. In the outdoor, by your neighbours, if you’re cooking, what your cat does or something. All of those things impact air quality.

“But by having high speeds, thanks to the HepaSilent technology that we use, we can make sure that we always constantly fight that bombardment of pollutants.”

On the technology front, Blueair is using established filtration technology — HEPA and activated carbon filters to remove particulate matter and gaseous pollutants — but with an ionizing twist (which it brands ‘HepaSilent’).

This involves applying mechanical and electrostatic filtration in combination to enhance performance of the air purifier without boosting noise levels or requiring large amounts of energy to run. Holst dubs it one of the “core strengths” of the Blueair product line.

“Mechanical filtration just means a filter [plus a fan to draw the air through it]. We have a filter but by using the ionization chamber we have inside the product we can boost the performance of the filter without making it very, very dense. And by doing that we can let more air through the product and simply then clean more air faster,” he explains.

“It’s also something that is constantly being developed,” he adds of the firm’s Hepa + ionizing technology, which it’s been developing in its products for some 20 years. “We have had many developments of this technology since but the base technical structure is there in the combination between a mechanical and electrostatical filtration. That is what allows us to have less noise and less energy because the fan doesn’t work as hard.”

On top of that, in the model I’m testing, Blueair has embedded air quality sensors — which connect via wi-fi to the companion app where the curious user can see real-time plots of things like PM 2.5 and tVOC levels, and start to join the dots between what’s going on in their home and what the machine is sniffing out.

The sensors mean the unit can step the fan speed and filtration level up and down automatically in response to pollution spikes (you can set it to trigger on particulate matter only, on both PM 2.5 and tVOC gaseous compounds, or turn automation off altogether). So if you’re really not at all curious that’s okay too. You can just plug it in, hook it up to the wi-fi and let it work.
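Blueair doesn’t publish its control logic, but as an illustration of what that automation amounts to, the stepping behavior is roughly a mapping from sensor readings to a fan speed. The thresholds below are invented for the sketch.

```python
# Illustration only: invented thresholds, not Blueair's firmware logic.
def choose_fan_speed(pm25: float, tvoc: float, automation: str = "pm_and_tvoc") -> int:
    """Map sensor readings to a fan speed of 1-3 (readings in ug/m3 and ppb)."""
    if automation == "off":
        return 1
    pm_level = 3 if pm25 > 55 else 2 if pm25 > 25 else 1
    tvoc_level = 3 if tvoc > 2200 else 2 if tvoc > 660 else 1
    if automation == "pm_only":
        return pm_level
    return max(pm_level, tvoc_level)  # the worst reading wins
```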

Sound, energy and sensing smarts in a big package

To give a ballpark of energy consumption for this model, Holst says the Blueair Classic 480i consumes “approximately” the same amount of energy as running a lightbulb — assuming it’s running mostly on lower fan speeds.

As and when the fan steps up in response to a spike in levels of potential pollutants he admits it will consume “a little bit more” energy.

The official specs list the model’s energy consumption at between 15 and 90 watts.

On the noise front it’s extremely quiet when on the lowest fan setting. To the point of being barely noticeable. You can sleep in the same room and certainly won’t be kept awake.

You will notice when the fan switches up to the second or, especially, the third (max) speed — where it can hit 52 dB(A). The latter’s rushing-air sound is discernible from a distance, even in another room. But you hopefully won’t be stuck listening to level 3 fan noise for too long, unless you live in a really polluted place. Or, well, unless you run into an algorithmic malfunction (more on that below).

As noted earlier, the unit’s smart sensing capabilities mean fan speed can be set to automatically adjust in response to changing pollution levels — which is obviously the most useful mode to use since you won’t need to keep checking in to see whether or not the air is clean.

You can manually override the automation and fix/switch the fan at a speed of your choice via the app. And as I found there are scenarios where an override is essential. Which we’ll get to shortly.

The unit I was testing, a model that’s around two years old, arrived with instructions to let it run for a week without unplugging so that the machine learning algorithms could configure to local conditions and offer a more accurate read on gases and particles. Holst told us that the U.S. version of the 480i is  “slightly updated” — and, as such, this learning process has been eliminated. So you should be able to just plug it in and get the most accurate reads right away. 

The company recommends changing the filters every six months to “ensure performance”, or more often if you live in a very polluted area. The companion app tracks the filters’ estimated remaining running time in the form of a days-left countdown.

Looks-wise, there’s no getting around it: the Blueair Classic 480i is a big device. Think ‘bedside table’ big.

You’re not going to miss it in your room, and it does need a bigger footprint of free space around it so as not to block the air intake and outlet: something in the region of ~80x60cm. Its lozenge shape helps, ensuring no awkward corners and making it easier to find somewhere it can be parked parallel to, but not too close to, a wall.

There’s not much more to say about the design of this particular model except that it’s thoughtful. The unit has a minimalist look which avoids coming across too much like a piece of ugly office furniture. While its white and gun metal grey hues plus curved flanks help it blend into the background. I haven’t found it to be an eyesore.

A neat flip up lid hides a set of basic physical controls. But once you’ve done the wi-fi set-up and linked it to the companion app you may never need to use these buttons as everything can be controlled in the app.

Real-time pollution levels at your fingertips

Warning: This app can be addictive! For weeks after installing the unit it was almost impossible to resist constantly checking the pollution levels. Mostly because it was fascinating to watch how domestic activity could send one or other level spiking or falling.

As well as PM 2.5 and tVOC pollutants this model tracks temperature and humidity levels. It offers day, week and monthly plots for everything it tracks.

The day view is definitely the most addictive — as it’s where you see instant changes and can try to understand what’s triggering what. So you can literally join the dots between, for example, hearing a street sweeper below your window and watching a rise in PM 2.5 levels in the app right after. Erk!

Though don’t expect a more detailed breakdown of the two pollutant categories; it’s an aggregated mix in both cases. (And some of the gases that make up the tVOC mix aren’t harmful.)

The month tab gives a longer overview which can be handy to spot regular pollution patterns (though the view is a little cramped on less phablet-y smartphone screens).

While week view offers a more recent snapshot if you’re trying to get a sense of your average pollution exposure over a shorter time frame.

That was one feature I thought the app could have calculated for you. But, equally, more granular quantification might risk over-egging the pudding, and it could also be misleading if the sensor accuracy fails on you. The overarching problem with pollution exposure is that, sadly, there’s only so much an individual can do to reduce it. So it probably makes sense not to calculate a personal pollution exposure score.

The app could certainly provide more detail than it does but Holst told us the aim is to offer enough info to people who are interested without it being overwhelming. He also said many customers just want to plug it in and let it work, not be checking out daily charts. (Though if you’re geeky you will of course want the data.)

It’s clear there is a lot of simplification going on, as you’d expect with this being a consumer device, not a scientific instrument. I found the Blueair app satisfied my surface curiosity, while I could see ways its utility might be extended with more features. But in the end I get that it’s designed to be an air-suck, not a time-suck, so I think they’ve got the balance pretty much right.

There are enough real-time signals to be able to link specific activities/events with changes in air quality. So you can literally watch as the tVOC level drops when you open a window. (Or rises if your neighbor is BBQing… ). And I very quickly learnt that opening a window will (usually) lower tVOC but send PM 2.5 rising — at least where I live in a dusty, polluted city. So, again, cleaner air is all you should expect.

Using the app you can try and figure out, for instance, optimal ventilation timings. I also found having the real-time info gave me a new appreciation for heavy rain — which seemed to be really great for clearing dust out of the air, frequently translating into “excellent” levels of PM 2.5 in the app for a while after.

Here are a few examples of how the sensors reacted to different events — and what the reaction suggests…

Cleaning products can temporarily spike tVOC levels:

 

Changing bed sheets can also look pretty disturbing…   

 

An evening BBQ on a nearby roof terrace appears much, much worse though:

 

And opening the balcony door to the street on a busy Saturday afternoon is just… insane… 

 

Uh-oh, algorithm malfunction…

After a few minutes of leaving the balcony door open one fateful Saturday afternoon, which almost instantly sent the unit into max fan speed overdrive, I was surprised to find the fan still blasting away an hour later, and then three hours later, and at bedtime, and in the morning. By which point I thought something really didn’t seem right.

The read from the app showed the pollution level had dropped down from the very high spike but it was still being rated as ‘polluted’ — a level which keeps the fan at the top speed. So I started to suspect something had misfired.

This is where being able to switch to manual is essential — meaning I could override the algorithm’s conviction that the air was really bad and dial the fan down to a lower setting.

That override provided a temporary ‘fix’ but the unnaturally elevated ‘pollution’ read continued for the best part of a week. This made it look like the whole sensing capacity had broken. And without the ability to automatically adapt to changing pollution levels the smart air purifier was now suddenly dumb…

 

It turned out Blueair has a fix for this sort of algorithmic malfunction. Though it’s not quick.

After I explained the issue to the company, laying out my suspicion that the sensors weren’t reading correctly, it told me the algorithms are programmed to respond to this type of situation by resetting around seven days after the event, assuming the read accuracy hasn’t already corrected itself by then.

Sure enough, almost a week later that’s exactly what happened. I couldn’t find anything in the user manual to explain that this might happen, though, so it would be helpful if the company included it in a troubleshooting section.

Here’s the month view showing the crazy PM 2.5 spike; the elevated extended (false) reading; then the correction; followed finally by (relatively) normal service…

 

For a while after this incident the algorithms also seemed overly sensitive — and I had to step in again several times to override the top gear setting as its read on pollution levels was back into the yellow without an obvious reason why.

When the level reads ‘polluted’ it automatically triggers the highest fan speed. Paradoxically, this sometimes seems to have the self-defeating effect of appearing to draw dust up into the air — thereby keeping the PM 2.5 level elevated. So at times manually lowering the fan when it’s only slightly polluted can reduce pollution levels quicker than just letting it blast away. Which is one product niggle.

When viewed in the app the sustained elevated pollution level did look pretty obviously wrong — to the human brain at least. So, like every ‘smart’ device, this one also benefits from having human logic involved to complete the loop.

Concluding thoughts after a month’s use

A few weeks on from the first algorithm malfunction the unit’s sensing capacity at first appeared to have stabilized — in that it was back to the not-so-hair-trigger-sensitivity that had been the case prior to balcony-door-gate.

For a while it seemed less prone to a sustained freak-out over relatively minor domestic activities like lifting clean sheets out of the cupboard, as if it had clicked into a smoother operating groove. Though I remained wary of trying the full-bore Saturday balcony-door test.

I thought this period of relative tranquility might signal improved measurement accuracy, the learning algos having been through not just an initial training cycle but a major malfunction plus correction. Though of course there was no way to be sure.

It’s possible there had also been a genuine improvement in indoor air quality — i.e. as a consequence of, for example, better ventilation habits and avoiding key pollution triggers because I now have real-time air quality feedback to act on so can be smarter about when to open windows, where to shake sheets, which type of cat litter to buy and so on.

It’s a reassuring idea. Though one that requires putting your faith in algorithms that are demonstrably far from perfect. Even when they’re functioning they’re a simplification and approximation of what’s really going on. And when they fail, well, they are clearly getting it totally wrong.

Almost bang on the month mark of testing there was suddenly another crazy high PM 2.5 spike.

One rainy afternoon the read surged from ‘good’ to ‘highly polluted’ without any real explanation. I had opened a patio door on the other side of the apartment, but it does not open onto a street. This time the reading stuck at 400 even with the fan going full blast. So it looked like an even more major algorithm crash…

Really clean air is impossible to mistake. Take a walk in the mountains far from civilization and your lungs will thank you. But cleaner air is harder for humans to quantify. Yet, increasingly, we do need to know how clean or otherwise the stuff we’re breathing is, as more of us are packed into cities exposed to each others’ fumes — and because the harmful health impacts of pollution are increasingly clear.

Without radical policy interventions we’re fast accelerating towards a place where we could be forced to trust sensing algorithms to tell us whether what we’re breathing is harmful or not.

Machines whose algorithms are fallible and might be making rough guestimates, and/or prone to sensing malfunctions. And machines that also won’t be able to promise to make the air entirely safe to breathe. Frankly it’s pretty scary to contemplate.

So while I can’t now imagine doing without some form of in-home air purifier to help manage my urban pollution risk — I’d definitely prefer that this kind of smart hardware wasn’t necessary at all.

In Blueair’s case, the company clearly still has work to do to improve the robustness of its sensing algorithms. Operating conditions for this sort of product will obviously vary widely, so there’s loads of parameters for its algorithms to balance.

With all that stuff to juggle it just seems a bit too easy for the sensing function to spin out of control.

10-second take

The good

Easy to set up, with thoughtful product design, including relatively clear in-app controls and content that lets you understand pollution triggers and manage risk. Embedded air quality sensors greatly extend the product’s utility by enabling an autonomous response to changes in pollution levels. Quiet operation during regular conditions. Choice of automated or manual fan speed settings. Filtration is powerful, and since using the device indoor air quality does seem cleaner.

The bad

Sensing accuracy is not always reliable. The algorithms appear prone to being confused by indoor air pressure changes, such as a large window being opened, which can trigger unbelievably high pollution readings and lead to an extended period of inaccurate readings during which you can’t rely on the automation at all. I also found the feedback in the app can sometimes lag. App content/features are on the minimalist side, so you may want more detail. When the pollution level is marginal, an elevated fan speed can sometimes appear to challenge the efficacy of the filtration, as if it’s holding pollution levels in place rather than reducing them.

Bottom line

If you’re looking for a smart air purifier the Blueair Classic 480i does have a lot to recommend it. Quiet operation, ease of use and a tangible improvement in air quality, thanks to powerful filtration. However the accuracy of the sensing algorithms does pose a dilemma. For me this problem has recurred twice in a month. That’s clearly not ideal when it takes a full week to reset. If it were not for this reliability issue I would not hesitate to recommend the product, as — when not going crazy — the real-time feedback it provides really helps you manage a variety of pollution risks in and around your home. Hopefully the company will work on improving the stability of the algorithms. Or at least offer an option in the app so you can manually reset it if/when it does go wrong.


Why is Facebook doing robotics research?


It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often do the same, or open new areas of inquiry, in the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy,” the hexapod robot

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen plenty of interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
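As a toy illustration of that reward-driven trial and error (not Facebook’s method, and nothing like a real hexapod simulator), here is a hill-climbing loop that keeps a random tweak to a policy only when it increases a made-up “distance walked” reward.

```python
# Toy sketch: random-search policy improvement against an invented reward.
import numpy as np

rng = np.random.default_rng(0)

def rollout(policy: np.ndarray, steps: int = 200) -> float:
    """Hypothetical environment: reward is the forward progress produced by
    the leg commands the policy emits from a simple sensed state."""
    state, distance = np.zeros(6), 0.0
    for _ in range(steps):
        leg_commands = np.tanh(policy @ state)
        state = 0.9 * state + 0.1 * leg_commands + rng.normal(0, 0.01, 6)
        distance += leg_commands.mean()  # crude stand-in for forward motion
    return distance

policy = np.zeros((6, 6))
best = rollout(policy)
for _ in range(500):  # keep a random tweak only if it "walks" farther
    candidate = policy + rng.normal(0, 0.1, policy.shape)
    score = rollout(candidate)
    if score > best:
        policy, best = candidate, score
```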

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the auto-didactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
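One common way to formalize that kind of curiosity is to add an uncertainty bonus to whatever the agent is already optimizing, so actions with uncertain outcomes get tried sooner. A small sketch of the idea, not Facebook’s implementation:

```python
# Curiosity as an uncertainty bonus: prefer actions that are good for the task
# AND whose outcome the agent is least sure about (illustrative only).
import numpy as np

def pick_action(task_value: np.ndarray, outcome_variance: np.ndarray,
                curiosity_weight: float = 0.5) -> int:
    """task_value[i]: estimated progress toward the goal for action i.
    outcome_variance[i]: the model's uncertainty about what action i will do."""
    score = task_value + curiosity_weight * np.sqrt(outcome_variance)
    return int(np.argmax(score))

# Example: action 2 is slightly worse for the task but far less certain,
# so a curious agent tries it first and learns more from the attempt.
print(pick_action(np.array([1.0, 0.9, 0.8]), np.array([0.01, 0.02, 0.5])))
```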

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, gadget or robot left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image and all kinds of other problems the user or system engineer doesn’t want. But if it’s doing all of them all the time, that’s just as bad. If instead the AI agent exerts curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.
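In code, that “happy medium” might look like gating the expensive passes behind an uncertainty check. This is a hypothetical sketch — the threshold, the stand-in functions and the idea of a single uncertainty score are all simplifications for illustration.

```python
# Uncertainty-gated processing: run the heavy analysis only when unsure.
import random

random.seed(1)
UNCERTAINTY_THRESHOLD = 0.3

def cheap_update(frame):
    """Stand-in for lightweight per-frame tracking; returns an uncertainty score."""
    return random.random()

def full_analysis(frame):
    """Stand-in for the expensive passes: faces, text, depth, QR codes and so on."""
    return {"frame": frame, "analyzed": True}

for frame in range(5):
    uncertainty = cheap_update(frame)
    if uncertainty > UNCERTAINTY_THRESHOLD:   # only "get curious" when the model is unsure
        full_analysis(frame)
```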

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
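As a rough illustration of what “presented visually” means here, tactile readings can simply be laid out on a grid and handed to the same kind of model that consumes camera frames. The sensor size and the trivial “model” below are placeholders, not the researchers’ actual setup.

```python
# Treating a grid of pressure readings as a one-channel image (illustrative only).
import numpy as np

pressure_readings = np.random.rand(16 * 16)           # fake 256-taxel sensor dump
tactile_image = pressure_readings.reshape(16, 16, 1)  # lay it out like a 1-channel image

def image_model(img):
    """Stand-in for a vision model; anything that takes HxWxC arrays would do."""
    return img.mean(axis=(0, 1))                       # trivially "analyze" the image

features = image_model(tactile_image)                  # same call works for photos or touch
```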

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So while this research is interesting in its own right, and can certainly be justified on that basis alone, it’s also important to recognize the context in which it’s being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.


Reality Check: The marvel of computer vision technology in today’s camera-based AR systems

Posted by | animation, AR, ar/vr, artificial intelligence, augmented reality, Column, Computer Vision, computing, Developer, digital media, Gaming, gif, Global Positioning System, gps, mobile phones, neural network, starbucks, TC, Virtual reality, VR | No Comments
Alex Chuang
Contributor

Alex Chuang is the Managing Partner of Shape Immersive, a boutique studio that helps enterprises and brands transform their businesses by incorporating VR/AR solutions into their strategies.

British science fiction writer Sir Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.

AR experiences can seem magical, but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of a camera-based AR system like the one in our smartphones.

  1. How do computers know where they are in the world? (Localization + Mapping)
  2. How do computers understand what the world looks like? (Geometry)
  3. How do computers understand the world as we do? (Semantics)

Part 1: How do computers know where they are in the world? (Localization)

Mars Rover Curiosity taking a selfie on Mars. Source: https://www.nasa.gov/jpl/msl/pia19808/looking-up-at-mars-rover-curiosity-in-buckskin-selfie/

When NASA scientists landed the rover on Mars, they needed a way for the robot to navigate itself on another planet without a global positioning system (GPS). They came up with a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time. This is the same technique our smartphones use to track their spatial position and orientation.

A VIO system is made up of two parts: a visual component that tracks movement by following features across camera frames, and an inertial component that measures acceleration and rotation with the device’s motion sensors.
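As a very rough sketch of the idea — not NASA’s or any phone maker’s actual implementation — the inertial side dead-reckons position at a high rate by integrating motion-sensor data, while camera-based tracking periodically corrects the drift. All the numbers below are invented for illustration.

```python
# Toy fusion of inertial dead reckoning with periodic visual corrections.
def integrate_imu(position, velocity, accel, dt):
    """Dead-reckon the next position from accelerometer data (drifts over time)."""
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

def fuse(predicted, visual, blend=0.1):
    """Nudge the inertial prediction toward the camera-derived estimate."""
    return predicted + blend * (visual - predicted)

position, velocity = 0.0, 0.0
for step in range(100):
    position, velocity = integrate_imu(position, velocity, accel=0.02, dt=0.01)
    if step % 10 == 0:                                   # visual tracking runs at a lower rate
        position = fuse(position, visual=position * 0.98)  # fake camera-based estimate
```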


Alexa, does the Echo Dot Kids protect children’s privacy?

Posted by | Advertising Tech, Amazon, Amazon Echo, Amazon.com, artificial intelligence, center for digital democracy, coppa, Disney, echo, echo dot kids, eCommerce, Federal Trade Commission, Gadgets, nickelodeon, privacy, privacy policy, smart assistant, smart speaker, Speech Recognition, terms of service, United States, voice assistant | No Comments

A coalition of child protection and privacy groups has filed a complaint with the Federal Trade Commission (FTC) urging it to investigate a kid-focused edition of Amazon’s Echo smart speaker.

The complaint against Amazon Echo Dot Kids, which has been lodged with the FTC by groups including the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy and the Consumer Federation of America, argues that the e-commerce giant is violating the Children’s Online Privacy Protection Act (COPPA) — including by failing to obtain proper consents for the use of kids’ data.

As with its other smart speaker Echo devices, the Echo Dot Kids continually listens for a wake word and then responds to voice commands by recording and processing users’ speech. The difference with this Echo is it’s intended for children to use — which makes it subject to U.S. privacy regulation intended to protect kids from commercial exploitation online.

The complaint, which can be read in full via the group’s complaint website, argues that Amazon fails to provide adequate information to parents about what personal data will be collected from their children when they use the Echo Dot Kids; how their information will be used; and which third parties it will be shared with — meaning parents do not have enough information to make an informed decision about whether to give consent for their child’s data to be processed.

They also accuse Amazon of providing at best “unclear and confusing” information per its obligation under COPPA to also provide notice to parents to obtain consent for children’s information to be collected by third parties via the online service — such as those providing Alexa “skills” (aka apps the AI can interact with to expand its utility).

A number of other concerns about Amazon’s device are also being raised with the FTC.

Amazon released the Echo Dot Kids a year ago — and, as we noted at the time, it’s essentially a brightly bumpered iteration of the company’s standard Echo Dot hardware.

There are differences in the software, though. In parallel, Amazon updated its Alexa smart assistant — adding parental controls, aka its FreeTime software, to the child-focused smart speaker.

Amazon said the free version of FreeTime that comes bundled with the Echo Dot Kids provides parents with controls to manage their kids’ use of the product, including device time limits; parental controls over skills and services; and the ability to view kids’ activity via a parental dashboard in the app. The software also removes the ability for Alexa to be used to make phone calls outside the home (while keeping an intercom functionality).

A paid premium tier of FreeTime (called FreeTime Unlimited) also bundles additional kid-friendly content, including Audible books, ad-free radio stations from iHeartRadio Family and premium skills and stories from the likes of Disney, National Geographic and Nickelodeon.

At the time it announced the Echo Dot Kids, Amazon said it had tweaked its voice assistant to support kid-focused interactions — saying it had trained the AI to understand children’s questions and speech patterns, and incorporated new answers targeted specifically at kids (such as jokes).

But while the company was ploughing resource into adding a parental control layer to Echo and making Alexa’s speech recognition kid-friendly, the COPPA complaint argues it failed to pay enough attention to the data protection and privacy obligations that apply to products targeted at children — as the Echo Dot Kids clearly is.

Or, to put it another way, Amazon offers parents some controls over how their children can interact with the product — but not enough controls over how Amazon (and others) can interact with their children’s data via the same always-on microphone.

More specifically, the group argues that Amazon is failing to meet its obligation as the operator of a child-directed service to provide notice and obtain consent for third parties operating on the Alexa platform to use children’s data — noting that its Children’s Privacy Disclosure policy states it does not apply to third-party services and skills.

Instead, the complaint says Amazon tells parents they should review the skill’s policies concerning data collection and use. “Our investigation found that only about 15% of kid skills provide a link to a privacy policy. Thus, Amazon’s notice to parents regarding data collection by third parties appears designed to discourage parental engagement and avoid Amazon’s responsibilities under Coppa,” the group writes in a summary of their complaint.

They also object to how Amazon obtains parental consent — arguing its system for doing so is inadequate because it merely asks that a credit card, debit card or debit gift card number be entered.

“It does not verify that the person ‘consenting’ is the child’s parent as required by Coppa,” they argue. “Nor does Amazon verify that the person consenting is even an adult because it allows the use of debit gift cards and does not require a financial transaction for verification.”

Another objection is that Amazon is retaining audio recordings of children’s voices far longer than necessary — keeping them indefinitely unless a parent actively goes in and deletes the recordings, despite COPPA requiring that children’s data be held for no longer than is reasonably necessary.

They found that additional data (such as transcripts of audio recordings) was still retained even after the audio recordings themselves had been deleted. To remove that residue, a parent must contact Amazon customer service and explicitly request deletion of their child’s entire profile — which also removes the parent’s access to parental controls and the child’s access to content provided via FreeTime. The complaint therefore argues that Amazon’s process for parents to delete children’s information is “unduly burdensome” too.

Their investigation also found the company’s process for letting parents review children’s information to be similarly arduous, with no ability for parents to search the collected data — meaning they have to listen to or read every recording of their child to understand what has been stored.

They further highlight that audio recordings captured by the Echo Dot Kids can of course include sensitive personal details — for instance, if a child uses Alexa’s “remember” feature to ask the AI to store personal data such as their address and contact details, or personal health information like a food allergy.

The group’s complaint also flags the risk of other children having their data collected and processed by Amazon without their parents’ consent — such as when a child has a friend or family member visiting on a play date and they end up playing with the Echo together.

Responding to the complaint, Amazon has denied it is in breach of COPPA. In a statement, a company spokesperson said: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA). Customers can find more information on Alexa and overall privacy practices here: https://www.amazon.com/alexa/voice.”

An Amazon spokesperson also told us it only allows kid skills to collect personal information from children outside of FreeTime Unlimited (i.e. the paid tier) — and then only if the skill has a privacy policy and the developer separately obtains verified consent from the parent, adding that most kid skills do not have a privacy policy because they do not collect any personal information.

At the time of writing, the FTC had not responded to a request for comment on the complaint.

In Europe, there has been growing concern over the use of children’s data by online services. A report by England’s children’s commissioner late last year warned kids are being “datafied,” and suggested profiling at such an early age could lead to a data-disadvantaged generation.

Responding to rising concerns, the U.K. privacy regulator launched a consultation on a draft Code of Practice for age-appropriate design last month, asking for feedback on 16 proposed standards online services must meet to protect children’s privacy — including requiring that product makers put the best interests of the child first, deliver transparent T&Cs, minimize data use and set high privacy defaults.

The U.K. government has also recently published a whitepaper setting out a policy plan to regulate internet content that has a heavy focus on child safety.
