Week-in-Review: Alexa’s indefinite memory and NASA’s otherworldly plans for GPS


Hello, weekenders. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.

Last week, I talked about the cult of Ive and the degradation of Apple design. On Sunday night, The Wall Street Journal published a report on how Ive had been moving away from the company, to the dismay of many on the design team. Tim Cook didn’t like the report very much. Our EIC gave a little breakdown on the whole saga in a nice piece.

Apple sans Ive



The big story

This week was a tad restrained in its eventfulness; seems like the newsmakers went on 4th of July vacations a little early. Amazon made a bit of news this week when the company confirmed that Alexa request logs are kept indefinitely.

Last week, an Amazon public policy exec answered some questions about Alexa in a letter sent to U.S. Senator Coons. His office published the letter on its site a few days ago and most of the details aren’t all that surprising, but the first answer really sets the tone for how Amazon sees Alexa activity:

Q: How long does Amazon store the transcripts of user voice recordings?

A: We retain customers’ voice recordings and transcripts until the customer chooses to delete them.

What’s interesting here isn’t just that we’re only now getting this level of straightforward dialogue from Amazon on how long data is kept if it isn’t specifically deleted; it’s that the answer makes you wonder why it’s useful or feasible for the company to keep it indefinitely. (This assumes Amazon actually is keeping it indefinitely; it seems likely that most of it isn’t, and that by saying this the company is protecting itself legally, but I’m just going off the letter.)

After several years of “Hey Alexa,” the company doesn’t seem all that close to figuring out what it is.

Alexa seems to be a shit solution for commerce, so why does Amazon have 10,000 people working on it, according to a report this week in The Information? All signs point to the voice assistant experiment being a failure in terms of its short-term commerce ambitions, though advances in AI will keep pushing its utility forward.

Training data is a big deal for any AI team looking to train models on relevant, real-world information, and Amazon seems to say as much: “Our speech recognition and natural language understanding systems use machine learning to adapt to customers’ speech patterns and vocabulary, informed by the way customers use Alexa in the real world. To work well, machine learning systems need to be trained using real world data.”

The company says it doesn’t anonymize any of this data because it has to stay associated with a user’s account in order for them to delete it. I’d feel a lot better if Amazon effectively anonymized the data in the first place and used on-device processing to build a profile of my voice. What I’m more afraid of is Amazon holding such a detailed voiceprint of everyone who has ever used an Alexa device.

If effortless voice-based e-commerce isn’t really the product anymore, what is? The answer is always us, but I don’t like the idea of indefinitely leaving Amazon with my data until they figure out the answer.

Send me feedback on Twitter @lucasmtny or email lucas@techcrunch.com

On to the rest of the week’s news.

Trends of the week

Here are a few big news items from big companies, with green links to all the sweet, sweet added context:

  • NASA’s GPS moonshot
    The U.S. government really did us a solid inventing GPS, but NASA has some bigger ideas on the table for the positioning platform, namely, taking it to the Moon. It might be a little complicated, but, unsurprisingly, scientists have some ideas here. Read more.
  • Apple has your eyes
    Most of the iOS beta updates are bug fixes, but the latest change to iOS 13 brought a very strange surprise: changing the way the eyes of users on the iPhone XS or XS Max look to people on the other end of a video call. Instead of appearing to look below the camera, some software wizardry will now make it look like you’re staring directly at it. Apple hasn’t detailed how this works, but here’s what we do know.
  • Trump is having a Twitter party
    Donald Trump’s administration declared a couple of months ago that it was launching an exploratory survey to try to gain a sense of conservative voices that had been silenced on social media. Now @realdonaldtrump is having a get-together and inviting his friends to chat about the issue. It’s a real who’s who; check out some of the people attending here.
(Photo: Amazon CEO and Blue Origin founder Jeff Bezos speaks at the Air Force Association Air, Space and Cyber Conference. Alex Wong/Getty Images)

GAFA Gaffes

How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:

  1. Amazon is responsible for what it sells:
    [Appeals court rules Amazon can be held liable for third-party products]
  2. Android co-creator gets additional allegations filed:
    [Newly unsealed court documents reveal additional allegations against Andy Rubin]

Extra Crunch

Our premium subscription service had another week of interesting deep dives. TechCrunch reporter Kate Clark did a great interview with the ex-Facebook, ex-Venmo founding team behind Fin and how they’re thinking about the consumerization of the enterprise.

Sam Lessin and Andrew Kortina on their voice assistant’s workplace pivot

“…The thing is, developing an AI assistant capable of booking flights, arranging trips, teaching users how to play poker, identifying places to purchase specific items for a birthday party and answering wide-ranging zany questions like “can you look up a place where I can milk a goat?” requires a whole lot more human power than one might think. Capital-intensive and hard-to-scale, an app for “instantly offloading” chores wasn’t the best business. Neither Lessin nor Kortina will admit to failure, but Fin’s excursion into B2B enterprise software eight months ago suggests the assistant technology wasn’t a billion-dollar idea.…”

Here are some of our other top reads this week for premium subscribers. This week, we talked a bit about asking for money and the future of China’s favorite tech platform.

Want more TechCrunch newsletters? Sign up here.

Powered by WPeMatico

At last, a camera app that automatically removes all people from your photos


As a misanthrope living in a vibrant city, I’m never short of things to complain about. And in particular the problem of people crowding into my photos, whatever I happen to shoot, is a persistent one. That won’t be an issue any more with Bye Bye Camera, an app that simply removes any humans from photos you take. Finally!

It’s an art project, though a practical one (art can be practical!), by Do Something Good. The collective, in particular the artist damjanski, has worked on a variety of playful takes on the digital era, such as a CAPTCHA that excludes humans, and setting up a dialogue between two Google conversational agents.

The new app, damjanski told Artnome, is “an app for the post-human era… The app takes out the vanity of any selfie and also the person.” Fortunately, it leaves dogs intact.

Of course it’s all done in a self-conscious, arty way — are humans necessary? What defines one? What will the world be like without us? You can ponder those questions or not; fortunately, the app doesn’t require it of you.

Bye Bye Camera works using some of the AI tools that are already out there for the taking in the world of research. It uses YOLO (You Only Look Once), a very efficient object classifier that can quickly denote the outline of a person, and then a separate tool that performs what Adobe has called “context-aware fill.” Between the two of them a person is reliably — if a bit crudely — deleted from any picture you take and credibly filled in by background.
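The two-stage pipeline described above can be sketched in a few lines. This is not the app’s actual code — the YOLO detection step is stubbed out as a ready-made boolean mask, and a crude background-mean fill stands in for a proper context-aware fill:

```python
import numpy as np

def remove_people(image, person_mask):
    """Fill masked (person) pixels with the mean background colour.

    In the real pipeline the mask would come from a detector like YOLO
    and the fill from a context-aware algorithm; the mean fill here is
    a deliberately crude stand-in.
    """
    filled = image.astype(float).copy()
    background = image[~person_mask]               # pixels outside the mask
    filled[person_mask] = background.mean(axis=0)  # naive "inpainting"
    return filled.astype(image.dtype)

# Toy example: a flat grey background with a bright "person" blob.
img = np.full((8, 8, 3), 100, dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 3:5] = True   # pretend the detector outlined a person here
img[mask] = 255

out = remove_people(img, mask)  # the blob is now background-coloured
```

Swapping the stub for a real segmentation mask and a real inpainter (OpenCV’s `cv2.inpaint`, for instance) gets you most of the way to the effect the app produces.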

It’s a fun project (though the results are a mixed bag) and it speaks not only to the issues it supposedly raises about the nature of humanity, but also the accessibility of tools under the broad category of “AI” and what they can and should be used for.

You can download Bye Bye Camera for $3 on the iOS App Store.


Krisp’s smart noise-cancelling gets official release and pricing


Background noise on calls could be a thing of the past if Krisp has anything to do with it. The app, now available on Windows and Macs after a long beta, uses machine learning to silence the bustle of a home, shared office or coffee shop so your voice and the voices of others come through clearly.

I first encountered Krisp in prototype form when we were visiting UC Berkeley’s Skydeck accelerator, which ended up plugging $500,000 into the startup alongside a $1.5 million round from Sierra Ventures and Shanda Group.

Like so many apps and services these days, Krisp uses machine learning. But unlike many of them, it uses the technology in a fairly straightforward, easily understandable way.

The machine learning model the company has created is trained to recognize the voice of a person talking into a microphone. By definition pretty much everything else is just noise — so the model just sort of subtracts it from the waveform, leaving your audio clean even if there’s a middle school soccer team invading the cafe where you’re running the call from.
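To make the “subtract it from the waveform” idea concrete, here’s a toy spectral-subtraction sketch. This is a classical signal-processing technique, not Krisp’s learned model (which the company hasn’t published):

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=256):
    """Crude spectral subtraction: estimate a noise spectrum from a
    noise-only sample, then subtract it frame by frame from the signal.
    Krisp's actual model is learned end to end; this just illustrates
    the 'subtract the noise' idea."""
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame]))
    out = np.zeros_like(signal, dtype=float)
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
        phase = np.exp(1j * np.angle(spec))
        out[start:start + frame] = np.fft.irfft(mag * phase, n=frame)
    return out

# Toy usage: a sine-wave "voice" buried in white noise.
t = np.arange(1024) / 1024.0
voice = np.sin(2 * np.pi * 40 * t)
noise = 0.1 * np.random.default_rng(0).standard_normal(1024)
cleaned = spectral_subtract(voice + noise, noise)
```

A learned model improves on this mainly by knowing what a human voice looks like, rather than needing a noise-only sample to subtract.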

It can also mute sound coming from the other direction — that is, the noise on your friend’s side. So if they’re in a noisy street and you’re safe at home, you can apply the smart noise reduction to them as well.

Because it changes the audio signal before it gets to any apps or services, it’s compatible with pretty much everything: Skype, Messenger, Slack, whatever. You could even use it to record podcasts when there’s a leaf blower outside. A mobile version is on the way for release later this year.

It works — I’ve tested it, as have thousands of other users during the beta. But now comes the moment of truth: will anyone pay for it?

The new, official release of the app will let you mute the noise you hear on the line — that is, the noise coming from the microphones of people you talk to — for free, forever. But clearing the noise on your own line, like the baby crying next to you, will cost you $20 per month after a two-week trial period, or $120 per year, or as low as $5 per month for group licenses. You can collect free time by referring people to the app, but eventually you’ll probably have to shell out.

Not that there’s anything wrong with that: A straightforward pay-as-you-go business model is refreshing in an age of intrusive data collection, pushy “freemium” platforms and services that lack any way to make money whatsoever.


This is one smart device that every urban home could use


Living in a dense urban environment brings many startup-fuelled conveniences, be it near instant delivery of food — or pretty much whatever else you fancy — to a whole range of wheels that can be hopped on (or into) to whisk you around at the tap of an app.

But the biggest problem afflicting city dwellers is not some minor inconvenience. It’s bad, poor, terrible, horrible, unhealthy air. And there’s no app to fix that.

Nor can hardware solve this problem. But smart hardware can at least help.

For about a month I’ve been road-testing a wi-fi connected air purifier made by Swedish company Blueair. It uses a HEPA filtration system combined with integrated air quality sensors to provide real-time in-app feedback, which can be reassuring or alert you to unseen problems.

Flip to the bottom of this article for a speed take, or continue reading for the full review of the Blueair Classic 480i with dual filters to reduce dust, smoke and pollen.

Review

If you’re even vaguely environmentally aware it’s fascinating, and not a little horrifying, to see how variable the air quality is inside your home. Everyday stuff like cooking, cleaning and changing the sheets can cause drastic swings in PM 2.5 and tVOC levels. The former means very small particles such as fine dust, smoke, odours and mite feces; the latter, total volatile organic compounds, refers to hundreds of different gases emitted by certain solids and liquids — including stuff humans breathe out but also harmful VOCs like formaldehyde.

What you learn from smart hardware can be not just informative but instructive. For instance I’ve switched to a less dusty cat litter after seeing how quickly the machine’s fan stepped up a gear after clearing the litter tray. I also have a new depth of understanding of quite how much pollution finds its way into my apartment when the upstairs neighbour is having a rooftop BBQ. Which makes it doubly offensive I wasn’t invited.

Though, I must admit, I’ve yet to figure out a diplomatic way to convince him to rethink his regular cook-out sessions. Again, some problems can’t be fixed by apps. Meanwhile city life means we’re all, to a greater or lesser degree, adding to the collectively polluted atmosphere. Changing that requires new politics.

You cannot hermetically seal your home against outdoor air pollution. It wouldn’t make for a healthy environment either. Indoor spaces must be properly ventilated. Adequate ventilation is also of course necessary to control moisture levels to prevent other nasty issues like mould. And using this device I’ve watched as opening a window almost instantly reduced tVOC levels.

Pretty much every city resident is affected by air pollution, to some degree. And it’s a heck of a lot harder to switch your home than change your brand of cat litter. But even on that far less fixable front, having an air quality sensor indoors can be really useful — to help you figure out the best (and worst) times to air out the house. I certainly won’t be opening the balcony doors on a busy Saturday afternoon any time soon, for example.

Blueair sells a range of air purifiers. The model I’ve been testing, the Blueair Classic 480i, is large enough to filter a room of up to 40m2. It includes filters capable of filtering both particulate matter and traffic fumes (aka its “SmokeStop” filter). The latter was important for me, given I live near a pretty busy road. But the model can be bought with just a particle filter if you prefer. The dual filtration model I’m testing is priced at €725 for EU buyers.

Point number one is that if you’re serious about improving indoor air quality the size of an air purifier really does matter. You need a device with a fan that’s powerful enough to cycle all the air in the room in a reasonable timeframe. (Blueair promises five air changes per hour for this model, at the correct room size.)
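A quick back-of-the-envelope on what that promise implies, assuming a 2.5m ceiling (my assumption — Blueair rates the model by floor area only):

```python
room_area = 40         # m², the rated coverage of the Classic 480i
ceiling_height = 2.5   # m — an assumed, typical ceiling height
changes_per_hour = 5   # what Blueair promises for this model

# Volume of air the fan has to move each hour to hit five changes:
airflow = room_area * ceiling_height * changes_per_hour
print(airflow)  # 500.0 m³ per hour
```

Which is a lot of air — and why a desktop-sized fan can’t do the same job.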

So while smaller air filter devices might look cute, if a desktop is all the space you can stretch to, you’d probably be better off getting a few pot plants.

Blueair’s hardware has software in the mix too, of course. The companion Blueair Friend app serves up the real-time feedback on air quality both indoors and out — the latter via a third-party service whose provider can vary depending on your location. Where I live in Europe it’s powered by BreezoMeter.

This is a handy addition for getting the bigger picture. If you find you have stubbornly bad air quality levels indoors and really can’t figure out why, most often a quick tab switch will confirm local pollution levels are indeed awful right now. It’s likely not just you but the whole neighbourhood suffering.

Dirty cities 

From Asia to America the burning of fossil fuels has consequences for air quality and health that are usually especially pronounced in dense urban environments where humans increasingly live. More than half the world’s population now lives in urban areas — with the UN predicting this will grow to around 70% by 2050.

In Europe, this is already true for more than 70% of the population, which makes air pollution a major concern in many regional cities.

Growing awareness of the problem is beginning to lead to policy interventions — such as London’s ultra low emission charging zone and car free Sundays one day a month in Paris’ city center. But EU citizens are still, all too often, stuck sucking in unhealthy air.

London’s toxic air is an invisible killer.

We launched the world’s first Ultra Low Emission Zone to cut air pollution. Since then, there have been on average 9400 fewer polluting vehicles on our streets every day. #LetLondonBreathe #ULEZ pic.twitter.com/0mYcIGi1xP

— Mayor of London (@MayorofLondon) May 23, 2019

 

Last year six EU nations, including the UK, France and Germany, were referred to the highest court in Europe for failing to tackle air pollution — including illegally high levels of nitrogen dioxide produced by diesel-powered vehicles.

Around one in eight EU citizens who live in an urban area is exposed to air pollutant levels that exceed one or more of the region’s air quality standards, according to a briefing note published by the European Environment Agency (EEA) last year.

It also said up to 96% of EU urban citizens are exposed to levels of one or more air pollutants deemed damaging to health when measured against the World Health Organization’s more stringent guidelines.

There are multiple and sometimes interlinked factors impacting air quality in urban environments. Traffic fumes are a very big one. But changes in meteorological conditions due to climate change are also expected to increase certain concentrations of air pollutants. And emissions from wildfires — a problem exacerbated by drought conditions linked to climate change — can also degrade air quality in nearby cities.

Action to tackle climate change continues to lag far behind what’s needed to put a check on global warming. Even as far too little is still being done in most urban regions to reduce vehicular emissions at a local level.

In short, this problem isn’t going away anytime soon — and all too often air quality is still getting worse.

At the same time health risks from air pollution are omnipresent and can be especially dangerous for children. A landmark global study of the impact of traffic fumes on childhood asthma, published recently in the Lancet, estimates that four million children develop the condition every year primarily as a result of nitrogen dioxide air pollution emitted by vehicles.

The majority (64%) of these new cases were found to occur in urban centres — increasing to 90% when factoring in surrounding suburban areas.

The study also found that damage caused by air pollution is not limited to the most highly polluted cities in China and India. “Many high-income countries have high NO2 exposures, especially those in North America, western Europe, and Asia Pacific,” it notes.

The long and short of all this is that cities the world over are going to need to get radically great at managing air quality — especially traffic emissions — and fast. But, in the meanwhile, city dwellers who can’t or don’t want to quit the bright lights are stuck breathing dirty air. So it’s easy to imagine consumer demand growing for in-home devices that can sense and filter pollutants, as urbanites try to balance living in a city with reducing their exposure to the bad stuff.

Cleaner air

That’s not to say that any commercial air purifier will be able to provide a complete fix. The overarching problem of air pollution is far too big and bad for that. A true fix would demand radical policy interventions, such as removing all polluting vehicles from urban living spaces. (And there’s precious little sign of anything so radical on the horizon.)

But at least at an individual home level, a large air purifier with decent filtration technology should reduce your exposure to pollution in the place you likely spend the most time.

If, as the Blueair Classic 480i does, the filtration device also includes embedded sensors to give real-time feedback on air quality, it can further help you manage pollution risk — by providing data so you can better understand the risks in and around your home and make better decisions about, for instance, when to open a window.

“Air quality does always change,” admits Blueair’s chief product officer, Jonas Holst, when we chat. “We cannot promise to our consumers that you will always have super, super, clean air. But we can promise to consumers that you will always have a lot cleaner air by having our product — because it depends on what happens around you. In the outdoor, by your neighbours, if you’re cooking, what your cat does or something. All of those things impact air quality.

“But by having high speeds, thanks to the HepaSilent technology that we use, we can make sure that we always constantly fight that bombardment of pollutants.”

On the technology front, Blueair is using established filtration technology — HEPA and active carbon filters to remove particulate matter and gaseous pollutants — but with an ionizing twist (which it brands ‘HepaSilent’).

This involves applying mechanical and electrostatic filtration in combination to enhance performance of the air purifier without boosting noise levels or requiring large amounts of energy to run. Holst dubs it one of the “core strengths” of the Blueair product line.

“Mechanical filtration just means a filter [plus a fan to draw the air through it]. We have a filter but by using the ionization chamber we have inside the product we can boost the performance of the filter without making it very, very dense. And by doing that we can let more air through the product and simply then clean more air faster,” he explains.

“It’s also something that is constantly being developed,” he adds of the firm’s HEPA + ionizing technology, which has been in its products for some 20 years. “We have had many developments of this technology since but the base technical structure is there in the combination between a mechanical and electrostatical filtration. That is what allows us to have less noise and less energy because the fan doesn’t work as hard.”

On top of that, in the model I’m testing, Blueair has embedded air quality sensors — which connect via wi-fi to the companion app where the curious user can see real-time plots of things like PM 2.5 and tVOC levels, and start to join the dots between what’s going on in their home and what the machine is sniffing out.

The sensors mean the unit can step the fan speed and filtration level up and down automatically in response to pollution spikes (you can set it to trigger on particulate matter only, or on both PM 2.5 and tVOC gaseous compounds, or turn automation off altogether). So if you’re really not at all curious that’s okay too. You can just plug it in, hook it up to the wi-fi and let it work.
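That automation amounts to a simple threshold controller. Here’s a hypothetical sketch — the thresholds and the tVOC scaling are invented for illustration, as Blueair doesn’t publish its actual trigger levels:

```python
# Hypothetical sketch of the auto mode described above; threshold
# values and the tVOC scaling are invented, not Blueair's.
def fan_speed(pm25, tvoc=0, mode="pm_and_tvoc"):
    """Pick fan speed 1-3 from PM 2.5 (µg/m³) and tVOC (ppb) readings."""
    level = pm25
    if mode == "pm_and_tvoc":
        level = max(pm25, tvoc / 10.0)  # crude common scale (assumption)
    if level > 55:   # 'polluted' -> top speed
        return 3
    if level > 12:   # slightly elevated -> medium
        return 2
    return 1         # clean -> quiet low speed

fan_speed(5, 40)                # quiet afternoon: low speed
fan_speed(80, 500)              # BBQ next door: full blast
fan_speed(30, mode="pm_only")   # tVOC ignored: medium
```

The interesting part in the real product is less the thresholds than the sensing and the learning that feeds them, covered below.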

Sound, energy and sensing smarts in a big package

To give a ballpark of energy consumption for this model, Holst says the Blueair Classic 480i consumes “approximately” the same amount of energy as running a lightbulb — assuming it’s running mostly on lower fan speeds.

As and when the fan steps up in response to a spike in levels of potential pollutants he admits it will consume “a little bit more” energy.

The official specs list the model’s energy consumption at between 15 and 90 watts.

On the noise front it’s extremely quiet when on the lowest fan setting. To the point of being barely noticeable. You can sleep in the same room and certainly won’t be kept awake.

You will notice when the fan switches up to the second or, especially, the third (max) speed — where it can hit 52 dB(A). The latter’s rushing-air sound is discernible from a distance, even in another room. But you hopefully won’t be stuck listening to level 3 fan noise for too long, unless you live in a really polluted place. Or, well, unless you run into an algorithmic malfunction (more on that below).

As noted earlier, the unit’s smart sensing capabilities mean fan speed can be set to automatically adjust in response to changing pollution levels — which is obviously the most useful mode to use since you won’t need to keep checking in to see whether or not the air is clean.

You can manually override the automation and fix/switch the fan at a speed of your choice via the app. And as I found there are scenarios where an override is essential. Which we’ll get to shortly.

The unit I was testing, a model that’s around two years old, arrived with instructions to let it run for a week without unplugging so that the machine learning algorithms could configure to local conditions and offer a more accurate read on gases and particles. Holst told us that the U.S. version of the 480i is “slightly updated” — and, as such, this learning process has been eliminated. So you should be able to just plug it in and get the most accurate reads right away.

The company recommends changing the filters every six months to “ensure performance”, or more often if you live in a very polluted area. The companion app tracks estimated remaining filter life in the form of a days-left countdown.

Looks-wise, there’s no getting around it: the Blueair Classic 480i is a big device. Think ‘bedside table’ big.

You’re not going to miss it in your room and it does need a bigger footprint of free space around it so as not to block the air intake and outlet. Something in the region of ~80x60cm. Its lozenge shape helps by ensuring no awkward corners and with finding somewhere it can be parked parallel but not too close to a wall.

There’s not much more to say about the design of this particular model except that it’s thoughtful. The unit has a minimalist look which avoids coming across too much like a piece of ugly office furniture. While its white and gun metal grey hues plus curved flanks help it blend into the background. I haven’t found it to be an eyesore.

A neat flip up lid hides a set of basic physical controls. But once you’ve done the wi-fi set-up and linked it to the companion app you may never need to use these buttons as everything can be controlled in the app.

Real-time pollution levels at your fingertips

Warning: This app can be addictive! For weeks after installing the unit it was almost impossible to resist constantly checking the pollution levels. Mostly because it was fascinating to watch how domestic activity could send one or other level spiking or falling.

As well as PM 2.5 and tVOC pollutants this model tracks temperature and humidity levels. It offers day, week and monthly plots for everything it tracks.

The day view is definitely the most addictive — as it’s where you see instant changes and can try to understand what’s triggering what. So you can literally join the dots between, for example, hearing a street sweeper below your window and watching a rise in PM 2.5 levels in the app right after. Erk!

Though don’t expect a more detailed breakdown of the two pollutant categories; it’s an aggregated mix in both cases. (And some of the gases that make up the tVOC mix aren’t harmful.)

The month tab gives a longer overview which can be handy to spot regular pollution patterns (though the view is a little cramped on less phablet-y smartphone screens).

While week view offers a more recent snapshot if you’re trying to get a sense of your average pollution exposure over a shorter time frame.

That was one feature I thought the app could have calculated for you. But, equally, more granular quantification might risk over-egging the pudding. It would also risk being misleading if the sensor accuracy fails on you. The overarching problem with pollution exposure is that, sadly, there’s only so much an individual can do to reduce it. So it probably makes sense not to calculate a personal pollution exposure score.

The app could certainly provide more detail than it does but Holst told us the aim is to offer enough info to people who are interested without it being overwhelming. He also said many customers just want to plug it in and let it work, not be checking out daily charts. (Though if you’re geeky you will of course want the data.)

It’s clear there is lots of simplification going on, as you’d expect with this being a consumer device, not a scientific instrument. I found the Blueair app satisfied my surface curiosity, while seeing ways its utility could be extended with more features. But in the end I get that it’s designed to be an air-suck, not a time-suck, so I think they’ve got the balance pretty much right.

There are enough real-time signals to be able to link specific activities/events with changes in air quality. So you can literally watch as the tVOC level drops when you open a window. (Or rises if your neighbor is BBQing… ). And I very quickly learnt that opening a window will (usually) lower tVOC but send PM 2.5 rising — at least where I live in a dusty, polluted city. So, again, cleaner air is all you should expect.

Using the app you can try and figure out, for instance, optimal ventilation timings. I also found having the real-time info gave me a new appreciation for heavy rain — which seemed to be really great for clearing dust out of the air, frequently translating into “excellent” levels of PM 2.5 in the app for a while after.

Here are a few examples of how the sensors reacted to different events — and what the reaction suggests…

Cleaning products can temporarily spike tVOC levels:

 

Changing bed sheets can also look pretty disturbing…   

 

An evening BBQ on a nearby roof terrace appears much, much worse though:

 

And opening the balcony door to the street on a busy Saturday afternoon is just… insane… 

 

Uh-oh, algorithm malfunction…

After a few minutes of leaving the balcony door open one fateful Saturday afternoon, which almost instantly sent the unit into max fan speed overdrive, I was surprised to find the fan still blasting away an hour later, and then three hours later, and at bedtime, and in the morning. By which point I thought something really didn’t seem right.

The read from the app showed the pollution level had dropped down from the very high spike but it was still being rated as ‘polluted’ — a level which keeps the fan at the top speed. So I started to suspect something had misfired.

This is where being able to switch to manual is essential — meaning I could override the algorithm’s conviction that the air was really bad and dial the fan down to a lower setting.

That override provided a temporary ‘fix’ but the unnaturally elevated ‘pollution’ read continued for the best part of a week. This made it look like the whole sensing capacity had broken. And without the ability to automatically adapt to changing pollution levels the smart air purifier was now suddenly dumb…

 

It turned out Blueair has a fix for this sort of algorithmic malfunction. Though it’s not quick.

After explaining the issue to the company, laying out my suspicion that the sensors weren’t reading correctly, it told me the algorithms are programmed to respond to this type of situation by resetting around seven days after the event, assuming the read accuracy hasn’t already corrected itself by then.

Sure enough, almost a week later that’s exactly what happened. I couldn’t find anything in the user manual explaining that this might happen, though, so it would be helpful if the company included it in a troubleshooting section.
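
For what it’s worth, the behavior the company describes — a stuck reading that force-corrects about seven days after the triggering event — can be pictured as a simple watchdog. Everything below (names, the threshold, the recovery rule) is my own guess at the logic, not Blueair’s actual firmware:

```python
SPIKE_THRESHOLD = 300          # PM 2.5 level treated as implausibly high (a guess)
RESET_AFTER = 7 * 24 * 3600    # force a recalibration ~7 days after a spike

class SensorWatchdog:
    """Toy sketch of the 'reset around seven days after the event' rule."""

    def __init__(self):
        self.spike_started = None  # timestamp of the unresolved spike, if any

    def observe(self, pm25, now):
        if pm25 > SPIKE_THRESHOLD:
            if self.spike_started is None:
                self.spike_started = now  # start the seven-day clock
        else:
            self.spike_started = None     # reading corrected itself; cancel the clock
        if self.spike_started is not None and now - self.spike_started >= RESET_AFTER:
            self.spike_started = None
            return "recalibrate"          # give up on the stuck reading
        return "ok"
```

In this sketch a reading that drops back below the threshold cancels the clock, matching the caveat that the reset only fires if accuracy hasn’t already corrected itself.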

Here’s the month view showing the crazy PM 2.5 spike; the extended elevated (false) reading; then the correction; followed finally by (relatively) normal service…

For a while after this incident the algorithms also seemed overly sensitive — and I had to step in again several times to override the top fan speed as its reading of pollution levels slid back into the yellow without an obvious reason why.

When the level reads ‘polluted’ it automatically triggers the highest fan speed. Paradoxically, this sometimes seems to have the self-defeating effect of appearing to draw dust up into the air — thereby keeping the PM 2.5 level elevated. So at times manually lowering the fan when it’s only slightly polluted can reduce pollution levels quicker than just letting it blast away. Which is one product niggle.

When viewed in the app the sustained elevated pollution level did look pretty obviously wrong — to the human brain at least. So, like every ‘smart’ device, this one also benefits from having human logic involved to complete the loop.
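
That niggle suggests the control loop could hedge: instead of always mapping ‘polluted’ to maximum speed, a middle gear for marginal readings would stop the fan from stirring dust up and pinning the level in place. A minimal sketch, with thresholds that are purely illustrative guesses rather than Blueair’s real cut-offs:

```python
def fan_speed(pm25):
    """Map a PM 2.5 reading to a fan gear. Thresholds are invented."""
    if pm25 > 150:
        return "high"    # clearly polluted: full blast is warranted
    if pm25 > 50:
        # marginal air: run medium rather than max, so the fan itself
        # doesn't keep dust suspended and hold the reading elevated
        return "medium"
    return "low"         # good air: quiet operation
```

This is roughly what the manual override achieves by hand when the automation escalates too eagerly.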

Concluding thoughts after a month’s use

A few weeks on from the first algorithm malfunction the unit’s sensing capacity at first appeared to have stabilized — back to the not-so-hair-trigger sensitivity that had been the case prior to balcony-door-gate.

For a while it seemed less prone to a sustained freak-out over relatively minor domestic activities, like lifting clean sheets out of the cupboard, as if it had clicked into a smoother operating groove. Though I remained wary of trying the full-bore Saturday balcony door again.

I thought this period of relative tranquility might signal improved measurement accuracy, the learning algos having been through not just an initial training cycle but a major malfunction plus correction. Though of course there was no way to be sure.

It’s possible there had also been a genuine improvement in indoor air quality — i.e. as a consequence of better ventilation habits and avoiding key pollution triggers, because I now have real-time air quality feedback to act on and so can be smarter about when to open windows, where to shake sheets, which type of cat litter to buy and so on.

It’s a reassuring idea. Though one that requires putting your faith in algorithms that are demonstrably far from perfect. Even when they’re functioning they’re a simplification and approximation of what’s really going on. And when they fail, well, they are clearly getting it totally wrong.

Almost bang on the month mark of testing there was suddenly another crazy high PM 2.5 spike.

One rainy afternoon the reading surged from ‘good’ to ‘highly polluted’ without any real explanation. I had opened a patio door on the other side of the apartment, but it does not open onto a street. This time the reading stuck at 400 even with the fan going full blast. So it looked like an even more major algorithm crash…

Really clean air is impossible to mistake. Take a walk in the mountains far from civilization and your lungs will thank you. But cleaner air is harder for humans to quantify. Yet, increasingly, we do need to know how clean or otherwise the stuff we’re breathing is, as more of us are packed into cities exposed to each other’s fumes — and because the harmful health impacts of pollution are increasingly clear.

Without radical policy interventions we’re fast accelerating towards a place where we could be forced to trust sensing algorithms to tell us whether what we’re breathing is harmful or not.

Machines whose algorithms are fallible, might be making rough guesstimates and/or are prone to sensing malfunctions. And machines that also won’t be able to promise to make the air entirely safe to breathe. Frankly, it’s pretty scary to contemplate.

So while I can’t now imagine doing without some form of in-home air purifier to help manage my urban pollution risk — I’d definitely prefer that this kind of smart hardware wasn’t necessary at all.

In Blueair’s case, the company clearly still has work to do to improve the robustness of its sensing algorithms. Operating conditions for this sort of product will obviously vary widely, so there are loads of parameters for its algorithms to balance.

With all that stuff to juggle it just seems a bit too easy for the sensing function to spin out of control.

10-second take

The good

Easy to set up, with thoughtful product design, including relatively clear in-app controls and content that helps you understand pollution triggers and manage risk. Embedded air quality sensors greatly extend the product’s utility by enabling an autonomous response to changes in pollution levels. Quiet operation during regular conditions. Choice of automated or manual fan speed settings. Filtration is powerful, and since using the device indoor air does seem cleaner.

The bad

Sensing accuracy is not always reliable. The algorithms appear prone to being confused by indoor air pressure changes, such as a large window being opened, which can trigger unbelievably high pollution readings followed by an extended period of inaccurate readings during which you can’t rely on the automation at all. I also found the feedback in the app can sometimes lag. App content and features are on the minimalist side, so you may want more detail. And when the pollution level is marginal, an elevated fan speed can sometimes appear to challenge the efficacy of the filtration, as if it’s holding pollution levels in place rather than reducing them.

Bottom line

If you’re looking for a smart air purifier, the Blueair Classic 480i has a lot to recommend it: quiet operation, ease of use and a tangible improvement in air quality, thanks to powerful filtration. However, the accuracy of the sensing algorithms poses a dilemma. For me this problem recurred twice in a month, which is clearly not ideal when it takes a full week to reset. If it were not for this reliability issue I would not hesitate to recommend the product, as — when not going crazy — the real-time feedback it provides really helps you manage a variety of pollution risks in and around your home. Hopefully the company will work on improving the stability of the algorithms, or at least offer an option in the app so you can manually reset it if/when it does go wrong.

Powered by WPeMatico

Xprize names two grand prize winners in $15 million Global Learning Challenge

Posted by | Android, bangalore, california, carnegie mellon, carnegie mellon university, cci, Education, Elon Musk, Google, kenya, machine learning, musk, New York, pittsburgh, Seoul, south korea, Speech Recognition, Tanzania, TC, technology, transhumanism, United Kingdom, United States, XPRIZE | No Comments

Xprize, the nonprofit organization developing and managing competitions to find solutions to social challenges, has named two grand prize winners in the Elon Musk-backed Global Learning Xprize.

The companies, KitKit School out of South Korea and the U.S., and onebillion, operating in Kenya and the U.K., were announced at an awards ceremony hosted at the Google Spruce Goose Hangar in Playa Vista, Calif.

Xprize set each of the competing teams the task of developing scalable services that could enable children to teach themselves basic reading, writing and arithmetic skills within 15 months.

Musk himself was on hand to award $5 million checks to each of the winning teams.

All five finalists received $1 million each to continue developing their projects: New York-based CCI, which developed lesson plans and a development language so non-coders could create lessons; Chimple, a Bangalore-based learning platform enabling children to learn reading, writing and math on a tablet; RoboTutor, a Pittsburgh-based company that used Carnegie Mellon research to develop an Android tablet app teaching reading and writing via speech recognition, machine learning and human-computer interaction; and the two grand prize winners.

The final round required each product to be field-tested in Swahili, reaching nearly 3,000 children in 170 villages across Tanzania.

All of the final solutions from each of the five teams that made it to the final round of competition have been open-sourced so anyone can improve on and develop local solutions using the toolkits developed by each team in competition.

Kitkit School, with a team from Berkeley, Calif. and Seoul, developed a program with a game-based core and flexible learning architecture to help kids learn independently, while onebillion merged numeracy content with literacy material to provide directed learning and activities alongside monitoring to personalize responses to children’s needs.

Both teams are going home with $5 million to continue their work.

The problem of access to basic education affects the more than 250 million children around the world who can’t read or write, and one in five children aren’t in school, according to data from UNESCO.

The problem of access is compounded by a shortage of teachers at the primary and secondary school levels. Some research, cited by Xprize, indicates that the world needs to recruit another 68.8 million teachers to provide every child with a primary and secondary education by 2040.

Before the Global Learning Xprize field test, 74% of the children who participated were reported as never having attended school; 80% were never read to at home; and 90% couldn’t read a single word of Swahili.

After the 15-month program, working on donated Google Pixel C tablets pre-loaded with the software, that last number was cut in half.

“Education is a fundamental human right, and we are so proud of all the teams and their dedication and hard work to ensure every single child has the opportunity to take learning into their own hands,” said Anousheh Ansari, CEO of Xprize, in a statement. “Learning how to read, write and demonstrate basic math are essential building blocks for those who want to live free from poverty and its limitations, and we believe that this competition clearly demonstrated the accelerated learning made possible through the educational applications developed by our teams, and ultimately hope that this movement spurs a revolution in education, worldwide.”

After the grand prize announcement, Xprize said it will work to secure and load the software onto tablets; localize the software; and deliver preloaded hardware and charging stations to remote locations so all finalist teams can scale their learning software across the world.

Are women better gamers than men? This startup’s AI-driven research says yes

Posted by | artificial intelligence, Dota 2, gamer, Gaming, machine learning, Mobalytics, player, Runa Capital, Startups, stereotypes, TC, video games, virtual assistant | No Comments

Last year Gosu.ai, a startup that has developed an AI assistant to help gamers play smarter and improve their skills, raised $1.9 million. Using machine learning, it analyzes matches, makes personal recommendations and allows gamers to be taught by a virtual assistant.

That virtual assistant also makes some interesting research possible. For the first time ever, we can actually peer over the shoulder of a gamer and find out what makes them good or not. The findings are fascinating.

Gosu.ai surveyed nearly 5,000 gamers playing Dota 2 to understand which factors separate successful and less-successful gamers.

They found that although only 4 percent of survey respondents were women, those women had, on average, a 44 percent higher win rate than the men.

Does this suggest women are better gamers than men? This isn’t a scientific study, but it is a tantalizing idea…

The study also found that the better your foreign-language skills, the slower your game skills improve. They also found that people without a university degree, people who don’t travel and people who play sports increase their game ratings faster. Similarly, having a job also slows growth. Well, duh.

Gosu.ai’s main competitors are Mobalytics, Dojo Madness and MoreMMR. The main difference is that those competitors analyze raw statistics, find generalized weak spots in comparison with other players and give general recommendations. Gosu.ai analyzes the specific actions of each player, down to the movement of their mouse, to tailor recommendations directly to that player. So it’s more like a virtual assistant than a training platform.

The startup is funded by Runa Capital, Ventech and Sistema_VC. Previously, the startup was backed by Gagarin Capital.

OpenAI Five crushes Dota2 world champs, and soon you can lose to it too

Posted by | artificial intelligence, Gaming, machine learning, OpenAI, science | No Comments

Dota2 is one of the most popular, and complex, online games in the world, but an AI has once again shown itself to surpass human skill. In matches over the weekend, OpenAI’s “Five” system defeated two pro teams soundly, and soon you’ll be able to test your own mettle against — or alongside — the ruthless agent.

In a blog post, OpenAI detailed how its game-playing agent has progressed from its younger self — it seems wrong to say previous version, since it really is the same extensive neural network as many months ago, but with much more training.

The version that played at Dota2’s premier tournament, The International, gets schooled by the new version 99 percent of the time. And it’s all down to more practice:

In total, the current version of OpenAI Five has consumed 800 petaflop/s-days and experienced about 45,000 years of Dota self-play over 10 realtime months (up from about 10,000 years over 1.5 realtime months as of The International), for an average of 250 years of simulated experience per day.

To the best of our knowledge, this is the first time an RL [reinforcement learning] agent has been trained using such a long-lived training run.

One is tempted to cry foul at a data center-spanning intelligence being allowed to train for 600 human lifespans. But really it’s more of a compliment to human cognition that we can accomplish the same thing with a handful of months or years, while still finding time to eat, sleep, socialize (well, some of us) and so on.

Dota2 is an intense and complex game with some rigid rules but a huge amount of fluidity, and representing it in a way that makes sense to a computer isn’t easy (which likely accounts partly for the volume of training required). Controlling five “heroes” at once on a large map with so much going on at any given time is enough to tax a team of five human brains. But teams work best when they’re acting as a single unit, which is more or less what Five was doing from the start. Rather than five heroes, it was more like five fingers of a hand to the AI.

Interestingly, OpenAI also discovered lately that Five is capable of playing cooperatively with humans as well as in competition. This was far from a sure thing — the whole system might have frozen up or misbehaved if it had a person in there gumming up the gears. But in fact it works pretty well.

You can watch the replays or get the pro commentary on the games if you want to hear exactly how the AI won (I’ve played but I’m far from good. I’m not even bad yet). I understand they had some interesting buy-back tactics and were very aggressive. Or, if you’re feeling masochistic, you can take on the AI yourself in a limited-time event later this week.

We’re launching OpenAI Five Arena, a public experiment where we’ll let anyone play OpenAI Five in both competitive and cooperative modes. We’d known that our 1v1 bot would be exploitable through clever strategies; we don’t know to what extent the same is true of OpenAI Five, but we’re excited to invite the community to help us find out!

Although a match against pros would mean all-out war using traditional tactics, low-stakes matches against curious players might reveal interesting patterns or exploits that the AI’s creators aren’t aware of. Results will be posted publicly, so be ready for that.

You’ll need to sign up ahead of time, though: The system will only be available to play from Thursday night at 6 PM to the very end of Sunday, Pacific time. They need to reserve the requisite amount of computing resources to run the thing, so sign up now if you want to be sure to get a spot.

OpenAI’s team writes that this is the last we’ll hear of this particular iteration of the system; it’s done competing (at least in tournaments) and will be described more thoroughly in a paper soon. They’ll continue to work in the Dota2 environment because it’s interesting, but what exactly the goals, means or limitations will be are yet to be announced.

This little translator gadget could be a traveling reporter’s best friend

Posted by | Crowdfunding, Gadgets, hardware, Kickstarter, machine learning, TC, Translation | No Comments

If you’re lucky enough to get to travel abroad, you know it’s getting easier and easier to use our phones and other gadgets to translate for us. So why not do so in a way that makes sense to you? This little gadget seeking funds on Kickstarter looks right up my alley, offering quick transcription and recording — plus music playback, like an iPod Shuffle with superpowers.

The ONE Mini is really not that complex of a device — a couple of microphones and a wireless board in tasteful packaging — but that combination allows for a lot of useful stuff to happen both offline and with its companion app.

You activate the device, and it starts recording and both translating and transcribing the audio via a cloud service as it goes (or later, if you choose). That right there is already super useful for a reporter like me — although you can always put your phone on the table during an interview, this is more discreet, and of course a short-turnaround translation is useful, as well.

Recordings are kept on the phone (no on-board memory, alas) and there’s an option for a cloud service, but that probably won’t be necessary, considering the compact size of these audio files. If you’re paranoid about security, this probably isn’t your jam, but for everyday stuff it should be just fine.

If you want to translate a conversation with someone whose language you don’t speak, you pick two of the 12 built-in languages in the app and then either pass the gadget back and forth or let it sit between you while you talk. The transcript will show on the phone and the ONE Mini can bleat out the translation in its little robotic voice.

Right now translation only works online, but I asked and offline support is in the plans for certain language pairs that have reliable two-way edge models, probably Mandarin-English and Korean-Japanese.

It has a headphone jack, too, which lets it act as a wireless playback device for the recordings or for your music, or to take calls using the nice onboard mics. It’s lightweight and has a little clip, so it’s probably better than connecting directly to your phone in many cases.

There’s also a 24/7 interpreter line that charges two bucks a minute that I probably wouldn’t use. I think I would feel weird about it. But in an emergency it could be pretty helpful to have a panic button that sends you directly to a person who speaks both the languages you’ve selected.

I have to say, normally I wouldn’t highlight a random crowdfunded gadget, but I happen to have met the creator of this one, Wells Tu, at one of our events, and trust him and his team to actually deliver. The previous product he worked on was a pair of translating wireless earbuds that worked surprisingly well, so this isn’t their first time shipping a product in this category — that makes a lot of difference for a hardware startup. You can see it in action here:

He pointed out in an email to me that obviously wireless headphones are hot right now, but the translation functions aren’t good and battery life is short. This adds a lot of utility in a small package.

Right now you can score a ONE Mini for $79, which seems reasonable to me. They’ve already passed their goal and are planning on shipping in June, so it shouldn’t be a long wait.

MIT’s ‘cyber-agriculture’ optimizes basil flavors

Posted by | agriculture, artificial intelligence, food, Gadgets, GreenTech, hardware, hydroponics, machine learning, MIT, science | No Comments

The days when you could simply grow a basil plant from a seed by placing it on your windowsill and watering it regularly are gone — there’s no point now that machine learning-optimized hydroponic “cyber-agriculture” has produced a superior plant with more robust flavors. The future of pesto is here.

This research didn’t come out of a desire to improve sauces, however. It’s a study from MIT’s Media Lab and the University of Texas at Austin aimed at understanding how to both improve and automate farming.

In the study, published today in PLOS ONE, the question being asked was whether an automated growing environment could find and execute a strategy that resulted in a given goal — in this case, basil with stronger flavors.

Such a task is one with numerous variables to modify — soil type, plant characteristics, watering frequency and volume, lighting and so on — and a measurable outcome: concentration of flavor-producing molecules. That means it’s a natural fit for a machine learning model, which from that variety of inputs can make a prediction as to which will produce the best output.

“We’re really interested in building networked tools that can take a plant’s experience, its phenotype, the set of stresses it encounters, and its genetics, and digitize that to allow us to understand the plant-environment interaction,” explained MIT’s Caleb Harper in a news release. The better you understand those interactions, the better you can design the plant’s lifecycle, perhaps increasing yield, improving flavor or reducing waste.

In this case the team limited the machine learning model to analyzing and switching up the type and duration of light experienced by the plants, with the goal of increasing flavor concentration.

A first round of nine plants had light regimens designed by hand based on prior knowledge of what basil generally likes. The plants were harvested and analyzed. Then a simple model was used to make similar but slightly tweaked regimens that took the results of the first round into account. Then a third, more sophisticated model was created from the data and given significantly more leeway in its ability to recommend changes to the environment.
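
That three-round process reads like classic iterative, model-guided search: measure, fit, propose new candidates with growing leeway. Here is a toy version of that loop, where a made-up flavor function stands in for actually growing and assaying plants, and every number is illustrative rather than taken from the study:

```python
import random

def measure_flavor(photoperiod_hours):
    """Stand-in for growing and assaying a plant. In the real study the
    response came from chemistry, not a formula; here flavor simply
    rises with light exposure, plus some noise."""
    return photoperiod_hours * 2.0 + random.uniform(-1, 1)

def optimize_light(rounds=3, candidates_per_round=9):
    random.seed(0)  # deterministic, purely for the sake of the example
    observations = []  # (photoperiod in hours, measured flavor)
    # round 1: hand-designed regimens around what basil 'generally likes'
    pool = [12, 14, 16, 14, 12, 16, 18, 14, 12]
    for _ in range(rounds):
        for hours in pool[:candidates_per_round]:
            observations.append((hours, measure_flavor(hours)))
        # model step: propose tweaked regimens around the best seen so far,
        # giving the search progressively more leeway
        best = max(observations, key=lambda o: o[1])[0]
        pool = [min(24, max(1, best + d)) for d in (-2, -1, 0, 1, 2, 3, 4, 5, 6)]
    return max(observations, key=lambda o: o[1])[0]
```

With this toy response, flavor increases monotonically with light, so the search drifts toward a 24-hour photoperiod, echoing the study’s surprise recommendation; the real experiment, of course, needed actual plants and chemical analysis at every step.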

To the researchers’ surprise, the model recommended a highly extreme measure: Keep the plant’s UV lights on 24/7.

Naturally this isn’t how basil grows in the wild, since, as you may know, there are few places where the sun shines all day and all night long. And the Arctic and Antarctic, while fascinating ecosystems, aren’t known for their flavorful herbs and spices.

Nevertheless, the “recipe” of keeping the lights on was followed (it was an experiment, after all), and incredibly, this produced a massive increase in flavor molecules, doubling the amount found in control plants.

“You couldn’t have discovered this any other way,” said co-author John de la Parra. “Unless you’re in Antarctica, there isn’t a 24-hour photoperiod to test in the real world. You had to have artificial circumstances in order to discover that.”

But while a more flavorful basil is a welcome result, it’s not really the point. The team is happier that the method yielded good data, validating the platform and software they used.

“You can see this paper as the opening shot for many different things that can be applied, and it’s an exhibition of the power of the tools that we’ve built so far,” said de la Parra. “With systems like ours, we can vastly increase the amount of knowledge that can be gained much more quickly.”

If we’re going to feed the world, it’s not going to be done with amber waves of grain, i.e. with traditional farming methods. Vertical, hydroponic, computer-optimized — we’ll need all these advances and more to bring food production into the 21st century.

Blind users can now explore photos by touch with Microsoft’s Seeing AI

Posted by | accessibility, Apps, artificial intelligence, augmented reality, Blindness, Computer Vision, Disabilities, machine learning, Microsoft, Mobile | No Comments

Microsoft’s Seeing AI is an app that lets blind and limited-vision folks convert visual data into audio feedback, and it just got a useful new feature. Users can now use touch to explore the objects and people in photos.

It’s powered by machine learning, of course, specifically object and scene recognition. All you need to do is take a photo or open one up in the viewer and tap anywhere on it.

“This new feature enables users to tap their finger to an image on a touch-screen to hear a description of objects within an image and the spatial relationship between them,” wrote Seeing AI lead Saqib Shaikh in a blog post. “The app can even describe the physical appearance of people and predict their mood.”

Because there’s facial recognition built in as well, you could very well take a picture of your friends and hear who’s doing what and where, and whether there’s a dog in the picture (important) and so on. This was already possible on an image-wide scale.

But the app now lets users tap around to find where objects are — obviously important to understanding the picture or recognizing it from before. Other details that may not have made it into the overall description may also appear on closer inspection, such as flowers in the foreground or a movie poster in the background.
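
Microsoft hasn’t published how the tap lookup works, but conceptually it just maps the touch point onto the detected objects’ bounding boxes, preferring the most specific hit. A hypothetical sketch, with invented detection data and tuple layout:

```python
def object_at(tap_x, tap_y, detections):
    """Return the label of the detection whose bounding box contains the
    tap point, preferring the smallest (most specific) box. Detections
    are (label, left, top, right, bottom) in normalized 0..1 coordinates;
    this layout is an assumption for illustration, not Seeing AI's API."""
    hits = [d for d in detections
            if d[1] <= tap_x <= d[3] and d[2] <= tap_y <= d[4]]
    if not hits:
        return None  # tapped empty background
    # smallest area first, so e.g. a face inside a person beats 'person'
    hits.sort(key=lambda d: (d[3] - d[1]) * (d[4] - d[2]))
    return hits[0][0]
```

Spatial relationships between objects (“the dog is to the left of the person”) could then be derived by comparing box centers in the same coordinate space.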

In addition to this, the app now natively supports the iPad, which is certainly going to be nice for the many people who use Apple’s tablets as their primary interface for media and interactions. Lastly, there are a few improvements to the interface so users can order things in the app to their preference.

Seeing AI is free — you can download it for iOS devices here.
