
Where is voice tech going?

Mark Persaud
Contributor

Mark Persaud is digital product manager and practice lead at Moonshot by Pactera, a digital innovation company that leads global clients through the next era of digital products with a heavy emphasis on artificial intelligence, data and continuous software delivery.

2020 has been anything but normal. For businesses and brands. For innovation. For people.

Business growth strategies, travel plans and lives have been drastically altered by the COVID-19 pandemic, a global economic downturn with supply chain and market issues, and the fight for equality driving the Black Lives Matter movement, all on top of everything that already complicated lives and businesses.

One of the biggest stories in emerging technology is the growth of different types of voice assistants:

  • Niche assistants such as Aider that provide back-office support.
  • Branded in-house assistants such as those offered by BBC and Snapchat.
  • White-label solutions such as Houndify that provide lots of capabilities and configurable tool sets.

With so many assistants proliferating globally, voice will become a commodity like a website or an app. And that’s not a bad thing — at least in the name of progress. It will soon (read: over the next couple years) become table stakes for a business to have voice as an interaction channel for a lovable experience that users expect. Consider that feeling you get when you realize a business doesn’t have a website: It makes you question its validity and reputation for quality. Voice isn’t quite there yet, but it’s moving in that direction.

Voice assistant adoption and usage are still on the rise

Adoption of any new technology is key. Distribution is often a technology's chief inhibitor, but this has not been the case with voice. Apple, Google and Baidu have reported hundreds of millions of devices using voice, and Amazon has 200 million users. Amazon has a slightly harder job since it isn't in the smartphone market, which gives Apple and Google broader distribution for their assistants.

Image Credits: Mark Persaud

But are people actually using the devices? Google said recently there are 500 million monthly active users of Google Assistant. Not far behind is Apple, with 375 million active users. Large numbers of people are using voice assistants, not just owning them. That's a sign of a technology gaining momentum: it is at a price point, and embedded in digital and personal ecosystems, that make it ripe for user adoption. The pandemic has only accelerated usage, with Edison reporting an increase between March and April, a peak time for sheltering in place across the U.S.


Pandora launches interactive voice ads into beta testing


Pandora is launching interactive voice ads into wider public testing, the company announced this morning. The music streaming service first introduced the new advertising format, where users verbally respond to advertiser prompts, back in December with help from a small set of early adopters, including Doritos, Ashley HomeStores, Unilever, Wendy’s, Turner Broadcasting, Comcast and Nestlé.

The ads begin by explaining to listeners what they are and how they work. They then play a short, simple message followed by a question that listeners can respond to. For example, a Wendy’s ad asked listeners if they were hungry; if they said “yes,” the ad continued with a recommendation of what to eat. An Ashley HomeStores ad engaged listeners by offering tips for a better night’s sleep.

The format is meant in particular to help advertisers connect with users who are not looking at their phones: when people are listening to Pandora while driving, cooking, cleaning the house or doing some other hands-free activity.
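The question-and-branch mechanic is simple enough to sketch in a few lines. This is a toy illustration of the flow described above, not Pandora's actual ad-serving API; every name and string below is made up:

```python
# Toy sketch of an interactive voice ad: intro, short message, question,
# then a branch on the listener's spoken reply. Illustrative only.

AFFIRMATIVE = {"yes", "yeah", "yep", "sure"}

def run_voice_ad(listener_reply: str) -> str:
    """Return the follow-up audio line for a listener's spoken reply."""
    # In the real product, the reply would come from speech recognition.
    if listener_reply.strip().lower() in AFFIRMATIVE:
        return "Great -- how about a spicy chicken sandwich?"
    return "No problem. Back to the music."
```

The key design point is that a non-response or a "no" degrades gracefully back to music, which is presumably why neutral users still tolerate the format.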

Since the ads’ debut, Pandora’s own data indicates the voice format has been fairly well received: 47% of users said they either liked or loved the concept of responding with their voice, and 30% felt neutral. Given that users don’t typically like ads at all, the stats paint a picture of an overall positive reception. In addition, 72% of users said they found the ad format easy to engage with.

However, Pandora cautioned advertisers that more testing is needed to understand which ads get users to respond and which do not. Based on early alpha testing, ads with higher engagement seemed to be those that were entertaining, humorous or used a recognizable brand voice, it says.

As the new ad format enters into beta testing, the company is expanding access to more advertisers. Advertisers including Acura, Anheuser-Busch, AT&T, Doritos, KFC, Lane Bryant, Purex Laundry Detergent, Purple, Unilever, T-Mobile, The Home Depot, Volvo and Xfinity, among others, are signed up to test the interactive ads.

This broader test aims to determine what the benchmarks should be for voice ads, whether the ads need tweaking to optimize for better engagement, and whether ads are better for driving conversions at the upper funnel or if consumers are ready to take action based on the ads’ content.

Related to the rollout of interactive voice ads, Pandora is also upgrading its “Voice Mode” feature, launched last year and made available to all users last July. The feature will now offer listeners on-demand access to specific tracks and albums in exchange for watching a brand video via Pandora’s existing Video Plus ad format, the same as for text-based searches.

 


The 7 most important announcements from Microsoft Ignite


It’s Microsoft Ignite this week, the company’s premier event for IT professionals and decision-makers. But it’s not just about new tools for role-based access. Ignite is also very much a forward-looking conference that keeps the changing role of IT in mind. And while there isn’t a lot of consumer news at the event, the company does tend to make a few announcements for developers, as well.

This year’s Ignite was especially news-heavy. Ahead of the event, the company provided journalists and analysts with an 87-page document that lists all of the news items. If I counted correctly, there were about 175 separate announcements. Here are the top seven you really need to know about.

Azure Arc: you can now use Azure to manage resources anywhere, including on AWS and Google Cloud

What was announced: Microsoft was among the first of the big cloud vendors to bet big on hybrid deployments. With Arc, the company is taking this a step further. It will let enterprises use Azure to manage their resources across clouds — including those of competitors like AWS and Google Cloud. It’ll work for Windows and Linux Servers, as well as Kubernetes clusters, and also allows users to take some limited Azure data services with them to these platforms.

Why it matters: With Azure Stack, Microsoft already allowed businesses to bring many of Azure’s capabilities into their own data centers. But because it’s basically a local version of Azure, it only worked on a limited set of hardware. Arc doesn’t bring all of the Azure Services, but it gives enterprises a single platform to manage all of their resources across the large clouds and their own data centers. Virtually every major enterprise uses multiple clouds. Managing those environments is hard. So if that’s the case, Microsoft is essentially saying, let’s give them a tool to do so — and keep them in the Azure ecosystem. In many ways, that’s similar to Google’s Anthos, yet with an obvious Microsoft flavor, less reliance on Kubernetes and without the managed services piece.

Microsoft launches Project Cortex, a knowledge network for your company

What was announced: Project Cortex creates a knowledge network for your company. It uses machine learning to analyze all of the documents and contracts in your various repositories — including those of third-party partners — and then surfaces them in Microsoft apps like Outlook, Teams and its Office apps when appropriate. It’s the company’s first new commercial service since the launch of Teams.

Why it matters: Enterprises these days generate tons of documents and data, but the information is often spread across numerous repositories and hard to find. With this new knowledge network, the company aims to surface that information proactively. It also looks at who works on those documents and tries to help you find subject matter experts when you’re working on a document about a given topic, for example.


Microsoft launched Endpoint Manager to modernize device management

What was announced: Microsoft is combining its ConfigMgr and Intune services that allow enterprises to manage the PCs, laptops, phones and tablets they issue to their employees under the Endpoint Manager brand. With that, it’s also launching a number of tools and recommendations to help companies modernize their deployment strategies. ConfigMgr users will now also get a license to Intune to allow them to move to cloud-based management.

Why it matters: In this world of BYOD, where every employee uses multiple devices, as well as constant attacks against employee machines, effectively managing these devices has become challenging for most IT departments. They often use a mix of different tools (ConfigMgr for PCs, for example, and Intune for cloud-based management of phones). Now, they can get a single view of their deployments with the Endpoint Manager, which Microsoft CEO Satya Nadella described as one of the most important announcements of the event, and ConfigMgr users will get an easy path to move to cloud-based device management thanks to the Intune license they now have access to.

Microsoft’s Chromium-based Edge browser gets new privacy features, will be generally available January 15

What was announced: Microsoft’s Chromium-based version of Edge will be generally available on January 15. The release candidate is available now. That’s the culmination of a lot of work from the Edge team, and, with today’s release, the company is also adding a number of new privacy features to Edge that, in combination with Bing, offers some capabilities that some of Microsoft’s rivals can’t yet match, thanks to its newly enhanced InPrivate browsing mode.

Why it matters: Browsers are interesting again. After years of focusing on speed, the new focus is now privacy, and that’s giving Microsoft a chance to gain users back from Chrome (though maybe not Firefox). At Ignite, Microsoft also stressed that Edge’s business users will get to benefit from a deep integration with its updated Bing engine, which can now surface business documents, too.


You can now try Microsoft’s web-based version of Visual Studio

What was announced: At Build earlier this year, Microsoft announced that it would soon launch a web-based version of its Visual Studio development environment, based on the work it did on the free Visual Studio Code editor. This experience, with deep integrations into the Microsoft-owned GitHub, is now live in a preview.

Why it matters: Microsoft has long said that it wants to meet developers where they are. While Visual Studio Online isn’t likely to replace the desktop-based IDE for most developers, it’s an easy way for them to make quick changes to code that lives in GitHub, for example, without having to set up their IDE locally. As long as they have a browser, developers will be able to get their work done.

Microsoft launches Power Virtual Agents, its no-code bot builder

What was announced: Power Virtual Agents is Microsoft’s new no-code/low-code tool for building chatbots. It leverages a lot of Azure’s machine learning smarts to let you create a chatbot with the help of a visual interface. In case you outgrow that and want to get to the actual code, you can always do so, too.

Why it matters: Chatbots aren’t exactly at the top of the hype cycle, but they do have lots of legitimate uses. Microsoft argues that a lot of early efforts were hampered by the fact that the developers were far removed from the user. With a visual tool, though, anybody can come in and build a chatbot — and a lot of those builders will have a far better understanding of what their users are looking for than a developer who is far removed from that business group.

Cortana wants to be your personal executive assistant and read your emails to you, too

What was announced: Cortana lives — and it now also has a male voice. But more importantly, Microsoft launched a few new focused Cortana-based experiences that show how the company is positioning its voice assistant as a tool for productivity. In Outlook on iOS (with Android coming later), Cortana can now read you a summary of what’s in your inbox, and you can chat with it to flag emails, delete them or dictate answers. Cortana can also send you a daily summary of your calendar appointments and important emails that need answers, and suggest focus time for you to get actual work done that isn’t email.

Why it matters: In this world of competing assistants, Microsoft is very much betting on productivity. Cortana didn’t work out as a consumer product, but the company believes there is a large (and lucrative) niche for an assistant that helps you get work done. Because Microsoft doesn’t have a lot of consumer data, but does have lots of data about your work, that’s probably a smart move.


SAN FRANCISCO, CA – APRIL 02: Microsoft CEO Satya Nadella walks in front of the new Cortana logo as he delivers a keynote address during the 2014 Microsoft Build developer conference on April 2, 2014 in San Francisco, California (Photo by Justin Sullivan/Getty Images)

Bonus: Microsoft agrees with you and thinks meetings are broken — and often it’s the broken meeting room that makes meetings even harder. To battle this, the company today launched Managed Meeting Rooms, which for $50 per room/month lets you delegate to Microsoft the monitoring and management of the technical infrastructure of your meeting rooms.


Google Assistant, navigation and apps coming to GM vehicles starting in 2021


GM is turning to Google to provide in-vehicle voice, navigation and other apps in its Buick, Cadillac, Chevrolet and GMC vehicles starting in 2021.

GM began shipping vehicles with Google’s Android Automotive OS in 2017, starting with the Cadillac CTS and expanding to other brands. Android Automotive OS shouldn’t be confused with Android Auto, which is a secondary interface that sits on top of an operating system. Android Automotive OS is modeled on Google’s open-source mobile operating system, which runs on Linux; instead of running smartphones and tablets, Google modified it so it could be used in cars.

Now, GM is taking the additional step of embedding the Google services that so many people already use through their phones and smart speakers. GM was convinced by its own customer research to bring Google into its cars, Santiago Chamorro, GM’s vice president for global connected customer experience, told TechCrunch.

Google voice, navigation and apps found in the Google Play Store will be in compatible GM brands starting in 2021. Broad deployment across all GM brands is expected to occur in the years following.

Future GM infotainment systems, powered by Android, will have a built-in Google Assistant that drivers can use to make calls, send texts, play a radio station, change the climate in the car or close the garage door, if they have the requisite connected smart home device. The Google Assistant integration will continue to evolve over time, so that drivers will eventually be able to simply use their voice to engage with their vehicle, which could include renewing their OnStar or Connected Services plans, checking their tire pressure or scheduling service, according to GM and Google.

Google Maps will also be embedded in the vehicle to help drivers navigate with real-time traffic information, automatic re-routing and lane guidance. Google Assistant is tied into Maps, allowing drivers to use voice to navigate home, share their ETA or find the nearest gas station or EV charging station.

The infotainment system will include in-vehicle apps from the Google Play Store.

GM isn’t ditching all of its own features for Google, Chamorro said, adding that the automaker will continue to offer its own infotainment features such as service recommendations, vehicle health status and in-vehicle commerce, with the Google applications and services complementing its offerings.

In May, Google announced that it was opening its Android Automotive operating system to third-party developers to bring music and other entertainment apps into vehicle infotainment systems. Media app developers are now able to create new entertainment experiences for Android Automotive OS.

Google has been pushing its way into the automotive world, first through Android Auto and then with its operating system, for several years now.

In 2017, Volvo announced plans to incorporate a version of Google’s Android operating system into its car infotainment systems. A year later, the company said it would embed voice-controlled Google Assistant, the Google Play Store, Google Maps and other Google services into its next-generation Sensus infotainment system.

Polestar 2, an all-electric vehicle developed by Volvo’s standalone electric performance brand, also has the Android OS. Renault-Nissan-Mitsubishi Alliance and Fiat Chrysler Automobiles also announced plans for Android Automotive OS.

“Cars are quickly transforming and opening up a lot of opportunity,” Patrick Brady, vice president of engineering at Google, said in a recent interview. “It’s the beautiful thing about having a platform like this. There are services that we might not be thinking about today and that may be here tomorrow.”


Week-in-Review: Alexa’s indefinite memory and NASA’s otherworldly plans for GPS


Hello, weekenders. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.

Last week, I talked about the cult of Ive and the degradation of Apple design. On Sunday night, The Wall Street Journal published a report on how Ive had been moving away from the company, to the dismay of many on the design team. Tim Cook didn’t like the report very much. Our EIC gave a little breakdown on the whole saga in a nice piece.

Apple sans Ive



The big story

This week was a tad restrained in its eventfulness; seems like the newsmakers went on 4th of July vacations a little early. Amazon made a bit of news this week when the company confirmed that Alexa request logs are kept indefinitely.

Last week, an Amazon public policy exec answered some questions about Alexa in a letter sent to U.S. Senator Coons. His office published the letter on its site a few days ago and most of the details aren’t all that surprising, but the first answer really sets the tone for how Amazon sees Alexa activity:

Q: How long does Amazon store the transcripts of user voice recordings?

A: We retain customers’ voice recordings and transcripts until the customer chooses to delete them.

What’s interesting about this isn’t that we’re only now getting this level of straightforward dialogue from Amazon on how long data is kept if not specifically deleted, but it makes one wonder why it is useful or feasible for them to keep it indefinitely. (This assumes that they actually are keeping it indefinitely; it seems likely that most of it isn’t, and that by saying this they’re protecting themselves legally, but I’m just going off the letter.)

After several years of “Hey Alexa,” the company doesn’t seem all that close to figuring out what it is.

Alexa seems to be a shit solution for commerce, so why does Amazon have 10,000 people working on it, according to a report this week in The Information? All signs point to the voice assistant experiment being a failure in terms of its short-term ambitions, though AI advances will push its utility forward.

Training data is a big deal for AI teams looking to train models on relevant data sets, and the company seems to say as much: “Our speech recognition and natural language understanding systems use machine learning to adapt to customers’ speech patterns and vocabulary, informed by the way customers use Alexa in the real world. To work well, machine learning systems need to be trained using real world data.”

The company says it doesn’t anonymize any of this data because it has to stay associated with a user’s account in order for them to delete it. I’d feel a lot better if Amazon just effectively anonymized the data in the first place and used on-device processing to build a profile on my voice. What I’m more afraid of is Amazon having such a detailed voiceprint of everyone who has ever used an Alexa device.
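For what it's worth, "keep it linked so the user can delete it" isn't the only option. A common pattern, sometimes called crypto-shredding, encrypts each account's records under a per-user key so that destroying the key renders the data unrecoverable, no record-by-record scrubbing required. A minimal sketch (purely illustrative; nothing suggests Amazon actually works this way, and the XOR cipher here stands in for real encryption like AES-GCM):

```python
# Illustrative crypto-shredding sketch: per-user keys make stored
# transcripts deletable by destroying the key, not the records.
import secrets

user_keys = {}      # account_id -> key (in practice, a managed KMS)
transcripts = []    # encrypted payloads, stored without account ids

def store(account_id: str, text: str) -> None:
    """Encrypt a transcript under the account's key and store it."""
    key = user_keys.setdefault(account_id, secrets.token_bytes(32))
    # Toy XOR "encryption" for illustration; real systems use AES-GCM.
    payload = bytes(b ^ key[i % len(key)] for i, b in enumerate(text.encode()))
    transcripts.append(payload)

def delete_user(account_id: str) -> None:
    """Drop the key; the user's stored payloads become unreadable."""
    user_keys.pop(account_id, None)
```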

If effortless voice-based e-commerce isn’t really the product anymore, what is? The answer is always us, but I don’t like the idea of indefinitely leaving Amazon with my data until they figure out the answer.

Send me feedback on Twitter @lucasmtny or email lucas@techcrunch.com

On to the rest of the week’s news.

Trends of the week

Here are a few big news items from big companies, with green links to all the sweet, sweet added context:

  • NASA’s GPS moonshot
    The U.S. government really did us a solid inventing GPS, but NASA has some bigger ideas on the table for the positioning platform, namely, taking it to the Moon. It might be a little complicated, but, unsurprisingly, scientists have some ideas here. Read more.
  • Apple has your eyes
    Most of the iOS beta updates are bug fixes, but the latest change to iOS 13 brought a very strange surprise: it changes how the eyes of users on an iPhone XS or XS Max look to people on the other end of a call. Instead of appearing to look below the camera, some software wizardry will now make it look like you’re staring directly at the camera. Apple hasn’t detailed how this works, but here’s what we do know.
  • Trump is having a Twitter party
    Donald Trump’s administration declared a couple of months ago that it was launching an exploratory survey to try to gain a sense of conservative voices that had been silenced on social media. Now @realdonaldtrump is having a get-together and inviting his friends to chat about the issue. It’s a real who’s who; check out some of the people attending here.
(Photo: Amazon CEO and Blue Origin founder Jeff Bezos speaks at the Air Force Association Air, Space and Cyber Conference. Alex Wong/Getty Images)

GAFA Gaffes

How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:

  1. Amazon is responsible for what it sells:
    [Appeals court rules Amazon can be held liable for third-party products]
  2. Android co-creator gets additional allegations filed:
    [Newly unsealed court documents reveal additional allegations against Andy Rubin]

Extra Crunch

Our premium subscription service had another week of interesting deep dives. TechCrunch reporter Kate Clark did a great interview with the ex-Facebook, ex-Venmo founding team behind Fin and how they’re thinking about the consumerization of the enterprise.

Sam Lessin and Andrew Kortina on their voice assistant’s workplace pivot

“…The thing is, developing an AI assistant capable of booking flights, arranging trips, teaching users how to play poker, identifying places to purchase specific items for a birthday party and answering wide-ranging zany questions like “can you look up a place where I can milk a goat?” requires a whole lot more human power than one might think. Capital-intensive and hard-to-scale, an app for “instantly offloading” chores wasn’t the best business. Neither Lessin nor Kortina will admit to failure, but Fin’s excursion into B2B enterprise software eight months ago suggests the assistant technology wasn’t a billion-dollar idea.…”

Here are some of our other top reads this week for premium subscribers, in which we talked a bit about asking for money and the future of China’s favorite tech platform.

Want more TechCrunch newsletters? Sign up here.


Alexa, does the Echo Dot Kids protect children’s privacy?


A coalition of child protection and privacy groups has filed a complaint with the Federal Trade Commission (FTC) urging it to investigate a kid-focused edition of Amazon’s Echo smart speaker.

The complaint against Amazon Echo Dot Kids, which has been lodged with the FTC by groups including the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy and the Consumer Federation of America, argues that the e-commerce giant is violating the Children’s Online Privacy Protection Act (COPPA) — including by failing to obtain proper consents for the use of kids’ data.

As with its other smart speaker Echo devices, the Echo Dot Kids continually listens for a wake word and then responds to voice commands by recording and processing users’ speech. The difference with this Echo is that it’s designed for children to use, which makes it subject to U.S. privacy regulation intended to protect kids from commercial exploitation online.
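The "continually listens for a wake word" pattern means audio is discarded until the wake word appears, and only the utterance that follows is captured for processing. A toy sketch of that gating logic (names are illustrative; real devices do this on-device with acoustic models operating on raw audio, not on transcribed words):

```python
# Toy sketch of wake-word gating: frames are dropped until the wake
# word is heard, then subsequent frames are captured for processing.
def process_stream(frames, wake_word="alexa"):
    """Return only the frames spoken after the wake word."""
    captured, listening = [], False
    for frame in frames:  # frame = a transcribed audio chunk (simplified)
        if listening:
            captured.append(frame)
        elif frame.lower() == wake_word:
            listening = True
    return captured
```

The privacy questions in the complaint hinge on what happens to the captured portion after this gate: how long it is retained and who it is shared with.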

The complaint, which can be read in full via the group’s complaint website, argues that Amazon fails to provide adequate information to parents about what personal data will be collected from their children when they use the Echo Dot Kids; how their information will be used; and which third parties it will be shared with — meaning parents do not have enough information to make an informed decision about whether to give consent for their child’s data to be processed.

They also accuse Amazon of providing at best “unclear and confusing” information per its obligation under COPPA to also provide notice to parents to obtain consent for children’s information to be collected by third parties via the online service — such as those providing Alexa “skills” (aka apps the AI can interact with to expand its utility).

A number of other concerns about Amazon’s device are also being raised with the FTC.

Amazon released the Echo Dot Kids a year ago — and, as we noted at the time, it’s essentially a brightly bumpered iteration of the company’s standard Echo Dot hardware.

There are differences in the software, though. In parallel, Amazon updated its Alexa smart assistant — adding parental controls, aka its FreeTime software, to the child-focused smart speaker.

Amazon said the free version of FreeTime that comes bundled with the Echo Dot Kids provides parents with controls to manage their kids’ use of the product, including device time limits; parental controls over skills and services; and the ability to view kids’ activity via a parental dashboard in the app. The software also removes the ability for Alexa to be used to make phone calls outside the home (while keeping an intercom functionality).

A paid premium tier of FreeTime (called FreeTime Unlimited) also bundles additional kid-friendly content, including Audible books, ad-free radio stations from iHeartRadio Family and premium skills and stories from the likes of Disney, National Geographic and Nickelodeon.

At the time it announced the Echo Dot Kids, Amazon said it had tweaked its voice assistant to support kid-focused interactions — saying it had trained the AI to understand children’s questions and speech patterns, and incorporated new answers targeted specifically at kids (such as jokes).

But while the company was ploughing resource into adding a parental control layer to Echo and making Alexa’s speech recognition kid-friendly, the COPPA complaint argues it failed to pay enough attention to the data protection and privacy obligations that apply to products targeted at children — as the Echo Dot Kids clearly is.

Or, to put it another way, Amazon offers parents some controls over how their children can interact with the product — but not enough controls over how Amazon (and others) can interact with their children’s data via the same always-on microphone.

More specifically, the group argues that Amazon is failing to meet its obligation as the operator of a child-directed service to provide notice and obtain consent for third parties operating on the Alexa platform to use children’s data — noting that its Children’s Privacy Disclosure policy states it does not apply to third-party services and skills.

Instead, the complaint says Amazon tells parents they should review the skill’s policies concerning data collection and use. “Our investigation found that only about 15% of kid skills provide a link to a privacy policy. Thus, Amazon’s notice to parents regarding data collection by third parties appears designed to discourage parental engagement and avoid Amazon’s responsibilities under Coppa,” the group writes in a summary of their complaint.

They are also objecting to how Amazon obtains parental consent — arguing its system for doing so is inadequate because it merely asks that a credit, debit or debit gift card number be inputted.

“It does not verify that the person ‘consenting’ is the child’s parent as required by Coppa,” they argue. “Nor does Amazon verify that the person consenting is even an adult because it allows the use of debit gift cards and does not require a financial transaction for verification.”

Another objection is that Amazon is retaining audio recordings of children’s voices far longer than necessary — keeping them indefinitely unless a parent actively goes in and deletes the recordings, despite COPPA requiring that children’s data be held for no longer than is reasonably necessary.

They found that additional data (such as transcripts of audio recordings) was also still retained even after audio recordings had been deleted. A parent must contact Amazon customer service to explicitly request deletion of their child’s entire profile to remove that data residue — meaning that to delete all recorded kids’ data a parent has to nix their access to parental controls and their kids’ access to content provided via FreeTime — so the complaint argues that Amazon’s process for parents to delete children’s information is “unduly burdensome” too.

Their investigation also found the company’s process for letting parents review children’s information to be similarly arduous, with no ability for parents to search the collected data — meaning they have to listen/read every recording of their child to understand what has been stored.

They further highlight that children’s Echo Dot Kids’ audio recordings can of course include sensitive personal details — such as if a child uses Alexa’s “remember” feature to ask the AI to remember personal data such as their address and contact details or personal health information like a food allergy.

The group’s complaint also flags the risk of other children having their data collected and processed by Amazon without their parents’ consent — such as when a child has a friend or family member visiting on a play date and they end up playing with the Echo together.

Responding to the complaint, Amazon has denied it is in breach of COPPA. In a statement, a company spokesperson said: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA). Customers can find more information on Alexa and overall privacy practices here: https://www.amazon.com/alexa/voice.”

An Amazon spokesperson also told us it only allows kid skills to collect personal information from children outside of FreeTime Unlimited (i.e. the paid tier) — and then only if the skill has a privacy policy and the developer separately obtains verified consent from the parent, adding that most kid skills do not have a privacy policy because they do not collect any personal information.

At the time of writing, the FTC had not responded to a request for comment on the complaint.

In Europe, there has been growing concern over the use of children’s data by online services. A report by England’s children’s commissioner late last year warned kids are being “datafied,” and suggested profiling at such an early age could lead to a data-disadvantaged generation.

Responding to rising concerns the U.K. privacy regulator launched a consultation on a draft Code of Practice for age appropriate design last month, asking for feedback on 16 proposed standards online services must meet to protect children’s privacy — including requiring that product makers put the best interests of the child at the fore, deliver transparent T&Cs, minimize data use and set high privacy defaults.

The U.K. government has also recently published a whitepaper setting out a policy plan to regulate internet content that has a heavy focus on child safety.

Powered by WPeMatico

The damage of defaults


Apple popped out a new pair of AirPods this week. The design looks exactly like the old pair of AirPods. Which means I’m never going to use them because Apple’s bulbous earbuds don’t fit my ears. Think square peg, round hole.

The only way I could rock AirPods would be to walk around with hands clamped to the sides of my head to stop them from falling out. Which might make a nice cut in a glossy Apple ad for the gizmo — suggesting a feeling of closeness to the music, such that you can’t help but cup; a suggestive visual metaphor for the aural intimacy Apple surely wants its technology to communicate.

But the reality of trying to use earbuds that don’t fit is not that at all. It’s just shit. They fall out at the slightest movement so you either sit and never turn your head or, yes, hold them in with your hands. Oh hai, hands-not-so-free-pods!

The obvious point here is that one size does not fit all — howsoever much Apple’s Jony Ive and his softly spoken design team believe they have devised a universal earbud that pops snugly in every ear and just works. Sorry, nope!

Hi @tim_cook, I fixed that sketch for you. Introducing #InPods — because one size doesn’t fit all 😉 pic.twitter.com/jubagMnwjt

— Natasha (@riptari) March 20, 2019

A proportion of iOS users — perhaps other petite women like me, or indeed men with less capacious ear holes — are simply being removed from Apple’s sales equation where earbuds are concerned. Apple is pretending we don’t exist.

Sure we can just buy another brand of more appropriately sized earbuds. The in-ear, noise-canceling kind are my preference. Apple does not make ‘InPods’. But that’s not a huge deal. Well, not yet.

It’s true, the consumer tech giant did also delete the headphone jack from iPhones. Thereby deprecating my existing pair of wired in-ear headphones (if I ever upgrade to a 3.5mm-jack-less iPhone). But I could just shell out for Bluetooth wireless in-ear buds that fit my shell-like ears and carry on as normal.

Universal in-ear headphones have existed for years, of course. A delightful design concept. You get a selection of different sized rubber caps shipped with the product and choose the size that best fits.

Unfortunately Apple isn’t in the ‘InPods’ business though. Possibly for aesthetic reasons. Most likely because — and there’s more than a little irony here — an in-ear design wouldn’t be naturally roomy enough to fit all the stuff Siri needs to, y’know, fake intelligence.

Which means people like me with small ears are being passed over in favor of Apple’s voice assistant. So that’s AI: 1, non-‘standard’-sized human: 0. Which also, unsurprisingly, feels like shit.

I say ‘yet’ because if voice computing does become the next major computing interaction paradigm, as some believe — given how Internet connectivity is set to get baked into everything (and sticking screens everywhere would be a visual and usability nightmare; albeit microphones everywhere is a privacy nightmare… ) — then the minority of humans with petite earholes will be at a disadvantage vs those who can just pop in their smart, sensor-packed earbud and get on with telling their Internet-enabled surroundings to do their bidding.

Will parents of future generations of designer babies select for adequately capacious earholes so their child can pop an AI in? Let’s hope not.

We’re also not at the voice computing singularity yet. Outside the usual tech bubbles it remains a bit of a novel gimmick. Amazon has drummed up some interest with in-home smart speakers housing its own voice AI Alexa (a brand choice that has, incidentally, caused a verbal headache for actual humans called Alexa). Though its Echo smart speakers appear to mostly get used as expensive weather checkers and egg timers. Or else for playing music — a function that a standard speaker or smartphone will happily perform.

Certainly a voice AI is not something you need with you 24/7 yet. Prodding at a touchscreen remains the standard way of tapping into the power and convenience of mobile computing for the majority of consumers in developed markets.

The thing is, though, it still grates to be ignored. To be told — even indirectly — by one of the world’s wealthiest consumer technology companies that it doesn’t believe your ears exist.

Or, well, that it’s weighed up the sales calculations and decided it’s okay to drop a petite-holed minority on the cutting room floor. So that’s ‘ear meet AirPod’. Not ‘AirPod meet ear’ then.

But the underlying issue is much bigger than Apple’s (in my case) oversized earbuds. Its latest shiny set of AirPods are just an ill-fitting reminder of how many technology defaults simply don’t ‘fit’ the world as claimed.

Because if cash-rich Apple’s okay with promoting a universal default (that isn’t), think of all the less well resourced technology firms chasing scale for other single-sized, ill-fitting solutions. And all the problems flowing from attempts to mash ill-mapped technology onto society at large.

When it comes to wrong-sized physical kit I’ve had similar issues with standard office computing equipment and furniture. Products that seem — surprise, surprise! — to have been default designed with a 6ft strapping guy in mind. Keyboards so long they end up gifting the smaller user RSI. Office chairs that deliver chronic back-pain as a service. Chunky mice that quickly rack the hand with pain. (Apple is a historical offender there too I’m afraid.)

The fix for such ergonomic design failures is simply not to use the kit. To find a better-sized (often DIY) alternative that does ‘fit’.

But a DIY fix may not be an option when discrepancy is embedded at the software level — and where a system is being applied to you, rather than you the human wanting to augment yourself with a bit of tech, such as a pair of smart earbuds.

With software, embedded flaws and system design failures may also be harder to spot because it’s not necessarily immediately obvious there’s a problem. Oftentimes algorithmic bias isn’t visible until damage has been done.

And there’s no shortage of stories already about how software defaults configured for a biased median have ended up causing real-world harm. (See for example: ProPublica’s analysis of the COMPAS recidivism tool — software it found incorrectly judging black defendants more likely to reoffend than white defendants. So software amplifying existing racial prejudice.)

Of course AI makes this problem so much worse.

Which is why the emphasis must be on catching bias in the datasets — before there is a chance for prejudice or bias to be ‘systematized’ and get baked into algorithms that can do damage at scale.

The algorithms must also be explainable. And outcomes auditable. Transparency as disinfectant; not secret blackboxes stuffed with unknowable code.
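To make “outcomes auditable” concrete, here is a minimal sketch of the kind of per-group audit ProPublica ran on COMPAS — comparing false positive rates across groups. Everything here is invented for illustration: the group names, the records and the threshold for alarm are hypothetical, not drawn from any real dataset.

```python
# Minimal outcome audit: compare false positive rates across groups.
# All records below are invented, illustrative data.
def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend but were predicted high risk."""
    fp = sum(1 for p, o in zip(predictions, outcomes) if p and not o)
    negatives = sum(1 for o in outcomes if not o)
    return fp / negatives if negatives else 0.0

# Hypothetical per-group records: (predicted_high_risk, actually_reoffended)
groups = {
    "group_a": [(True, False), (True, False), (True, True), (False, False)],
    "group_b": [(False, False), (True, True), (False, False), (False, False)],
}

for name, records in groups.items():
    preds = [p for p, _ in records]
    outs = [o for _, o in records]
    print(name, round(false_positive_rate(preds, outs), 2))
```

A real audit would use actual predictions and ground-truth outcomes, with far larger samples and confidence intervals — but the point stands: the disparity only becomes visible when outcomes are broken out by group.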

Doing all this requires huge up-front thought and effort on system design, and an even bigger change of attitude. It also needs massive, massive attention to diversity. An industry-wide championing of humanity’s multifaceted and multi-sized reality — and to making sure that’s reflected in both data and design choices (and therefore the teams doing the design and dev work).

You could say what’s needed is a recognition there’s never, ever a one-size-fits-all plug.

Indeed, that all algorithmic ‘solutions’ are abstractions that make compromises on accuracy and utility. And that those trade-offs can become viciously cutting knives that exclude, deny, disadvantage, delete and damage people at scale.

Expensive earbuds that won’t stay put is just a handy visual metaphor.

And while discussion about the risks and challenges of algorithmic bias has stepped up in recent years, as AI technologies have proliferated — with mainstream tech conferences actively debating how to “democratize AI” and bake diversity and ethics into system design via a development focus on principles like transparency, explainability, accountability and fairness — the industry has not even begun to fix its diversity problem.

It’s barely moved the needle on diversity. And its products continue to reflect that fundamental flaw.

Stanford just launched their Institute for Human-Centered Artificial Intelligence (@StanfordHAI) with great fanfare. The mission: “The creators and designers of AI must be broadly representative of humanity.”

121 faculty members listed.

Not a single faculty member is Black. pic.twitter.com/znCU6zAxui

— Chad Loder ❁ (@chadloder) March 21, 2019

Many — if not most — of the tech industry’s problems can be traced back to the fact that inadequately diverse teams are chasing scale while lacking the perspective to realize their system design is repurposing human harm as a de facto performance measure. (Although ‘lack of perspective’ is the charitable interpretation in certain cases; moral vacuum may be closer to the mark.)

As WWW creator, Sir Tim Berners-Lee, has pointed out, system design is now society design. That means engineers, coders, AI technologists are all working at the frontline of ethics. The design choices they make have the potential to impact, influence and shape the lives of millions and even billions of people.

And when you’re designing society a median mindset and limited perspective cannot ever be an acceptable foundation. It’s also a recipe for product failure down the line.

The current backlash against big tech shows that the stakes and the damage are very real when poorly designed technologies get dumped thoughtlessly on people.

Life is messy and complex. People won’t fit a platform that oversimplifies and overlooks. And if your excuse for scaling harm is ‘we just didn’t think of that’ you’ve failed at your job and should really be headed out the door.

Because the consequences for being excluded by flawed system design are also scaling and stepping up as platforms proliferate and more life-impacting decisions get automated. Harm is being squared. Even as the underlying industry drum hasn’t skipped a beat in its prediction that everything will be digitized.

Which means that horribly biased parole systems are just the tip of the ethical iceberg. Think of healthcare, social welfare, law enforcement, education, recruitment, transportation, construction, urban environments, farming, the military, the list of what will be digitized — and of manual or human overseen processes that will get systematized and automated — goes on.

Software — runs the industry mantra — is eating the world. That means badly designed technology products will harm more and more people.

But responsibility for sociotechnical misfit can’t just be scaled away as so much ‘collateral damage’.

So while an ‘elite’ design team led by a famous white guy might be able to craft a pleasingly curved earbud, such an approach cannot and does not automagically translate into AirPods with perfect, universal fit.

It’s someone’s standard. It’s certainly not mine.

We can posit that a more diverse Apple design team might have been able to rethink the AirPod design so as not to exclude those with smaller ears. Or make a case to convince the powers that be in Cupertino to add another size choice. We can but speculate.

What’s clear is the future of technology design can’t be so stubborn.

It must be radically inclusive and incredibly sensitive. Human-centric. Not locked to damaging defaults in its haste to impose a limited set of ideas.

Above all, it needs a listening ear on the world.

Indifference to difference and a blindspot for diversity will find no future here.


Over a quarter of US adults now own a smart speaker, typically an Amazon Echo


U.S. smart speaker owners grew 40 percent over 2018 to now reach 66.4 million — or 26.2 percent of the U.S. adult population — according to a new report from Voicebot.ai and Voicify released this week, which detailed adoption patterns and device market share. The report also reconfirmed Amazon Echo’s lead, noting the Alexa-powered smart speaker grew to a 61 percent market share by the end of last year — well above Google Home’s 24 percent share.

These findings fall roughly in line with other analysts’ reports on smart speaker market share in the U.S. However, because of varying methodology, they don’t all come back with the exact same numbers.

For example, in December 2018, eMarketer reported the Echo had accounted for nearly 67 percent of all U.S. smart speaker sales in 2018. Meanwhile, CIRP last month put Echo further ahead, with a 70 percent share of the installed base in the U.S.

Though the percentages differ, the overall trend is that Amazon Echo remains the smart speaker to beat.

While on the face of things this appears to be great news for Amazon, Voicebot’s report did note that Google Home has been closing the gap with Echo in recent months.

Amazon Echo’s share dropped nearly 11 percentage points over 2018, while Google Home made up for just over half that decline with a 5.5-point gain, with “other” devices making up the rest. This latter category, which includes devices like Apple’s HomePod and Sonos One, grew last year to now account for 15 percent of the market.

That said, the Sonos One has Alexa built-in, so it may not be as bad for Amazon as the numbers alone seem to indicate. After all, Amazon is selling its Echo devices at cost or even a loss to snag more market share. The real value over time will be in controlling the ecosystem.

The growth in smart speakers is part of a larger trend toward voice computing and smart voice assistants — like Siri, Bixby and Google Assistant — which are often accessed on smartphones.

A related report from Juniper Research last month estimated there will be 8 billion digital voice assistants in use by 2023, up from the 2.5 billion in use at the end of 2018. This is due to the increased use of smartphone assistants as well as the smart speaker trend, the firm said.

Voicebot’s report also saw how being able to access voice assistance on multiple platforms was helping to boost usage numbers.

It found that smart speaker owners used their smartphone’s voice assistant more than those who didn’t have a smart speaker in their home. It seems consumers get used to being able to access their voice assistants across platforms — now that Siri has made the jump to speakers and Alexa to phones, for instance.

The full report is available on Voicebot.ai’s website here.


You can now ask Alexa to control your Roku devices


Roku this morning announced its devices will now be compatible with Amazon’s Alexa. Through a new Roku skill for Alexa, Roku owners will be able to control their devices in order to do things like launch a channel, play or pause a show, search for entertainment options and more. Roku TV owners will additionally be able to control various functions related to their television, like adjusting the volume, turning on and off the TV, switching inputs and changing channels if there is an over-the-air antenna attached.

The added support for Amazon Alexa will be available to devices running Roku OS 8.1 or higher, and will require that customers enable the new Roku skill, which will link their account to Amazon.

Roku has developed its own voice assistant designed specifically for its platform, which is available with a touch of a button on its voice remote as well as through optional accessories like its voice-powered wireless speakers, tabletop Roku Touch remote or TCL’s Roku-branded Smart Soundbar. However, it hasn’t ignored the needs of those who have invested in other voice platforms.

Already, Roku devices work with Google Assistant-powered devices, like Google Home and Google Home Mini, through a similar voice app launched last fall.

Support for the dominant voice platform — Amazon Alexa — was bound to be next. eMarketer said Amazon took two-thirds of smart speaker sales last year, and CIRP said Echo has a 70 percent U.S. market share.

The Roku app will work with any Alexa-enabled device, including the Amazon Echo, Echo Show, Echo Dot, Echo Spot and Echo Plus, as well as those powered by Alexa from third parties, the company confirmed to TechCrunch.

Once enabled, you’ll be able to say things like “Alexa, pause Roku,” or “Alexa, open Hulu on Roku,” or “Alexa, find comedies on Roku,” and more. The key will be starting the command with “Alexa,” as usual, then specifying that “Roku” is where the action should take place (e.g. “on Roku”).

One change with the launch of voice support via Alexa is that the commands are a bit more natural, in some cases. Whereas Google Assistant required users to say “Hey Google, pause on Roku,” the company today says the same command for Alexa users is “Alexa, pause Roku.” That’s a lot easier to remember and say. However, most of the other commands are fairly consistent between the two platforms.

“Consumers often have multiple voice ecosystems in their homes,” said Ilya Asnis, senior vice president of Roku OS at Roku, in a statement about the launch. “By allowing our customers to choose Alexa, in addition to Roku voice search and controls, and other popular voice assistants, we are strengthening the value Roku offers as a neutral platform in home entertainment.”


Amazon stops selling stick-on Dash buttons


Amazon has confirmed it has retired physical stick-on Dash buttons from sale — in favor of virtual alternatives that let Prime Members tap a digital button to reorder a staple product.

It also points to its Dash Replenishment service — which offers an API for device makers wanting to build internet-connected appliances that can automatically reorder the products they need to function, be it cat food, batteries or washing powder — as another reason why physical Dash buttons, which launched back in 2015 (costing $5 a pop), are past their sell-by date.

Amazon says “hundreds” of IoT devices capable of self-ordering on Amazon have been launched globally to date by brands including Beko, Epson, illy, Samsung and Whirlpool, to name a few.

So why press a physical button when a digital one will do? Or, indeed, why not do away with the need to push a button at all and just let your gadgets rack up your grocery bill all by themselves while you get on with the important business of consuming all the stuff they’re ordering?

You can see where Amazon wants to get to with its “so customers don’t have to think at all about restocking” line. Consumption that entirely removes the consumer’s decision-making process from the transactional loop is quite the capitalist wet dream. Though the company does need to be careful about consumer protection rules as it seeks to excise friction from the buying process.

The e-commerce behemoth also claims customers are “increasingly” using its Alexa voice assistant to reorder staples, such as via Alexa Shopping (Amazon calls it “hands-free shopping”), which lets people state a purchase intent and have the assistant suggest items to buy based on their Amazon order history.

Albeit, it offers no actual usage metrics for Alexa Shopping. So that’s meaningless PR.

A less flashy but perhaps more popular option than “hands-free shopping,” which Amazon also says has contributed to making physical Dash buttons redundant, is its Subscribe & Save program.

This “lets customers automatically receive their favorite items every month,” as Amazon puts it. It offers an added incentive of discounts that kick in if the user signs up to buy five or more products per month. But the mainstay of the sales pitch is convenience with Amazon touting time saved by subscribing to “essentials” — and time saved from compiling boring shopping lists once again means more time to consume the stuff being bought on Amazon…

In a statement about retiring physical Dash buttons from global sale on February 28, Amazon also confirmed it will continue to support existing Dash owners — presumably until their buttons wear down to the bare circuit board from repeat use.

“Existing Dash Button customers can continue to use their Dash Button devices,” it writes. “We look forward to continuing support for our customers’ shopping needs, including growing our Dash Replenishment product line-up and expanding availability of virtual Dash Buttons.”

So farewell then clunky Dash buttons. Another physical push-button bites the dust. Though plastic-y Dash buttons were quite unlike the classic iPhone home button — always seeming temporary and experimental rather than slick and coolly reassuring. Even so, the end of both buttons points to the need for tech businesses to tool up for the next wave of contextually savvy connected devices. More smarts, and more controllable smarts, are key.

Amazon’s statement about “shifting focus” for Dash does not mention potential legal risks around the buttons related to consumer rights challenges — but that’s another angle here.

In January a court in Germany ruled Dash buttons breached local e-commerce rules, following a challenge by a regional consumer watchdog that raised concerns about T&Cs that allow Amazon to substitute a product of a higher price or even a different product entirely than what the consumer had originally selected. The watchdog argued consumers should be provided with more information about price and product before taking the order — and the judges agreed — though Amazon said it would seek to appeal.

While it’s not clear whether or not that legal challenge contributed to Amazon’s decision to shutter Dash, it’s clear that virtual Dash buttons offer more opportunities for displaying additional information prior to a purchase than a screen-less physical Dash button. They are also more easily adaptable to any tightening legal requirements across different markets.

The demise of the physical Dash was reported earlier by CNET.
