
The damage of defaults


Apple popped out a new pair of AirPods this week. The design looks exactly like the old pair of AirPods. Which means I’m never going to use them because Apple’s bulbous earbuds don’t fit my ears. Think square peg, round hole.

The only way I could rock AirPods would be to walk around with hands clamped to the sides of my head to stop them from falling out. Which might make a nice cut in a glossy Apple ad for the gizmo — suggesting a feeling of closeness to the music, such that you can’t help but cup; a suggestive visual metaphor for the aural intimacy Apple surely wants its technology to communicate.

But the reality of trying to use earbuds that don’t fit is not that at all. It’s just shit. They fall out at the slightest movement so you either sit and never turn your head or, yes, hold them in with your hands. Oh hai, hands-not-so-free-pods!

The obvious point here is that one size does not fit all — howsoever much Apple’s Jony Ive and his softly spoken design team believe they have devised a universal earbud that pops snugly in every ear and just works. Sorry, nope!

Hi @tim_cook, I fixed that sketch for you. Introducing #InPods — because one size doesn’t fit all 😉 pic.twitter.com/jubagMnwjt

— Natasha (@riptari) March 20, 2019

A proportion of iOS users — perhaps other petite women like me, or indeed men with less capacious ear holes — are simply being removed from Apple’s sales equation where earbuds are concerned. Apple is pretending we don’t exist.

Sure we can just buy another brand of more appropriately sized earbuds. The in-ear, noise-canceling kind are my preference. Apple does not make ‘InPods’. But that’s not a huge deal. Well, not yet.

It’s true, the consumer tech giant did also delete the headphone jack from iPhones. Thereby deprecating my existing pair of wired in-ear headphones (if I ever upgrade to a 3.5mm-jack-less iPhone). But I could just shell out for Bluetooth wireless in-ear buds that fit my shell-like ears and carry on as normal.

Universal in-ear headphones have existed for years, of course. A delightful design concept. You get a selection of different sized rubber caps shipped with the product and choose the size that best fits.

Unfortunately Apple isn’t in the ‘InPods’ business though. Possibly for aesthetic reasons. Most likely because — and there’s more than a little irony here — an in-ear design wouldn’t be naturally roomy enough to fit all the stuff Siri needs to, y’know, fake intelligence.

Which means people like me with small ears are being passed over in favor of Apple’s voice assistant. So that’s AI: 1, non-‘standard’-sized human: 0. Which also, unsurprisingly, feels like shit.

I say ‘yet’ because if voice computing does become the next major computing interaction paradigm, as some believe — given how Internet connectivity is set to get baked into everything (and sticking screens everywhere would be a visual and usability nightmare; albeit microphones everywhere is a privacy nightmare… ) — then the minority of humans with petite earholes will be at a disadvantage vs those who can just pop in their smart, sensor-packed earbud and get on with telling their Internet-enabled surroundings to do their bidding.

Will parents of future generations of designer babies select for adequately capacious earholes so their child can pop an AI in? Let’s hope not.

We’re also not at the voice computing singularity yet. Outside the usual tech bubbles it remains a bit of a novel gimmick. Amazon has drummed up some interest with in-home smart speakers housing its own voice AI Alexa (a brand choice that has, incidentally, caused a verbal headache for actual humans called Alexa). Though its Echo smart speakers appear to mostly get used as expensive weather checkers and egg timers. Or else for playing music — a function that a standard speaker or smartphone will happily perform.

Certainly a voice AI is not something you need with you 24/7 yet. Prodding at a touchscreen remains the standard way of tapping into the power and convenience of mobile computing for the majority of consumers in developed markets.

The thing is, though, it still grates to be ignored. To be told — even indirectly — by one of the world’s wealthiest consumer technology companies that it doesn’t believe your ears exist.

Or, well, that it’s weighed up the sales calculations and decided it’s okay to drop a petite-holed minority on the cutting room floor. So that’s ‘ear meet AirPod’. Not ‘AirPod meet ear’ then.

But the underlying issue is much bigger than Apple’s (in my case) oversized earbuds. Its latest shiny set of AirPods are just an ill-fitting reminder of how many technology defaults simply don’t ‘fit’ the world as claimed.

Because if cash-rich Apple’s okay with promoting a universal default (that isn’t), think of all the less well resourced technology firms chasing scale for other single-sized, ill-fitting solutions. And all the problems flowing from attempts to mash ill-mapped technology onto society at large.

When it comes to wrong-sized physical kit I’ve had similar issues with standard office computing equipment and furniture. Products that seem — surprise, surprise! — to have been default designed with a 6ft strapping guy in mind. Keyboards so long they end up gifting the smaller user RSI. Office chairs that deliver chronic back-pain as a service. Chunky mice that quickly rack the hand with pain. (Apple is a historical offender there too I’m afraid.)

The fix for such ergonomic design failures is simply not to use the kit. To find a better-sized (often DIY) alternative that does ‘fit’.

But a DIY fix may not be an option when the discrepancy is embedded at the software level — and where a system is being applied to you, rather than you, the human, wanting to augment yourself with a bit of tech, such as a pair of smart earbuds.

With software, embedded flaws and system design failures may also be harder to spot because it’s not necessarily immediately obvious there’s a problem. Oftentimes algorithmic bias isn’t visible until damage has been done.

And there’s no shortage of stories already about how software defaults configured for a biased median have ended up causing real-world harm. (See for example: ProPublica’s analysis of the COMPAS recidivism tool — software it found incorrectly judging black defendants more likely to offend than white. So software amplifying existing racial prejudice.)

Of course AI makes this problem so much worse.

Which is why the emphasis must be on catching bias in the datasets — before there is a chance for prejudice or bias to be ‘systematized’ and get baked into algorithms that can do damage at scale.

The algorithms must also be explainable. And outcomes auditable. Transparency as disinfectant; not secret black boxes stuffed with unknowable code.
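To make the auditing idea concrete, here is a minimal sketch — in Python, on synthetic data, and emphatically not the COMPAS methodology itself — of the kind of disparity check an outcome audit involves: comparing a classifier’s false positive rate across groups.

```python
# A minimal sketch of auditing a binary classifier for disparate error
# rates across groups -- the kind of check ProPublica ran on COMPAS.
# All data here is synthetic; the metric is the false positive rate gap.

def false_positive_rate(y_true, y_pred):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

def fpr_gap(y_true, y_pred, group):
    """Per-group FPRs, plus the largest pairwise difference between them."""
    rates = {}
    for g in set(group):
        idx = [i for i, gg in enumerate(group) if gg == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Synthetic example: group "b" gets flagged more often despite
# identical true outcomes to group "a".
y_true = [0, 0, 1, 0, 0, 0, 1, 0]
y_pred = [0, 1, 1, 0, 1, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, rates = fpr_gap(y_true, y_pred, group)
print(f"gap={gap:.3f}")  # gap=0.333
```

A real audit would, of course, use far more data, confidence intervals and several fairness metrics at once — but the core move is exactly this: slice errors by group and look at the gap.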

Doing all this requires huge up-front thought and effort on system design, and an even bigger change of attitude. It also needs massive, massive attention to diversity. An industry-wide championing of humanity’s multifaceted and multi-sized reality — and to making sure that’s reflected in both data and design choices (and therefore the teams doing the design and dev work).

You could say what’s needed is a recognition there’s never, ever a one-size-fits-all plug.

Indeed, that all algorithmic ‘solutions’ are abstractions that make compromises on accuracy and utility. And that those trade-offs can become viciously cutting knives that exclude, deny, disadvantage, delete and damage people at scale.

Expensive earbuds that won’t stay put is just a handy visual metaphor.

And while discussion about the risks and challenges of algorithmic bias has stepped up in recent years, as AI technologies have proliferated — with mainstream tech conferences actively debating how to “democratize AI” and bake diversity and ethics into system design via a development focus on principles like transparency, explainability, accountability and fairness — the industry has not even begun to fix its diversity problem.

It’s barely moved the needle on diversity. And its products continue to reflect that fundamental flaw.

Stanford just launched their Institute for Human-Centered Artificial Intelligence (@StanfordHAI) with great fanfare. The mission: “The creators and designers of AI must be broadly representative of humanity.”

121 faculty members listed.

Not a single faculty member is Black. pic.twitter.com/znCU6zAxui

— Chad Loder ❁ (@chadloder) March 21, 2019

Many — if not most — of the tech industry’s problems can be traced back to the fact that inadequately diverse teams are chasing scale while lacking the perspective to realize their system design is repurposing human harm as a de facto performance measure. (Although ‘lack of perspective’ is the charitable interpretation in certain cases; moral vacuum may be closer to the mark.)

As WWW creator Sir Tim Berners-Lee has pointed out, system design is now society design. That means engineers, coders, AI technologists are all working at the frontline of ethics. The design choices they make have the potential to impact, influence and shape the lives of millions and even billions of people.

And when you’re designing society a median mindset and limited perspective cannot ever be an acceptable foundation. It’s also a recipe for product failure down the line.

The current backlash against big tech shows that the stakes and the damage are very real when poorly designed technologies get dumped thoughtlessly on people.

Life is messy and complex. People won’t fit a platform that oversimplifies and overlooks. And if your excuse for scaling harm is ‘we just didn’t think of that’ you’ve failed at your job and should really be headed out the door.

Because the consequences for being excluded by flawed system design are also scaling and stepping up as platforms proliferate and more life-impacting decisions get automated. Harm is being squared. Even as the underlying industry drum hasn’t skipped a beat in its prediction that everything will be digitized.

Which means that horribly biased parole systems are just the tip of the ethical iceberg. Think of healthcare, social welfare, law enforcement, education, recruitment, transportation, construction, urban environments, farming, the military. The list of what will be digitized — and of manual or human-overseen processes that will get systematized and automated — goes on.

Software — runs the industry mantra — is eating the world. That means badly designed technology products will harm more and more people.

But responsibility for sociotechnical misfit can’t just be scaled away as so much ‘collateral damage’.

So while an ‘elite’ design team led by a famous white guy might be able to craft a pleasingly curved earbud, such an approach cannot and does not automagically translate into AirPods with perfect, universal fit.

It’s someone’s standard. It’s certainly not mine.

We can posit that a more diverse Apple design team might have been able to rethink the AirPod design so as not to exclude those with smaller ears. Or make a case to convince the powers that be in Cupertino to add another size choice. We can but speculate.

What’s clear is the future of technology design can’t be so stubborn.

It must be radically inclusive and incredibly sensitive. Human-centric. Not locked to damaging defaults in its haste to impose a limited set of ideas.

Above all, it needs a listening ear on the world.

Indifference to difference and a blindspot for diversity will find no future here.

Powered by WPeMatico

Over a quarter of US adults now own a smart speaker, typically an Amazon Echo


U.S. smart speaker owners grew 40 percent over 2018 to now reach 66.4 million — or 26.2 percent of the U.S. adult population — according to a new report from Voicebot.ai and Voicify released this week, which detailed adoption patterns and device market share. The report also reconfirmed Amazon Echo’s lead, noting the Alexa-powered smart speaker grew to a 61 percent market share by the end of last year — well above Google Home’s 24 percent share.
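Those headline figures are easy to sanity-check. A quick sketch — my back-of-the-envelope arithmetic, not numbers from the report:

```python
# Back-of-the-envelope check on the report's figures (my arithmetic,
# not the report's): 66.4M owners at 26.2% of adults implies roughly
# 253 million U.S. adults, and 40 percent year-over-year growth implies
# roughly 47 million owners a year earlier.
owners_m = 66.4                        # million owners, end of 2018
share = 0.262                          # fraction of U.S. adults
implied_adults_m = owners_m / share    # implied adult population
prior_year_owners_m = owners_m / 1.40  # owners before 40% growth
print(round(implied_adults_m, 1), round(prior_year_owners_m, 1))  # 253.4 47.4
```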

These findings fall roughly in line with other analysts’ reports on smart speaker market share in the U.S. However, because of varying methodology, they don’t all come back with the exact same numbers.

For example, in December 2018, eMarketer reported the Echo had accounted for nearly 67 percent of all U.S. smart speaker sales in 2018. Meanwhile, CIRP last month put Echo further ahead, with a 70 percent share of the installed base in the U.S.

Though the percentages differ, the overall trend is that Amazon Echo remains the smart speaker to beat.

While on the face of things this appears to be great news for Amazon, Voicebot’s report did note that Google Home has been closing the gap with Echo in recent months.

Amazon Echo’s share dropped nearly 11 percent over 2018, while Google Home made up for just over half that decline with a 5.5 percent gain, and “other” devices accounted for the rest. This latter category, which includes devices like Apple’s HomePod and Sonos One, grew last year to now account for 15 percent of the market.

That said, the Sonos One has Alexa built-in, so it may not be as bad for Amazon as the numbers alone seem to indicate. After all, Amazon is selling its Echo devices at cost or even a loss to snag more market share. The real value over time will be in controlling the ecosystem.

The growth in smart speakers is part of a larger trend toward voice computing and smart voice assistants — like Siri, Bixby and Google Assistant — which are often accessed on smartphones.

A related report from Juniper Research last month estimated there will be 8 billion digital voice assistants in use by 2023, up from the 2.5 billion in use at the end of 2018. This is due to the increased use of smartphone assistants as well as the smart speaker trend, the firm said.
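That projection implies steep compound growth. A quick sketch — the rate here is my arithmetic, not a figure from Juniper:

```python
# Juniper's figures -- 2.5B assistants at end-2018 growing to 8B by
# 2023 -- imply a compound annual growth rate of roughly 26 percent.
start_b, end_b, years = 2.5, 8.0, 5
cagr = (end_b / start_b) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 26.2%
```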

Voicebot’s report also showed how being able to access voice assistants across multiple platforms was helping to boost usage numbers.

It found that smart speaker owners used their smartphone’s voice assistant more than those who didn’t have a smart speaker in their home. It seems consumers get used to being able to access their voice assistants across platforms — now that Siri has made the jump to speakers and Alexa to phones, for instance.

The full report is available on Voicebot.ai’s website here.


You can now ask Alexa to control your Roku devices


Roku this morning announced its devices will now be compatible with Amazon’s Alexa. Through a new Roku skill for Alexa, Roku owners will be able to control their devices in order to do things like launch a channel, play or pause a show, search for entertainment options and more. Roku TV owners will additionally be able to control various functions related to their television, like adjusting the volume, turning on and off the TV, switching inputs and changing channels if there is an over-the-air antenna attached.

The added support for Amazon Alexa will be available to devices running Roku OS 8.1 or higher, and will require that customers enable the new Roku skill, which will link their account to Amazon.

Roku has developed its own voice assistant designed specifically for its platform, which is available with a touch of a button on its voice remote as well as through optional accessories like its voice-powered wireless speakers, tabletop Roku Touch remote or TCL’s Roku-branded Smart Soundbar. However, it hasn’t ignored the needs of those who have invested in other voice platforms.

Already, Roku devices work with Google Assistant-powered devices, like Google Home and Google Home Mini, through a similar voice app launched last fall.

Support for the dominant voice platform — Amazon Alexa — was bound to be next. eMarketer said Amazon took two-thirds of smart speaker sales last year, and CIRP said Echo has a 70 percent U.S. market share.

The Roku app will work with any Alexa-enabled device, including the Amazon Echo, Echo Show, Echo Dot, Echo Spot and Echo Plus, as well as those powered by Alexa from third parties, the company confirmed to TechCrunch.

Once enabled, you’ll be able to say things like “Alexa, pause Roku,” or “Alexa, open Hulu on Roku,” or “Alexa, find comedies on Roku,” and more. The key will be starting the command with “Alexa,” as usual, then specifying that “Roku” is where the action should take place (e.g. “on Roku”).

One change with the launch of voice support via Alexa is that the commands are a bit more natural, in some cases. Whereas Google Assistant required users to say “Hey Google, pause on Roku,” the company today says the same command for Alexa users is “Alexa, pause Roku.” That’s a lot easier to remember and say. However, most of the other commands are fairly consistent between the two platforms.
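To illustrate the command pattern — purely a toy sketch, not how Alexa’s actual language understanding works — routing on the wake word and the trailing “Roku” target might look like:

```python
# Toy illustration of the command pattern described above: strip the
# wake word, then check whether the utterance targets Roku. This is a
# hypothetical sketch, not Amazon's real NLU pipeline.
import re

def route(utterance):
    m = re.match(r"alexa,?\s+(.*)", utterance.strip(), re.IGNORECASE)
    if not m:
        return None  # wake word missing; ignore the utterance
    command = m.group(1)
    if re.search(r"\broku\b", command, re.IGNORECASE):
        return ("roku", command)
    return ("default", command)

print(route("Alexa, open Hulu on Roku"))  # ('roku', 'open Hulu on Roku')
print(route("Alexa, pause Roku"))         # ('roku', 'pause Roku')
```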

“Consumers often have multiple voice ecosystems in their homes,” said Ilya Asnis, senior vice president of Roku OS at Roku, in a statement about the launch. “By allowing our customers to choose Alexa, in addition to Roku voice search and controls, and other popular voice assistants, we are strengthening the value Roku offers as a neutral platform in home entertainment.”


Amazon stops selling stick-on Dash buttons


Amazon has confirmed it has retired physical stick-on Dash buttons from sale — in favor of virtual alternatives that let Prime Members tap a digital button to reorder a staple product.

It also points to its Dash Replenishment service — which offers an API for device makers wanting to build internet-connected appliances that can automatically reorder the products they need to function, be it cat food, batteries or washing powder — as another reason why physical Dash buttons, which launched back in 2015 (costing $5 a pop), are past their sell-by date.
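To illustrate the pattern Dash Replenishment enables — this is a hedged, hypothetical sketch, not Amazon’s actual API; the `place_order` callable stands in for whatever call a real integration would make:

```python
# Sketch of the auto-reorder pattern a replenishment API enables: a
# connected appliance tracks its consumable level and triggers a reorder
# when it crosses a threshold. `place_order` is a stand-in, not a real
# Amazon API call.

def check_and_reorder(level, threshold, reorder_qty, place_order, pending=False):
    """Reorder when supply drops below threshold and no order is pending."""
    if level < threshold and not pending:
        place_order(reorder_qty)
        return True  # an order is now pending
    return pending

orders = []
pending = check_and_reorder(
    level=2, threshold=5, reorder_qty=10,
    place_order=orders.append,
)
print(pending, orders)  # True [10]
```

The `pending` flag is the interesting design detail: without it, a device polling its supply level would fire a fresh order on every check until the delivery arrived.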

Amazon says “hundreds” of IoT devices capable of self-ordering on Amazon have been launched globally to date by brands including Beko, Epson, illy, Samsung and Whirlpool, to name a few.

So why press a physical button when a digital one will do? Or, indeed, why not do away with the need to push a button at all and just let your gadgets rack up your grocery bill all by themselves while you get on with the important business of consuming all the stuff they’re ordering?

You can see where Amazon wants to get to with its “so customers don’t have to think at all about restocking” line. Consumption that entirely removes the consumer’s decision-making process from the transactional loop is quite the capitalist wet dream. Though the company does need to be careful about consumer protection rules as it seeks to excise friction from the buying process.

The e-commerce behemoth also claims customers are “increasingly” using its Alexa voice assistant to reorder staples, such as via the Alexa Shopping voice shopping app (Amazon calls it “hands-free shopping”) that lets people inform the machine about a purchase intent and it will suggest items to buy based on their Amazon order history.

Albeit, it offers no actual usage metrics for Alexa Shopping. So that’s meaningless PR.

A less flashy but perhaps more popular option than “hands-free shopping,” which Amazon also says has contributed to making physical Dash buttons redundant, is its Subscribe & Save program.

This “lets customers automatically receive their favorite items every month,” as Amazon puts it. It offers an added incentive of discounts that kick in if the user signs up to buy five or more products per month. But the mainstay of the sales pitch is convenience with Amazon touting time saved by subscribing to “essentials” — and time saved from compiling boring shopping lists once again means more time to consume the stuff being bought on Amazon…

In a statement about retiring physical Dash buttons from global sale on February 28, Amazon also confirmed it will continue to support existing Dash owners — presumably until their buttons wear down to the bare circuit board from repeat use.

“Existing Dash Button customers can continue to use their Dash Button devices,” it writes. “We look forward to continuing support for our customers’ shopping needs, including growing our Dash Replenishment product line-up and expanding availability of virtual Dash Buttons.”

So farewell then clunky Dash buttons. Another physical push-button bites the dust. Though plastic-y Dash buttons were quite unlike the classic iPhone home button — always seeming temporary and experimental rather than slick and coolly reassuring. Even so, the end of both buttons points to the need for tech businesses to tool up for the next wave of contextually savvy connected devices. More smarts, and more controllable smarts is key.

Amazon’s statement about “shifting focus” for Dash does not mention potential legal risks around the buttons related to consumer rights challenges — but that’s another angle here.

In January a court in Germany ruled Dash buttons breached local e-commerce rules, following a challenge by a regional consumer watchdog that raised concerns about T&Cs that allow Amazon to substitute a product of a higher price or even a different product entirely than what the consumer had originally selected. The watchdog argued consumers should be provided with more information about price and product before taking the order — and the judges agreed — though Amazon said it would seek to appeal.

While it’s not clear whether or not that legal challenge contributed to Amazon’s decision to shutter Dash, it’s clear that virtual Dash buttons offer more opportunities for displaying additional information prior to a purchase than a screen-less physical Dash button. They also are more easily adaptable to any tightening legal requirements across different markets.

The demise of the physical Dash was reported earlier by CNET.


Apple acquires talking Barbie voicetech startup PullString


Apple has just bought up the talent it needs to make talking toys a part of Siri, HomePod, and its voice strategy. Apple has acquired PullString, also known as ToyTalk, according to Axios’ Dan Primack and Ina Fried. TechCrunch has received confirmation of the acquisition from sources with knowledge of the deal. The startup makes voice experience design tools, artificial intelligence to power those experiences, and toys like talking Barbie and Thomas the Tank Engine, in partnership with Mattel. Founded in 2011 by former Pixar executives, PullString went on to raise $44 million.

Apple’s Siri is seen as lagging far behind Amazon Alexa and Google Assistant, not only in voice recognition and utility, but also in terms of developer ecosystem. Google and Amazon have built platforms to distribute skills from tons of voice app makers, including storytelling, quizzes, and other games for kids. If Apple wants to take a real shot at becoming the center of your connected living room with Siri and HomePod, it will need to play nice with the children who spend their time there. Buying PullString could jumpstart Apple’s in-house catalog of speech-activated toys for kids as well as beef up its tools for voice developers.

PullString did catch some flak for being a “child surveillance device” back in 2015, but countered by detailing the security built into its Hello Barbie product and saying it’d never been hacked to steal children’s voice recordings or other sensitive info. Privacy norms have changed since then, with so many people readily buying always-listening Echos and Google Homes.

In 2016 it rebranded as PullString with a focus on developer tools that allow for visually mapping out conversations and publishing finished products to the Google and Amazon platforms. Given SiriKit’s complexity and lack of features, PullString’s Converse platform could pave the way for a lot more developers to jump into building voice products for Apple’s devices.

We’ve reached out to Apple and PullString for more details about whether PullString and ToyTalk’s products will remain available.

The startup raised its cash from investors including Khosla Ventures, CRV, Greylock, First Round, and True Ventures, with a Series D in 2016 as its last raise that PitchBook says valued the startup at $160 million. While the voicetech space has since exploded, it can still be difficult for voice experience developers to earn money without accompanying physical products, and many enterprises still aren’t sure what to build with tools like those offered by PullString. That might have led the startup to see a brighter future with Apple, strengthening one of the most ubiquitous though also most detested voice assistants.


Amazon upgrades its Fire TV Stick with the new Alexa Voice Remote


Amazon is giving its Fire TV Stick an upgrade. The company announced today it will now ship the Fire TV Stick with the new version of the Alexa Voice Remote launched last fall. The remote allows users to control other devices besides their Fire TV, thanks to its support for both Bluetooth and multi-directional infrared. However, the upgraded remote won’t impact the Fire TV Stick’s price, which remains $39.99.

The new Alexa remote arrived alongside the $49.99 Fire TV Stick 4K in October. It’s capable of controlling the TV, soundbar and other AV equipment, and can do things like switch inputs or tune to a channel on your cable box. As a standalone purchase for older Amazon Fire TV devices, the remote was retailing yesterday for $29.99. But today, Amazon is slashing the price by 50 percent, it says.

The voice remote also includes the ability to speak to Alexa with the press of a button, which can help you find shows and movies, control smart home devices, get the news and weather, stream music and more.

Amazon notes the inclusion of the next-gen remote makes the Fire TV Stick the only streaming media player under $40 that includes a remote capable of controlling other AV equipment besides the TV. This could be a selling point for Fire TV Stick versus Roku, whose high-end voice remotes are focused on controlling power and volume on TVs, or its own Roku wireless speakers.

At CES this year, Amazon said its Fire TV platform as a whole had now topped 30 million active users, which seemed to put it just ahead of Roku’s 27 million. By swapping in a better remote with the flagship Fire TV Stick device, Amazon is looking to solidify its lead gained by steep discounts on its devices over Black Friday and the larger 2018 holiday shopping season.

The updated Fire TV Stick will also be the first to ship with Amazon’s just-launched, free streaming service IMDb Freedive included. Announced at CES, the service offers a range of free, ad-supported movies and TV shows — a challenge to its rival’s service, The Roku Channel. It will come to other Fire TV devices by way of a software update.

The Fire TV Stick with the new Alexa Voice Remote goes on pre-order today for $39.99 (or £39.99 in the U.K.), and will be available in a bundle with the Echo Dot for $69.98.


Pandora launches a personalized voice assistant on iOS and Android


Pandora today announced the launch of its own, in-app voice assistant that you can call up at any time by saying “Hey Pandora,” followed by a request to play the music or podcasts you want to hear. The feature will allow you to not only control music playback with commands to play a specific artist, album, radio or playlist, but will also be capable of delivering results customized to you when responding to vague commands or those related to activity or mood. For example, you’ll get personalized results for requests like “play something new,” “play more like this,” “play music for relaxing,” “play workout music,” “play something I like” and others.

The company reports strong adoption of its service on voice-activated speakers, like Amazon Echo devices, where now millions of listeners launch Pandora music by speaking — a trend that inspired the move to launch in-app voice control.

“Voice is just an expected new way that you engage with any app,” notes Pandora Chief Product Officer Chris Phillips. “On the mobile app, we’re doing more than just your typical request against the catalog… asking: ‘hey, Pandora,’ to search and play or pause or skip,” he says. “What we’re doing that we think is pretty special is we’re taking that voice utterance of what someone asks for, and we’re applying our personalized recommendations to the response,” Phillips explains.

That means when you ask Pandora to play you something new, the app will return a selection that won’t resemble everyone else’s music, but will rather be informed by your own listening habits and personal tastes.

The way that result is returned may also vary — for some, it could be a playlist, for others an album and for others, it could be just a new song, a personalized soundtrack or a radio station.

“Play something new” isn’t the only command that will yield a personalized response, Pandora says. It will also return personalized results for commands related to your mood or activity — like workout music, something to relax to, music for cooking and more.
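To illustrate the idea — a toy sketch with an invented catalog and listening history, not Pandora’s actual recommendation system — personalizing a vague command might look like:

```python
# Toy sketch of personalizing a vague command: the same request
# ("play something new") resolves differently per listener by
# consulting their play history. Catalog and history are invented.
from collections import Counter

CATALOG = {
    "rock": ["New Rock A", "New Rock B"],
    "jazz": ["New Jazz A"],
    "pop":  ["New Pop A"],
}

def play_something_new(history):
    """Pick an unheard track from the listener's most-played genre."""
    top_genre, _ = Counter(g for g, _ in history).most_common(1)[0]
    heard = {t for _, t in history}
    fresh = [t for t in CATALOG[top_genre] if t not in heard]
    return fresh[0] if fresh else None

history = [("jazz", "Old Jazz"), ("jazz", "Older Jazz"), ("rock", "Old Rock")]
print(play_something_new(history))  # New Jazz A
```

The real system would obviously blend far richer signals than a genre count — but the shape of the problem is the same: an underspecified utterance plus a per-user model, rather than a single global answer.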

For podcasts, it can dig up episodes with a specific guest, play shows by title, or even deliver show recommendations, among other things.

Voice commands can be used in lieu of pressing buttons, too, in order to do things like add songs to a playlist or giving a song you like a thumbs up, for instance.

The new feature, called “Voice Mode,” taps into Pandora’s machine learning and data science capabilities, which is an active battleground between music services.

Spotify, for example, is well known for its deep personalization with its Discover Weekly and other custom playlists, like its Daily Mixes. But its own “voice mode” option is only available for its Premium users, according to a FAQ on the company’s website.

Pandora, meanwhile, is planning to roll out Voice Mode to all users — both free and paid.

For free users, the feature will work in conjunction with an existing ad product that allows users to opt in to watch a video in order to gain temporary access to Pandora’s on-demand service.

While this option is not live at launch, the plan is to allow any user to use the “Hey Pandora” command, then redirect free users with a request to play music on demand to instead play the opt-in ad first.

Pandora Voice Mode will launch today, January 15, to a percentage of the iOS and Android user base — around a million listeners. The company will track the speed, accuracy and performance of its results before rolling it out more broadly over the next couple of months.

Users with a Google Home device can also cast from their Pandora app to their smart speaker, and a similar feature will arrive on Alexa devices soon, the company believes.

Pandora works with Siri Shortcuts, too. That means you can now use voice to launch the app itself, then play a personalized selection of music without having to touch your phone at all.

Voice Mode will be available in the Pandora app via the search bar next to the magnifying glass.

Powered by WPeMatico

China’s Baidu says its answer to Alexa is now on 200M devices

Posted by | Alexa, alibaba, alibaba group, Android, apollo, artificial intelligence, Asia, AutoNavi, Baidu, China, Ford, Microsoft, search engine, smart home devices, smartphones, Transportation, voice assistant, volvo, Weibo | No Comments

A Chinese voice assistant has been rapidly gaining ground in recent months. DuerOS, Baidu’s answer to Amazon’s Alexa, reached over 200 million devices, China’s top search engine announced on its Weibo official account last Friday.

To put that number into context, more than 100 million devices pre-installed with Alexa have been sold, Amazon recently said. Google just announced it expected Assistant to be on 1 billion devices by the end of this month.

Voice interaction technology is part of Baidu's strategy to reposition itself from a heavy reliance on its search business toward artificial intelligence. The grand plan took a hit when the world-renowned scientist Lu Qi stepped down as Baidu's chief operating officer, though the segment appears to have scored healthy growth lately, with DuerOS more than doubling from a base of 90 million installs since last June.

When it comes to how many devices actually use DuerOS regularly, the number is much more modest: 35 million machines a month, per the figure Baidu's general manager for smart home devices announced last November.

Like Alexa, which has made its way into both Amazon-built Echo speakers and OEMs, DuerOS also takes a platform play to power both Baidu-built and third-party devices.

Interestingly, DuerOS has achieved all that with fewer capabilities and a narrower partnership network than its American counterpart. By the end of 2018, Alexa could perform more than 56,000 skills. Devices from over 4,500 brands can now be controlled with Alexa, says Amazon. By comparison, Baidu’s voice assistant had 800 different skills, its chief architect Zhong Lei revealed at the company’s November event. It was compatible with 85 brands at the time.

This may well imply that DuerOS's allies include heavy-hitters with outsize user bases. Baidu itself could be one, as it owns one of China's biggest navigation apps, second only to Alibaba's AutoNavi in number of installs, according to data from iResearch. Baidu said in October that at least 140 million people had activated the voice assistant of its Maps service.

Furthermore, Baidu speakers have managed to crack a previously duopolistic market. A report from Canalys shows that Baidu clocked in a skyrocketing 711 percent quarter-to-quarter growth to become China’s third-biggest vendor of smart speakers during Q3 last year. Top players Alibaba and Xiaomi, on the other hand, both had a sluggish season.

While Baidu deploys DuerOS to get home appliances talking, it has doubled down on smart vehicles with Apollo. The system, which the company calls the Android for autonomous driving, counted 130 OEMs, parts suppliers and other partners as of last October. It has attracted global automakers Volvo and Ford, which want a foothold in China's self-driving movement. Outside China, Apollo has looked to Microsoft Azure Cloud as it hunts for international partnerships.

Baidu has yet to prove commercial success for its young AI segment, but its conversational data trove holds potential for a lucrative future. Baidu became China’s top advertising business in part by harnessing what people search on its engine. Down the road, its AI-focused incarnation could apply the same data-crunching process to what people say to their machines.


Google Assistant iOS update lets you say ’Hey Siri, OK Google’

Posted by | Apps, Google, Google Assistant, Mobile, TC, voice assistant | No Comments

Apple probably didn't intend to let competitors take advantage of Siri Shortcuts this way, but you can now launch Google Assistant on your iPhone by saying "Hey Siri, OK Google."

But don’t expect a flawless experience — it takes multiple steps. After updating the Google Assistant app on iOS, you need to open the app to set up a new Siri Shortcut for Google Assistant.

As the name suggests, Siri Shortcuts lets you record custom phrases to launch specific apps or features. For instance, you can create Siri Shortcuts to play your favorite playlist, launch directions to a specific place, text someone and more. If you want to chain multiple actions together, you can even build elaborate multi-step workflows using Apple's Shortcuts app.

By default, Google suggests the phrase "OK Google." You can choose something else, such as a shorter phrase or "Hey Google." After setting that up, you can summon Siri and use this custom phrase to launch Google's app.

You may need to unlock your iPhone or iPad to let iOS open the app. The Google Assistant app then automatically listens for your query, so you'll need to pause and wait for the app to appear before speaking.

This is quite a cumbersome workaround and I'm not sure many people are going to use it. But the fact that "Hey Siri, OK Google" exists is still very funny.

On another note, Google Assistant is still the worst when it comes to your privacy. The app pushes you to enable “web & app activity,” the infamous all-encompassing privacy destroyer. If you activate that setting, Google will collect your search history, your Chrome browsing history, your location, your credit card purchases and more.

It’s a great example of dark pattern design. If you haven’t enabled web & app activity, there’s a flashy blue banner at the bottom of the app that tells you that you can “unlock more Assistant features.”

When you tap it, you get a cute little animated drawing to distract you from the text. There's only one button, which says "More." If you tap it, the "More" button becomes "Turn on" — many people are not even going to see "No thanks" on the bottom left.

It’s a classic persuasion method. If somebody asks you multiple questions and you say yes every time, you’ll tend to say yes to the last question even if you don’t agree with it. You tapped on “Get started” and “More” so you want to tap on the same button one more time. If you say no, Google asks you one more time if you’re 100 percent sure.

So make sure you read everything and you understand that you’re making a privacy trade-off by using Google Assistant.


Mobvoi launches new $200 smartwatch and $130 AirPods alternative

Posted by | Android, Apple, artificial intelligence, Asia, Assistant, China, computing, Gadgets, Google, indiegogo, Kickstarter, mobvoi, Qualcomm, smartwatches, TC, voice assistant, wearable devices | No Comments

Chinese AI company Mobvoi has consistently been one of the best also-rans in the smartwatch game, which remains dominated by Apple. Today, it launched a sequel to its 2016 TicWatch, which was a viral hit raising over $2 million on Kickstarter, and it unveiled a cheaper take on Apple’s AirPods.

The new TicWatch C2 was outed at a London event and is priced at $199.99. Unlike its predecessor, it has shifted from Mobvoi’s own OS to Google’s Wear OS. That isn’t a huge surprise, though, since Mobvoi’s newer budget watches and ‘pro’ watch have both already made that jump.

The C2 — which stands for classic 2 — packs NFC, Bluetooth and a voice assistant. It comes in black, platinum and rose gold. The latter color option is thinner, so presumably it is designed for female wrists.

However, there’s a compromise: the watch isn’t shipping with Qualcomm’s newest Snapdragon Wear 3100 chip. Mobvoi has instead picked the older 2100 processor. That might explain the price, but it means newer Wear OS watches shipping in the coming months will offer better performance, particularly around battery life. As it stands, the TicWatch C2 claims two days of battery life, but the processor should be a consideration for would-be buyers.

Mobvoi also outed TicPods Free, its take on Apple’s wireless AirPods. They are priced at $129.99 and available in red, white and blue.

The earbuds already raised over $2.8 million from Indiegogo — Mobvoi typically uses crowdfunding to gather feedback and assess customer interest — and early reviews have been positive.

They work on Android and iOS and include support for Alexa and Google Assistant. They also include gesture-based controls beyond the Apple-style taps for skipping music, etc. Battery life is estimated at four hours of listening time per charge, or 18 hours in total with the case, which doubles as a charger.

The TicPods are available to buy online now. The TicWatch C2 is up for pre-sale ahead of a “wide” launch that’s planned for December 6.

Mobvoi specializes in AI and counts Google among its investors. It also has a joint venture with VW that is focused on bringing AI into the automotive industry. In China it is best known for AI services, but globally, in the consumer space, it also offers a Google Assistant speaker called TicHome Mini.
