
Where is voice tech going?

Mark Persaud
Contributor

Mark Persaud is digital product manager and practice lead at Moonshot by Pactera, a digital innovation company that leads global clients through the next era of digital products with a heavy emphasis on artificial intelligence, data and continuous software delivery.

2020 has been anything but normal. For businesses and brands. For innovation. For people.

The trajectories of business growth strategies, travel plans and lives have been drastically altered by the COVID-19 pandemic, a global economic downturn with supply chain and market issues, and the fight for equality driven by the Black Lives Matter movement — all on top of what already complicated lives and businesses.

One of the biggest stories in emerging technology is the growth of different types of voice assistants:

  • Niche assistants such as Aider that provide back-office support.
  • Branded in-house assistants such as those offered by BBC and Snapchat.
  • White-label solutions such as Houndify that provide lots of capabilities and configurable tool sets.

With so many assistants proliferating globally, voice will become a commodity like a website or an app. And that’s not a bad thing — at least in the name of progress. It will soon (read: over the next couple years) become table stakes for a business to have voice as an interaction channel for a lovable experience that users expect. Consider that feeling you get when you realize a business doesn’t have a website: It makes you question its validity and reputation for quality. Voice isn’t quite there yet, but it’s moving in that direction.

Voice assistant adoption and usage are still on the rise

Adoption of any new technology is key. Distribution is often a key inhibitor, but that has not been the case with voice. Apple, Google and Baidu have reported hundreds of millions of devices using voice, and Amazon has 200 million users. Amazon has a slightly harder job since it is not in the smartphone market, which gives Apple and Google much broader distribution for their assistants.

Image Credits: Mark Persaud

But are people using the devices? Google said recently there are 500 million monthly active users of Google Assistant. Not far behind is Apple, with 375 million active users. Large numbers of people are using voice assistants, not just owning them. That’s a sign of technology gaining momentum — the technology is at a price point and within digital and personal ecosystems that make it right for user adoption. The pandemic has only amplified that use, as Edison reported between March and April — a peak time for sheltering in place across the U.S.


Apple unveils iOS 14 and macOS Big Sur features for India, China and other international markets


Apple will roll out a range of new features and improvements that are aimed at users in India, China and other international markets with its yearly updates to iOS, iPadOS, and macOS operating systems, it unveiled today.

iOS 14, which is rolling out to developers today and will reach general users later this year, introduces new bilingual dictionaries to support French and German; Indonesian and English; Japanese and Simplified Chinese; and Polish and English. For its users in China, one of Apple’s biggest overseas markets, the iPhone-maker said the new operating system will introduce support for the Wubi keyboard.

For users in India, Apple is adding 20 new document fonts and upgrading 18 existing fonts with “more weights and italics” to give people greater choice. For those living in the world’s second largest internet market, the Mail app now supports email addresses in Indian script.

Apple said it will also deliver a range of additional features for India, building on the big momentum it kickstarted last year.

Messages now features corresponding full-screen effects when users send greetings such as “Happy Holi” in one of the 23 Indian local languages.

More interestingly, iOS 14 will include smart downloads, which will allow users in India to download Indian Siri voices and software updates as well as download and stream Apple TV+ shows over cellular networks — a feature that is not available elsewhere in the world.

The feature further addresses the patchy networks that are prevalent in India — despite major improvements in recent years. Last year, Apple rolled out a feature for users in India that let them set an optimized time of day in on-demand streaming apps such as Hotstar and Netflix for downloading videos.

The new improvements further show Apple’s growing focus on India, the world’s second largest smartphone market. Apple chief executive Tim Cook said earlier this year that the company will launch its online store in the country later this year, and open its first physical store next year. A source familiar with the matter told TechCrunch last month that the global pandemic had not affected the plan.

iOS 14 will also allow users in Ireland and Norway to utilize the autocorrection feature as the new update adds support for Irish Gaelic and Norwegian Nynorsk. And there’s also a redesigned Kana keyboard for Japan, which will enable users there to type numbers with repeated digits more easily on the redesigned Numbers and Symbols plane.

All the aforementioned features — except email addresses in Indian script in Mail and smart downloads for users in India — will also ship with iPadOS 14. And the aforementioned new bilingual dictionaries, new fonts for India, and localized messages are coming to macOS Big Sur.

Additionally, Apple says it has enhanced predictive input for Chinese and Japanese on the desktop operating system, resulting in more accurate and contextual predictions.


Apple said to be working on modular, high-end, noise-cancelling over-ear headphones


Apple is said to be developing its own competitors to popular over-ear noise-cancelling headphones like those made by Bose and Sony, Bloomberg reports, with technology on board similar to that used in the AirPods and AirPods Pro lines. These headphones would also feature a design with interchangeable parts, allowing some modification with customizable accessories for specific uses like workouts and long-term wear.

The prototype designs of the new headphones, which could be released sometime later this year (though timing is clearly up in the air as a result of the ongoing COVID-19 crisis, and Apple’s general tendency to move things around depending on other factors), are said by Bloomberg to feature a “retro look,” with oval ear cups that connect directly to thin arms extending to the headband. The swappable parts include the ear pads and headband cushion, both of which are said to attach to the headphone frame using magnetic connectors.

These will support Siri on board, along with active noise cancellation and touch controls, but most importantly for iOS and Mac users, they’ll also feature the simple cross-device connection found on AirPods and some of Apple’s Beats headphones.

Apple has already released Beats over- and on-ear headphone models with AirPods-like features, including cross-connectivity and onboard noise cancellation. The Bloomberg report doesn’t seem to indicate these new models would be Beats-branded, however, and their customization options would also be new relative to Apple’s existing lineup.

Bloomberg also previously reported that Apple was working on a smaller HomePod speaker as part of its forthcoming product lineup, and a new FCC filing made public this week could indicate the impending release of a successor to its Powerbeats Pro fully wireless in-ear sport headphones.


WorldGaze uses smartphone cameras to help voice AIs cut to the chase


If you find voice assistants frustratingly dumb, you’re hardly alone. The much-hyped promise of AI-driven vocal convenience very quickly falls through the cracks of robotic pedantry.

A smart AI that has to come back again (and sometimes again) to ask for extra input to execute your request can seem especially dumb — when, for example, it doesn’t get that the most likely repair shop you’re asking about is not any one of them but the one you’re parked outside of right now.

Researchers at the Human-Computer Interaction Institute at Carnegie Mellon University, working with Gierad Laput, a machine learning engineer at Apple, have devised a demo software add-on for voice assistants that lets smartphone users boost the savvy of an on-device AI by giving it a helping hand — or rather a helping head.

The prototype system makes simultaneous use of a smartphone’s front and rear cameras to be able to locate the user’s head in physical space, and more specifically within the immediate surroundings — which are parsed to identify objects in the vicinity using computer vision technology.

The user is then able to use their head as a pointer to direct their gaze at whatever they’re talking about — i.e. “that garage” — wordlessly filling in contextual gaps in the AI’s understanding in a way the researchers contend is more natural.

So, instead of needing to talk like a robot in order to tap the utility of a voice AI, you can sound a bit more, well, human. Asking stuff like “Siri, when does that Starbucks close?” Or — in a retail setting — “are there other color options for that sofa?” Or asking for an instant price comparison between “this chair and that one.” Or for a lamp to be added to your wish-list.

In a home/office scenario, the system could also let the user remotely control a variety of devices within their field of vision — without needing to be hyper-specific about it. Instead they could just look toward the smart TV or thermostat and speak the required volume/temperature adjustment.

The team has put together a demo video (below) showing the prototype — which they’ve called WorldGaze — in action. “We use the iPhone’s front-facing camera to track the head in 3D, including its direction vector. Because the geometry of the front and back cameras are known, we can raycast the head vector into the world as seen by the rear-facing camera,” they explain in the video.

“This allows the user to intuitively define an object or region of interest using the head gaze. Voice assistants can then use this contextual information to make enquiries that are more precise and natural.”
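To make the mechanics concrete, here is a minimal Swift sketch of that pipeline on stock iPhone hardware, using ARKit’s simultaneous face and world tracking. It is purely illustrative and not the researchers’ code; the class name and the choice of raycast target are assumptions.

```swift
import ARKit
import simd

/// Illustrative sketch of the WorldGaze idea: track the head with the
/// front (TrueDepth) camera while world tracking runs on the rear camera,
/// then raycast the head's forward vector into the scene to find what the
/// user is looking at.
final class HeadGazeRaycaster: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // ARKit 3+ allows face tracking alongside rear-camera world tracking
        // on devices that support it.
        guard ARWorldTrackingConfiguration.supportsUserFaceTracking else { return }
        let config = ARWorldTrackingConfiguration()
        config.userFaceTrackingEnabled = true
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }

        // Head position and forward direction. The face anchor's local +Z
        // axis points outward from the face; flip the sign if your
        // convention differs.
        let transform = face.transform
        let origin = simd_make_float3(transform.columns.3)
        let forward = simd_normalize(simd_make_float3(transform.columns.2))

        // Raycast that vector into the world seen by the rear camera.
        let query = ARRaycastQuery(origin: origin,
                                   direction: forward,
                                   allowing: .estimatedPlane,
                                   alignment: .any)
        if let hit = session.raycast(query).first {
            // hit.worldTransform marks the region of interest a voice
            // assistant could use to resolve "that garage" or "this chair".
            print("Head gaze hit at \(hit.worldTransform.columns.3)")
        }
    }
}
```

The raycast hit gives a world-space region of interest that an assistant layer could then match against objects recognized by computer vision, which is the contextual signal WorldGaze feeds into the voice query.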

In a research paper presenting the prototype they also suggest it could be used to “help to socialize mobile AR experiences, currently typified by people walking down the street looking down at their devices.”

Asked to expand on this, CMU researcher Chris Harrison told TechCrunch: “People are always walking and looking down at their phones, which isn’t very social. They aren’t engaging with other people, or even looking at the beautiful world around them. With something like WorldGaze, people can look out into the world, but still ask questions to their smartphone. If I’m walking down the street, I can inquire and listen about restaurant reviews or add things to my shopping list without having to look down at my phone. But the phone still has all the smarts. I don’t have to buy something extra or special.”

In the paper they note there is a long body of research related to tracking users’ gaze for interactive purposes — but a key aim of their work here was to develop “a functional, real-time prototype, constraining ourselves to hardware found on commodity smartphones.” (Although the rear camera’s field of view is one potential limitation they discuss, including suggesting a partial workaround for any hardware that falls short.)

“Although WorldGaze could be launched as a standalone application, we believe it is more likely for WorldGaze to be integrated as a background service that wakes upon a voice assistant trigger (e.g., ‘Hey Siri’),” they also write. “Although opening both cameras and performing computer vision processing is energy consumptive, the duty cycle would be so low as to not significantly impact battery life of today’s smartphones. It may even be that only a single frame is needed from both cameras, after which they can turn back off (WorldGaze startup time is 7 sec). Using bench equipment, we estimated power consumption at ~0.1 mWh per inquiry.”
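For a sense of scale, here is a quick back-of-envelope check; the battery capacity is an assumed, typical value of roughly 11 Wh for a recent iPhone, not a figure from the paper.

```swift
// Rough arithmetic only. 0.1 mWh per inquiry is the paper's bench estimate;
// the ~11 Wh battery capacity is an assumed, typical value.
let batteryCapacityMilliWh = 11.0 * 1_000    // ≈ 11 Wh battery
let costPerInquiryMilliWh = 0.1
let inquiriesPerFullCharge = batteryCapacityMilliWh / costPerInquiryMilliWh
print(inquiriesPerFullCharge)  // ≈ 110,000 inquiries to drain a full charge
```

That ballpark is consistent with the authors’ claim that the low duty cycle would not meaningfully affect battery life.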

Of course there’s still something a bit awkward about a human holding a screen up in front of their face and talking to it — but Harrison confirms the software could work just as easily hands-free on a pair of smart spectacles.

“Both are possible,” he told us. “We choose to focus on smartphones simply because everyone has one (and WorldGaze could literally be a software update), while almost no one has AR glasses (yet). But the premise of using where you are looking to supercharge voice assistants applies to both.”

“Increasingly, AR glasses include sensors to track gaze location (e.g., Magic Leap, which uses it for focusing reasons), so in that case, one only needs outwards facing cameras,” he added.

Taking a further leap it’s possible to imagine such a system being combined with facial recognition technology — to allow a smart spec-wearer to quietly tip their head and ask “who’s that?” — assuming the necessary facial data was legally available in the AI’s memory banks.

Features such as “add to contacts” or “when did we last meet” could then be unlocked to augment a networking or socializing experience. Although, at this point, the privacy implications of unleashing such a system into the real world look rather more challenging than stitching together the engineering. (See, for example, Apple banning Clearview AI’s app for violating its rules.)

“There would have to be a level of security and permissions to go along with this, and it’s not something we are contemplating right now, but it’s an interesting (and potentially scary idea),” agrees Harrison when we ask about such a possibility.

The team was due to present the research at ACM CHI — but the conference was canceled due to the coronavirus.



Apple said to be planning fall iPhone refresh with iPad Pro-like design


Apple is readying a new iPhone to replace the iPhone 11 Pro this fall, Bloomberg reports, as well as follow-ups to the iPhone 11, a new smaller HomePod and a locator tag accessory. The top-end iPhone 11 Pro successors, at least, will have a new industrial design that more closely resembles the iPad Pro, with flat screens and sides instead of the current rounded-edge design, and they’ll also include the 3D LIDAR sensing system that Apple introduced with the most recent iPad Pro refresh in March.

The new high-end iPhone design will look more like the iPhone 5, Bloomberg says, with “flat stainless steel edges,” and the screen on the larger version will be slightly bigger than the 6.5-inch display found on the current iPhone 11 Pro Max. It could also feature a smaller version of the current “notch” camera cutout at the top of the display, the report claims.

Meanwhile, the LIDAR tracking system added to the rear camera array will be combined with processor speed and performance improvements, which should add up to significant improvements in augmented reality (AR) performance. The processor improvements are also designed to help boost on-device AI performance, the report notes.

These phones are still planned for a fall launch and release, though some of them could be available “multiple weeks later than normal,” Bloomberg claims, owing to disruptions caused by the ongoing coronavirus pandemic.

Other updates to the company’s product line on the horizon include a new smaller HomePod that’s around 50 percent smaller than the current version, with a planned launch sometime later this year. It’ll offer a price advantage versus the current model, and the report claims it’ll also come alongside Siri improvements and expansion of music streaming service support beyond Apple’s own. There’s also Apple Tags, which Apple itself has accidentally tipped as coming – a Tile-like Bluetooth location tracking accessory. Bloomberg says that could come out this year.

Finally, the report says there are updates to the MacBook Pro, Apple TV, lower-end iPads and iMac on the way, which is not surprising given Apple’s usual hardware update cadence. There’s no timeline for release on any of those, and it remains to be seen how the COVID-19 situation impacts these plans.


Report: Apple’s iOS 14 contains code that would let you sample apps before download


Apple has a feature under development that would allow iOS users to interact with a third-party app, even if the app isn’t yet installed on their device, according to a report from 9to5Mac. The report is based on information discovered in the iOS 14 code, which is not necessarily an indication of launch plans on Apple’s part — but rather an insight into some of Apple’s work in progress.

The feature is referenced internally as the “Clips” API — not to be confused with Apple’s video editing app of the same name. Based on 9to5Mac’s analysis, the new API works in conjunction with the QR Code reader, allowing a user to scan a code linked to an app, then interact with that app from a card that appears on their screen.

Described like this, the feature sounds like a marketing tool for app publishers, as it would offer a way for users to try out new apps before they download them to get a better feel for the experience than a banner ad would allow. In addition to offering some interactivity with an app before it’s downloaded, the card could also be used to redirect users to the App Store if they choose to download the full version. The card could also be used to open the app directly to the content, in the case of apps the user already had installed.
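The report doesn’t describe the API surface, so the following Swift sketch only illustrates the general hand-off pattern it implies: a scanned code resolves to a URL that iOS delivers to the app (or, hypothetically, to a lightweight card) through the standard universal-link path. None of this is confirmed to be how the rumored “Clips” API works, and the URL is made up.

```swift
import UIKit

// Illustrative only: standard universal-link handling, not the rumored
// "Clips" API. A scanned code resolves to a URL, which iOS hands to the
// app as an NSUserActivity.
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
              let url = userActivity.webpageURL else { return }
        // e.g. https://example.com/menu/table-12 (hypothetical) could route
        // straight to the relevant content, or on to the App Store if the
        // full app isn't installed.
        route(to: url)
    }

    private func route(to url: URL) {
        print("Opening content for \(url)")
    }
}
```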

Google’s Android, the report noted, offers a similar feature called “Slices,” launched in 2018. While Google had already introduced a way to interact with small pieces of an app in an experience called Instant Apps, the newer Slices feature was meant to drive usage of apps — like booking a ride or hotel room, for example, without having to first locate the app and launch it. On iOS, perhaps, these app “clips” could be pulled up by Siri or in Spotlight search — but that functionality wasn’t demonstrated by the code the report referenced today.

It’s unclear what Apple’s intentions are with the Clips API or how experimental its efforts are at this time.

However, the report found the feature was being tested with OpenTable, Yelp, DoorDash, Sony (the PS4 Second Screen app) and YouTube. This could indicate a plan to demo examples of the app’s functionality in a future reveal to developers.


Spotify gains Siri support on iOS 13, arrives on Apple TV


In a long-awaited move, Spotify announced this morning that its iOS 13 app would now offer Siri support and that its streaming music service would also become available on Apple TV. That means you can now request your favorite music or podcasts using Siri voice commands, by prefacing the command with “Hey Siri, play…,” followed by the audio you want, and concluding the command with “on Spotify.”

The Siri support had been spotted earlier while in beta testing, but the company hadn’t confirmed when it would be publicly available.

According to Spotify, the Siri support will also work over Apple AirPods, on CarPlay and via AirPlay on Apple HomePod.

In addition, the Spotify iOS app update will include support for iPhone’s new data-saver mode, which aids when bandwidth is an issue.

Spotify is also today launching on Apple TV, joining other Spotify apps for TV platforms, including Roku, Android TV, Samsung Tizen and Amazon Fire TV.

The app updates are still rolling out, so you may need to wait to take advantage of the Apple TV support and other new features.

The lack of Siri support for Spotify was not the streaming music service’s fault — it wasn’t until iOS 13 that such support even became an option. With the new mobile operating system launched in September, Apple finally opened up its SiriKit framework to third-party apps, allowing end-users to better control their apps using voice commands. That includes audio playback on music services like Spotify, as well as the ability to like and dislike tracks, skip or go to the next song and get track information.
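For a sense of what that looks like in practice, here is a hedged Swift sketch of an Intents extension adopting SiriKit’s media intents, the surface iOS 13 opened up. It is illustrative only, not Spotify’s implementation, and the catalog lookup and identifier are placeholders.

```swift
import Intents

/// Hedged sketch of the SiriKit media intents iOS 13 opened up to
/// third-party audio apps. The media lookup is a placeholder; a real app
/// would search its own catalog or service here.
final class PlayMediaIntentHandler: NSObject, INPlayMediaIntentHandling {

    func resolveMediaItems(for intent: INPlayMediaIntent,
                           with completion: @escaping ([INPlayMediaMediaItemResolutionResult]) -> Void) {
        // Map the spoken request ("play <something> on <app>") to a media
        // item the app knows how to play.
        let spoken = intent.mediaSearch?.mediaName ?? ""
        let item = INMediaItem(identifier: "playlist-123",   // hypothetical ID
                               title: spoken,
                               type: .playlist,
                               artwork: nil)
        completion(INPlayMediaMediaItemResolutionResult.successes(with: [item]))
    }

    func handle(intent: INPlayMediaIntent,
                completion: @escaping (INPlayMediaIntentResponse) -> Void) {
        // .handleInApp asks the system to launch the host app in the
        // background to start playback.
        completion(INPlayMediaIntentResponse(code: .handleInApp, userActivity: nil))
    }
}
```

The “on Spotify” suffix in the voice command is what routes the intent to a third-party handler like this instead of Apple Music, since iOS 13 does not let users set a default music app (a point the article returns to below).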

Pandora, Google Maps and Waze were among the first to adopt Siri integration when it became available in iOS 13 — a clear indication that some of Apple’s chief rivals have been ready and willing to launch Siri support as soon as it was possible.

Though the integration with Siri will be useful for end-users and beneficial to Spotify’s business, it may also weaken the streaming company’s antitrust claims against Apple.

Spotify has long stated that Apple engages in anti-competitive business practices when it comes to its app platform, which is designed to favor its own apps and services, like Apple Music, it says. Among its chief complaints was the inability of third-party apps to work with Siri, which gave Apple’s own apps a favored position. Spotify also strongly believes the 30% revenue share required by the App Store hampers its growth potential.

The streamer filed an antitrust complaint against Apple in the European Union in March. And now, U.S. lawmakers have reached out to Spotify to request information as a part of an antitrust probe here in the states, reports claim. 

Despite its new ability to integrate with Siri in iOS 13, Spotify could argue that it’s still not enough. Users will have to say “on Spotify” to take advantage of the new functionality, instead of being able to set their default music app to Spotify, which would be easier. It could also point out that the support is only available to iOS 13 devices, not the entire iOS market.

Along with the Apple-related news, Spotify today also announced support for Google Nest Home Max, Sonos Move, Sonos One SL, Samsung Galaxy Fold, and preinstallation on Michael Kors Access, Diesel and Emporio Armani Wear OS smartwatches.

 


The time is right for Apple to buy Sonos


It’s been a busy couple of months for smart speakers — Amazon released a bunch just this week, including updated versions of its existing Echo hardware and a new Echo Studio with premium sound. Sonos also introduced its first portable speaker with Bluetooth support, the Sonos Move, and in August launched its collaboration collection with Ikea. Meanwhile, Apple didn’t say anything about the HomePod at its latest big product event — an omission that makes it all the more obvious that the smart move would be for Apple to acquire a company that knows what it’s doing in this category: Sonos.

Highly aligned

From an outsider perspective, it’s hard to find two companies that seem more philosophically aligned than Sonos and Apple when it comes to product design and business model. Both are clearly focused on delivering premium hardware (at a price point that’s generally at the higher end of the mass market) and both use services to augment and complement the appeal of their hardware, even if Apple’s been shifting that mix a bit with a fast-growing services business.

Sonos, like Apple, clearly has a strong focus and deep investment in industrial design, and puts a lot of effort into a truly distinctive product look and feel that stands out from the crowd — and is instantly identifiable once you know what to look for. Even the company’s preference for a mostly black and white palette feels distinctly Apple — at least the Apple of the era before its recent renaissance of multi-color palettes for some of its more popular devices, including the iPhone.

From a technical perspective, Apple and Sonos seem keen to work together — and the results of their collaboration have been great for consumers who use both ecosystems. AirPlay 2 support is effectively standard on all modern Sonos hardware, and Sonos is essentially the default choice already for anyone looking to do AirPlay 2-based multiroom audio, thanks to the wide range of options available in different form factors and at different price points. Sonos and Apple also offer an Apple Music integration for Sonos’ controller app, and now you can use voice control via Alexa to play Apple Music, too.

Competitive moves

The main issue that an Apple-owned Sonos hasn’t made much sense before now, at least from Sonos’ perspective, is that the speaker maker has reaped the benefits of being a platform that plays nice with all the major streaming service providers and virtual assistants. Recent Sonos speakers offer both Amazon Alexa and Google Assistant support, for instance, and Sonos’ software has connections with virtually every major music and audio streaming service available.

What’s changed, especially in light of Amazon’s slew of announcements this week, is that competitors like Amazon are looking more like they want to own more of the business that currently falls within Sonos’ domain. Amazon’s Echo Studio is a new premium speaker that directly competes with Sonos in a way that previous Echos really haven’t, and the company has consistently been releasing better-sounding versions of its other, more affordable Echos. It’s also been rolling out more feature-rich multi-room audio features, including wireless surround support for home theater use — all things squarely in the Sonos wheelhouse.


For now, Sonos and Amazon seem to be comfortably in “frenemy” territory, but increasingly, it doesn’t seem like Amazon is content to leave Sonos its higher-end segment of the speaker hardware category. Amazon still probably will do whatever it can to maximize use of Alexa, on both its own and third-party devices, but it also seems intent on strengthening and expanding its own first-party device lineup, with speakers as low-hanging fruit.

Other competitors, including Google and Apple, don’t seem to have had as much success with the products they’ve positioned as direct competitors to Sonos, but the speaker-maker also faces perennial challenges from hi-fi and audio industry stalwarts, and seems likely to go up against newer device makers with audio ambitions and clear cost advantages, like Anker.

Missing ingredients/work to be done

Of course, there are some big challenges and potential red flags that stand in the way of Apple ever buying Sonos, or of that resulting union working out well for consumers. Sonos works so well because it’s service-agnostic, for instance, and the key to its success with recent products seems to also be integration with the smart home assistants that people seem to actually want to use most — namely Alexa and Google Assistant.

Under Apple ownership, it’s highly possible that Apple Music would at least get preferential treatment, if not become the lone streaming service on offer. It’s probable that Siri would replace Alexa and Assistant as the only virtual voice service available, and almost unthinkable that Apple would continue to support competing services if it did make this buy.

That said, there’s probably significant overlap between Apple and Sonos customers already, and as long as there was some service flexibility (in the same way there is for streaming competitors on iOS devices, including Spotify), then being locked into Siri probably wouldn’t sting as much. And it would serve to give Siri the foothold at home that the HomePod hasn’t managed to provide. Apple would also be better incentivized to work on improving Siri’s performance as a general home-based assistant, which would ultimately be good for Apple ecosystem customers.

Another smart adjacency

Apple’s bigger acquisitions are few and far between, but the ones it does make are typically obviously adjacent to its core business. A Sonos acquisition has a pretty strong precedent in the Beats purchase Apple made in 2014, albeit without the strong motivator of providing the underlying product and relationship basis for launching a streaming service.

What Sonos is, however, is an inversion of the historical Apple model of using great services to sell hardware. The Sonos ecosystem is a great, easy to use, premium-feel means of making the most of Apple’s music and video streaming services (and brand new games subscription offering), all of which are more important than ever to the company as it diversifies from its monolithic iPhone business.

I’m hardly the first to suggest an Apple-Sonos deal makes sense: J.P. Morgan analyst Samik Chatterjee suggested it earlier this year, in fact. From my perspective, however, the timing has never been better for this acquisition to take place, and the motivations never stronger for either party involved.

Disclosure: I worked briefly for Apple in its communications department in 2015-2016, but the above analysis is based entirely on publicly available information, and I hold no stock in either company.


Apple still has work to do on privacy


There’s no doubt that Apple’s self-polished reputation for privacy and security has taken a bit of a battering recently.

On the security front, Google researchers just disclosed a major flaw in the iPhone, finding a number of malicious websites that could hack into a victim’s device by exploiting a set of previously undisclosed software bugs. When visited, the sites infected iPhones with an implant designed to harvest personal data — such as location, contacts and messages.

As flaws go, it looks like a very bad one. And when security fails so spectacularly, all those shiny privacy promises naturally go straight out the window.

“The implant was used to steal location data and files like databases of WhatsApp, Telegram, iMessage. So all the user messages, or emails. Copies of contacts, photos, https://t.co/AmWRpbcIHw pic.twitter.com/vUNQDo9noJ” — Lukasz Olejnik (@lukOlejnik), August 30, 2019

And while that particular cold-sweat-inducing iPhone security snafu has now been patched, it does raise questions about what else might be lurking out there. More broadly, it also tests the generally held assumption that iPhones are superior to Android devices when it comes to security.

Are we really so sure that thesis holds?

But imagine for a second you could unlink security considerations and purely focus on privacy. Wouldn’t Apple have a robust claim there?

On the surface, the notion of Apple having a stronger claim to privacy versus Google — an adtech giant that makes its money by pervasively profiling internet users, whereas Apple sells premium hardware and services (including essentially now ‘privacy as a service‘) — seems a safe (or, well, safer) assumption. Or at least, until iOS security fails spectacularly and leaks users’ privacy anyway. Then of course affected iOS users can just kiss their privacy goodbye. That’s why this is a thought experiment.

But even directly on privacy, Apple is running into problems, too.

To wit: Siri, its nearly decade-old voice assistant technology, now sits under a penetrating spotlight — having been revealed to contain a not-so-private ‘mechanical turk’ layer of actual humans paid to listen to the stuff people tell it. (Or indeed the personal stuff Siri accidentally records.)


Siri recordings ‘regularly’ sent to Apple contractors for analysis, claims whistleblower


Apple has joined the dubious company of Google and Amazon in secretly sharing with contractors audio recordings of its users, confirming the practice to The Guardian after a whistleblower brought it to the outlet. The person said that Siri queries are routinely sent to human listeners for closer analysis, something not disclosed in Apple’s privacy policy.

The recordings are reportedly not associated with an Apple ID, but can be several seconds long, include content of a personal nature and are paired with other revealing data, like location, app data and contact details.

Like the other companies, Apple says this data is collected and analyzed by humans to improve its services, and that all analysis is done in a secure facility by workers bound by confidentiality agreements. And like the other companies, Apple failed to say that it does this until forced to.

Apple told The Guardian that less than 1% of daily queries are sent, cold comfort when the company is also constantly talking up the volume of Siri queries. Hundreds of millions of devices use the feature regularly, so even a conservative estimate of a fraction of 1% quickly rises into the hundreds of thousands of recordings.
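A quick back-of-envelope shows how fast a “small portion” adds up; the daily query volume below is a purely hypothetical figure chosen for illustration, not one Apple has disclosed.

```swift
// Hypothetical volume: assume 100 million Siri queries per day (Apple has
// not disclosed a daily figure) and that 0.2% of them are reviewed,
// i.e. "a fraction of 1%".
let assumedDailyQueries = 100_000_000.0
let reviewedFraction = 0.002
let reviewedPerDay = assumedDailyQueries * reviewedFraction
print(Int(reviewedPerDay))  // 200000 recordings per day
```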

This “small portion” of Siri requests is apparently randomly chosen, and as the whistleblower notes, it includes “countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on.”

Some of these activations of Siri will have been accidental, which is one of the things listeners are trained to listen for and identify. Accidentally recorded queries can be many seconds long and contain a great deal of personal information, even if it is not directly tied to a digital identity.

Only in the last month has it come out that Google likewise sends clips to be analyzed, and that Amazon, which we knew recorded Alexa queries, retains that audio indefinitely.

Apple’s privacy policy states regarding non-personal information (under which Siri queries would fall):

We may collect and store details of how you use our services, including search queries. This information may be used to improve the relevancy of results provided by our services. Except in limited instances to ensure quality of our services over the Internet, such information will not be associated with your IP address.

It’s conceivable that the phrase “search queries” is inclusive of recordings of search queries. And it does say that it shares some data with third parties. But nowhere is it stated simply that questions you ask your phone may be recorded and shared with a stranger. Nor is there any way for users to opt out of this practice.

Given Apple’s focus on privacy and transparency, this seems like a major, and obviously a deliberate, oversight. I’ve contacted Apple for more details and will update this post when I hear back.
