Google I/O 2019

What Pixel 3a tells us about the state of the smartphone — and Google

Announced yesterday at Google’s opening I/O keynote, the Pixel 3a arrives at a tenuous time for the smartphone industry. Sales figures have stagnated for most of the major players in the industry — a phenomenon from which Google certainly isn’t immune.

CEO Sundar Pichai discussed exactly that on the company’s Q1 earnings call last week. “While the first quarter results reflect pressure in the premium smartphone industry,” he explained, “we are pleased with the ongoing momentum of Assistant-enabled Home devices, particularly the Home Hub and Mini devices, and look forward to our May 7 announcement at I/O from our hardware team.”

That last bit was a clear reference to the arrival of the new budget tier of Google’s flagship offering. The 3a is a clear push to address one of the biggest drivers of slowing smartphone sales. With a starting price of $399, it’s a fraction of the price of top handsets from competitors like Apple and Samsung.

There’s been a fairly rapid creep in flagship prices in recent years. Handsets starting at north of $1,000 hardly warrant a second glance anymore, while many forthcoming foldables are hovering around double that.

As Google VP of Product Management Mario Queiroz told me ahead of launch, “The smartphone market has started to flatten. We think one of the reasons is because, you know, the premium segment of the market is a very large segment, but premium phones have gotten more and more expensive, you know, three, four years ago, you could buy a premium phone for $500.”

Inflated prices have certainly made device purchases more burdensome for buyers. That, coupled with a relative lack of compelling new features, has gone a long way toward slowing down upgrade cycles, hurting sales in the process.

I’ve enjoyed my early hands-on time with the 3a — more to come on that later. It’s important to note the different factors that have allowed Google to get to this stage. A key driver is, of course, Google’s purchase of massive R&D resources from HTC. One result of HTC’s decline as a hardware manufacturer is that Google can now develop hardware in-house, relatively cheaply, at a new campus in Taipei.

Also important is Google’s ongoing quest to further uncouple the importance of hardware from smartphone upgrades. The company’s big investments in machine learning and artificial intelligence, in particular, are driving many of the innovations best demonstrated on the imaging side of things. Devin captured this sentiment in a piece written in the wake of the iPhone XS announcement.

Notably, the Pixel 3a has essentially the same camera hardware as the pricier 3. Google cut some corners here, but that wasn’t one. There are still and will continue to be some limitations to what the 3a is able to do, based on processing power, but the line between what the two devices can do is already pretty blurry when it comes to taking photos.

There’s another factor that’s been looming over Pixel sales in all of this — but for several reasons, Pichai wasn’t ready to discuss it on the call. For years, the line has been hampered by carrier exclusivity, something that feels like it ought to be relegated to the smartphone past.

Certainly that sort of arrangement makes sense for young companies like OnePlus or Palm, which are looking for a way into a market while seeking to maintain manageable growth. But Google clearly has the resources to grow outside of a single carrier deal. And the fact of the matter (as Huawei has discovered the hard way) is that carrier partnerships and contracts are still key drivers of smartphone sales here in the States, even as most manufacturers also offer unlocked devices. I suspect those upfront costs are enough to make many consumers do a double take — even though we all know in our hearts the contract is ultimately where they get you.

Thankfully, Google announced that it will be making the Pixel 3 and 3a available on a lot more carriers, starting this week. That move ought to have a marked impact on the Pixel’s sales figures going forward. The addition of Sprint and T-Mobile among others means a lot more retail shelf space and ad dollars across the U.S. Devices are a harder sell when your average consumer has to go out of their way to find them — not to mention the difficulty of convincing users to switch carriers for a new device.

I’d caution against using Q2 results as a direct measure of the 3a’s appeal or of Google’s move toward a six-month device release cycle. It’s still too early to uncouple the new device’s impact from that of new customers coming on board courtesy of those carrier additions. Even so, the device is an interesting litmus test for the current state of the smartphone, right down to the return of the headphone jack.

Google Play is changing how app ratings work

Two years ago, Apple changed the way its app store ratings worked by allowing developers to decide whether or not their ratings would be reset with their latest app update — a feature that Apple suggests should be used sparingly. Today, Google announced it’s making a change to how its Play Store app ratings work, too. But instead of giving developers the choice of when ratings will reset, it will begin to weight app ratings to favor those from more recent releases.

“You told us you wanted a rating based on what your app is today, not what it was years ago, and we agree,” said Milena Nikolic, an engineering director leading Google Play Console, who detailed the changes at the Google I/O Developer conference today.

She explained that, soon, the average rating calculation for apps will be updated for all Android apps on Google Play. Instead of a lifetime cumulative value, the app’s average rating will be recalculated to “give more weight” to the most recent users’ ratings.
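
Google hasn’t published the exact formula, but a recency-weighted average along these lines illustrates the idea; the exponential decay and the 180-day half-life below are assumptions for the sake of the sketch, not anything Google has confirmed:

```kotlin
import kotlin.math.pow

// Illustrative only: Google hasn't disclosed its weighting, so this sketch assumes
// an exponential decay in which a rating's influence halves every `halfLifeDays`
// days. The 180-day half-life is an arbitrary placeholder.
data class Rating(val stars: Int, val ageInDays: Int)

fun weightedAverage(ratings: List<Rating>, halfLifeDays: Double = 180.0): Double {
    var weightedSum = 0.0
    var totalWeight = 0.0
    for (r in ratings) {
        val weight = 0.5.pow(r.ageInDays / halfLifeDays) // newer ratings weigh close to 1.0
        weightedSum += r.stars * weight
        totalWeight += weight
    }
    return if (totalWeight > 0.0) weightedSum / totalWeight else 0.0
}

fun main() {
    // Two old 2-star reviews and two recent 4- and 5-star reviews:
    val ratings = listOf(Rating(2, 700), Rating(2, 650), Rating(5, 10), Rating(4, 3))
    // Lifetime average is 3.25; the recency-weighted value comes out around 4.3,
    // rewarding the app's improved recent reviews.
    println(weightedAverage(ratings))
}
```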

With this update, users will be able to better see, at a glance, the current state of the app — meaning, any fixes and changes that made it a better experience over the years will now be taken into account when determining the rating.

“It will better reflect all your hard work and improvements,” touted Nikolic, of the updated ratings.

On the flip side, however, this change also means that once-high-quality apps that have since failed to release new updates and bug fixes will now have a rating that reflects their current state of decline.

It’s unclear how much the change will more broadly impact Google Play Store SEO, where app search results today are returned based on a combination of factors, including app names, descriptions, keywords, downloads, reviews and ratings.

The updated app rating system was one of numerous Google Play changes announced today, along with the public launch of dynamic delivery features, new APIs, refreshed Google Play Console data, custom listings and even “suggested replies” — like those found in Gmail, but for responding to Play Store user reviews.

End users of the Google Play Store won’t see the new, recalculated rating until August, but developers can preview their new rating today in the Play Console.

Facebook talked privacy, Google actually built it

Mark Zuckerberg: “The future is private”. Sundar Pichai: ~The present is private~. While both CEOs made protecting user data a central theme of their conference keynotes this month, Facebook’s product updates were mostly vague vaporware while Google’s were either ready to ship or ready to demo. The contrast highlights the divergence in strategy between the two tech giants.

For Facebook, privacy is a talking point meant to boost confidence in sharing, deter regulators, and repair its battered image. For Google, privacy is functional, going hand-in-hand with on-device data processing to make features faster and more widely accessible.

Everyone wants tech to be more private, but we must discern between promises and delivery. Like “mobile”, “on-demand”, “AI”, and “blockchain” before it, “privacy” can’t be taken at face value. We deserve improvements to the core of how our software and hardware work, not cosmetic add-ons and instantiations no one is asking for.

At Facebook’s F8 last week, we heard from Zuckerberg about how “Privacy gives us the freedom to be ourselves,” and he reiterated how that would happen through ephemerality and secure data storage. He said Messenger and Instagram Direct will become encrypted…eventually…which Zuckerberg had already announced in January and detailed in March. We didn’t get the Clear History feature that Zuckerberg made the privacy centerpiece of his 2018 conference, or anything about the Data Transfer Project that’s been silent for the 10 months since its reveal.

What users did get was a clumsy joke from Zuckerberg about how “I get that a lot of people aren’t sure that we’re serious about this. I know that we don’t exactly have the strongest reputation on privacy right now to put it lightly. But I’m committed to doing this well.” No one laughed. At least he admitted that “It’s not going to happen overnight.”

But it shouldn’t have to. Facebook made its first massive privacy mistake in 2007 with Beacon, which quietly relayed your off-site ecommerce and web activity to your friends. It’s had 12 years, a deal with the FTC promising to improve, countless screwups and apologies, the democracy-shaking Cambridge Analytica scandal, and hours of being grilled by Congress to get serious about the problem. That makes it clear that if “the future is private,” then the past wasn’t. Facebook is too late here to receive the benefit of the doubt.

At Google’s I/O, we saw demos from Pichai showing how “our work on privacy and security is never done. And we want to do more to stay ahead of constantly evolving user expectations.” Instead of waiting to fall so far behind that users demand more privacy, Google has been steadily working on it for the past decade, since it introduced Chrome incognito mode. It’s changed direction away from using Gmail content to target ads and away from allowing any developer to request access to your email, though there are plenty of sins to atone for. Now when the company is hit with scandals, it’s typically over its frightening efficiency, as with its canceled Project Maven AI military tech, not its creepiness.

Google made more progress on privacy in low-key updates in the runup to I/O than Facebook did on stage. In the past month it launched the ability to use your Android device as a physical security key, and a new auto-delete feature rolling out in the coming weeks that erases your web and app activity after 3 or 18 months. Then in its keynote today, it published “privacy commitments” for Made By Google products like Nest, detailing exactly how they use your data and your control over that. For example, the new Nest Hub Max does all its Face Match processing on device so facial recognition data isn’t sent to Google. Failing to note there’s a microphone in its Nest security alarm did cause an uproar in February, but the company has already course-corrected.

That concept of on-device processing is a hallmark of the new Android Q operating system. Opening in beta to developers today, it comes with almost 50 new security and privacy features like TLS 1.3 support and MAC address randomization. Google Assistant will now be better protected, Pichai told a cheering crowd. “Further advances in deep learning have allowed us to combine and shrink the 100 gigabyte models down to half a gigabyte — small enough to bring it onto mobile devices.” This makes Assistant not only more private, but fast enough that it’s quicker to navigate your phone by voice than touch. Here, privacy and utility intertwine.

The result is that Google can listen to video chats and caption them for you in real time, transcribe in-person conversations, or relay aloud your typed responses to a phone call without transmitting audio data to the cloud. That could be a huge help if you’re hearing or vision impaired, or just have your hands full. A lot of the new Assistant features coming to Google Pixel phones this year will even work in Airplane mode. Pichai says that “Gboard is already using federated learning to improve next word prediction, as well as emoji prediction across tens of millions of devices” by using on-phone processing so only improvements to Google’s AI are sent to the company, not what you typed.
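
As a rough illustration of that federated idea, each device computes only a small update to a shared model from its own data, and the server averages those updates without ever seeing the underlying input. The toy sketch below is purely conceptual and is not Gboard’s actual pipeline:

```kotlin
// Toy sketch of federated averaging; illustrative only, not Google's implementation.
// The "model" is just an array of weights. Each phone trains locally, then sends
// back only the difference between its trained weights and the shared model.
fun localDelta(globalModel: DoubleArray, locallyTrained: DoubleArray): DoubleArray =
    DoubleArray(globalModel.size) { i -> locallyTrained[i] - globalModel[i] }

// The server averages the deltas from many devices and folds them into the shared
// model; the raw text people typed never leaves their phones.
fun applyFederatedUpdate(globalModel: DoubleArray, deltas: List<DoubleArray>): DoubleArray {
    val averaged = DoubleArray(globalModel.size)
    for (delta in deltas) {
        for (i in delta.indices) averaged[i] += delta[i] / deltas.size
    }
    return DoubleArray(globalModel.size) { i -> globalModel[i] + averaged[i] }
}
```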

Google’s senior director of Android, Stephanie Cuthbertson, hammered the idea home, noting that “On device machine learning powers everything from these incredible breakthroughs like Live Captions to helpful everyday features like Smart Reply. And it does this with no user input ever leaving the phone, all of which protects user privacy.” Apple pioneered much of the on-device processing, and many Google features still rely on cloud computing, but it’s swiftly progressing.

When Google does make privacy announcements about things that aren’t about to ship, they’re significant and will be worth the wait. Chrome will implement anti-fingerprinting tech and change cookies to be more private so only the site that created them can use them. And Incognito Mode will soon come to the Google Maps and Search apps.

Pichai didn’t have to rely on grand proclamations, cringey jokes, or imaginary product changes to get his message across. Privacy isn’t just a means to an end for Google. It’s not a PR strategy. And it’s not some theoretical part of tomorrow like it is for Zuckerberg and Facebook. It’s now a natural part of building user-first technology…after 20 years of more cavalier attitudes towards data. That new approach is why the company dedicated to organizing the world’s information has been getting so little backlash lately.

With privacy, it’s all about show, don’t tell.

Google launches new Assistant developer tools

At its I/O conference, Google today announced a slew of new tools for developers who want to build experiences for the company’s Assistant platform. These range from the ability to build games for smart displays like the Google Home Hub, and the launch of App Actions for taking users from an Assistant answer to their native apps, to a new Local Home SDK that allows developers to run their smart home code locally on Google Home speakers and Nest displays.

This Local Home SDK may actually be the most important announcement in this list, given that it turns these speakers and displays into a real hardware hub for smart home devices and provides local compute capacity without the round trip to the cloud. The first set of partners includes Philips, Wemo, TP-Link and LIFX, but the SDK will become available to all developers next month.

In addition, this SDK will make it easier for new users to set up their smart devices in the Google Home app. Google tested this feature with GE last October and is now ready to roll it out to additional partners.

For developers who want to take people from the Assistant to the right spot inside of their native apps, Google announced a preview of App Actions last year. Health and fitness, finance, banking, ridesharing and food ordering apps can now make use of these built-in intents. “If I wanted to track my run with Nike Run Club, I could just say ‘Hey Google, start my run in Nike Run Club’ and the app will automatically start tracking my run,” Google explains in today’s announcement.

For how-to sites, Google also announced extended markup support that allows them to prepare their content for inclusion in Google Assistant answers on smart displays and in Google Search using standard schema.org markup.

You can read more about the new ability to write games for smart displays here, but this is clearly just a first step and Google plans to open up the platform to more third-party experiences over time.

Google launches Jetpack Compose, an open-source, Kotlin-based UI development toolkit

Google today announced the first preview of Jetpack Compose, a new open-source UI toolkit for Kotlin developers who want to use a reactive programming model similar to React Native and Vue.js.

Jetpack Compose is an unbundled toolkit that is part of Google’s overall Android Jetpack set of software components for Android developers, but there is no requirement to use any other Jetpack components. With Jetpack Compose, Google is essentially bringing the UI-as-code philosophy to Android development. Compose’s UI components are fully declarative and allow developers to create layouts by simply describing what the UI should look like in their code. The Compose framework will handle all the gory details of UI optimization for the developer.
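
In practice, that declarative style looks roughly like the snippet below. Compose was only a first preview at I/O and its APIs have shifted since, so treat the names here as representative rather than canonical: a composable function simply describes the widget tree for a given input, and the framework handles rendering and updates.

```kotlin
import androidx.compose.material.MaterialTheme
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// A composable function describes what the UI should look like for its inputs;
// Compose figures out how to render it and how to update it when state changes.
@Composable
fun Greeting(name: String) {
    MaterialTheme {
        Text(text = "Hello, $name!")
    }
}
```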

Developers can mix and match Jetpack Compose APIs with views built on Android’s native UI toolkit. Out of the box, Jetpack Compose also natively supports Google’s Material Design.

As part of today’s overall Jetpack update, Google is also launching a number of new Jetpack components and features. These range from support for building apps for Android for Cars and Android Auto to an enterprise library that makes it easier to integrate apps with Enterprise Mobility Management solutions, as well as built-in benchmarking tools.

The standout feature, though, is probably CameraX, a new library for building camera-centric features and applications that gives developers access to essentially the same capabilities as the native Android camera app.
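
CameraX shipped as an alpha at I/O and its surface has evolved since, but binding a live camera preview to a screen’s lifecycle looks roughly like the sketch below; the names reflect the later, stabilized androidx.camera APIs rather than the exact preview release.

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

// Sketch: bind a live camera preview to an Activity's lifecycle. CameraX handles
// device-specific quirks and releases the camera automatically when the
// lifecycle owner is destroyed.
fun startPreview(activity: AppCompatActivity, previewView: PreviewView) {
    val providerFuture = ProcessCameraProvider.getInstance(activity)
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()
        val preview = Preview.Builder().build().apply {
            setSurfaceProvider(previewView.surfaceProvider)
        }
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(activity, CameraSelector.DEFAULT_BACK_CAMERA, preview)
    }, ContextCompat.getMainExecutor(activity))
}
```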

Android developers can now force app updates

Half a year ago, at the Android Dev Summit, Google announced a new way for developers to force their users to update their apps when they launch new features or important bug fixes. It’s only now, at Google I/O, though, that the company is actually making this feature available to developers. Previously, it was only available to a few select Google partners.

In addition, Google is launching its dynamic delivery feature out of beta. This allows developers to deliver some of their apps’ modules on demand, reducing the file size for the initial install.
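
On-demand modules are requested at runtime through the Play Core library. A minimal sketch looks something like this; the module name is hypothetical:

```kotlin
import android.content.Context
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
import com.google.android.play.core.splitinstall.SplitInstallRequest

// Sketch: ask Google Play to download and install an on-demand feature module at
// runtime. "premium_filters" is a hypothetical module name for illustration.
fun requestFeatureModule(context: Context) {
    val splitInstallManager = SplitInstallManagerFactory.create(context)
    val request = SplitInstallRequest.newBuilder()
        .addModule("premium_filters")
        .build()

    splitInstallManager.startInstall(request)
        .addOnSuccessListener { sessionId ->
            // Download started; the session ID can be used to track progress.
        }
        .addOnFailureListener { error ->
            // e.g. network failure, or the module isn't part of this app's bundle.
        }
}
```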

“Right now, if you have an update, either you have auto-update or you need to go to the Play Store to even know that there is an update, or maybe the Play Store will give you a notification,” Chet Haase, chief advocate for Android, said. “But what if you have a really critical feature that you want people to get or, let’s say, a security issue you want to address, or a payment issue and you really want all of your users to get that as quickly as they can.”

This new feature, called Inline Updates, gives developers access to a new API that they can then use to force users to update. Developers can force users to update, say with a full-screen blocking message; force-install the update in the background and restart the app when the download has completed; or create their own custom update flows.
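
The developer-facing mechanism is the Play Core library’s in-app updates API. A minimal sketch of the full-screen, blocking (“immediate”) flow looks roughly like this, with an arbitrary request code as a placeholder:

```kotlin
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

private const val UPDATE_REQUEST_CODE = 1001 // arbitrary request code for the result callback

// Sketch: check Play for an available update and, if the "immediate" type is allowed,
// launch the full-screen flow that blocks the user until the update is installed.
fun forceUpdateIfAvailable(activity: Activity) {
    val appUpdateManager = AppUpdateManagerFactory.create(activity)
    appUpdateManager.appUpdateInfo.addOnSuccessListener { info ->
        val updateAvailable = info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE
        if (updateAvailable && info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)) {
            appUpdateManager.startUpdateFlowForResult(
                info, AppUpdateType.IMMEDIATE, activity, UPDATE_REQUEST_CODE
            )
        }
    }
}
```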

Google’s latest Android Studio release focuses on speed and stability

At last year’s I/O developer conference, Google announced Project Marble, an effort to bring more speed and stability to the company’s Android Studio IDE. That was in marked contrast to previous updates, where the focus was very much on adding new features. Over time, though, as Google extended Android Studio, it started to slow down. Android Studio 3.5, which the company is launching today, is the result of these efforts.

“We are certainly not done improving quality with Android Studio, but with the work and new infrastructure put into Project Marble we hope that you are even more productive in developing Android apps,” the company notes in today’s announcement.

The most important updates probably focus on speed. Memory leaks, for example, were among the things that slowed Android Studio down. Over the last year, the team fixed 33 major memory leaks, and a new feature allows the IDE to collect more information about how it uses memory and suggest memory settings for you. It’s now also easier for developers to share their memory problems with Google.

The team also addressed user interface freezes and improved both build and overall IDE speed. The Android Emulator now also uses fewer CPU resources, often by up to 3x.

One interesting update will bring a welcome change to Android Studio users on Windows. Developers on Microsoft’s platform often complained about how their build times were getting slower. The reason for this, it turned out, was that many anti-virus programs would scan Android Studio’s build targets — and these have a lot of small files. Scanning those takes up a lot of I/O and CPU bandwidth. With this update, the IDE now checks the directories that could be impacted by this and recommends how to fix the issue.

In addition to these updates that focus on speed and stability, the team also polished numerous existing features, ranging from improved IntelliJ support to Layout Editor improvements. Android Studio 3.5 is now also officially supported on Chrome OS 72 and high-end x86-based Chromebooks.

Live transcription and captioning in Android are a boon to the hearing-impaired

A set of new features for Android could alleviate some of the difficulties of living with hearing impairment and other conditions. Live transcription, captioning and relay use speech recognition and synthesis to make content on your phone more accessible — in real time.

Announced today at Google’s I/O event in a surprisingly long segment on accessibility, the features all rely on improved speech-to-text and text-to-speech algorithms, some of which now run on-device rather than sending audio to a data center to be decoded.

The first feature to be highlighted, live transcription, was already mentioned by Google. It’s a simple but very useful tool: open the app and the device will listen to its surroundings and simply display as text on the screen any speech it recognizes.

We’ve seen this in translator apps and devices, like the One Mini, and the meeting transcription highlighted yesterday at Microsoft Build. One would think that such a straightforward tool is long overdue, but, in fact, everyday circumstances like talking to a couple of friends at a cafe can be remarkably difficult for natural language systems trained on perfectly recorded single-speaker audio. Improving the system to the point where it can track multiple speakers and display accurate transcripts quickly has no doubt been a challenge.

Another feature enabled by this improved speech recognition ability is live captioning, which essentially does the same thing as above, but for video. Now when you watch a YouTube video, listen to a voice message or even take a video call, you’ll be able to see what the person in it is saying, in real time.

That should prove incredibly useful not just for the millions of people who can’t hear what’s being said, but also those who don’t speak the language well and could use text support, or anyone watching a show on mute when they’re supposed to be going to sleep, or any number of other circumstances where hearing and understanding speech just isn’t the best option.

Captioning phone calls is something CEO Sundar Pichai said is still under development, but the “live relay” feature they demoed onstage showed how it might work. A person who is hearing-impaired or can’t speak will certainly find an ordinary phone call to be pretty worthless. But live relay turns the call immediately into text, and immediately turns text responses into speech the person on the line can hear.

Live captioning should be available on Android Q when it releases, with some device restrictions. Live Transcribe is available now, but a warning states that it is still in development. Live relay is yet to come, but showing it onstage in such a complete form suggests it won’t be long before it appears.

Google expands digital well-being tools to include a new ‘Focus mode,’ adds improved parental controls to Android

Last year at Google I/O, Google introduced a host of new digital well-being tools aimed at helping people better manage their screen time, track app usage and configure their device’s “do not disturb” settings. Today, Google is updating its suite of tools to include a new feature called “Focus Mode” that lets you temporarily disable distracting apps while not missing critical information, as well as a few new features for users of its parental control software, Family Link, which is now part of the Android OS.

With Focus Mode, a new feature for Android devices, you can turn off the apps you personally find distracting while you’re trying to sit down and get things done. For example, you could disable updates from distracting social media apps or email, but could choose to leave texting on so family members could reach you in an emergency.

Though not mentioned during the announcement, the feature also could help people enjoy their devices in their downtime — like streaming from Netflix without getting bothered by Slack notifications and work email. That’s not necessarily a way to reduce screen time — which is what a lot of today’s digital well-being features provide. Instead, it’s about finding balance between when it’s time to work and when it’s not, and what things deserve our attention at a given time.

Also unveiled today at Google I/O were new features for Family Link, Google’s software that lets parents control what kids can do on their devices, and track their usage.

Now, parents can set time limits on specific apps instead of just “screen time” in general. This is similar in a way to what Amazon’s FreeTime parental controls offer, as they allow parents to require that kids finish their reading before they can play games, for example. In Google’s case, it’s instead allowing parents to limit certain apps they believe are distractions to children.

Another new feature will allow parents to give kids extra screen time, or “bonus time.” This could help kids who need just a few more minutes to wrap up what they’re doing on their device, or it could be doled out as a reward, depending on how parents want to use the feature.

The company also announced it’s making Family Link part of every Android device, beginning with Android Q. That means Family Link will become accessible from device settings, instead of being an optional app parents can choose to download. You’ll find it under the “digital well-being and parental controls” section on Android Q devices rolling out later this summer, says Google.

“We’re spending a lot of time on phones, and people tell us, sometimes they wish they spent more time on other things. We want to help people find balance and digital well-being. And yes, sometimes this means making it easier to put your device away entirely, and focus on the times that really matter,” said Stephanie Cuthbertson, senior director for Android. 

She said these tools were already proving useful, as 90% of app timers helped users stick to their goals and there was a 27% drop in nightly usage thanks to Wind Down. However, the company didn’t share how many users were taking advantage of the digital well-being features as a whole.

Android Q devices will get over-the-air security updates — but there’s a catch

Devices shipping with Android Q will receive over-the-air security patches without having to go through device manufacturers.

A lack of steady security updates has been a major pain point for Android users over the years. Google finally has a fix for the problem. At its annual developer conference Tuesday, the tech giant said it’ll bypass mobile makers and push security updates directly to devices.

The benefit is that users won’t have to wait out lengthy periods while device manufacturers test and quality-assure patches before fixes for critical security vulnerabilities reach their devices.

Better yet, the updates won’t require Android to restart.

Security updates for Android Q will be focused on 14 modules crucial to the operating system’s functioning — including media codecs, which have long plagued the Android software with a steady stream of security flaws.

There’s a catch — two, in fact.

Devices upgrading to Android Q will not support the over-the-air security updates, and some manufacturers can opt out altogether, according to The Verge, which first reported the news, a restriction that could render the feature effectively useless for many users. The new feature will also not be backported to earlier versions of Android. According to distribution data, close to half of all Android users are still on Android 5.0 Lollipop or earlier, so it could take years for Android Q to reach a comparable share of devices.

Still, Google has to start somewhere. Android Q is expected out later this year.
