
Android developers can now force users to update their apps


At its Android Dev Summit, Google today announced a number of new tools and features for developers who write apps for its mobile operating system. Some of those are no surprise, including support for the latest release of the Kotlin language, which is becoming increasingly popular in the Android developer ecosystem, and new features for the Android Jetpack tools and APIs and for the Android Studio IDE. The biggest surprise, though, is likely the launch of the In-app Updates API.

While the name doesn’t exactly make it sound like a breakthrough feature, it’s actually a big deal. With this new API, developers get two new ways to push users to update their app.

“This is something that developers have asked us for a long time is — say you own an app and you want to make sure the user is running the latest version,” Google senior director for Android product management and developer relations Stephanie Saad Cuthbertson told me. “This is something developers really fret.”

Say you shipped your application with a major bug (it happens…) and want to make sure that every user upgrades immediately; you will soon be able to show them a full-screen blocking message that will be displayed when they first start the app again and while the update is applied. That’s obviously only meant for major bugs. The second option allows for more flexibility and allows the user to continue using the app while the update is downloaded. Developers can fully customize these update flows.
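Concretely, the two flows look something like this; a minimal Kotlin sketch assuming the Play Core in-app updates API (the names below follow the later public release, since the API was still in early access at the time, and `context`, `activity` and `UPDATE_REQUEST_CODE` are placeholders from the surrounding app):

```kotlin
// Sketch: check for an update and pick the blocking or the flexible flow.
val updateManager = AppUpdateManagerFactory.create(context)

updateManager.appUpdateInfo.addOnSuccessListener { info ->
    if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE) {
        if (info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)) {
            // Full-screen, blocking flow — reserved for critical fixes.
            updateManager.startUpdateFlowForResult(
                info, AppUpdateType.IMMEDIATE, activity, UPDATE_REQUEST_CODE)
        } else if (info.isUpdateTypeAllowed(AppUpdateType.FLEXIBLE)) {
            // Background download; the user keeps using the app meanwhile.
            updateManager.startUpdateFlowForResult(
                info, AppUpdateType.FLEXIBLE, activity, UPDATE_REQUEST_CODE)
        }
    }
}
```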

Right now, the new updates API is in early testing with a few partners and the plan is to open it to more developers soon.

As Cuthbertson stressed, the team’s focus in recent years has been on giving developers what they want. The poster child for that, she noted, is the Kotlin languages. “It wasn’t a Google-designed language and maybe not the obvious choice — but it really was the best choice,” she told me. “When you look at the past several years, you can really see an investment that started with the IDE. It’s actually only five years old and since then, we’ve been building it out, completely based on developer feedback.”

Today, the company announced that 46 percent of professional developers now use Kotlin and more than 118,000 new Kotlin projects were started in Android Studio in the last month alone (and that’s just from users who opt in to share metrics with Google), so that investment is definitely paying off.

One thing developers have lately been complaining about, though, is that build times in Android Studio have slowed down. “What we saw internally was that build times are getting faster, but what we heard from developers externally is that they are getting slower,” Cuthbertson said. “So we started benchmarking, both internally in controlled circumstances, but also for anybody who opted in, we started benchmarking the whole ecosystem.” What the team found was that Gradle, the core of the Android Studio build system, is getting a lot faster, but the system and platform you build on also has a major impact. Cuthbertson noted that the Spectre and Meltdown fixes had a major impact on Windows and Linux users, for example, as do custom plugins. So going forward, the team is building new profiling and analysis tools to allow developers to get more insights into their build times and Google will build more of its own plugins to accelerate performance.

Most of this isn’t in the current Android Studio 3.3 beta yet (and beta 3 of version 3.3 is launching today, too), but one thing Android Studio users will likely be happy to hear is that Chrome OS will get official support for the IDE early next year, using Chrome OS’s new ability to run Linux applications.

Other updates the company announced today are new Jetpack Architecture Component libraries for Navigation and Work Manager, making it easier for developers to add Android’s navigation principles into their apps and perform background tasks without having to write a lot of boilerplate code. Android App Bundles, which allow developers to modularize their applications and ship parts of them on demand, are also getting some updates, as are Instant Apps, which users can run without installing them. Using web URLs for Instant Apps is now optional and building them in Android Studio has become easier.

Powered by WPeMatico

How to watch the live stream for Apple’s iPad and Mac keynote


Apple is holding a keynote today at the Brooklyn Academy of Music’s Howard Gilman Opera House, and the company is expected to unveil a brand new iPad Pro as well as updated Mac computers. The event starts at 10 AM in New York (7 AM in San Francisco, 2 PM in London, 3 PM in Paris), and you’ll be able to watch it live, as the company is streaming the keynote.

If you live in Europe and already put a note in your calendar, make sure you got the time right, as daylight saving time has yet to end in the U.S. New York is currently 4 hours behind London, 5 hours behind Paris, etc.
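The arithmetic is easy to get wrong by hand during this window; a quick check with Python’s `zoneinfo` (assuming the event date of October 30, 2018):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Keynote start: 10 AM in New York on October 30, 2018.
# US clocks had not yet fallen back; European clocks already had.
start = datetime(2018, 10, 30, 10, 0, tzinfo=ZoneInfo("America/New_York"))

for city, tz in [("San Francisco", "America/Los_Angeles"),
                 ("London", "Europe/London"),
                 ("Paris", "Europe/Paris")]:
    local = start.astimezone(ZoneInfo(tz))
    print(f"{city}: {local:%H:%M}")
# San Francisco: 07:00 / London: 14:00 / Paris: 15:00
```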

Apple is likely to unveil a new iPad Pro to replace the 10.5-inch and 12.9-inch iPad Pro. Rumor has it that it’ll look nothing like your current iPad. The device should get rounded corners, thinner bezels and a Face ID sensor. Apple could also switch to USB-C instead of Lightning and refresh the Apple Pencil.

On the Mac front, the MacBook Air could get a refresh. This could be Apple’s new entry-level laptop, but it should sport a Retina display for the first time. There could also be a new Mac Mini of some sort after all these years without an update.

Finally, maybe Apple will tell us why the AirPower charging mat is still not available. Apple might also update the AirPods, though that could happen later.

If you have a recent Apple TV, you can download the Apple Events app in the App Store. It lets you stream today’s event and rewatch old events. Users with pre-App Store Apple TVs can simply turn on their devices. Apple is pushing out the “Apple Events” channel so that you can watch the event.

And if you don’t have an Apple TV, the company also lets you live-stream the event from the Apple Events section on its website. This video feed now works in all major browsers — Safari, Google Chrome, Mozilla Firefox and Microsoft Edge.

So to recap, here’s how you can watch today’s Apple event:

  • On iOS: Safari.
  • On the Mac: Safari, Google Chrome or Mozilla Firefox.
  • On Windows: Google Chrome, Mozilla Firefox or Microsoft Edge.
  • On Apple TV: the Apple Events app from the App Store.

Of course, you also can read TechCrunch’s live blog if you’re stuck at work and really need our entertaining commentary track to help you get through your day. We have a big team in the room this year.



TC Sessions: AR/VR surveys an industry in transition


Industry vets and students alike crammed into UCLA’s historic Royce Hall last week for TC Sessions: AR/VR, our one-day event on the fast-moving (and hype-plagued) industry and the people in it. Disney, Snap, Oculus and more stopped by to chat and show off their latest; if you didn’t happen to be in LA that day, read on and find out what we learned — and follow the links to watch the interviews and panels yourself.

To kick off the day we had Jon Snoddy from Walt Disney Imagineering. As you can imagine, this is a company deeply invested in “experiences.” But he warned that VR and AR storytelling isn’t ready for prime time: “I don’t feel like we’re there yet. We know it’s extraordinary, we know it’s really interesting, but it’s not yet speaking to us deeply the way it will.”

Next came Snap’s Eitan Pilipski. Snapchat wants to leave augmented reality creativity up to the creators rather than prescribing what they should build. AR headsets people want to wear in real life might take years to arrive, but nevertheless Snap confirmed that it’s prototyping new AI-powered face filters and VR experiences in the meantime.

I was onstage next with a collection of startups which, while very different from each other, collectively embody a willingness to pursue alternative display methods — holography and projection — as businesses. Ashley Crowder from VNTANA and Shawn Frayne from Looking Glass explained how they essentially built the technology they saw demand for: holographic display tech that makes 3D visualization simple and real. And Lightform’s Brett Jones talked about embracing and extending the real world and creating shared experiences rather than isolated ones.

Frayne’s holographic desktop display was there in the lobby, I should add, and very impressive it was. People were crowding three or four deep to try to understand how the giant block of acrylic could hold 3D characters and landscapes.

Maureen Fan from BaoBab Studios touched on the importance of conserving cash for entertainment-focused virtual reality companies. Previewing her new film, Crow, Fan noted that new modes of storytelling need to be explored for the medium, such as the creative merging of gaming and cinematic experiences.

Up next was a large panel of investors: Niko Bonatsos (General Catalyst), Jacob Mullins (Shasta Ventures), Catherine Ulrich (FirstMark Capital) and Stephanie Zhan (Sequoia). The consensus of this lively discussion was that (as Fan noted earlier) this is a time for startups to go lean. Competition has been thinned out by companies burning VC cash and a bootstrapped, efficient company stands out from the crowd.

Oculus is getting serious about non-gaming experiences in virtual reality. In our chat with Oculus Executive Producer Yelena Rachitsky, we heard more details about how the company is looking to new hardware, such as the Oculus Quest, to deepen the interactions users can have in VR and take them far beyond the capabilities of 360-degree VR video.

Of course if Oculus is around, its parent company can’t be far away. Facebook’s Ficus Kirkpatrick believes it must build exemplary “lighthouse” AR experiences to guide independent developers toward use cases they could enhance. Beyond creative expression, AR is progressing slowly because no one wants to hold a phone in the air for too long. But that’s also why Facebook is already investing in efforts to build its own AR headset.

Matt Miesnieks, from 6d.ai, announced the opening of his company’s augmented reality development platform to the public and made a case for an open mapping platform and toolkit that would open augmented reality to collaborative experiences and the masses.

Augmented reality headsets like Magic Leap and HoloLens tend to hog the spotlight, but phones are where most people will have their first taste. Parham Aarabi (ModiFace), Kirin Sinha (Illumix) and Allison Wood (Camera IQ) agreed that mainstreaming the tech is about three to five years away, with a successful standalone device like a headset somewhere beyond that. They also agreed that while there are countless tech demos and novelties, there’s still no killer app for AR.

Derek Belch (STRIVR), Clorama Dorvilias (DebiasVR) and Morgan Mercer (Vantage Point) took on the potential of VR in commercial and industrial applications. They concluded that making consumer technology enterprise-grade remains one of the biggest hurdles for virtual reality applications in business. (Companies like StarVR are specifically targeting businesses, but it remains to be seen whether that play will succeed.)

With Facebook running the VR show, how are small VR startups making a dent in social? The CEOs of TheWaveVR, Mindshow and SVRF all say that part of the key is finding the best ways for users to interact and making experiences that bring people together in different ways.

After a break, we were treated to a live demo of the VR versus boxing game Creed: Rise to Glory, by developer Survios co-founders Alex Silkin and James Iliff. They then joined me for a discussion of the difficulties and possibilities of social and multiplayer VR, both in how they can create intimate experiences and how developers can inoculate against isolation or abuse in the player base.

Early-stage investments are key to the success of any emerging industry, and the VR space is seeing a slowdown in that area. Peter Rojas of Betaworks and Greg Castle from Anorak offered more details on their investment strategies and how they see success in the AR space coming along as the tech industry’s biggest companies continue to pump money into the technologies.

UCLA contributed a moderator in Anderson’s Jay Tucker, who talked with Mariana Acuna (Opaque Studios) and Guy Primus (Virtual Reality Company) about how storytelling in VR may be in its very early days, but this period of exploration and experimentation is something to be encouraged and experienced. Movies didn’t begin with Netflix and Marvel — they started with picture palaces and one-reel silent shorts. VR is following the same path.

And what would an AR/VR conference be without the creators of the most popular AR game ever created? Niantic already has some big plans as it expands its success beyond Pokémon GO. The company, which is deep in development of Harry Potter: Wizards Unite, is building out a developer platform based on its cutting-edge AR technologies. In our chat, AR research head Ross Finman talked about privacy in the upcoming AR age and just how much of a challenger Apple is in the space.

That wrapped the show; you can see more images (perhaps of yourself) at our Flickr page. Thanks to our sponsors, our generous hosts at UCLA, the motivated and interesting speakers and most of all the attendees. See you again soon!


Google improves Android App Bundles and makes building Instant Apps easier


Google is launching a number of new features for Android app developers today that will make it easier for them to build smaller apps that download faster and to release instant apps that allow potential users to trial a new app without having to install it.

Android App Bundles, a feature that allows developers to modularize their apps and deliver features on demand, isn’t new. The company announced it at Google I/O earlier this year, and there are now “thousands of app bundles” in production, with an average file size reduction of 35 percent. With today’s update, Google is changing how app bundles handle uncompressed native libraries that are already on a device. That will lead to downloads that are on average 8 percent smaller and take up 16 percent less space on a device.

Speaking of size, Google now lets developers upload app bundles with installed APK sizes of up to 500 megabytes, though this is currently still in early access.

In addition, App Bundles are now supported in Android Studio 3.2 stable and Unity 2018.3 beta.

While small app sizes are nice, another feature Google is announcing today will likely have a larger impact on developers and users alike: the company is making some changes to Instant Apps, a feature that allows developers to ship a small part of an app as a trial, or to show a part of the app experience when users come in from search results — with no need to download the full app and go through the (slow) install procedure.

With this update, Google is now using App Bundles to let developers build their instant apps. That means they don’t have to publish both an instant app and an installable app. Instead, they can enable their App Bundles to include an instant app and publish a single app to the store. Thanks to that, there’s also no additional code to maintain.
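In practice, a module inside the bundle opts into instant delivery through its distribution manifest; a minimal sketch using the app bundle distribution schema (package name illustrative):

```xml
<!-- AndroidManifest.xml of a module that should also be served instantly -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:dist="http://schemas.android.com/apk/distribution"
    package="com.example.app">

    <!-- Marks this module as instant-enabled within the single app bundle -->
    <dist:module dist:instant="true" />
</manifest>
```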

Developers also can now build instant apps for their premium titles and publish them for their pre-registration campaigns, something that wasn’t previously an option.

Other updates for Android developers include improved crash reports, which now combine real-world data from users with data from the Firebase Test Lab when Google sees the same crashes in both environments. There are also updates to how developers can set up subscription billing for their apps, and a couple of other minor changes you can read about here.


Committed to privacy, Snips founder wants to take on Alexa and Google, with blockchain


Earlier this year we saw headlines about how users of popular voice assistants like Alexa and Siri continue to face issues when their private data is compromised, or even sent to random people. In May it was reported that Amazon’s Alexa recorded a private conversation and sent it to a random contact. Amazon insists its Echo devices aren’t always recording, but it did confirm the audio was sent.

The story could be a harbinger of things to come when voice becomes more and more ubiquitous. After all, Amazon announced the launch of Alexa for Hospitality, its Alexa system for hotels, in June. News stories like this simply reinforce the idea that voice control is seeping into our daily lives.

The French startup Snips thinks it might have an answer to the issue of security and data privacy. It built its software to run 100% on-device, independently of the cloud. As a result, user data is processed on the device itself, acting as a potentially stronger guarantor of privacy. Unlike centralized assistants like Alexa and Google Assistant, Snips knows nothing about its users.

Its approach is convincing investors. To date, Snips has raised €22 million in funding from investors like Korelya Capital, MAIF Avenir, BPI France and Eniac Ventures. Created in 2013 by three PhDs, and now employing more than 60 people in Paris and New York, Snips offers its voice assistant technology as a white-labeled solution for enterprise device manufacturers.

It has tested its theories about voice by releasing the results of a consumer poll. The survey of 410 people found that 66% of respondents said they would be apprehensive about using a voice assistant in a hotel room because of concerns over privacy, and 90% said they would like to control the ways corporations use their data, even if it meant sacrificing convenience.

“Consumers are increasingly aware of the privacy concerns with voice assistants that rely on cloud storage — and that these concerns will actually impact their usage,” says Dr Rand Hindi, co-founder and CEO at Snips. “However, emerging technologies like blockchain are helping us to create safer and fairer alternatives for voice assistants.”

Indeed, blockchain is very much part of Snips’ future. As Hindi told TechCrunch in May, the company will release a new set of consumer devices independent of its enterprise business. The idea is to create a consumer business that will prompt further enterprise development. At the same time, the company will issue a cryptographic token via an ICO to incentivize developers to improve the Snips platform, as an alternative to using data from consumers. The theory goes that this will put it at odds with the approach used by Google and Amazon, which are constantly criticized for invading our private lives merely to improve their platforms.

As a result Hindi believes that as voice-controlled devices become an increasingly common sight in public spaces, there could be a significant shift in public opinion about how their privacy is being protected.

In an interview conducted last month with TechCrunch, Hindi told me the company’s plans for its new consumer product are well advanced, and will be designed from the beginning to be improved over time using a combination of decentralized machine learning and cryptography.

By using blockchain technology to share data, they will be able to train the network “without ever anybody sending unencrypted data anywhere,” he told me.
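Snips hasn’t published the details of its protocol, but the general idea resembles federated learning: each device computes a model update locally, and only those updates — never raw user data — are aggregated. A toy Python sketch of that pattern (illustrative only, not Snips’ actual system):

```python
# Toy federated-averaging loop: three devices each hold private data for the
# line y = 2x; only updated weights leave the device, and the server averages.

def local_update(weight, data, lr=0.1):
    """One gradient-descent step for a 1-D linear model y = w*x, computed on-device."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_average(updates):
    """Server-side aggregation: plain averaging of the device updates."""
    return sum(updates) / len(updates)

device_data = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]  # stays on-device
w = 0.0
for _ in range(50):
    updates = [local_update(w, data) for data in device_data]
    w = federated_average(updates)
print(round(w, 2))  # 2.0 — the true slope, learned without pooling raw data
```

In a real system the updates themselves would also be encrypted or secured cryptographically before aggregation, which is where the blockchain angle comes in.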

And “training the network” is where it gets interesting. By issuing a cryptographic token for developers to use, Hindi says they will incentivize devs to work on their platform and process data in a decentralized fashion. They are starting from a good place: he claims they already have 14,000 developers on the platform who will be further incentivized by a token economy.

“Otherwise people have no incentive to process that data in a decentralized fashion, right?” he says.

“We got into blockchain because we’re trying to find a way to get people to participate in decentralized machine learning. We’ve been wanting to get into consumer [devices] for a couple of years but didn’t really figure out the end goal because we had always had this missing element which was: how do you keep making it better over time.”

“This is the main argument for Google and Amazon to pretend that you need to send your data to them, to make the service better. If we can fix this [by using blockchain] then we can offer a real alternative to Alexa that guarantees Privacy by Design,” he says.

“We now have over 14000 developers building for us and that’s really completely organic growth, zero marketing, purely word of mouth, which is really nice because it shows that there’s a very big demand for decentralized voice assistance, effectively.”

It could be a high-risk strategy. Launching a voice-controlled device is one thing. Layering it with applications produced by developers supposedly incentivized by tokens, especially when crypto prices have crashed, is quite another.

It definitely feels like a moonshot idea, and we’ll really only know if Snips can live up to such lofty ideals after the launch.


Anaxi brings more visibility to the development process


Anaxi’s mission is to bring more transparency to the software development process. The tool, which is now live for iOS, with web and Android versions planned for the near future, connects to GitHub to give you actionable insights into the state of your projects and to help you manage them and their issues. Support for Atlassian’s Jira is also in the works.

The new company was founded by former Apple engineering manager and Docker EVP of product development Marc Verstaen and former CodinGame CEO John Lafleur. Unsurprisingly, this new tool is all about fixing the issues these two have seen in their daily lives as developers.

“I’ve been doing software for 40 years,” Verstaen told me. “And every time it’s the same. You start with a small team and it’s fine. Then you grow and you don’t know what’s going on. It’s a black box.” While the rest of the business world now focuses on data and analytics, software development never quite reached that point. Verstaen argues that this was acceptable until 10 or 15 years ago, because only software companies were doing software. But now that every company is becoming a software company, that’s not acceptable anymore.

Using Anaxi, you can easily see all issue reports and pull requests from your GitHub repositories, both public and private. But you also get visual status indicators that tell you when a project has too many blockers, for example, as well as the ability to define your own labels. You also can define due dates for issues.

One interesting aspect of Anaxi is that it doesn’t store all of this information on your phone or on a proprietary server. Instead, it caches as little information as necessary (including your handles) and pulls the rest from GitHub as needed. That cache is encrypted on the phone, but for the most part, Anaxi simply relies on the GitHub API to pull in data when needed. There’s a bit of a trade-off here in terms of speed, but Verstaen noted that this also means you always get the most recent data, and that GitHub’s API is quite fast and easy to work with.
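The fetch-on-demand pattern with a minimal cache might look roughly like this in Python; the endpoint is the public GitHub REST API, while the class name, cache shape and injectable fetcher are illustrative, not Anaxi’s actual implementation:

```python
import json
import urllib.request

class MinimalCache:
    """Cache only what is needed to render a list; refetch details on demand."""

    def __init__(self, fetcher):
        self.fetcher = fetcher  # injected so the network layer is swappable
        self.cache = {}         # repo -> list of (number, title) handles only

    def open_issues(self, repo, refresh=False):
        if refresh or repo not in self.cache:
            issues = self.fetcher(repo)
            # Keep just a small handle per issue, not the full API payload.
            self.cache[repo] = [(i["number"], i["title"]) for i in issues]
        return self.cache[repo]

def github_fetcher(repo):
    """Pull open issues for `repo` ('owner/name') from the GitHub REST API."""
    url = f"https://api.github.com/repos/{repo}/issues?state=open"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Usage (requires network):
# cache = MinimalCache(github_fetcher)
# print(cache.open_issues("golang/go")[:3])
```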

The service is currently available for free. The company plans to introduce pricing plans in the future, with prices based on the number of developers that use the product inside a company.


What makes Apple’s design culture so special


A few days ago, I interviewed Ken Kocienda at TechCrunch Disrupt SF — he just released a book called Creative Selection. After working at Apple during some of the company’s best years, Kocienda looks back at what makes Apple such a special place.

The book opens with a demo: Kocienda is invited to show Steve Jobs his prototype of what is about to become the iPad software keyboard.

And it’s the first of a long string of demos punctuating the book. As a reader, you follow all the ups and downs of this design roller coaster. Sometimes a demo clearly shows the way forward; sometimes it’s the equivalent of hitting a brick wall over and over again.

Kocienda’s career highlights include working on WebKit and Safari for the Mac right after he joined the company, as well as working on iOS before the release of the first iPhone. He’s the one responsible for autocorrect and the iPhone keyboard in general.

If you care about user interfaces and design processes, it’s a good read. And it feels refreshing to read a book with HTML code, keyboard drawings and other nerdy things. It’s much better than the average business book.


PoLTE lets you track devices using LTE signal


Meet PoLTE, a Dallas-based startup that wants to make location-tracking more efficient. Thanks to PoLTE’s software solution, logistics and shipment companies can much more easily track packages and goods. The startup is participating in TechCrunch’s Startup Battlefield at Disrupt SF.

If you want to use a connected device to track a package, you currently need a couple of things — a way to determine the location of the package, and a way to transmit this information over the air. The most straightforward way of doing it is by using a GPS chipset combined with a cellular chipset.

Systems-on-chip have made this easier as they usually integrate multiple modules. You can get a GPS signal and wireless capabilities in the same chip. While GPS is insanely accurate, it also requires a ton of battery just to position a device on a map. That’s why devices often triangulate your position using Wi-Fi combined with a database of Wi-Fi networks and their positions.
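The Wi-Fi approach boils down to trilateration: given the known positions of nearby access points and estimated distances to each, the device’s position falls out of a small linear system (subtracting the first circle equation from the other two makes it linear). A self-contained Python sketch:

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Estimate (x, y) from three anchor points and the distances to each.

    Subtracting the first circle equation from the other two yields a 2x2
    linear system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("anchor points are collinear")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Access points at known positions; distances estimated from signal strength.
x, y = trilaterate((0, 0), (10, 0), (0, 10), 5.0, 65**0.5, 45**0.5)
print(round(x, 2), round(y, 2))  # 3.0 4.0
```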

And yet, using GPS or Wi-Fi as well as an LTE modem doesn’t work if you want to track a container over multiple weeks or months. At some point, your device will run out of battery. Or you’ll have to spend a small fortune to buy a ton of trackers with big batteries.

PoLTE has developed a software solution that turns data from the cell modem into location information. It works with existing modems and only requires a software update; the company has been working with Riot Micro, for instance.

Behind the scenes, PoLTE’s magic happens on its servers. IoT devices don’t need to do any of the computing. They just send a tiny sample of LTE signals, and PoLTE figures out the location on its servers. Customers can then get this data using an API.

It takes only 300 bytes of data to get location information with a precision of a few meters. You don’t need a powerful CPU, Wi-Fi, GPS or Bluetooth.
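A client for such a service might look roughly like this; the endpoint, field names and helper functions are hypothetical, since PoLTE’s actual API isn’t public — the point is only how small the uplink payload stays:

```python
import json
import urllib.request

def build_signal_sample(device_id: str, signal_sample: bytes) -> bytes:
    """Pack a raw LTE signal snippet for upload; the payload stays tiny."""
    if len(signal_sample) > 300:
        raise ValueError("signal sample exceeds the ~300-byte budget")
    return json.dumps({"device": device_id,
                       "sample": signal_sample.hex()}).encode()

def request_location(payload: bytes, api_url: str) -> dict:
    """POST the sample; the server does all the heavy computation."""
    req = urllib.request.Request(api_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_signal_sample("tracker-42", bytes(range(256)) + b"\x00" * 44)
# location = request_location(payload, "https://api.example.com/v1/locate")
```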

“We offer 80 percent cost reduction on IoT devices together with longer battery life,” CEO Ed Chao told me.

On the business side, PoLTE is using a software-as-a-service model. You can get started for free if you don’t need a lot of API calls. You then start paying depending on the size of your fleet of devices and the number of location requests.

Even if the company doesn’t land on the right business model, PoLTE is a low-level technology company at heart. Its solution is interesting in its own right and could help bigger companies that are looking for an efficient location-tracking solution.



Say ‘Aloha’: A closer look at Facebook’s voice ambitions


Facebook has been a bit slow to adopt the voice computing revolution. It has no voice assistant, its smart speaker is still in development, and some apps like Instagram aren’t fully equipped for audio communication. But much of that is set to change judging by experiments discovered in Facebook’s code, plus new patent filings.

Developing voice functionality could give people more ways to use Facebook in their home or on the go. Its forthcoming Portal smart speaker is reportedly designed for easy video chatting with distant family, including seniors and kids that might have trouble with phones. Improved transcription and speech-to-text-to-speech features could connect Messenger users across input mediums and keep them on the chat app rather than straying back to SMS.

But Facebook’s voice could be drowned out by the din of the crowd if it doesn’t get moving soon. All the major mobile hardware and operating system makers now have their own voice assistants like Siri, Alexa, Google Assistant and Samsung Bixby, as well as their own smart speakers. In Q2 2018, Canalys estimates that Google shipped 5.4 million Homes, and Amazon shipped 4.1 million Echoes. Apple’s HomePod is off to a slow start with less than 6 percent of the market, behind Alibaba’s smart speaker, according to Strategy Analytics. Facebook’s spotty record around privacy might deflect potential customers to its competitors.

Given Facebook is late to the game, it will need to arrive with powerful utility that solves real problems. Here’s a look at Facebook’s newest developments in the voice space, and how its past experiments lay the groundwork for its next big push.

Aloha voice

Facebook is developing its own speech recognition feature under the name Aloha for both the Facebook and Messenger apps, as well as external hardware — likely the video chat smart speaker it’s developing. Code inside the Facebook and Messenger Android apps dug up by frequent TechCrunch tipster and mobile researcher Jane Manchun Wong gives the first look at a prototype for the Aloha user interface.

Labeled “Aloha Voice Testing,” the prototype shows a horizontal blue bar that expands and contracts to visualize the volume of speech as a user talks in a message thread, while the speech is recognized and transcribed into text. The code describes the feature as having connections with external Wi-Fi or Bluetooth devices. It’s possible that the software will run on both Facebook’s hardware and software, similar to Google Assistant, which runs on both phones and Google Home speakers. [Update: As seen below, the Aloha feature contains a “Your mobile device is now connected Portal” screen, confirming that name for the Facebook video chat smart speaker device.]
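A typical way to drive a volume bar like that is to map each audio frame’s RMS amplitude to a width; a small Python sketch of the general technique (not Facebook’s code — the frame size and pixel budget are arbitrary):

```python
import math

def rms(frame):
    """Root-mean-square amplitude of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def bar_width(frame, max_width=320):
    """Map frame loudness to the width of a volume bar, in pixels."""
    return int(min(rms(frame), 1.0) * max_width)

silence = [0.0] * 160
loud = [1.0, -1.0] * 80
print(bar_width(silence), bar_width(loud))  # 0 320
```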

Facebook declined to comment on the video, with its spokesperson Ha Thai telling me, “We test stuff all the time — nothing to share today but my team will be in touch in a few weeks about hardware news coming from the AR/VR org.” It’s unclear whether that hardware news will focus on voice and Aloha or Portal, or if it’s merely related to Facebook’s Oculus Connect 5 conference on September 25th.

A source previously told me that years ago, Facebook was interested in developing its own speech recognition software designed specifically to accurately transcribe how friends talk to each other. These speech patterns are often more casual, colloquial, rapid and full of slang than the way we formally address computerized assistants like Amazon Alexa or Google Home.

Wong also found the Aloha logo buried in Facebook’s code, which features volcano imagery. I can confirm that I’ve seen a Facebook Aloha Setup chatbot with a similar logo on the phones of Facebook employees.

If Facebook can figure this out, it could offer its own transcription features in Messenger and elsewhere on the site so users could communicate across mediums. It could potentially let you dictate comments or messages to friends while you have your hands full or can’t look at your screen. The recipient could then read the text instead of having to listen to it like a voice message. The feature also could be used to power voice navigation of Facebook’s apps for better hands-free usage.

Speaker and camera patents

Facebook awarded patent for speaker

Facebook’s video chat smart speaker was reportedly codenamed Aloha originally but later renamed Portal, as Alex Heath, formerly of Business Insider and now at Cheddar, first reported in August 2017. The $499 competitor to the Amazon Echo Show was initially set to launch at Facebook’s F8 in May, but Bloomberg reported it was pushed back amid concerns that it would exacerbate the privacy scandal ignited by Cambridge Analytica.

A new patent filing reveals Facebook was considering building a smart speaker as early as December 26th, 2016, when it filed a patent for a cube-shaped device. The patent diagrams an “ornamental design for a speaker device” invented by Baback Elmieh, Alexandre Jais and John Proksch-Whaley. Facebook had acquired Elmieh’s startup Nascent Objects in September of that year, and he’s now a technical project lead at Facebook’s secretive Building 8 hardware lab.

The startup had been building modular hardware, and earlier this year he was awarded patents for work at Facebook on several modular cameras. The speaker and camera technology Facebook has been developing could potentially evolve into what’s in its video chat speaker.

The fact that Facebook has been exploring speaker technology for so long and that the lead on these patents is still running a secret project in Building 8 strengthens the case that Facebook has big plans for the voice space.

Patents awarded to Facebook show designs for a camera (left) and video camera (right)

Instagram voice messaging

And finally, Instagram is getting deeper into the voice game, too. A screenshot generated from the code of Instagram’s Android app by Wong reveals the development of a voice clip messaging feature heading to Instagram Direct. This would allow you to speak into Instagram and send the audio clips similar to a walkie-talkie, or the voice messaging feature Facebook Messenger added back in 2013.

You can see the voice button in the message composer at the bottom of the screen, and the code instructs: “Voice message, press and hold to record.” The prototype follows the recent launch of video chat in Instagram Direct, another feature on which TechCrunch broke the news thanks to Wong’s research. An Instagram spokesperson declined to comment, as is typical when features are spotted in its code but aren’t yet in public testing, saying, “Unfortunately nothing more to share on this right now.”

The long road to Voicebook

Facebook has long tinkered in the voice space. In 2015, it acquired natural language processing startup Wit.ai that ran a developer platform for building speech interfaces, though it later rolled Wit.ai into Messenger’s platform team to focus on chatbots. Facebook also began testing automatically transcribing Messenger voice clips into text in 2015 in what was likely the groundwork for the Aloha feature seen above. The company also revealed its M personal assistant that could accomplish tasks for users, but it was only rolled out to a very limited user base and later turned off.

The next year, Facebook’s head of Messenger David Marcus claimed at TechCrunch Disrupt that voice “is not something we’re actively working on right now,” but added that “at some point it’s pretty obvious that as we develop more and more capabilities and interactions inside of Messenger, we’ll start working on voice exchanges and interfaces.” However, a source had told me Facebook’s secretive Language Technology Group was already exploring voice opportunities. Facebook also began testing its Live Audio feature for users who want to just broadcast sound and not video.

By 2017, Facebook was offering automatic captioning for Pages’ videos, and was developing a voice search feature. And this year, Facebook began trying voice clips as status updates and Stories for users around the world who might have trouble typing in their native tongue. But executives haven’t spoken much about the voice initiatives.

The most detailed comments we have came from Facebook’s head of design Luke Woods at TechCrunch Disrupt 2017, where he described voice search as “very promising. There are lots of exciting things happening…. I love to be able to talk to the car to navigate to a particular place. That’s one of many potential use cases.” It’s also one that voice transcription could aid.

It’s still unclear exactly what Facebook’s Aloha will become. It could be a de facto operating system or voice interface and transcription feature for Facebook’s smart speaker and apps. It could become a more full-fledged voice assistant like M, but with audio. Or perhaps it could become Facebook’s bridge to other voice ecosystems, serving as Facebook’s Alexa Skill or Google Assistant Action.

When I asked Woods “How would Facebook on Alexa work?,” he said with a smile “That’s a very interesting question! No comment.”

Powered by WPeMatico

Facebook builds its own AR games for Messenger video chat


Facebook is diving deeper into in-house game development with the launch of its own version of Snapchat’s multiplayer augmented reality video chat games. Today, Facebook Messenger globally launches its first two AR video chat games that you can play with up to six people.

“Don’t Smile” is like a staring contest that detects if you grin, and then uses AR to contort your face into an exaggerated Joker’s smirk while awarding your opponent the win. “Asteroids Attack” sees you move your face around to navigate a spaceship, avoiding rocks and grabbing laser-beam powerups. Soon, Facebook also plans to launch “Beach Bump” for passing an AR ball back and forth, and a “Kitten Craze” cat-matching game. To play the games, you start a video chat, hit the star button to open the filter menu, then select one of the games. You can snap and share screenshots to your chat thread while you play.

The games are effectively a way to pass the time while you video chat, rather than something you’d ever play on your own. They could be a hit with parents and grandparents who are away and want to spend time with a kid…who isn’t exactly the best conversationalist.

Facebook tells me it built these games itself using the AR Studio tool it launched last year to let developers create their own AR face filters. When asked if game development would be available to everyone through AR studio, a spokesperson told me, “Not today, but we’ve seen successful short-session AR games developed by the creator community and are always looking out for ways to bring the best AR content to the FB family of apps.”

For now, there will be no ads, sponsored branding or in-app purchases in Messenger’s video chat games. But those all offer opportunities for Facebook and potentially outside developers to earn money. Facebook could easily show an ad interstitial between game rounds, let brands build games to promote movie releases or product launches or let you buy powerups to beat friends or cosmetically upgrade your in-game face.

Snapchat’s Snappables games launched in April

The games feel less polished than the launch titles for Snapchat’s Snappables gaming platform that launched in April. Snapchat focused on taking over your whole screen with augmented reality, transporting you into space or a disco dance hall. Facebook’s games merely overlay a few graphics on the world around you. But Facebook’s games are more purposefully designed for split-screen multiplayer. Snapchat is reportedly building its own third-party game development platform, but it seems Facebook wanted to get the drop on it.

The AR video chat games live separately from the Messenger Instant Games platform the company launched last year. These include arcade classics and new mobile titles that users can play by themselves and challenge friends over high-scores. Facebook now allows developers of Instant Games to monetize with in-app purchases and ads, foreshadowing what could come to AR video chat games.

Facebook has rarely developed its own games. It did build a few mini-games, like an arcade pop-a-shot style basketball game and a soccer game to show off what the Messenger Instant Games platform could become. But typically it’s stuck to letting outside developers lead. Here, it may be trying to set examples of what developers should build before actually spawning a platform around video chat games.

Now with more than 1.3 billion users, Facebook Messenger is seeking more ways to keep people engaged. Having already devoured many people’s one-on-one utility chats, it’s fun group chats, video calling and gaming that could get people spending more time in the app.
