augmented reality

This AR guppy feeds on the spectrum of human emotion

IndieCade always offers a nice respite from the wall of undulating human flesh and heat that is the rest of the E3 show floor. The loose confederation of independent developers often produces compelling and bizarre gaming experiences outside the big studio system.

TendAR is the most compelling example of this out of this year’s batch. It is, simply put, a pet fish that feeds on human emotions through augmented reality. I can’t really explain why this is a thing, but it is. It’s a video game, so just accept it and move on.

The app is produced by Tender Claws, a small studio out of Los Angeles best known for Virtual Virtual Reality, an Oculus title whose "key features" include 50-plus unique virtual virtual realities and an artichoke that screams at you.

TendAR fits comfortably within that sort of absurdist framework, though the title has more in common with virtual pets like Tamagotchi and the belovedly bizarre Dreamcast cult hit Seaman. There's also a bit of Douglas Adams wrapped up in there, in that your pet guppy feeds on human emotions read through face detection.

The app is designed for two players, both holding onto the same phone, feigning different emotions when prompted by a chatty talking fish. If you fail to give it what it wants, your fish will suffer. I tried the game and my guppy died almost immediately. Apparently my ability to approximate sadness is severely lacking. Tell it to my therapist, am I right?

The app is due out this year for Android.

Now Snapchat lets you unsend messages like Facebook promised

Mark Zuckerberg’s Facebook messages were retracted from the inboxes of some users, six sources told TechCrunch in April. Facebook quickly tried to normalize that breach of trust by claiming it would in the coming months give everyone the ability to unsend messages. We haven’t heard a word about it since, and Facebook told me it had nothing more to share here today.

Well Snap is stepping up. Snapchat will let you retract your risqué, embarrassing or incriminating messages thanks to a new feature called Clear Chats that’s rolling out globally over the next few weeks.

Hold down on a text, image, video, memory, sticker or audio note in a one-on-one or group chat Snapchat message thread and you’ll see a Delete button. Tap it, and Snapchat will try to retract the message, though it admits it won’t always work if the recipient lacks an internet connection or updated version of the app. The recipient will also be notified… something Facebook didn’t do in the case of Zuckerberg’s messages.
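
To make the mechanics concrete, here is a minimal sketch of how a retraction flow like this could behave. The class and field names are invented for illustration and are not Snap's actual implementation; the point is that the server copy always goes away, retraction on the recipient's device is best-effort, and the recipient is always told.

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    sender: str
    recipient: str
    delivered_to_device: bool = False  # recipient's app already pulled it down

class ChatServer:
    """Toy model of a Clear Chats-style retraction flow (hypothetical, not Snap's API)."""

    def __init__(self):
        self.messages = {}       # msg_id -> Message
        self.notifications = []  # (recipient, text) pairs

    def send(self, msg: Message):
        self.messages[msg.msg_id] = msg

    def delete_for_everyone(self, msg_id: str, requester: str) -> bool:
        msg = self.messages.get(msg_id)
        if msg is None or msg.sender != requester:
            return False
        # The server-side copy is always removed...
        del self.messages[msg_id]
        # ...but if the recipient's device already holds the message and is offline
        # or running an outdated app, the local copy may survive the retraction.
        retracted_everywhere = not msg.delivered_to_device
        # Unlike the quiet Zuckerberg retractions, the other side is notified.
        self.notifications.append((msg.recipient, f"{msg.sender} deleted a chat"))
        return retracted_everywhere
```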

The Clear Chats feature could make people more comfortable sending sensitive information over Snapchat. The app already auto-deletes messages after they’re viewed, unless a recipient chooses to screenshot or Save them, which their conversation partner can see. This could be especially useful for thwarting cases of revenge porn, where hackers or jilted ex-lovers expose someone’s nude images.

Unfortunately, the Clear Chats option could also be used to send then retract abusive messages, destroying the paper trail. Social media evidence is increasingly being used in divorce and custody battles, which an unsend feature might undermine… especially if Facebook goes through with rolling it out on its platform where messages are normally permanent. But right now, Snapchat’s priority is doing whatever it can to boost usage after hitting its slowest growth rate ever last quarter. If teens feel like Snapchat is a consequence-free place to message, whether or not that’s true, they might favor it over SMS and other social apps.

More Snapchat Spectacles and e-commerce news

Snap made a few other announcements today. Spectacles v2, which are actually pretty great and which I continue to use, are now available for purchase through Amazon in the U.S., U.K. and Canada. The $150 photo- and video-recording sunglasses come to more European countries via Jeff Bezos soon, including France, Germany, Italy and Spain. Amazon will sell Spectacles in three color combos: Onyx Moonlight, Sapphire Twilight and Ruby Daybreak.

Until now, you could only buy v2 on Snap's website. That's because with v1, Snapchat's eagerness to develop a bevy of sales channels made it very tough to forecast demand for the lackluster first-generation Spectacles. Only 220,000 sold, which left hundreds of thousands of pairs gathering dust unsold in warehouses and Snapchat taking an embarrassing $40 million write-off.

“We had an inventory challenge with v1,” Snap’s VP of hardware Mike Randall told me in April. “We don’t think it was a product issue. It was an internal understanding our demand issue versus a planning issue. So we think by having a more simplistic channel strategy with v2 we can more thoughtfully manage demand with v2 versus v1.” Working with Amazon and its robust toolset should help Snap get Spectacles in front of more buyers without obscuring how many it should be manufacturing.

Still, the worst thing about Spectacles is Snapchat. The inability to dump footage directly to your phone's camera roll, and the incompatibility of its round media format with other social networks, mean it's tough to share your Spectacles content anywhere else while making it look good. Snap has experimented with a traditional landscape export format, but that hasn't rolled out. Spectacles could strongly benefit from Snap partnering with fellow apps or open-sourcing the format to let others show its circular always-full-screen media in all its glory.

Finally, Snapchat is launching a new e-commerce ad unit that shows a carousel of purchasable items at the bottom of the screen, which users can tap to buy without leaving the Snapchat app. This follows our prediction that Snap launching its own in-app merch store was really the foundation of a bigger e-commerce platform that's now rolling out.

Merchants can use the Snap Pixel to measure how their ads lead to sales. The ability to shave down the e-commerce conversion funnel could get advertisers spending more on Snapchat when it could use the dollars. Last quarter it lost $385 million and missed its revenue target by $14 million.

Snapchat is also bringing its augmented reality advertisements to its self-serve ad-buying tool. They’re sold on an effective CPM basis for $8 to $20 depending on targeting. Snapchat is also turning its new multiplayer game filters, called Snappables, into ads.

Overall, it's good to see Snapchat iterating across its software, hardware and business units. With the company plagued by executive departures, fierce competition from Facebook, a rough recent earnings report and share price troubles, it's easy to imagine the team getting distracted. The long-term roadmap is fuzzy. With Stories becoming more popular elsewhere, Spectacles sales not being enough to right the ship and Instagram preparing to launch a long-form video hub that competes with Snapchat Discover, Snap needs to figure out its identity. Perhaps that will hinge on some flashy new feature that captures the imagination of the youth. That could be its upcoming Snapkit platform, which will let users log into other apps using their Snapchat credentials, bring their Bitmoji along, and even use Snap's AR-equipped software camera within other apps.

But otherwise, it must lock in for the long haul of efficient and methodical improvement. If it's not growing, the best it can do is hold on to its core audience and squeeze as many dollars out of them as possible without looking desperate.

Speech recognition triggers fun AR stickers in Panda’s video app

Panda has built the next silly social feature Snapchat and Instagram will want to steal. Today the startup launches its video messaging app that fills the screen with augmented reality effects based on the words you speak. Say "Want to get pizza?" and a 3D pizza slice hovers by your mouth. Say "I wear my sunglasses at night" and suddenly you're wearing AR shades with a moon hung above your head. Instead of distracting you with a menu of effects to pick from, they appear in real time as you chat.
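
Under the hood, the effect is essentially keyword spotting layered on top of a speech recognizer. Here's a minimal sketch of the idea, assuming a hypothetical sticker catalog and transcript source; Panda's actual pipeline isn't public.

```python
# Minimal sketch of keyword-spotted AR stickers, in the spirit of Panda's
# voice-triggered effects. The sticker catalog below is invented for illustration.

STICKERS = {
    "pizza": "3d_pizza_slice",   # hovers by your mouth
    "sunglasses": "ar_shades",
    "night": "moon_overhead",
    "gym": "sweat_headband",     # the kind of slot a sponsor could buy
}

def stickers_for_transcript(transcript: str) -> list[str]:
    """Return the effects to spawn for one chunk of recognized speech."""
    words = transcript.lower().split()
    return [STICKERS[w] for w in words if w in STICKERS]

# Example: a speech recognizer hands us partial transcripts as the user talks.
for chunk in ["want to get pizza", "i wear my sunglasses at night"]:
    print(chunk, "->", stickers_for_transcript(chunk))
# want to get pizza -> ['3d_pizza_slice']
# i wear my sunglasses at night -> ['ar_shades', 'moon_overhead']
```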

Panda is surprising and delightful. It's also a bit janky, created by a five-person team with under $1 million in funding. Building a video chat app user base from scratch amidst all the competition will be a struggle. But even if Panda isn't the app to popularize the idea, it's invented a smart way to enhance visual communication that blends into our natural behavior.

It all started with a trippy vision. Panda's 18-year-old founder Daniel Singer had built a few failed apps and was working as a product manager at peer-to-peer therapy startup Sensay in LA. When Alaska Airlines bought Virgin America, Singer scored a free flight and came to see his buddy Arjun Sethi, an investor at Social Capital in SF. That's when, suddenly, "I'm hallucinating that as I'm talking the things I'm saying should appear," he tells me. Sethi dug the idea and agreed to fund a project to build it.

Panda founder Daniel Singer

Meanwhile, Singer had spent the last six years FaceTiming almost every day. He loved telling stories with his closest friends, yet Apple's video chat protocol had fallen behind Snapchat and Instagram when it came to creative tools. So a year ago he raised $850,000 from Social Capital and Shrug Capital, plus angels like Cyan Banister and Secret's David Byttow. Singer set out to build Panda to combine FaceTime's live chat with Snapchat's visual flair, triggered by voice.

But it turns out, "video chat is hard," he admits. So his small team settled for letting users send 10-second-max asynchronous video messages. Panda's iOS app launched today with about 200 different voice-activated stickers, from footballs to sleepy Zzzzzs to a "&'%!#" censorship bar that covers your mouth when you swear. Tap them and they disappear, and soon you'll be able to reposition them. As you trigger the effects for the first time, they go into a trophy case that gamifies voice experimentation.

Panda is fun to play around with yourself even if you aren't actively messaging friends, which is reminiscent of how teens play with Snapchat face filters without always posting the results. The speech recognition effects will make a lot more sense if Panda can eventually solve the live video chat tech challenge. One day Singer imagines Panda making money by selling cosmetic effects that make you more attractive or fashionable, or offering sponsored effects so that when you say "gym," the headband that appears on you is Nike-branded.

Unfortunately, the app can be a bit buggy and effects don’t always trigger, fooling you that you aren’t saying the right words. And it could be tough convincing buddies to download another messaging app, let alone turn it into a regular habit. Apple is also adding a slew of Memoji personalized avatars and other effects to FaceTime in its upcoming iOS 12.

Panda does advance one of technology’s fundamental pursuits: taking the fuzzy ideas in your head and translating them into meaning for others in clearer ways than just words can offer. It’s the next wave of visual communication that doesn’t require you to break from the conversation.

When I ask why other apps couldn't just copy the speech stickers, Singer insists, "This has to be voice native." I firmly disagree, and can easily imagine his whole app becoming just a single filter in Snapchat and Instagram Stories. He eventually concedes, "It's a new reality that bits and pieces of consumer technology get traded around. I wouldn't be surprised if others think it's a good idea."

It’s an uphill battle trying to disrupt today’s social giants, who are quick to seize on any idea that gives them an edge. Facebook rationalizes stealing other apps’ features by prioritizing whatever will engage its billions of users over the pride of its designers. Startups like Panda are effectively becoming outsourced R&D departments.

Still, Panda pledges to forge on (though it might be wise to take a buyout offer). Singer gets that his app won’t cure cancer or “make the world a better place” as HBO’s Silicon Valley has lampooned. “We’re going to make really fun stuff and make them laugh and smile and experience human emotion” he concludes. “At the end of the day, I don’t think there’s anything wrong with building entertainment and delight.”

HoloLens acts as eyes for blind users and guides them with audio prompts

Microsoft’s HoloLens has an impressive ability to quickly sense its surroundings, but limiting it to displaying emails or game characters on them would show a lack of creativity. New research shows that it works quite well as a visual prosthesis for the vision impaired, not relaying actual visual data but guiding them in real time with audio cues and instructions.

The researchers, from Caltech and the University of Southern California, first argue that restoring vision is at present simply not a realistic goal, but that replicating the perceptual experience of vision isn't necessary to restore its practical function. After all, if you can tell where a chair is, you don't need to see it to avoid it, right?

Crunching visual data and producing a map of high-level features like walls, obstacles and doors is one of the core capabilities of the HoloLens, so the team decided to let it do its thing and recreate the environment for the user from these extracted features.

They designed the system around sound, naturally. Every major object and feature can tell the user where it is, either via voice or sound. Walls, for instance, hiss (presumably a white noise, not a snake hiss) as the user approaches them. And the user can scan the scene, with objects announcing themselves in order from left to right, each from the direction in which it's located. A single object can be selected and will repeat its callout to help the user find it.
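
As a rough illustration of that scan behavior, here is a small sketch that sorts detected features by bearing and announces each one from its own direction. The scene data and the play_spatial_audio() call are placeholders, not the HoloLens API.

```python
import math

def play_spatial_audio(label: str, azimuth_deg: float, distance_m: float):
    # Stand-in for a spatial-sound call; here we just describe the cue.
    print(f"{label!r} from {azimuth_deg:+.0f} deg, {distance_m:.1f} m away")

def scan_scene(objects):
    """Announce each object in left-to-right order relative to the user."""
    def azimuth(obj):
        # User at the origin looking down +z; negative azimuth is to the left.
        return math.degrees(math.atan2(obj["x"], obj["z"]))
    for obj in sorted(objects, key=azimuth):
        play_spatial_audio(obj["label"], azimuth(obj), math.hypot(obj["x"], obj["z"]))

scan_scene([
    {"label": "chair", "x": 1.5, "z": 2.0},
    {"label": "door", "x": -2.0, "z": 3.0},
    {"label": "wall (hiss)", "x": 0.0, "z": 1.0},
])
```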

That's all well and good for stationary tasks like finding your cane or the couch in a friend's house. But the system also works in motion.

The team recruited seven blind people to test it out. They were given a brief intro but no training, and then asked to accomplish a variety of tasks. The users could reliably locate and point to objects from audio cues, and were able to find a chair in a room in a fraction of the time they normally would, and avoid obstacles easily as well.

This render shows the actual paths taken by the users in the navigation tests

Then they were tasked with navigating from the entrance of a building to a room on the second floor by following the headset's instructions. A "virtual guide" repeatedly says "follow me" from an apparent distance of a few feet ahead, while also warning when stairs are coming, where handrails are and when the user has gone off course.
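
Sketched in the same spirit, the guide boils down to a waypoint follower that speaks from a point just ahead of the user and announces hazards as they come up. The path data and off-course threshold below are invented for illustration.

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def guide_step(user_pos, path, off_course_m=2.0):
    """Return the guide's next utterance given the user's position on the path."""
    nearest_i = min(range(len(path)), key=lambda i: dist(user_pos, path[i]["pos"]))
    if dist(user_pos, path[nearest_i]["pos"]) > off_course_m:
        return "You have gone off course, come back toward my voice."
    # The next waypoint stands in for the guide's lead of a few feet ahead.
    target = path[min(nearest_i + 1, len(path) - 1)]
    if target.get("hazard"):
        return f"Follow me, {target['hazard']} ahead."
    return "Follow me."

path = [
    {"pos": (0, 0)},
    {"pos": (0, 3), "hazard": "stairs going up, handrail on your right"},
    {"pos": (0, 6)},
]
print(guide_step((0.3, 0.2), path))  # Follow me, stairs going up, ... ahead.
```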

All seven users got to their destinations on the first try, and much more quickly than if they had had to proceed normally with no navigation. One subject, the paper notes, said “That was fun! When can I get one?”

Microsoft actually looked into something like this years ago, but the hardware just wasn’t there — HoloLens changes that. Even though it is clearly intended for use by sighted people, its capabilities naturally fill the requirements for a visual prosthesis like the one described here.

Interestingly, the researchers point out that this type of system was also predicted more than 30 years ago, long before it was even close to possible:

“I strongly believe that we should take a more sophisticated approach, utilizing the power of artificial intelligence for processing large amounts of detailed visual information in order to substitute for the missing functions of the eye and much of the visual pre-processing performed by the brain,” wrote the clearly far-sighted C.C. Collins way back in 1985.

The potential for a system like this is huge, but this is just a prototype. As systems like HoloLens get lighter and more powerful, they’ll go from lab-bound oddities to everyday items — one can imagine the front desk at a hotel or mall stocking a few to give to vision-impaired folks who need to find their room or a certain store.

“By this point we expect that the reader already has proposals in mind for enhancing the cognitive prosthesis,” they write. “A hardware/software platform is now available to rapidly implement those ideas and test them with human subjects. We hope that this will inspire developments to enhance perception for both blind and sighted people, using augmented auditory reality to communicate things that we cannot see.”

Meet the speakers at The Europas, and get your ticket free (July 3, London)

We're excited to announce that this year's Europas Unconference & Awards is shaping up! Our half-day Unconference kicks off on 3 July, 2018 at The Brewery in the heart of London's "Tech City" area, followed by our startup awards dinner and a fantastic party celebrating European startups!

The event is run in partnership with TechCrunch, the official media partner. Attendees, nominees and winners will get deep discounts to TechCrunch Disrupt in Berlin later this year.

The Europas Awards are based on voting by expert judges and the industry itself. But key to the daytime is all the speakers and invited guests. There's no "off-limits speaker room" at The Europas, so attendees can mingle easily with VIPs and speakers.

What exactly is an Unconference? We're dispensing with the lectures and going straight to the deep dives, where you'll get a front-row seat with Europe's leading investors, founders and thought leaders to discuss and debate the most urgent issues, challenges and opportunities. Up close and personal! And, crucially, only a few feet away from handing over a business card. The Unconference is organized into zones including AI, Fintech, Mobility, Startups, Society, Enterprise and Crypto/Blockchain.

We’ve confirmed 10 new speakers including:


Eileen Burbidge, Passion Capital


Carlos Eduardo Espinal, Seedcamp


Richard Muirhead, Fabric Ventures


Sitar Teli, Connect Ventures


Nancy Fechnay, Blockchain Technologist + Angel


George McDonaugh, KR1


Candice Lo, Blossom Capital


Scott Sage, Crane Venture Partners


Andrei Brasoveanu, Accel


Tina Baker, Jag Shaw Baker

How To Get Your Ticket For FREE

We’d love for you to ask your friends to join us at The Europas – and we’ve got a special way to thank you for sharing.

Your friend will enjoy a 15% discount off the price of their ticket with your code, and you’ll get 15% off the price of YOUR ticket.

That’s right, we will refund you 15% off the cost of your ticket automatically when your friend purchases a Europas ticket.

So you can grab tickets here.

Vote for your Favourite Startups

Public Voting is still humming along. Please remember to vote for your favourite startups!

Awards by category:

Hottest Media/Entertainment Startup

Hottest E-commerce/Retail Startup

Hottest Education Startup

Hottest Startup Accelerator

Hottest Marketing/AdTech Startup

Hottest Games Startup

Hottest Mobile Startup

Hottest FinTech Startup

Hottest Enterprise, SaaS or B2B Startup

Hottest Hardware Startup

Hottest Platform Economy / Marketplace

Hottest Health Startup

Hottest Cyber Security Startup

Hottest Travel Startup

Hottest Internet of Things Startup

Hottest Technology Innovation

Hottest FashionTech Startup

Hottest Tech For Good

Hottest A.I. Startup

Fastest Rising Startup Of The Year

Hottest GreenTech Startup of The Year

Hottest Startup Founders

Hottest CEO of the Year

Best Angel/Seed Investor of the Year

Hottest VC Investor of the Year

Hottest Blockchain/Crypto Startup Founder(s)

Hottest Blockchain Protocol Project

Hottest Blockchain DApp

Hottest Corporate Blockchain Project

Hottest Blockchain Investor

Hottest Blockchain ICO (Europe)

Hottest Financial Crypto Project

Hottest Blockchain for Good Project

Hottest Blockchain Identity Project

Hall Of Fame Award – Awarded to a long-term player in Europe

The Europas Grand Prix Award (to be decided from winners)

The Awards celebrate the most forward-thinking and innovative tech and blockchain startups across more than 30 categories.

Startups can apply for an award or be nominated by anyone, including our judges. It is free to enter or be nominated.

What is The Europas?

Instead of thousands and thousands of people, think of a great summer event with 1,000 of the most interesting and useful people in the industry, including key investors and leading entrepreneurs.

• No secret VIP rooms, which means you get to interact with the Speakers

• Key Founders and investors speaking; featured attendees invited to just network

• Expert speeches, discussions, and Q&A directly from the main stage

• Intimate “breakout” sessions with key players on vertical topics

• The opportunity to meet almost everyone in those small groups, super-charging your networking

• Journalists from major tech titles, newspapers and business broadcasters

• A parallel Founders-only track geared towards fund-raising and hyper-networking

• A stunning awards dinner and party which honors both the hottest startups and the leading lights in the European startup scene

• All on one day to maximise your time in London. And it’s PROBABLY sunny!

That’s just the beginning. There’s more to come…

Interested in sponsoring the Europas or hosting a table at the awards? Or purchasing a table for 10 or 12 guests or a half table for 5 guests? Get in touch with:
Petra Johansson
Petra@theeuropas.com
Phone: +44 (0) 20 3239 9325

Fantasmo is a decentralized map for robots and augmented reality

“Whether for AR or robots, anytime you have software interacting with the world, it needs a 3D model of the globe. We think that map will look a lot more like the decentralized internet than a version of Apple Maps or Google Maps.” That’s the idea behind new startup Fantasmo, according to co-founder Jameson Detweiler. Coming out of stealth today, Fantasmo wants to let any developer contribute to and draw from a sub-centimeter accuracy map for robot navigation or anchoring AR experiences.

Fantasmo plans to launch a free Camera Positioning Standard (CPS) that developers can use to collect and organize 3D mapping data. The startup will charge for commercial access and premium features in its TerraOS, an open-source operating system that helps property owners keep their maps up to date and supply them for use by robots, AR and other software equipped with Fantasmo's SDK.

With $2 million in funding led by TenOneTen Ventures, Fantasmo is now accepting developers and property owners to its private beta.

Directly competing with Google’s own Visual Positioning System is an audacious move. Fantasmo is betting that private property owners won’t want big corporations snooping around to map their indoor spaces, and instead will want to retain control of this data so they can dictate how it’s used. With Fantasmo, they’ll be able to map spaces themselves and choose where robots can roam or if the next Pokémon GO can be played there.

“Only Apple, Google, and HERE Maps want this centralized. If this data sits on one of the big tech company’s servers, they could basically spy on anyone at any time,” says Detweiler. The prospect gets scarier when you imagine everyone wearing camera-equipped AR glasses in the future. “The AR cloud on a central server is Big Brother. It’s the end of privacy.”

Detweiler and his co-founder Dr. Ryan Measel first had the spark for Fantasmo as best friends at Drexel University. “We need to build Pokémon in real life! That was the genesis of the company,” says Detweiler. In the meantime he founded and sold LaunchRock, a 500 Startups company for creating “Coming Soon” sign-up pages for internet services.

After Measel finished his PhD, the pair started Fantasmo Studios to build augmented reality games like Trash Collectors From Space, which they took through the Techstars accelerator in 2015. “Trash Collectors was the first time we actually created a spatial map and used that to sync multiple people’s precise position up,” says Detweiler. But while building the infrastructure tools to power the game, they realized there was a much bigger opportunity to build the underlying maps for everyone’s games. Now the Santa Monica-based Fantasmo has 11 employees.

“It’s the internet of the real world,” says Detweiler. Fantasmo now collects geo-referenced photos, scans them for identifying features like walls and objects, and imports them into its point cloud model. Apps and robots equipped with the Fantasmo SDK can then pull in the spatial map for a specific location that’s more accurate than federally run GPS. That lets them peg AR objects to precise spots in your environment while making sure robots don’t run into things.
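
The flow Detweiler describes (geo-referenced photos in, a feature point cloud out, fine-grained localization for clients) can be sketched roughly as follows. Every class and function name here is invented for illustration; Fantasmo's CPS and SDK details aren't public.

```python
from dataclasses import dataclass

@dataclass
class GeoPhoto:
    image: bytes
    lat: float
    lon: float

def extract_features(photo: GeoPhoto) -> list[tuple[float, float, float]]:
    """Detect walls/objects in the photo and return estimated 3D points (stub)."""
    return [(photo.lat, photo.lon, 0.0)]  # placeholder points

class SpatialMap:
    def __init__(self):
        self.points = []  # the point-cloud model

    def ingest(self, photo: GeoPhoto):
        self.points.extend(extract_features(photo))

    def localize(self, query_image: bytes) -> tuple[float, float, float]:
        """Match a camera frame against the map, far finer-grained than GPS."""
        # A real system would do feature matching plus pose estimation here.
        return self.points[0] if self.points else (0.0, 0.0, 0.0)

world = SpatialMap()
world.ingest(GeoPhoto(image=b"...", lat=34.0195, lon=-118.4912))  # Santa Monica
pose = world.localize(query_image=b"...")
```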

Fantasmo identifies objects in geo-referenced photos to build a 3D model of the world

“I think this is the most important piece of infrastructure to be built during the next decade,” Detweiler declares. That potential attracted funding from TenOneTen, Freestyle Capital, LDV, NoName Ventures, Locke Mountain Ventures and some angel investors. But it’s also attracted competitors like Escher Reality, which was acquired by Pokémon GO parent company Niantic, and Ubiquity6, which has investment from top-tier VCs like Kleiner Perkins and First Round.

Google is the biggest threat, though, with its industry-leading traditional Google Maps, experience with indoor mapping through Tango, new VPS initiative and near limitless resources. Just yesterday, Google showed off an AR fox in Google Maps that you can follow for walking directions.

Fantasmo is hoping that Google’s size works against it. The startup sees a path to victory through interoperability and privacy. The big corporations want to control and preference their own platforms’ access to maps while owning the data about private property. Fantasmo wants to empower property owners to oversee that data and decide what happens to it. Measel concludes, “The world would be worse off if GPS was proprietary. The next evolution shouldn’t be any different.”

Maps walking navigation is Google’s most compelling use for AR yet

Google managed to elicit an audible gasp from the crowd at I/O today when it showed off a new augmented reality feature for Maps. It was a clear standout during a keynote that contained plenty of iterative updates to existing software, and it offered a key glimpse into what it will take to move AR from interesting novelty to compelling use case.

Along with the standard array of ARCore-based gaming offerings, the new AR mode for Maps is arguably one of the first truly indispensable real-world applications. As someone who spent the better part of an hour yesterday attempting to navigate the long, unfamiliar blocks of Palo Alto, California by following an arrow on a small blue circle, I can personally vouch for the usefulness of such an application.

It’s still early days — the company admitted that it’s playing around with a few ideas here. But it’s easy to see how offering visual overlays of a real-time image would make it a heck of a lot easier to navigate unfamiliar spaces.

In a sense, it's like a real-time version of Street View, combining real-world images with map overlays and location-based positioning. In the demo, a majority of the screen is devoted to the street image captured by the on-board camera. Turn-by-turn directions and large arrows are overlaid onto the video, while a small half-circle displays a sliver of the map to give you some context of where you are and how long it will take to get where you're going.

Of course, a system that's heavily reliant on visuals wouldn't make sense in the context of driving, unless it's presented in a kind of heads-up display. Here, however, it works seamlessly, assuming you're willing to look a bit dorky by holding up your phone in front of your face.

There are a lot of moving parts here too, naturally. In order to sync up to a display like this, the map is going to have to get things just right — and anyone who's ever walked through the city streets on Maps knows how often that can misfire. That's likely a big part of the reason Google wasn't really willing to share specifics with regard to timing. For now, we just have to assume this is a sort of proof of concept — along with the fun little walking fox guide the company trotted out, which had shades of a certain Johnny Cash-voiced coyote.

But if this is what trying to find my way in a new city looks like, sign me up.

What to expect at Google I/O this week

Google has been rolling out news at a steady rate since last week, in what feels like a bit of a last-minute clearinghouse ahead of tomorrow. The company's already taken the wraps off news about Android TV, Google Home, Assistant on Wear OS, you name it. If this were practically any other company, we'd be concerned that there's nothing left to discuss.

But this is Google. The next few days are going to be jam-packed with developer news and a whole lot of information around the company’s consumer-facing offerings over the next year and beyond. Android, Assistant, Wear OS, search and the like are going to take center stage when the company kicks off the festivities tomorrow at the Shoreline Amphitheatre in Mountain View.

You’d better believe we’ll be on-hand bringing you all of the relevant information as it breaks. In the meantime, here’s some of what you can expect from the big show.

Android P

The latest version of Google’s mobile operating system seems likely to take center stage here — be it Peppermint Patty, Pudding or Popsicle. The first developer preview of 9.0 dropped in March of this year, and I/O is likely to be the launching pad of the next big one. Given how much of Oreo’s changes happened behind the scenes, it stands to reason that we’re in for a more consumer-facing update for the OS this time out.

We’ve already seen a bit of those visual updates, including new notifications and some upgrades setting the stage for the nearly ubiquitous top notch. That, by most accounts, won’t be going away any time soon. “Material Design 2” is a buzzword that’s been floating around for a few months now to describe the first major overhaul to the OS’s aesthetic in about four years, bringing an overall flatter and more universal design language to Android.

We’ll also likely get some more insight into a gesture-based navigation that takes some cues from the iPhone X.

Assistant/Home

Assistant has been a linchpin in Google’s ecosystem play for a few years now, and its importance is only likely to grow. Announcements over the past couple of weeks have broadened the company’s Siri/Alexa competitor to even more categories, including Android TV and Wear OS, so probably don’t do an Assistant-related drinking game tomorrow, unless you’re gunning for alcohol poisoning.

It also seems fairly likely that we’ll see more devices on this front. A second version of Google Home seems overdue. That could well get an Echo-like update, bringing it up to speed with the rest of the line. And what of all of those Smart Displays the company talked up back at CES? Things have been pretty quiet on that front — perhaps a little too quiet.

Expect partnerships galore. The company showed off a Fandango Action just this week — and that’s likely to only be the tip of the iceberg.

AR/VR/AI

Artificial intelligence has also been gaining plenty of steam on the Google campus. AI and ML have been the driving forces in key offerings like Translate, Lens and, of course, Assistant, which the company is looking to truly distinguish from the competition. The company’s TensorFlow machine learning engine is going to get a lot of attention.

Google also just recently took the wraps off the Lenovo-branded Daydream headset, setting the stage for some big VR talk at this week’s show. Of course, the company seems even more content to focus on augmented reality these days. The tech has been a focus recently on Pixel devices, as the company looks to distinguish ARCore from Apple’s ARKit. Now’s the time for the company to really double down on what’s becoming a more and more important piece of mobile tech.

Wear OS

This is a tough one. Google already revealed some Assistant features for the newly rebranded wearable operating system, perhaps in an attempt to build a little excitement around what, by most accounts, has been a pretty stagnant product category for the company. Wearables in general have been on a bit of a downward trajectory and Google specifically hasn’t done a lot to change that.

The company really needs to come in with guns blazing here and reassert itself in the category. Assistant integration will do a bit to help invigorate the company, but expect to see Google do a much better job laying out what the future of wearables will look like under the new rebrand.

Google I/O kicks off tomorrow. You can follow along here.

Snapchat launches AR selfie games called Snappables

Snapchat wants to let you play its augmented reality Lenses, not just play dress-up. Today it launched Snappables — AR games that use your touch, motion and facial expressions to compete for high scores or in literal head-to-head multiplayer match-ups. Snappables live alongside Snapchat’s other Lenses and are rolling out globally this week. New games will be released each week, while favorites will stick around.

These are Snapchat’s first collaborative or shared Lenses that let you interact with another friend on their own phone, which could create new opportunities for the app in the future. Some of the first Snappables previewed by Snapchat include an Asteroids-style space shooter, a bubble gum popping contest, a weight lifting one you play by straining your forehead, a kiss-blowing game, an egg catching competition and a dance party.

The Killer Features blog first spotted Snappables in Snapchat's code, though it originally thought they were a collaborative Snap creation option. Snapchat acquired game engine PlayCanvas last month, but it's unclear if that contributed to the Snappables experience. The games look similar to Tribe's multiplayer selfie video chat games we wrote about this month and predicted Snapchat would copy.

Snapchat’s new bubble gum Snappable game

These aren’t Snapchat’s first selfie games, though. Back in 2016, it tried a Kraft Mac & Cheese noodle catching game, and a holiday elf skiing game that used your face. It’s also worked with partners like Gatorade to build ads that open up to interactive experiences that live inside Snapchat, like a Serena Williams tennis game.

Snapchat first tested selfie games like this Mac & Cheese noodle catcher back in 2016

To play Snappables, you select one of the game Lenses from the Snapchat camera and follow the on-screen instructions. Some you play solo and try to get the highest score, while others let you invite friends to play simultaneously. You can send a friend a Snap of you playing, which they can use to jump in and play too.

Snapchat could use Snappables to strengthen growth after years of battling Instagram for users and a big redesign that’s received harsh reviews. I can imagine more art-based co-creation Snappables coming in the future, where you cooperate to create a masterpiece. Of course, Instagram probably won’t be far behind in offering games inside Stories.

If the goal of apps like Snapchat is to make people feel like they're together even when they're apart, games could help achieve that feeling of co-presence. Sometimes you don't have anything to talk about or show off. That's partly why Snapchat got into augmented reality in the first place — to make life more interesting and shareable. But with the challenge, competition and excitement inherent in games, Snappables could help people make memories together no matter the distance in between.

Here’s more video and photos showing off Snappables:

Facebook Stories adds funky AR drawing and Instagram’s Boomerang

You’ll soon be able to draw on the world around you and shoot back-and-forth Instagram Boomerang GIFs with the Facebook Camera. Bringing additional creative tools to the Facebook Camera could make it a more popular place to shoot content and help the company compete with Snapchat.

"We wanted to give people an easy way to create with augmented reality and draw in the world around them," says John Barnett, a Facebook Camera product manager, about the feature it calls "3D drawing." It's rolling out to users over the coming weeks. Matt Navarra first spotted the features.

With AR drawing, you can scribble on the world around you, then move your camera and see the markings stay in place. It’s a fun way to add graffiti that only exists inside your screen. You can add the drawings before or while you’re recording, allowing you to draw on something out of frame, then pan or unzoom to reveal it. Facebook will eventually add more brushes beyond the pastel gradient colors seen here.

Facebook tells me the technology understands the corners and objects in the room to build a 3D representation of the space. Facebook could then use that to detect surfaces like walls and tables and wrap the drawing onto them. Currently, it only does that when it's confident about the object recognition, such as in optimal light conditions.
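
For a sense of the mechanics, here is a simplified sketch of how strokes can stay put in the world: the tap point is lifted into 3D at draw time, then re-projected every frame with the current camera pose. The pinhole-camera math and the numbers below are illustrative assumptions, not Facebook's implementation.

```python
import numpy as np

def unproject(pixel, depth, K, cam_pose):
    """Lift a 2D pixel (u, v) at a given depth into a 3D world point."""
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth
    R, t = cam_pose  # world-from-camera rotation and translation
    return R @ ray_cam + t

def project(point_world, K, cam_pose):
    """Re-project a stored 3D stroke point into the current camera frame."""
    R, t = cam_pose
    p_cam = R.T @ (point_world - t)  # camera-from-world
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # toy intrinsics
pose_draw = (np.eye(3), np.zeros(3))
stroke_world = unproject((400, 250), depth=1.5, K=K, cam_pose=pose_draw)

# Later, the camera has moved 10 cm to the right; the stroke shifts on screen
# but stays pinned to the same spot in the room.
pose_later = (np.eye(3), np.array([0.1, 0.0, 0.0]))
print(project(stroke_world, K, pose_later))
```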

Since drawing is a universal language, the feature could make AR easy to use for younger users and Internet novices. Facebook launched its AR effects at F8 last April, and has recently added AR tracker target experiences that are triggered by real-world posters or QR codes. It all started with the company acquiring fledgling AR masks startup MSQRD in 2016.

Facebook added looping GIF creation to the Facebook Camera a year ago, but those can feel a bit jarring since they start back at the beginning once they end. Some users no longer have that GIF option, so it's potentially being replaced by Boomerang's established brand and silkier back-and-forth animated video clips. Facebook confirms that this feature is now rolling out to the Facebook Camera.

As we reported last week, Facebook is determined to make Stories work. Despite the criticism that it's a rip-off of Snapchat and redundant given Instagram Stories, Facebook is trying new ways to make Stories more popular and accessible. That includes tests of Stories as the default destination for content shot with the Facebook Camera, showing bigger tiles with previews of Stories atop the News Feed, and showing a camera and camera roll preview window when you open the status composer. Those, combined with these new features, could give Facebook Stories a boost in utility and visibility.

Facebook believes social media is on an inevitable journey from text to photos to videos to Stories equipped with augmented reality. Since Snapchat refused its acquisition offers, Facebook is now on a quest to evolve into an AR company rather than having to buy a big one. It remains to be seen whether users think AR is a novelty or a core utility, but Facebook won’t wait to find out.
