artificial intelligence

Facebook rolls out 3D photos that use AI to simulate depth


What if you could peek behind what’s in your photos, as if moving your head to see what’s inside a window? That’s the futuristic promise of Facebook 3D photos. After announcing the feature at F8 in May, Facebook is now rolling out 3D photos, which add make-believe depth to your iPhone portrait mode shots. Shoot one, tap the new 3D photos option in the status update composer and select a portrait mode photo. Viewers in the desktop or mobile News Feed, as well as in VR through Oculus Go’s browser or Firefox on Oculus Rift, can then tap or click and drag, or move their head, to see the photo’s depth. Everyone can now view 3D photos, and the ability to create them will open to everyone in the coming weeks.

Facebook is constantly in search of ways to keep the News Feed interesting. What started with text and photos eventually expanded into videos and live broadcasts, and now to 360 photos and 3D photos. Facebook hopes if it’s the exclusive social media home for these new kinds of content, you’ll come back to explore and rack up some ad views in the meantime. Sometimes that means embracing mind-bending new formats like VR memories that recreate a scene in digital pointillism based on a photo.

So how exactly do 3D photos work? Our writer Devin Coldewey did a deep-dive earlier this year into how Facebook uses AI to stitch together real layers of the photo with what it infers should be there if you tilted your perspective. Since portrait mode fires off both of a phone’s cameras simultaneously, parallax differences can be used to recreate what’s behind the subject.
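The geometry behind that inference is classic stereo vision: for two lenses a known distance apart, how far a point shifts between the two images (its disparity) is inversely proportional to its depth. A minimal sketch of the idea, with made-up calibration numbers rather than real iPhone values, and certainly not Facebook's actual pipeline:

```python
import numpy as np

# Stereo depth from parallax: depth = focal_length * baseline / disparity.
# The numbers below are illustrative placeholders, not real calibration data.
focal_length_px = 1500.0   # focal length expressed in pixels
baseline_m = 0.01          # ~1 cm between the two camera lenses

# Per-pixel disparity map: how far each pixel shifted between the two views.
disparity_px = np.array([[30.0, 15.0],
                         [10.0,  5.0]])

# Nearer objects shift more between the views, so larger disparity
# maps to smaller depth.
depth_m = focal_length_px * baseline_m / disparity_px
print(depth_m)  # [[0.5 1. ] [1.5 3. ]]
```

With a dense disparity map like this, nearer layers can be separated from farther ones, and the gaps revealed when the viewpoint tilts are what the AI has to hallucinate.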

To create the best 3D photos with your iPhone 7+, 8+, X or XS (more phones will work with the feature in the future), Facebook recommends you keep your subject three to four feet away, and have things in the foreground and background. Distinct colors will make the layers separate better, and transparent or shiny objects like glass or plastic can throw off the AI.

Originally, the idea was to democratize the creation of VR content. But with headset penetration still relatively low, it’s the ability to display depth in the News Feed that will have the greatest impact for Facebook. In an era where Facebook’s cool is waning, hosting next-generation art forms could make it a must-visit property even as more of our socializing moves to Instagram.

Powered by WPeMatico

Apple needs a feature like Google’s Call Screen


Google just one-upped Apple in a significant way by addressing a problem plaguing U.S. cellphone owners: spam calls. The company’s new Pixel 3 flagship Android smartphone is the first to introduce a new call screening feature that leverages the built-in Google Assistant. The screening service transcribes the caller’s request in real time, allowing you to decide whether or not to pick up, and gives you a way to respond.

Despite the numerous leaks about Google’s new hardware, Call Screen and the launch of Duplex for restaurant reservations were big surprises coming from Google’s hardware event yesterday.

Arguably, they’re even more important developments than fancy new camera features – even if Group Selfie and Top Shot are cool additions to Google’s new phone.

Apple has nothing like this call screening feature, only third-party call blocking apps – which are also available on Android, of course.

Siri today simply isn’t capable of answering phones on your behalf, politely asking the caller what they want, and transcribing their response instantly. It needs to catch up, and fast.

Half of calls will be spam in 2019

Call Screen, based on Google’s Duplex technology, is a big step for our smart devices: one where we’re not just querying the Assistant for help with various tasks, or for the day’s news and weather, but where the phone’s assistant is helping with real-world problems.

In addition to calling restaurants to inquire about tables, Assistant will now help save us from the increasing barrage of spam calls.

This is a massive problem that every smartphone owner can relate to, and one the larger mobile industry has so far failed to solve.

Nearly half of all cellphone calls next year will be from scammers. And their tactics have gotten much worse in recent months.

They now often trick people by claiming to be the IRS, a bank, government representatives, and more. They pretend you’re in some sort of legal trouble. They say someone has stolen your bank card. They claim you owe taxes. Plus, they often use phone number spoofing tricks to make their calls appear local in order to get recipients to pick up.

The national Do-Not-Call registry hasn’t solved the problem. And despite large FCC fines, the epidemic continues.

A.I. handles the spammers 

In the absence of an industry solution, Google has turned to A.I.

The system has been designed to sound more natural, stepping in to do the sort of tasks we don’t want to – like calling for bookings, or screening our calls by first asking “who is this, please?” 

With Call Screen, as Google explained yesterday, Pixel device owners will be able to tap a button when a call comes in to send it to the new service. Google Assistant will answer the call for you, saying: “Hi, the person you’re calling is using a screening service from Google, and will get a copy of this conversation. Go ahead and say your name and why you’re calling.”

The caller’s response is then transcribed in real time on your screen.

These transcripts aren’t currently being saved, but Google says they could be stored in your Call History in the future.

To handle the caller, you can tap a variety of buttons to continue or end the conversation. Based on the demo and support documentation, these include things like: “Who is this?,” “I’ll call you back,” “Tell me more,” “I can’t understand,” or “Is it urgent?”

You can also use the Assistant to say things like, “Please remove the number from your contact list. Thanks and goodbye,” the demo showed, after the recipient hit the “Report as spam” button.
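Pieced together from the demo and support documentation, the screening flow is a simple loop: play a greeting, transcribe the caller, relay whichever canned response the user taps, and hang up on spam. A toy model of that flow; the greeting and goodbye lines come from Google's demo, while the flow logic and function names here are purely illustrative, not Google's implementation:

```python
# Toy sketch of a Call Screen-style interaction. Illustrative only.
GREETING = ("Hi, the person you're calling is using a screening service "
            "from Google, and will get a copy of this conversation. "
            "Go ahead and say your name and why you're calling.")

def screen_call(caller_replies, user_choices):
    """Simulate a screened call; returns the transcript shown on screen."""
    transcript = [("assistant", GREETING)]
    for reply, choice in zip(caller_replies, user_choices):
        transcript.append(("caller", reply))    # transcribed in real time
        transcript.append(("assistant", choice))
        if choice == "Report as spam":          # spam report ends the call
            transcript.append(("assistant",
                               "Please remove the number from your contact "
                               "list. Thanks and goodbye."))
            break
    return transcript

transcript = screen_call(
    ["This is about your car's extended warranty."],
    ["Report as spam"])
for speaker, line in transcript:
    print(f"{speaker}: {line}")
```

Everything of substance here is the branching on the user's tapped response; the hard parts Google actually solved, on-device speech recognition and natural-sounding synthesis, are exactly what this sketch leaves out.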

While Google’s own Google Voice technology has been able to screen incoming calls, this involved little more than asking for the caller’s name. Call Screen is next-level stuff, to put it mildly.

And it’s all taking place on the device, using A.I. – it doesn’t need to use your Wi-Fi connection or your mobile data, Google says.

As Call Screen is adopted at scale, Google will have effectively built out its own database of scammers. It could then feasibly block spam calls or telemarketers on your behalf as an OS-level feature at some point in the future.

“You’ll never have to talk to another telemarketer,” said Google PM Liza Ma at the event yesterday, followed by cheers and applause – one of the few times the audience even clapped during this otherwise low-key press conference.

Google has the better A.I. Phone

The news of Call Screen, and of Duplex more broadly, is another shot fired across Apple’s bow.

Smartphone hardware is basically good enough, and has been for some time. Apple and Google’s modern smartphones take great photos, too. New developments on the camera front matter more to photography enthusiasts than to the average user. The phones are fine. The cameras are fine. So what else can the phones do?

The next battle for smartphones is going to be about A.I. technology.

Apple is aware that’s the case.

In June, the company introduced what we called its “A.I. phone” – an iPhone infused with Siri smarts to personalize the device and better assist. It allows users to create A.I.-powered workflows to automate tasks, to speak with Siri more naturally using commands they invent, and to let apps make suggestions instead of sending interruptive notifications.

But much of Siri’s capabilities still involve manual tweaking on users’ parts.

You record custom Siri voice commands to control apps (and then have to remember what your Siri catch phrase is in order to use them). Workflows have to be pinned together in a separate Siri Shortcuts app that’s over the heads of anyone but power users.

These are great features for iPhone owners, to be sure, but they’re not exactly automating A.I. technology in a seamless way. They’re Apple’s first steps towards making A.I. a bigger part of what it means to use an iPhone.

Call Screen, meanwhile, is a use case for A.I. that doesn’t require a ton of user education or manual labor. Even if you didn’t know it existed, pushing a “screen call” button when the phone rings is fairly straightforward stuff.

And it’s not going to be just a Pixel 3 feature.

Google said Pixel 3 owners in the U.S. are just getting it first. It will also roll out to older Pixel devices next month (in English). Presumably, however, it will come to Android itself in time, once these early tests wrap.

After all, if the mobile OS battle is going to be over A.I. going forward, there’s no reason to keep A.I. advancements tied to only Google’s own hardware devices.


Comparing Google Home Hub vs Amazon Echo Show 2 vs Facebook Portal


The war for the countertop has begun. Google, Amazon and Facebook all revealed their new smart displays this month. Each hopes to become the center of your Internet of Things-equipped home and a window to your loved ones. The $149 Google Home Hub is a cheap and privacy-safe smart home controller. The $229 Amazon Echo Show 2 gives Alexa a visual complement. And the $199 Facebook Portal and $349 Portal+ offer a Smart Lens that automatically zooms in and out to keep you in frame while you video chat.

For consumers, the biggest questions to consider are how much you care about privacy, whether you really video chat, which smart home ecosystem you’re building around and how much you want to spend.

  • For the privacy obsessed, Google’s Home Hub is the only one without a camera and it’s dirt cheap at $149.
  • For the privacy agnostic, Facebook’s Portal+ offers the best screen and video chat functionality.
  • For the chatty, Amazon Echo Show 2 can do message and video chat over Alexa, call phone numbers and is adding Skype.

If you want to go off-brand, there’s also the Lenovo Smart Display, with stylish hardware in a $249 10-inch 1080p version and a $199 8-inch 720p version. And for the audiophile, there’s the $199 JBL Link View. While those hit the market earlier than the platform-owned versions we’re reviewing here, they’re not likely to benefit from the constant iteration Google, Amazon and Facebook are working on for their tabletop screens.

Here’s a comparison of the top smart displays, including their hardware specs, unique software, killer features and pros and cons:



The Google Assistant gets more visual


Google today is launching a major visual redesign of its Assistant experience on phones. While the original vision of the Assistant focused mostly on voice, half of all interactions with the Assistant actually include touch. So with this redesign, Google acknowledges that and brings more and larger visuals to the Assistant experience.

If you’ve used one of the recent crop of Assistant-enabled smart displays, then some of what’s new here may look familiar. You now get controls and sliders to manage your smart home devices, for example. Those include sliders to dim your lights and buttons to turn them on or off. There also are controls for managing the volume of your speakers. (Update: Google tells me the update will roll out over the course of the next few weeks, with the iOS release depending on Apple’s app store review process.)

Even in cases where the Assistant already offered visual feedback — say, when you ask for the weather — the team has now also redesigned those results and brought them more in line with what users are already seeing on smart displays from the likes of Lenovo and LG. On the phone, though, that experience still feels a bit more pared down than on those larger displays.

With this redesign, which is going live on both Android and in the iOS app today, Google is also bringing a little bit more of the much-missed Google Now experience back to the phone. While you could already bring up a list of upcoming appointments, commute info, recent orders and other information about your day from the Assistant, that feature was hidden behind a rather odd icon that many users surely ignored. Now, after you’ve long-pressed the home button on your Android phone, you can swipe up to get that same experience. I’m not sure that’s more discoverable than previously, but Google is saving you a tap.

In addition to the visual redesign of the Assistant, Google also today announced a number of new features for developers. Unsurprisingly, one part of this announcement focuses on allowing developers to build their own visual Assistant experiences. Google calls these “rich responses” and provides developers with a set of pre-made visual components that they can easily use to extend their Assistant actions. And because nothing is complete without GIFs, they can now use GIFs in their Assistant apps, too.
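For context, a webhook payload for one of these rich responses combines a spoken/simple response with visual components like a basic card. The sketch below approximates the 2018-era Actions on Google JSON format from memory; treat the field names and the example values as assumptions and check Google's documentation for the real schema:

```python
import json

# Sketch of an Actions on Google "rich response": a simple (spoken) response
# plus a basic card. Field names approximate Google's webhook format;
# the title, text and URL are placeholder values.
rich_response = {
    "richResponse": {
        "items": [
            {"simpleResponse": {
                "textToSpeech": "Here's today's weather.",
                "displayText": "Here's today's weather."}},
            {"basicCard": {
                "title": "Sunny, 72F",
                "formattedText": "Clear skies all afternoon.",
                "image": {
                    "url": "https://example.com/sun.png",  # placeholder
                    "accessibilityText": "Sun icon"}}},
        ]
    }
}

print(json.dumps(rich_response, indent=2))
```

The pre-made components mean a developer assembles structures like this rather than drawing any UI; the Assistant renders them appropriately for phones and smart displays.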

But in addition to these new options for creating more visual experiences, Google is also making it a bit easier for developers to take their users’ money.

While they could already sell physical goods through their Assistant actions, starting today, they’ll also be able to sell digital goods. Those can be one-time purchases for a new level in a game or recurring subscriptions. Headspace, which has long offered a very basic Assistant experience, now lets you sign up for subscriptions right from the Assistant on your phone, for example.

Selling digital goods directly in the Assistant is one thing, but those purchases have to sync across different applications, too, so Google today is also launching a new sign-in service for the Assistant that lets users log in and link their accounts.

“In the past, account linking could be a frustrating experience for your users; having to manually type a username and password — or worse, create a new account — breaks the natural conversational flow,” the company explains. “With Google Sign-In, users can now create a new account with just a tap or confirmation through their voice. Most users can even link to their existing accounts with your service using their verified email address.”

Starbucks has already integrated this feature into its Assistant experience to give users access to their rewards account. Adding the new Sign-In for the Assistant has almost doubled its conversion rate.


Apple expands Business Chat with new businesses and additional countries


Apple Business Chat launched earlier this year as a way for consumers to communicate directly with businesses on Apple’s messaging platform. Today the company announced it was expanding the program to add new businesses and support for additional countries.

When it launched in January, business partners included Discover, Hilton, Lowe’s and Wells Fargo. Today’s announcement includes the likes of Burberry, West Elm, Kimpton Hotels, and Vodafone Germany.

The program, which remains in beta, added 15 new companies today in the US and 15 internationally, including in the UK, Japan, Hong Kong, Singapore, Canada, Italy, Australia and France.

Since the launch, companies have been coming up with creative ways to interact directly with customers in a chat setting that many users prefer over telephone trees and staticky wait music (I know I do).

For instance, Four Seasons, which launched Business Chat in July, is expanding usage to 88 properties across the globe with the ability to chat in more than 100 languages with reported average response times of around 90 seconds.

Apple previously added features like Apple Pay to iMessage to make it easy for consumers to transact directly with businesses in a fully digital way. If, for instance, your customer service rep helps you find the perfect item, you can purchase it right then and there with Apple Pay, without having to supply a credit card in the chat interface.

Photo: Apple

What’s more, the CSR could share a link, photo or video to let you see more information on the item you’re interested in or to help you fix a problem with an item you already own. All of this can take place in iMessage, a tool millions of iPhone and iPad owners are comfortable using with friends and family.

To interact with Business Chat, customers are given messaging as a choice in contact information. If they touch this option, the interaction opens in iMessage and customers can conduct a conversation with the brand’s CSR, just as they would with friends.

Touch Message to move to iMessage conversation. Photo: Apple

This link to customer service and sales through a chat interface also fits well with the partnership with Salesforce announced last week and with the company’s overall push to the enterprise. Salesforce president and chief product officer, Bret Taylor described how Apple Business Chat could integrate with Salesforce’s Service Bot platform, which was introduced in 2017 to allow companies to build integrated automated and human response systems.

The bots could provide a first level of service and if the customer required more personal support, there could be an option to switch to Apple Business Chat.

Apple Business Chat requires iOS 11.3 or higher.


SwiftKey on Android now has two-way translation baked in. Qué bien


The Internet is of course amazing if you want to send messages across borders. But different languages can still put a wrinkle in your conversational flow, even with all the handy translation apps also on tap to help turn zut alors into shucks!

So Microsoft-owned SwiftKey is probably still onto something with a new feature launching today in its Android app that bakes two-way translation right into the keyboard — which should save a lot of tedious copy-pasting, at least if you’re frequently conversing across language barriers.

It’s not clear whether the translation feature will be coming to SwiftKey on iOS too (we’ve asked and will update with any additional details).

Microsoft Translator is the underlying technology powering the core linguistic automagic. So SwiftKey’s parent is intimately involved in this feature addition.

Microsoft’s tech does continue to exist in a standalone app form too, though. And that app is getting a cross-promotional push, via the SwiftKey addition, with the company touting an added benefit for users if they install Microsoft Translator — as the keyboard translation feature will then work offline.

(SwiftKey had some 300M active users at the time of its acquisition by Microsoft, three years ago, so the size of that promotional push for Translator is potentially pretty large.)

The translation option is being added to SwiftKey via a relatively recently launched Toolbar that lets users customize the keyboard — such as by adding stickers, location or calendar.

To access the Toolbar (and the various add-ons nested within it) users tap on the ‘+’ in the upper left corner.

With translation enabled, users of the next-word-predicting keyboard can switch between input and output languages to turn incoming missives from any of more than 60 languages into another tongue at the tap of a button, and translate their outgoing replies back the other way without needing to know how to write in that other language.

Supported languages include Italian, Spanish, German, Russian and Turkish, to name a few.

And while the machine translation technology is doing away with the immediate need for human foreign language expertise, there’s at least a chance app users will learn a bit as they go along — i.e. as they watch their words get rendered in another tongue right before their eyes.

As tech magic goes, translation is hard to beat, even though machine translation can often still be very rough around the edges. But here, for helping with everyday chatting in mobile messaging apps, there’s no doubt it will be a great help.

Commenting on the new feature in a statement, Colleen Hall, senior product manager at SwiftKey, said: “The integration of Microsoft Translator into SwiftKey is a great, natural fit, enhancing the raft of language-focused features we know our users love to use.”


Happy 10th anniversary, Android


It’s been 10 years since Google took the wraps off the G1, the first Android phone. Since that time the OS has grown from buggy, nerdy iPhone alternative to arguably the most popular (or at least populous) computing platform in the world. But it sure as heck didn’t get there without hitting a few bumps along the road.

Join us for a brief retrospective on the last decade of Android devices: the good, the bad, and the Nexus Q.

HTC G1 (2008)

This is the one that started it all, and I have a soft spot in my heart for the old thing. Also known as the HTC Dream — this was back when we had an HTC, you see — the G1 was about as inauspicious a debut as you can imagine. Its full keyboard, trackball, slightly janky slide-up screen (crooked even in official photos), and considerable girth marked it from the outset as a phone only a real geek could love. Compared to the iPhone, it was like a poorly dressed whale.

But in time its half-baked software matured and its idiosyncrasies became apparent for the smart touches they were. To this day I occasionally long for a trackball or full keyboard, and while the G1 wasn’t pretty, it was tough as hell.

Moto Droid (2009)

Of course, most people didn’t give Android a second look until Moto came out with the Droid, a slicker, thinner device from the maker of the famed RAZR. In retrospect, the Droid wasn’t that much better or different than the G1, but it was thinner, had a better screen, and had the benefit of an enormous marketing push from Motorola and Verizon. (Disclosure: Verizon owns Oath, which owns TechCrunch, but this doesn’t affect our coverage in any way.)

For many, the Droid and its immediate descendants were the first Android phones they had — something new and interesting that blew the likes of Palm out of the water, but also happened to be a lot cheaper than an iPhone.

HTC/Google Nexus One (2010)

This was the fruit of the continued collaboration between Google and HTC, and the first phone Google branded and sold itself. The Nexus One was meant to be the slick, high-quality device that would finally compete toe-to-toe with the iPhone. It ditched the keyboard, got a cool new OLED screen, and had a lovely smooth design. Unfortunately it ran into two problems.

First, the Android ecosystem was beginning to get crowded. People had lots of choices and could pick up phones for cheap that would do the basics. Why lay the cash out for a fancy new one? And second, Apple would shortly release the iPhone 4, which — and I was an Android fanboy at the time — objectively blew the Nexus One and everything else out of the water. Apple had brought a gun to a knife fight.

HTC Evo 4G (2010)

Another HTC? Well, this was prime time for the now-defunct company. They were taking risks no one else would, and the Evo 4G was no exception. It was, for the time, huge: the iPhone had a 3.5-inch screen, and most Android devices weren’t much bigger, if they weren’t smaller.

The Evo 4G somehow survived our criticism (our alarm now seems extremely quaint, given the size of the average phone now) and was a reasonably popular phone, but ultimately is notable not for breaking sales records but breaking the seal on the idea that a phone could be big and still make sense. (Honorable mention goes to the Droid X.)

Samsung Galaxy S (2010)

Samsung’s big debut made a hell of a splash, with custom versions of the phone appearing in the stores of practically every carrier, each with their own name and design: the AT&T Captivate, T-Mobile Vibrant, Verizon Fascinate, and Sprint Epic 4G. As if the Android lineup wasn’t confusing enough already at the time!

Though the S was a solid phone, it wasn’t without its flaws, and the iPhone 4 made for very tough competition. But strong sales reinforced Samsung’s commitment to the platform, and the Galaxy series is still going strong today.

Motorola Xoom (2011)

This was an era in which Android devices were responding to Apple, and not vice versa as we find today. So it’s no surprise that hot on the heels of the original iPad we found Google pushing a tablet-focused version of Android with its partner Motorola, which volunteered to be the guinea pig with its short-lived Xoom tablet.

Although there are still Android tablets on sale today, the Xoom represented a dead end in development — an attempt to carve a piece out of a market Apple had essentially invented and soon dominated. Android tablets from Motorola, HTC, Samsung and others were rarely anything more than adequate, though they sold well enough for a while. This illustrated the impossibility of “leading from behind” and prompted device makers to specialize rather than participate in a commodity hardware melee.

Amazon Kindle Fire (2011)

And who better to illustrate than Amazon? Its contribution to the Android world was the Fire series of tablets, which differentiated themselves from the rest by being extremely cheap and directly focused on consuming digital media. Just $200 at launch and far less later, the Fire devices catered to the regular Amazon customer whose kids were pestering them about getting a tablet on which to play Fruit Ninja or Angry Birds, but who didn’t want to shell out for an iPad.

Turns out this was a wise strategy, and of course one Amazon was uniquely positioned to do with its huge presence in online retail and the ability to subsidize the price out of the reach of competition. Fire tablets were never particularly good, but they were good enough, and for the price you paid, that was kind of a miracle.

Xperia Play (2011)

Sony has always had a hard time with Android. Its Xperia line of phones was for years considered competent — I owned a few myself — and arguably industry-leading in the camera department. But no one bought them. And the one they bought the least of, at least relative to the hype it got, has to be the Xperia Play. This thing was supposed to be a mobile gaming platform, and the idea of slide-out game controls is great — but the whole thing basically cratered.

What Sony had illustrated was that you couldn’t just piggyback on the popularity and diversity of Android and launch whatever the hell you wanted. Phones didn’t sell themselves, and although the idea of playing Playstation games on your phone might have sounded cool to a few nerds, it was never going to be enough to make it a million-seller. And increasingly that’s what phones needed to be.

Samsung Galaxy Note (2012)

As a sort of natural climax to the swelling phone trend, Samsung went all out with the first true “phablet,” and despite groans of protest the phone not only sold well but became a staple of the Galaxy series. In fact, it wouldn’t be long before Apple would follow on and produce a Plus-sized phone of its own.

The Note also represented a step towards using a phone for serious productivity, not just everyday smartphone stuff. It wasn’t entirely successful — Android just wasn’t ready to be highly productive — but in retrospect it was forward thinking of Samsung to make a go at it and begin to establish productivity as a core competence of the Galaxy series.

Google Nexus Q (2012)

This abortive effort by Google to spread Android out into a platform was part of a number of ill-considered choices at the time. No one really knew, apparently at Google or anywhere else in the world, what this thing was supposed to do. I still don’t. As we wrote at the time:

Here’s the problem with the Nexus Q: it’s a stunningly beautiful piece of hardware that’s being let down by the software that’s supposed to control it.

It was made, or rather nearly made in the USA, though, so it had that going for it.

HTC First — “The Facebook Phone” (2013)

The First got dealt a bad hand. The phone itself was a lovely piece of hardware with an understated design and bold colors that stuck out. But its default launcher, the doomed Facebook Home, was hopelessly bad.

How bad? Announced in April, discontinued in May. I remember visiting an AT&T store during that brief period, and even then the staff had been instructed in how to disable Facebook’s launcher and reveal the perfectly good phone beneath. The good news was that so few of these phones sold new that the entire stock started selling for peanuts on eBay and the like. I bought two and used them for my early experiments in ROMs. No regrets.

HTC One/M8 (2014)

This was the beginning of the end for HTC, but their last few years saw them update their design language to something that actually rivaled Apple. The One and its successors were good phones, though HTC oversold the “Ultrapixel” camera, which turned out to not be that good, let alone iPhone-beating.

As Samsung increasingly dominated, Sony plugged away, and LG and Chinese companies increasingly entered the fray, HTC was under assault and even a solid phone series like the One couldn’t compete. 2014 was a transition period with old manufacturers dying out and the dominant ones taking over, eventually leading to the market we have today.

Google/LG Nexus 5X and Huawei Nexus 6P (2015)

This was the line that brought Google into the hardware race in earnest. After the bungled Nexus Q launch, Google needed to come out swinging, and they did that by marrying their more pedestrian hardware with some software that truly zinged. Android 5 was a dream to use, Marshmallow had features that we loved … and the phones became objects that we adored.

We called the 6P “the crown jewel of Android devices”. This was when Google took its phones to the next level and never looked back.

Google Pixel (2016)

If the Nexus was, in earnest, the starting gun for Google’s entry into the hardware race, the Pixel line could be its victory lap. It’s an honest-to-god competitor to the Apple phone.

Gone are the days when Google is playing catch-up on features to Apple; instead, Google’s a contender in its own right. The phone’s camera is amazing. The software works relatively seamlessly (bring back guest mode!), and the phone’s size and power are everything anyone could ask for. The sticker price, like that of Apple’s newest iPhones, is still a bit of a shock, but this phone is the teleological endpoint of the Android quest to rival its famous, fruitful contender.

The rise and fall of the Essential phone

In 2017 Andy Rubin, the creator of Android, debuted the first fruits of his new hardware startup studio, Playground Global, with the launch of Essential (and its first phone). The company had raised $300 million to bring the phone to market, and — as the first hardware device to come to market from Android’s creator — it was being heralded as the next new thing in hardware.

Here at TechCrunch, the phone received mixed reviews. Some on staff hailed the phone as the achievement of Essential’s stated vision of creating a “lovemark” for Android smartphones, while others on staff found the device… inessential.

Ultimately, the market seemed to agree. Four months ago plans for a second Essential phone were put on hold, while the company explored a sale and pursued other projects. There’s been little update since.

A Cambrian explosion in hardware

In the ten years since its launch, Android has become the most widely used operating system for hardware. Some version of its software can be found in roughly 2.3 billion devices around the world, and it’s powering a technology revolution in countries like India and China — where mobile operating systems and access are the default. As it enters its second decade, there’s no sign that anything is going to slow its growth (or dominance) as the operating system for much of the world.

Let’s see what the next ten years bring.

Powered by WPeMatico

‘Jackrabbot 2’ takes to the sidewalks to learn how humans navigate politely

Posted by | artificial intelligence, Gadgets, robotics, science, TC | No Comments

Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.

“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”

Of course there are practical applications pertaining to last mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.
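The “politeness” the project describes can be thought of as an extra term in a path planner’s cost function. The sketch below is purely illustrative — Jackrabbot actually learns these behaviors with deep learning from observed pedestrians, and the comfort radius and penalty weight here are invented numbers — but it shows how a planner might trade a longer route against intruding on someone’s personal space.

```python
import math

# Illustrative only: a hand-tuned "personal space" penalty, not the learned
# model the Jackrabbot team trains. A planner adds this penalty to its path
# cost so routes that pass close to pedestrians score worse.
def personal_space_cost(robot_xy, person_xy, comfort_radius=1.2):
    """Penalty that grows as the robot intrudes on a person's comfort zone
    (meters); zero once the robot is at or beyond the radius."""
    dist = math.hypot(robot_xy[0] - person_xy[0], robot_xy[1] - person_xy[1])
    if dist >= comfort_radius:
        return 0.0
    # Quadratic ramp: gentle near the edge, steep right next to the person.
    return ((comfort_radius - dist) / comfort_radius) ** 2

def path_cost(path, people, social_weight=5.0):
    """Sum of step lengths plus weighted social penalties along a path."""
    cost = 0.0
    for i, point in enumerate(path):
        if i > 0:
            prev = path[i - 1]
            cost += math.hypot(point[0] - prev[0], point[1] - prev[1])
        for person in people:
            cost += social_weight * personal_space_cost(point, person)
    return cost
```

With a pedestrian standing mid-route, a path that detours around them can come out cheaper overall than the shorter path that walks straight through their personal space — which is exactly the behavior the robot is meant to learn.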

The first robot was put to work in 2016 and has been hard at work building a model of how humans (well, mostly undergrads) walk around safely, avoiding one another while taking efficient paths, and signaling what they’re doing the whole time. But technology has advanced so quickly that a new iteration was called for.

The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle

The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360 degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360 degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.

Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”

This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.

The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.

Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”

Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.

Powered by WPeMatico

Facebook rolls out photo/video fact checking so partners can train its AI

Posted by | Apps, artificial intelligence, Facebook, Facebook AI, Facebook Fake News, fact checking, Media, Mobile, Policy, Social, TC | No Comments

Sometimes fake news lives inside of Facebook as photos and videos designed to propel misinformation campaigns, instead of off-site on news articles that can generate their own ad revenue. To combat these politically rather than financially motivated meddlers, Facebook has to be able to detect fake news inside of images and the audio that accompanies video clips. Today it’s expanding its photo and video fact-checking program from four countries to all 23 of its fact-checking partners in 17 countries.

“Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken,” says Facebook product manager Antonia Woodford. “As we get more ratings from fact-checkers on photos and videos, we will be able to improve the accuracy of our machine learning model.”

The goal is for Facebook to be able to automatically spot manipulated images, out-of-context images that don’t show what they say they do, or text and audio claims that are provably false.

In last night’s epic 3,260-word security manifesto, Facebook CEO Mark Zuckerberg explained that “The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.” That means using AI to proactively hunt down false news rather than waiting for it to be flagged by users. For that, Facebook needs AI training data that will be produced as exhaust from its partners’ photo and video fact checking operations.

Facebook is developing technology tools to assist its fact-checkers in this process. “We use optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles. We are also working on new ways to detect if a photo or video has been manipulated,” Woodford notes, referring to deepfakes that use AI video editing software to make someone appear to say or do something they haven’t.
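The comparison step Woodford describes can be sketched with fuzzy string matching. This is an illustration, not Facebook’s pipeline: the OCR stage is assumed to have already run (its output is passed in as plain text), and the debunked headlines and 0.8 similarity cutoff are made up for the example.

```python
from difflib import SequenceMatcher

# Hypothetical list of headlines already rated false by fact-checkers.
DEBUNKED_HEADLINES = [
    "scientists confirm chocolate cures the common cold",
    "city bans umbrellas after freak storm",
]

def normalize(text):
    """Lowercase and collapse whitespace so cosmetic differences in the
    OCR'd meme text don't dominate the similarity score."""
    return " ".join(text.lower().split())

def matches_debunked(ocr_text, headlines=DEBUNKED_HEADLINES, threshold=0.8):
    """Return the best-matching debunked headline if the OCR'd text is
    similar enough to one of them, else None."""
    ocr_text = normalize(ocr_text)
    best, best_score = None, 0.0
    for headline in headlines:
        score = SequenceMatcher(None, ocr_text, normalize(headline)).ratio()
        if score > best_score:
            best, best_score = headline, score
    return best if best_score >= threshold else None
```

A meme reading “Scientists CONFIRM chocolate cures the common cold!” would match the first stored headline despite the casing and punctuation differences, while unrelated text would return nothing.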

Image memes were one of the most popular forms of disinformation used by the Russian IRA election interferers. The problem is that since they’re so easily re-shareable and don’t require people to leave Facebook to view them, they can get viral distribution from unsuspecting users who don’t realize they’ve become pawns in a disinformation campaign.

Facebook could potentially use the high level of technical resources necessary to build fake news meme-spotting AI as an argument for why Facebook shouldn’t be broken up. With Facebook, Messenger, Instagram, and WhatsApp combined, the company gains economies of scale when it comes to fighting the misinformation scourge.

Powered by WPeMatico

AliveCor gets a green light from FDA to screen for dangerously high potassium levels in the blood

Posted by | AliveCor, artificial intelligence, Gadgets, hardware, Health, Mayo Clinic, medicine, neural network, TC, Vic Gundotra | No Comments

The U.S. Food and Drug Administration has granted AliveCor the designation of “breakthrough device” for its ability to detect a rare but dangerous blood condition called hyperkalemia without taking any blood from the patient.

Hyperkalemia is a medical term describing elevated potassium levels in the blood and is usually found in those with kidney disease. The correct amount of potassium is critical for the function of nerve and muscle cells in the body, including your heart muscle. A blood potassium level higher than 6.0 mmol/L can be dangerous and usually requires immediate treatment, according to the Mayo Clinic.

A surprising 31 million people in the U.S. suffer from chronic kidney conditions leading to potentially elevated levels of potassium. Nearly 500,000 of those with the condition are on dialysis as their kidneys are no longer able to function.

AliveCor is able to detect elevated levels of potassium in the blood using the company’s specifically trained deep neural network and data from its electrocardiogram (ECG) technology, similar to those captured by AliveCor’s KardiaMobile and KardiaBand devices.
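To see why an ECG can stand in for a blood draw: hyperkalemia characteristically produces tall, “peaked” T waves. The toy sketch below flags a recording when T waves are abnormally tall relative to the R waves. This is a single hand-picked feature for illustration only — AliveCor’s actual screen is a trained deep neural network over full waveforms, and the 0.75 cutoff here is invented, not a clinical threshold.

```python
# Toy illustration of one classic manual ECG feature for hyperkalemia:
# an unusually high T-wave-to-R-wave amplitude ratio.

def t_to_r_ratio(r_amplitudes, t_amplitudes):
    """Mean T/R amplitude ratio across beats (amplitudes in millivolts).
    Beats with a non-positive R amplitude are skipped."""
    ratios = [t / r for r, t in zip(r_amplitudes, t_amplitudes) if r > 0]
    return sum(ratios) / len(ratios)

def flag_possible_hyperkalemia(r_amplitudes, t_amplitudes, cutoff=0.75):
    """Crude screen: flag the recording for clinical follow-up if T waves
    are abnormally tall relative to the R waves."""
    return t_to_r_ratio(r_amplitudes, t_amplitudes) >= cutoff
```

On typical beats (T waves around a third of R-wave height) nothing is flagged; on beats where the T wave approaches the R wave in height, the screen fires — the kind of pattern a neural network can learn far more robustly across noisy, real-world recordings.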

The new designation means the FDA will begin to fast-track the technology, enabling patients with kidney disease to use AliveCor for home-based detection of elevated potassium levels.

AliveCor was cleared late last year by the FDA to use its KardiaBand technology as a medical device for the Apple Watch to detect abnormal heart rhythm. Allowing kidney and heart patients to use this technology at home would potentially save lives by detecting and warning them that something is wrong before heading into the doctor’s office to get checked.

“We are gratified that the artificial intelligence work we’re doing at AliveCor has been deemed so meaningful that it has achieved FDA ‘Breakthrough Device’ status,” AliveCor CEO Vic Gundotra said in a statement. “We view it as a key milestone in our corporate history and look forward to the further development of our non-invasive Hyperkalemia detection tools.”

Powered by WPeMatico