artificial intelligence

India’s Reliance Jio inks deal with Microsoft to expand Office 365, Azure to more businesses; unveils broadband, blockchain and IoT platforms


India’s Reliance Jio, which has disrupted the local telecom and feature phone markets in less than three years of existence, is ready to foray into many more businesses.

In a series of announcements Monday, which included a long-term partnership with global giant Microsoft, Reliance Jio said it will commercially roll out its broadband service next month; an IoT platform with ambitions to power more than a billion devices on January 1 next year; and “one of the world’s biggest blockchain networks” in the next 12 months — all while also scaling its retail and commerce businesses.

The broadband service, called Jio Fiber, is aimed at individual customers, small and medium-sized businesses as well as enterprises, Mukesh Ambani, chairman and managing director of Reliance Industries and Asia’s richest man, said at a shareholders’ meeting today.

The service, which is initially targeted at 20 million homes and 15 million businesses in 1,600 towns, will roll out commercially starting September 5. Ambani said more than half a million customers have already been testing the broadband service, which was first unveiled last year.

The broadband service will come bundled with access to hundreds of TV channels and free calls across India and at discounted rates to the U.S. and Canada, Ambani said. The service, the cheapest tier of which will offer internet speeds of 100Mbps, will be priced at Rs 700 (~$10) a month. The company said it will offer various plans to meet a variety of needs, including those of customers who want access to gigabit internet speeds.

Continuing its tradition of wooing users with significant “free stuff,” Jio, a subsidiary of India’s largest industrial house, Reliance Industries, said customers who opt for the yearly plan of its fiber broadband will be provided with a set-top box and an HD or 4K TV at no extra charge. Specific details weren’t immediately available. A premium tier, available starting next year, will allow customers to watch many movies on the day of their public release.

The broadband service will bundle games from many popular studios, including Microsoft Game Studios, Riot Games, Tencent Games and Gameloft, Jio said.

Partnership with Microsoft

The company also announced a 10-year partnership with Microsoft to launch new cloud data centers in India to ensure “more of Jio’s customers can access the tools and platforms they need to build their own digital capability,” said Microsoft CEO Satya Nadella in a video appearance Monday.


Microsoft CEO Satya Nadella talks about the company’s partnership with Reliance Jio

“At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. Core to this mission is deep partnerships, like the one we are announcing today with Reliance Jio. Our ambition is to help millions of organizations across India thrive and grow in the era of rapid technological change.”

“Together, we will offer a comprehensive technology solution, from compute to storage, to connectivity and productivity for small and medium-sized businesses everywhere in the country,” he added.

As part of the partnership, Nadella said, Jio and Microsoft will jointly offer Azure, Microsoft 365 and Microsoft AI platforms to more organizations in India, and also bring Azure Cognitive Services to more devices and in 13 Indian languages to businesses in the country. The solutions will be “accessible” to reach as many people and organizations in India as possible, he added. The cloud services will be offered to businesses for as little as Rs 1,500 ($21) per month.

The first two data centers will be set up in Gujarat and Maharashtra by next year. Jio will migrate all of its non-networking apps to the Microsoft Azure platform and promote its adoption among its ecosystem of startups, the two said in a joint statement.

The foray into the broadband business and the push to court small enterprises come as Reliance Industries, which dominates the telecom and retail spaces in India, attempts to diversify from its marquee oil and gas business. Reliance Jio, the nation’s top telecom operator, has amassed more than 340 million subscribers in less than three years of commercial operations.

At the meeting, Ambani also revealed that Saudi Arabia’s state-owned oil producer Aramco is buying a 20% stake in Reliance Industries’ oil-to-chemicals business, valuing that business at $75 billion.

Like other Silicon Valley companies, Microsoft sees massive potential in India, where tens of millions of users and businesses have come online for the first time in recent years. Cloud services in India are estimated to generate a revenue of $2.4 billion this year, up about 25% from last year, according to research firm Gartner. Microsoft has won several major clients in India in recent years, including insurance giant ICICI Lombard.

Today’s partnership could significantly boost Microsoft’s footprint in India, posing a bigger headache for Amazon and Google.

Ambani also said Reliance Retail, the nation’s largest retailer, is working on a “digital stack” to create a new commerce partnership platform in India to reach tens of millions of merchants, consumers and producers. Ambani said Reliance Industries plans to list both Reliance Retail and Jio publicly in the coming years.

“We have received strong interests from strategic and financial investors in our consumer businesses — Jio and Reliance Retail. We will induct leading global partners in these businesses in the next few quarters and move towards listing of both these companies within the next five years,” he said.

The announcement comes weeks after Reliance Industries acquired a majority stake in Fynd, a Mumbai-based startup that connects brick-and-mortar retailers with online stores and consumers, for $42.3 million. Reliance Industries has previously stated plans to launch a new e-commerce firm in the country.

Without revealing specific details, Ambani also said that Jio is building an IoT platform to control at least one billion of the two billion IoT devices in India by next year. He said he sees IoT as a $2.8 billion revenue opportunity for Jio. Similarly, the company also plans to expand its blockchain network across India, he said.

“Using blockchain, we can deliver unprecedented security, trust, automation and efficiency to almost any type of transaction. And using blockchain, we also have an opportunity to invent a brand-new model for data privacy where Indian data, especially customer data, is owned and controlled through technology by the Indian people and not by corporates, especially global corporations,” he added.

Powered by WPeMatico

Siri recordings ‘regularly’ sent to Apple contractors for analysis, claims whistleblower


Apple has joined the dubious company of Google and Amazon in secretly sharing with contractors audio recordings of its users, confirming the practice to The Guardian after a whistleblower brought it to the outlet. The person said that Siri queries are routinely sent to human listeners for closer analysis, something not disclosed in Apple’s privacy policy.

The recordings are reportedly not associated with an Apple ID, but can be several seconds long, include content of a personal nature and are paired with other revealing data, like location, app data and contact details.

Like the other companies, Apple says this data is collected and analyzed by humans to improve its services, and that all analysis is done in a secure facility by workers bound by confidentiality agreements. And like the other companies, Apple failed to say that it does this until forced to.

Apple told The Guardian that less than 1% of daily queries are sent, cold comfort when the company also constantly talks up the volume of Siri queries. Hundreds of millions of devices use the feature regularly, so even a conservative estimate of a fraction of 1% quickly rises into the hundreds of thousands of recordings.

This “small portion” of Siri requests is apparently randomly chosen, and as the whistleblower notes, it includes “countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on.”

Some of these activations of Siri will have been accidental, which is one of the things listeners are trained to listen for and identify. Accidentally recorded queries can be many seconds long and contain a great deal of personal information, even if it is not directly tied to a digital identity.

Only in the last month has it come out that Google likewise sends clips to be analyzed, and that Amazon, which we knew recorded Alexa queries, retains that audio indefinitely.

Apple’s privacy policy states regarding non-personal information (under which Siri queries would fall):

We may collect and store details of how you use our services, including search queries. This information may be used to improve the relevancy of results provided by our services. Except in limited instances to ensure quality of our services over the Internet, such information will not be associated with your IP address.

It’s conceivable that the phrase “search queries” is inclusive of recordings of search queries. And it does say that it shares some data with third parties. But nowhere is it stated simply that questions you ask your phone may be recorded and shared with a stranger. Nor is there any way for users to opt out of this practice.

Given Apple’s focus on privacy and transparency, this seems like a major, and obviously a deliberate, oversight. I’ve contacted Apple for more details and will update this post when I hear back.


FaceApp gets federal attention as Sen. Schumer raises alarm on data use


It’s been hard to get away from FaceApp over the last few days, whether it’s your friends posting weird selfies using the app’s aging and other filters, or the brief furore over its apparent (but not actual) circumvention of permissions on iPhones. Now even the Senate is getting in on the fun: Sen. Chuck Schumer (D-NY) has asked the FBI and the FTC to look into the app’s data handling practices.

“I write today to express my concerns regarding FaceApp,” he writes in a letter sent to FBI Director Christopher Wray and FTC Chairman Joseph Simons. I’ve excerpted his main concerns below:

In order to operate the application, users must provide the company full and irrevocable access to their personal photos and data. According to its privacy policy, users grant FaceApp license to use or publish content shared with the application, including their username or even their real name, without notifying them or providing compensation.

Furthermore, it is unclear how long FaceApp retains a user’s data or how a user may ensure their data is deleted after usage. These forms of “dark patterns,” which manifest in opaque disclosures and broader user authorizations, can be misleading to consumers and may even constitute deceptive trade practices. Thus, I have serious concerns regarding both the protection of the data that is being aggregated as well as whether users are aware of who may have access to it.

In particular, FaceApp’s location in Russia raises questions regarding how and when the company provides access to the data of U.S. citizens to third parties, including potentially foreign governments.

For the cave-dwellers among you (and among whom I would normally proudly count myself), FaceApp is a selfie app that uses AI-esque techniques to apply various changes to faces, making them look older or younger, adding accessories and, infamously, changing their race. That didn’t go over so well.

There’s been a surge in popularity over the last week, but it was also noticed that the app seemed to be able to access your photos whether you said it could or not. It turns out that this is actually a normal capability of iOS, but it was being deployed here in somewhat of a sneaky manner and not as intended. And arguably it was a mistake on Apple’s part to let this method of selecting a single photo go against the “never” preference for photo access that a user had set.

Fortunately the Senator’s team is not worried about this or even the unfounded (we checked) concerns that FaceApp was secretly sending your data off in the background. It isn’t. But it very much does send your data to Russia when you tell it to give you an old face, or a hipster face, or whatever. Because the computers that do the actual photo manipulation are located there — these filters are being applied in the cloud, not directly on your phone.

His concerns are over the lack of transparency that user data is being sent out to servers who knows where, to be kept for who knows how long, and sold to who knows whom. Fortunately the obliging FaceApp managed to answer most of these questions before the Senator’s letter was ever posted.

The answers to his questions, should we choose to believe them, are that user data is not in fact sent to Russia, the company doesn’t track users and usually can’t, doesn’t sell data to third parties, and deletes “most” photos within 48 hours.

Although the “dark patterns” of which the Senator speaks are indeed an issue, and although it would have been much better if FaceApp had said up front what it does with your data, this is hardly an attempt by a Russian adversary to build up a database of U.S. citizens.

While it is good to see Congress engaging with digital privacy, asking the FBI and FTC to look into a single app seems unproductive when that app is not doing much that a hundred others, American and otherwise, have been doing for years. Cloud-based processing and storage of user data is commonplace — though usually disclosed a little better.

Certainly as Sen. Schumer suggests, the FTC should make sure that “there are adequate safeguards in place to protect the privacy of Americans…and if not, that the public be made aware of the risks associated with the use of this application or others similar to it.” But this seems the wrong nail to hang that on. We see surreptitious slurping of contact lists, deceptive deletion promises, third-party sharing of poorly anonymized data, and other bad practices in apps and services all the time — if the federal government wants to intervene, let’s have it. But let’s have a law or a regulation, not a strongly worded letter written after the fact.

Schumer Faceapp Letter by TechCrunch on Scribd


AI photo editor FaceApp goes viral again on iOS, raises questions about photo library access


FaceApp. So. The app has gone viral again after first doing so two years ago or so. The effect has gotten better, but these apps, like many other one-off viral apps, tend to come and go in waves driven by influencer networks or paid promotion. We first covered this particular AI photo editor from a team of Russian developers about two years ago.

It has gone viral again now due to some features that allow you to edit a person’s face to make it appear older or younger. You may remember at one point it had an issue because it enabled what amounted to digital blackface by changing a person from one ethnicity to another.

In this current wave of virality, some new rumors are floating about FaceApp. The first is that it uploads your camera roll in the background. We found no evidence of this and neither did security researcher and Guardian App CEO Will Strafach or researcher Baptiste Robert.

The second is that it somehow allows you to pick photos without giving photo access to the app. You can see a video of this behavior here:

Shouldn’t photo access need to be enabled for this to be possible? 🤔 pic.twitter.com/wy45zKn63E

— Karissa Bell (@karissabe) July 16, 2019

While the app does indeed let you pick a single photo without giving it access to your photo library, this is actually 100% allowed by an Apple API introduced in iOS 11. It allows a developer to let a user pick one single photo from a system dialog to let the app work on. You can view documentation here and here.


Because the user has to tap on one photo, this provides something Apple holds dear: user intent. You have explicitly tapped it, so it’s ok to send that one photo. This behavior is actually a net good in my opinion. It allows you to give an app one photo instead of your entire library. It can’t see any of your photos until you tap one. This is far better than committing your entire library to a jokey meme app.

Unfortunately, there is still some cognitive dissonance here, because Apple allows an app to call this API even if a user has set the Photo Access setting to Never in Settings. In my opinion, if you have it set to Never, you should have to change that before any photo can enter the app from your library, no matter what inconvenience that causes. Never is not a default; it is an explicit choice, and that permanent user intent overrules the one-off user intent of the new photo picker.

I believe that Apple should rectify this in the future, either by making the behavior clearer or by disallowing the API when people have explicitly opted out of sharing photos with an app.


The equivalent of the ‘only once’ location option coming in iOS 13 might be appropriate here.

One thing FaceApp does do, however, is upload your photo to the cloud for processing. It does not do on-device processing the way Apple’s first-party app does, and the way Apple enables for third parties through its ML libraries and routines. This is not made clear to the user.

I have asked FaceApp why they don’t alert the user that the photo is processed in the cloud. I’ve also asked them whether they retain the photos.

Given how many screenshots people take of sensitive information like banking and whatnot, photo access is a bigger security risk than ever these days. With a scraper and optical character recognition tech you could automatically turn up a huge amount of info way beyond ‘photos of people’.

So, overall, I think it is important that we think carefully about the safeguards put in place to protect photo archives and the motives and methods of the apps we give access to.


Blackstone is acquiring mobile ad company Vungle


Private equity firm Blackstone just announced that it has reached an agreement to acquire mobile advertising company Vungle.

The companies aren’t disclosing the financial terms, but as part of the transaction, Vungle has also reached a settlement with founder Zain Jaffer, who filed a wrongful termination lawsuit against the company earlier this year.

(Update: Multiple sources with knowledge of the deal said that the acquisition price was around — or north of — $750 million. One of those sources also said it was an all-cash transaction.)

“As a best-in-class performance marketing platform, Vungle represents a key growth engine for the mobile app ecosystem,” said Blackstone principal Sachin Bavishi in a statement. “Our investment will help deliver on the company’s tremendous growth potential and we look forward to partnering with management to extend Vungle’s strength across mobile gaming and other performance brands.”

Meanwhile, CEO Rick Tallman said the deal will allow the company to “further accelerate Vungle’s mission to be the trusted guide for growth and engagement, transforming how users discover and experience mobile apps.”

Vungle was founded back in 2011, and, according to the acquisition release, it’s currently working with 60,000 mobile apps worldwide, serving more than 4 billion video views per month and working with publishers like Rovio, Zynga, Pandora, Microsoft and Scopely.

Jaffer led Vungle as CEO until October 2017, when he was arrested on charges including performing a lewd act upon a child and assault with a deadly weapon. The charges were ultimately dropped, with the San Mateo County District Attorney’s office stating that it did “not believe that there was any sexual conduct by Mr. Jaffer that evening,” while “the injuries were the result of Mr. Jaffer being in a state of unconsciousness caused by prescription medication.”

In his lawsuit, Jaffer alleged that after the charges were dropped, “Vungle unfairly and unlawfully sought to destroy my career, blocked my efforts to sell my own shares or transfer shares to family members, and tried to prevent me from purchasing shares in the Company.”

In a statement today, Jaffer said he is “pleased with the terms of the settlement, which are confidential.” He also commented on the acquisition:

It is extremely gratifying for me to see our early vision, execution and the hard work of so many talented people rewarded like this. From Day 1, Vungle has been at the forefront of the changing advertising landscape. Today, companies of all sizes, and in all industries, are utilizing in-app video ads as an integral part of their customer acquisition strategies.

The acquisition is expected to close later this year. According to Crunchbase, Vungle previously raised more than $25 million from Crosslink Capital, Thomvest Ventures, Seven Peaks Ventures, GV, AOL Ventures, Uncork Capital, 500 Startups and Angelpad, where the startup was incubated. (AOL Ventures was backed by TechCrunch’s parent company AOL, a.k.a. Oath, a.k.a. Verizon Media.)

 


These robo-ants can work together in swarms to navigate tricky terrain


While the agility of a Spot or Atlas robot is something to behold, there’s a special merit reserved for tiny, simple robots that work not as a versatile individual but as an adaptable group. These “tribots” are built on the model of ants, and like them can work together to overcome obstacles with teamwork.

Developed by EPFL and Osaka University, tribots are tiny, light and simple, moving more like inchworms than ants, but able to fling themselves up and forward if necessary. The bots themselves and the system they make up are modeled on trap-jaw ants, which alternate between crawling and jumping, and work (as do most other ants) in fluid roles like explorer, worker and leader. Each robot is not itself very intelligent, but they are controlled as a collective that deploys their abilities intelligently.

In this case a team of tribots might be expected to get from one end of a piece of complex terrain to another. An explorer could move ahead, sensing obstacles and relaying their locations and dimensions to the rest of the team. The leader can then assign worker units to head over to try to push the obstacles out of the way. If that doesn’t work, an explorer can try hopping over it — and if successful, it can relay its telemetry to the others so they can do the same thing.
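The relay pattern above can be sketched in a few lines. This is a toy simulation, not EPFL's actual control software; the class names, message format and decision rule are all assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Obstacle:
    position: float   # distance along the terrain
    pushable: bool    # can worker units shove it aside?

@dataclass
class Leader:
    log: list = field(default_factory=list)

    def handle_report(self, obstacle: Obstacle) -> str:
        # First choice: assign workers to push the obstacle out of the way.
        if obstacle.pushable:
            self.log.append(f"workers push obstacle at {obstacle.position}")
            return "pushed"
        # Fallback: an explorer hops over and relays its telemetry so the
        # rest of the team can repeat the same jump.
        self.log.append(f"explorer hops obstacle at {obstacle.position}")
        return "hopped"

def traverse(leader: Leader, terrain: list) -> list:
    """The explorer scans ahead and relays each obstacle to the leader."""
    return [leader.handle_report(obstacle) for obstacle in terrain]

terrain = [Obstacle(1.0, True), Obstacle(2.5, False)]
print(traverse(Leader(), terrain))  # ['pushed', 'hopped']
```

The point of the pattern is that intelligence lives in the collective: each report changes what the whole team does next, while the individual robots stay simple.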


Fly, tribot, fly!

It’s all done quite slowly at this point — you’ll notice that in the video, much of the action is happening at 16x speed. But rapidity isn’t the idea here; similar to Squishy Robotics’ creations, it’s more about adaptability and simplicity of deployment.

The little bots weigh only 10 grams each, and are easily mass-produced, as they’re basically PCBs with some mechanical bits and grip points attached — “a quasi-two-dimensional metamaterial sandwich,” according to the paper. If they only cost (say) a buck each, you could drop dozens or hundreds on a target area and over an hour or two they could characterize it, take measurements and look for radiation or heat hot spots, and so on.

If they moved a little faster, the same logic and a modified design could let a set of robots emerge in a kitchen or dining room to find and collect crumbs or scoot plates into place. (Ray Bradbury called them “electric mice” or something in “There Will Come Soft Rains,” one of my favorite stories of his. I’m always on the lookout for them.)

Swarm-based bots have the advantage of not failing catastrophically when something goes wrong — when a robot fails, the collective persists, and it can be replaced as easily as a part.

“Since they can be manufactured and deployed in large numbers, having some ‘casualties’ would not affect the success of the mission,” noted EPFL’s Jamie Paik, who co-designed the robots. “With their unique collective intelligence, our tiny robots can demonstrate better adaptability to unknown environments; therefore, for certain missions, they would outperform larger, more powerful robots.”

It raises the question, in fact, of whether the sub-robots themselves constitute a sort of uber-robot. (This is more of a philosophical question, raised first in the case of the Constructicons and Devastator. Transformers was ahead of its time in many ways.)

The robots are still in prototype form but, even as they are, constitute a major advance over other “collective”-type robot systems. The team documents its advances in a paper published in the journal Nature.


AI smokes 5 poker champs at a time in no-limit Hold’em with ‘relentless consistency’


The machines have proven their superiority in one-on-one games like chess and Go, and even poker — but in complex multiplayer versions of the card game, humans have retained their edge… until now. An evolution of the last AI agent to flummox poker pros individually is now decisively beating them in championship-style six-person games.

As documented in a paper published in the journal Science today, the CMU/Facebook collaboration, called Pluribus, reliably beats five professional poker players in the same game, or one pro pitted against five independent copies of itself. It’s a major leap forward in capability for the machines, and it is also far more efficient than previous agents.

One-on-one poker is a weird game, and not a simple one, but its zero-sum nature (whatever you lose, the other player gets) makes it susceptible to certain strategies in which a computer able to calculate far enough ahead can put itself at an advantage. But add four more players into the mix and things get real complex, real fast.

With six players, the possible hands, bets and outcomes are so numerous that it is effectively impossible to account for all of them, especially in a minute or less. It’d be like trying to exhaustively document every grain of sand on a beach between waves.

Yet over 10,000 hands played with champions, Pluribus managed to win money at a steady rate, exposing no weaknesses or habits that its opponents could take advantage of. What’s the secret? Consistent randomness.

Even computers have regrets

Pluribus was trained, like many game-playing AI agents these days, not by studying how humans play but by playing against itself. At the beginning this is probably like watching kids, or for that matter me, play poker — constant mistakes, but at least the AI and the kids learn from them.

The training program used something called Monte Carlo counterfactual regret minimization. Sounds like when you have whiskey for breakfast after losing your shirt at the casino, and in a way it is — machine learning style.

Regret minimization just means that when the system would finish a hand (against itself, remember), it would then play that hand out again in different ways, exploring what might have happened had it checked here instead of raised, folded instead of called, and so on. (Since it didn’t really happen, it’s counterfactual.)

A Monte Carlo tree is a way of organizing and evaluating lots of possibilities, akin to climbing a tree of them branch by branch and noting the quality of each leaf you find, then picking the best one once you think you’ve climbed enough.

If you do it ahead of time (this is done in chess, for instance) you’re looking for the best move to choose from. But if you combine it with the regret function, you’re looking through a catalog of possible ways the game could have gone and observing which would have had the best outcome.

So Monte Carlo counterfactual regret minimization is just a way of systematically investigating what might have happened if the computer had acted differently, and adjusting its model of how to play accordingly.
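The regret-minimization loop described above can be illustrated with a toy example. The sketch below runs regret matching, the core update inside counterfactual regret minimization, on rock-paper-scissors self-play; Pluribus applies a far more elaborate Monte Carlo variant over poker game trees, so treat this only as the basic idea: after each round, measure how much better each alternative action would have done, accumulate that "regret," and play future rounds in proportion to positive regret.

```python
import random

ROCK, PAPER, SCISSORS = 0, 1, 2
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[mine][theirs]

def strategy_from(regrets):
    # Play each action in proportion to its accumulated positive regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        strategy = strategy_from(regrets)
        for a in range(3):
            strategy_sum[a] += strategy[a]
        mine = rng.choices(range(3), weights=strategy)[0]
        theirs = rng.choices(range(3), weights=strategy)[0]
        # Counterfactual regret: how much better would each alternative
        # action have done against what the opponent actually played?
        for a in range(3):
            regrets[a] += PAYOFF[a][theirs] - PAYOFF[mine][theirs]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # time-averaged strategy

print(train())  # the average strategy approaches the uniform equilibrium
```

The instantaneous strategy cycles (rock begets paper begets scissors), but the time-averaged strategy settles near one-third each, which is the unexploitable play for this game.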


The game originally played out as you see on the left, with a loss. But the engine explores other avenues where it might have done better.

Of course the number of games is nigh-infinite if you want to consider what would happen if you had bet $101 rather than $100, or whether you would have won that big hand if you’d had an eight kicker instead of a seven. Therein also lies nigh-infinite regret, the kind that keeps you in bed in your hotel room until past lunch.

The truth is these minor changes matter so seldom that the possibility can basically be ignored entirely. It will never really matter that you bet an extra buck — so any bet between, say, $70 and $130 can be considered exactly the same by the computer. Same with cards: whether the jack is a heart or a spade doesn’t matter except in very specific (and usually obvious) situations, so 99.999 percent of the time such hands can be considered equivalent.

This “abstraction” of gameplay sequences and “bucketing” of possibilities greatly reduces the possibilities Pluribus has to consider. It also helps keep the calculation load low; Pluribus was trained on a relatively ordinary 64-core server rack over about a week, while other models might take processor-years in high-power clusters. It even runs on an (admittedly beefy) rig with two CPUs and 128 gigs of RAM.
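Bet bucketing of the kind described above can be sketched as snapping a raw bet to the nearest of a handful of abstract sizes. The bucket boundaries here are invented for illustration; Pluribus's real abstraction is more sophisticated.

```python
# Abstract bet sizes, expressed as fractions of the pot (made-up values).
BUCKETS = [0.5, 0.75, 1.0, 1.5, 2.0]

def bucket_bet(bet: float, pot: float) -> float:
    """Snap a raw bet to the nearest abstract bet size."""
    fraction = bet / pot
    return min(BUCKETS, key=lambda b: abs(b - fraction))

# A $101 bet and a $100 bet into a $100 pot land in the same bucket,
# so the solver never has to distinguish them:
print(bucket_bet(100, 100))  # 1.0
print(bucket_bet(101, 100))  # 1.0
print(bucket_bet(70, 100))   # 0.75
```

Collapsing near-identical bets this way shrinks the branching factor of the game tree by orders of magnitude, which is what makes training on a single server rack feasible.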

Random like a fox

The training produces what the team calls a “blueprint” for how to play that’s fundamentally strong and would probably beat plenty of players. But a weakness of AI models is that they develop tendencies that can be detected and exploited.

In Facebook’s writeup of Pluribus, the company provides the example of two computers playing rock-paper-scissors. One picks randomly while the other always picks rock. Theoretically, they’d both win the same number of games. But if the computer tried the all-rock strategy on a human, it would start losing with a quickness and never stop.
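That rock-paper-scissors point is easy to demonstrate: a deterministic bot ties a random one on average, but is ruthlessly exploited by any opponent that notices the pattern. The "adaptive" opponent below is a made-up example that simply counters our most frequent past move.

```python
import random

ROCK, PAPER, SCISSORS = 0, 1, 2
BEATS = {ROCK: PAPER, PAPER: SCISSORS, SCISSORS: ROCK}  # move -> its counter

def score(mine, theirs):
    if mine == theirs:
        return 0
    return 1 if BEATS[theirs] == mine else -1

def play(strategy, opponent, rounds=1000, seed=1):
    rng = random.Random(seed)
    history, total = [], 0
    for _ in range(rounds):
        move = strategy(rng)
        total += score(move, opponent(history, rng))
        history.append(move)
    return total

always_rock = lambda rng: ROCK
uniform = lambda rng: rng.randrange(3)

def adaptive(history, rng):
    # Counter whatever move the player has favored so far.
    if not history:
        return rng.randrange(3)
    most_common = max(set(history), key=history.count)
    return BEATS[most_common]

print(play(always_rock, adaptive))  # heavily negative: pure rock is exploited
print(play(uniform, adaptive))      # near zero: nothing to latch onto
```

The uniform player concedes nothing to the pattern-hunter, which is exactly why unexploitable randomness is the benchmark for a poker agent.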

As a simple example in poker, maybe a particular series of bets always makes the computer go all in regardless of its hand. If a player can spot that series, they can take the computer to town any time they like. Finding and preventing ruts like these is important to creating a game-playing agent that can beat resourceful and observant humans.

To do this Pluribus does a couple things. First, it has modified versions of its blueprint to put into play should the game lean towards folding, calling, or raising. Different strategies for different games mean it’s less predictable, and it can switch in a minute should the bet patterns change and the hand go from a calling to a bluffing one.

It also engages in a short but comprehensive introspective search, looking at how it would play if it held every other possible hand, from a big nothing up to a straight flush, and how it would bet with each. It then picks its bet in the context of all those, careful to do so in a way that doesn’t point to any one hand in particular. Given the same hand and the same play again, Pluribus wouldn’t choose the same bet, but rather vary it to remain unpredictable.
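The unpredictability described above amounts to sampling from a mixed strategy: instead of mapping a situation to one fixed bet, the agent keeps a probability distribution over actions and draws from it each time the situation recurs. The numbers below are invented for illustration; Pluribus derives its distributions from search, not from a hand-written table.

```python
import random

def act(mixed_strategy, rng):
    # Sample one action according to the strategy's probabilities.
    actions, weights = zip(*mixed_strategy.items())
    return rng.choices(actions, weights=weights)[0]

# A balanced strategy for one hypothetical spot: mostly call, sometimes
# raise, occasionally fold, so observers can't infer the hand from the action.
spot = {"fold": 0.1, "call": 0.6, "raise": 0.3}

rng = random.Random(42)
print([act(spot, rng) for _ in range(8)])  # same spot, varying actions
```

Over many hands the action frequencies match the distribution, but no single hand reveals which branch of the strategy produced it.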

These strategies contribute to the “consistent randomness” I alluded to earlier, which was part of the model’s ability to slowly but reliably beat some of the best players in the world.

The human’s lament

There are too many hands to point to a particular one or ten that indicate the power Pluribus was bringing to bear on the game. Poker is a game of skill, luck, and determination, and one where winners emerge only after dozens or hundreds of hands.

And here it must be said that the experimental setup is not entirely reflective of an ordinary 6-person poker game. Unlike a real game, chip counts are not maintained as an ongoing total — for every hand, each player was given 10,000 chips to use as they pleased, and win or lose they were given 10,000 in the next hand as well.

The interface used to play poker with Pluribus. Fancy!

Obviously this rather limits the long-term strategies possible, and indeed “the bot was not looking for weaknesses in its opponents that it could exploit,” said Facebook AI research scientist Noam Brown. Truly Pluribus was living in the moment the way few humans can.

But simply because it was not basing its play on long-term observations of opponents’ individual habits or styles does not mean that its strategy was shallow. On the contrary, it is arguably more impressive, and casts the game in a different light, that a winning strategy exists that does not rely on behavioral cues or exploitation of individual weaknesses.

The pros who had their lunch money taken by the implacable Pluribus were good sports, however. They praised the system’s high-level play, its validation of existing techniques and its inventive use of new ones. Here’s a selection of laments from the fallen humans:

I was one of the earliest players to test the bot so I got to see its earlier versions. The bot went from being a beatable mediocre player to competing with the best players in the world in a few weeks. Its major strength is its ability to use mixed strategies. That’s the same thing that humans try to do. It’s a matter of execution for humans — to do this in a perfectly random way and to do so consistently. It was also satisfying to see that a lot of the strategies the bot employs are things that we do already in poker at the highest level. To have your strategies more or less confirmed as correct by a supercomputer is a good feeling. -Darren Elias

It was incredibly fascinating getting to play against the poker bot and seeing some of the strategies it chose. There were several plays that humans simply are not making at all, especially relating to its bet sizing. -Michael ‘Gags’ Gagliano

Whenever playing the bot, I feel like I pick up something new to incorporate into my game. As humans I think we tend to oversimplify the game for ourselves, making strategies easier to adopt and remember. The bot doesn’t take any of these short cuts and has an immensely complicated/balanced game tree for every decision. -Jimmy Chou

In a game that will, more often than not, reward you when you exhibit mental discipline, focus, and consistency, and certainly punish you when you lack any of the three, competing for hours on end against an AI bot that obviously doesn’t have to worry about these shortcomings is a grueling task. The technicalities and deep intricacies of the AI bot’s poker ability was remarkable, but what I underestimated was its most transparent strength – its relentless consistency. -Sean Ruane

Beating humans at poker is just the start. As good a player as it is, Pluribus is more importantly a demonstration that an AI agent can achieve superhuman performance at something as complicated as 6-player poker.

“Many real-world interactions, such as financial markets, auctions, and traffic navigation, can similarly be modeled as multi-agent interactions with limited communication and collusion among participants,” writes Facebook in its blog.

Yes, and war.

Powered by WPeMatico

Luminar eyes production vehicles with $100M round and new Iris lidar platform

Posted by | artificial intelligence, automotive, autonomous vehicles, funding, Gadgets, hardware, Lidar, Luminar, robotics, self-driving cars, Transportation | No Comments

Luminar is one of the major players in the new crop of lidar companies that have sprung up all over the world, and it’s moving fast to outpace its peers. Today the company announced a new $100 million funding round, bringing its total raised to more than $250 million — as well as a perception platform and a new, compact lidar unit aimed at inclusion in actual cars. Big day!

The new hardware, called Iris, looks to be about a third of the size of the test unit Luminar has been sticking on vehicles thus far. That one was about the size of a couple of hardbacks stacked up, and Iris is more like a really thick sandwich.

Size is very important, of course, as few cars just have caverns of unused space hidden away in prime surfaces like the corners and windshield area. Other lidar makers have lowered the profiles of their hardware in various ways; Luminar seems to have compactified in a fairly straightforward fashion, getting everything into a package smaller in every dimension.

Test model, left, Iris on the right.

Photos of Iris put it in various positions: below the headlights on one car, attached to the rear-view mirror in another and high up atop the cabin on a semi truck. It’s small enough that it won’t have to displace other components too much, although of course competitors are aiming to make theirs even more easy to integrate. That won’t matter, Luminar founder and CEO Austin Russell told me recently, if they can’t get it out of the lab.

“The development stage is a huge undertaking — to actually move it towards real-world adoption and into true series production vehicles,” he said (among many other things). The company that gets there first will lead the industry, and naturally he plans to make Luminar that company.

Part of that is of course the production process, which has been vastly improved over the last couple of years. These units can be made quickly enough that they can be supplied by the thousands rather than dozens, and the cost has dropped precipitously — by design.

Iris will cost less than $1,000 per unit for production vehicles seeking serious autonomy, and for $500 you can get a more limited version for more limited purposes like driver assistance, or ADAS. Luminar says Iris is “slated to launch commercially on production vehicles beginning in 2022,” but that doesn’t necessarily mean they’re shipping to customers right now. The company is negotiating more than a billion dollars in contracts at present, a representative told me, and 2022 would be the earliest that vehicles with Iris could be made available.

The Iris units are about a foot below the center of the headlight units here. Note that this is not a production vehicle, just a test one.

Another part of integration is software. The signal from the sensor has to go somewhere, and while some lidar companies have indicated they plan to let the carmaker (or whoever) handle it their own way, others have opted to build up the tech stack and create “perception” software on top of the lidar. Perception software can be a range of things: something as simple as drawing boxes around objects identified as people would count, as would a much richer process that flags intentions and gaze directions, characterizes motions, and predicts likely next actions.
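As a rough illustration of the simpler end of that spectrum, a perception layer’s output might be little more than labeled, confidence-scored boxes. The types and field names below are invented for the sketch; Luminar hasn’t published its API:

```python
from dataclasses import dataclass

# Illustrative only: a minimal shape for the kind of output a simple
# perception layer might emit on top of raw lidar returns. All names
# here are hypothetical, not Luminar's.
@dataclass
class DetectedObject:
    label: str          # e.g. "pedestrian", "vehicle"
    confidence: float   # classifier confidence in [0, 1]
    box: tuple          # (x, y, z, length, width, height) in meters

def simple_perception(clusters):
    """Turn clustered lidar points into labeled boxes (stubbed classifier)."""
    return [
        DetectedObject(label=c["guess"], confidence=c["score"], box=c["bounds"])
        for c in clusters
        if c["score"] > 0.5   # drop low-confidence clusters
    ]

dets = simple_perception([
    {"guess": "pedestrian", "score": 0.9, "bounds": (4.0, 1.2, 0.0, 0.5, 0.5, 1.7)},
    {"guess": "unknown", "score": 0.2, "bounds": (9.0, -3.0, 0.0, 1.0, 1.0, 1.0)},
])
print([d.label for d in dets])
```

The richer systems described above would layer intent and motion prediction on top of exactly this kind of boxed, labeled output.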

Luminar has opted to build into perception, or rather has revealed that it has been working on it for some time. It now has 60 people on the task, split between Palo Alto and Orlando, and has hired a new VP of software: Christoph Schroder, formerly head of Daimler’s robo-taxi program.

What exactly will be the nature and limitations of Luminar’s perception stack? There are dangers waiting if you decide to take it too far, because at some point you begin to compete with your customers, carmakers that have their own perception and control stacks that may or may not overlap with yours. The company gave very few details as to what specifically would be covered by its platform, but no doubt that will become clearer as the product itself matures.

Last and certainly not least is the matter of the $100 million in additional funding. This brings Luminar to a total of over a quarter of a billion dollars in the last few years, matching its competitor Innoviz, which has made similar decisions regarding commercialization and development.

The list of investors has gotten quite long, so I’ll just quote Luminar here:

G2VP, Moore Strategic Ventures, LLC, Nick Woodman, The Westly Group, 1517 Fund / Peter Thiel, Canvas Ventures, along with strategic investors Corning Inc, Cornes, and Volvo Cars Tech Fund.

The board has also grown, with former Broadcom exec Scott McGregor and G2VP’s Ben Kortlang joining the table.

We may have already passed “peak lidar” as far as sheer number of deals and startups in the space, but that doesn’t mean things are going to cool down. If anything, the opposite, as established companies battle over lucrative partnerships and begin eating one another to stay competitive. Seems like Luminar has no plans to become a meal.

Week-in-Review: Alexa’s indefinite memory and NASA’s otherworldly plans for GPS

Posted by | 4th of July, AI assistant, alex wong, Amazon, Andrew Kortina, Android, andy rubin, appeals court, Apple, apple inc, artificial intelligence, Assistant, China, enterprise software, Getty-Images, gps, here, iPhone, machine learning, Online Music Stores, operating systems, Sam Lessin, social media, Speech Recognition, TC, Tim Cook, Twitter, United States, Venmo, voice assistant | No Comments

Hello, weekenders. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.

Last week, I talked about the cult of Ive and the degradation of Apple design. On Sunday night, The Wall Street Journal published a report on how Ive had been moving away from the company, to the dismay of many on the design team. Tim Cook didn’t like the report very much. Our EIC gave a little breakdown on the whole saga in a nice piece.

Apple sans Ive


The big story

This week was a tad restrained in its eventfulness; seems like the newsmakers went on 4th of July vacations a little early. Amazon made a bit of news this week when the company confirmed that Alexa request logs are kept indefinitely.

Last week, an Amazon public policy exec answered some questions about Alexa in a letter sent to U.S. Senator Coons. His office published the letter on its site a few days ago and most of the details aren’t all that surprising, but the first answer really sets the tone for how Amazon sees Alexa activity:

Q: How long does Amazon store the transcripts of user voice recordings?

A: We retain customers’ voice recordings and transcripts until the customer chooses to delete them.

What’s interesting about this isn’t that we’re only now getting this level of straightforward dialogue from Amazon on how long data is kept if not specifically deleted, but it makes one wonder why it is useful or feasible for them to keep it indefinitely. (This assumes that they actually are keeping it indefinitely; it seems likely that most of it isn’t, and that by saying this they’re protecting themselves legally, but I’m just going off the letter.)

After several years of “Hey Alexa,” the company doesn’t seem all that close to figuring out what it is.

Alexa seems to be a shit solution for commerce, so why does Amazon have 10,000 people working on it, according to a report this week in The Information? All signs point to the voice assistant experiment being a failure in terms of its short-term commerce ambitions, though advances in AI will keep pushing its utility.

Training data is a big deal across AI teams looking to educate models on data sets of relevant information. The company seems to say as much. “Our speech recognition and natural language understanding systems use machine learning to adapt to customers’ speech patterns and vocabulary, informed by the way customers use Alexa in the real world. To work well, machine learning systems need to be trained using real world data.”

The company says it doesn’t anonymize any of this data because it has to stay associated with a user’s account in order for them to delete it. I’d feel a lot better if Amazon just effectively anonymized the data in the first place and used on-device processing to build a profile of my voice. What I’m more afraid of is Amazon having such a detailed voiceprint of everyone who has ever used an Alexa device.

If effortless voice-based e-commerce isn’t really the product anymore, what is? The answer is always us, but I don’t like the idea of indefinitely leaving Amazon with my data until they figure out the answer.

Send me feedback
on Twitter @lucasmtny or email
lucas@techcrunch.com

On to the rest of the week’s news.

Trends of the week

Here are a few big news items from big companies, with green links to all the sweet, sweet added context:

  • NASA’s GPS moonshot
    The U.S. government really did us a solid inventing GPS, but NASA has some bigger ideas on the table for the positioning platform, namely, taking it to the Moon. It might be a little complicated, but, unsurprisingly, scientists have some ideas here. Read more.
  • Apple has your eyes
    Most of the iOS beta updates are bug fixes, but the latest change to iOS 13 brought a very strange surprise: changing the way the eyes of users on iPhone XS or XS Max look to people on the other end of the call. Instead of appearing that you’re looking below the camera, some software wizardry will now make it look like you’re staring directly at the camera. Apple hasn’t detailed how this works, but here’s what we do know.
  • Trump is having a Twitter party
    Donald Trump’s administration declared a couple of months ago that it was launching an exploratory survey to try to gain a sense of conservative voices that had been silenced on social media. Now @realdonaldtrump is having a get-together and inviting his friends to chat about the issue. It’s a real who’s who; check out some of the people attending here.

GAFA Gaffes

How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:

  1. Amazon is responsible for what it sells:
    [Appeals court rules Amazon can be held liable for third-party products]
  2. Android co-creator gets additional allegations filed:
    [Newly unsealed court documents reveal additional allegations against Andy Rubin]

Extra Crunch

Our premium subscription service had another week of interesting deep dives. TechCrunch reporter Kate Clark did a great interview with the ex-Facebook, ex-Venmo founding team behind Fin and how they’re thinking about the consumerization of the enterprise.

Sam Lessin and Andrew Kortina on their voice assistant’s workplace pivot

“…The thing is, developing an AI assistant capable of booking flights, arranging trips, teaching users how to play poker, identifying places to purchase specific items for a birthday party and answering wide-ranging zany questions like “can you look up a place where I can milk a goat?” requires a whole lot more human power than one might think. Capital-intensive and hard-to-scale, an app for “instantly offloading” chores wasn’t the best business. Neither Lessin nor Kortina will admit to failure, but Fin’s excursion into B2B enterprise software eight months ago suggests the assistant technology wasn’t a billion-dollar idea.…”

Here are some of our other top reads this week for premium subscribers. This week, we talked a bit about asking for money and the future of China’s favorite tech platform.

Want more TechCrunch newsletters? Sign up here.

Samsung shuts down its AI-powered Mall shopping app in India

Posted by | Amazon, Android, Apps, artificial intelligence, Asia, Bixby, india, Samsung, Samsung Electronics, Shopclues, Xiaomi | No Comments

Samsung has quietly discontinued an app that it built specifically for India, one of its largest markets and where it houses a humongous research and development team. The AI-powered Android app, called Samsung Mall, was positioned to help users identify objects around them and locate them on shopping sites to make a purchase.

The company has shut down the app a year and a half after its launch. Samsung Mall was exclusively available for select company handsets and was launched alongside the Galaxy On7 Prime smartphone. News blog TizenHelp was first to report the development.

At the time of launch, Samsung said the Mall app would complement features of Bixby, the company’s virtual assistant. Bixby already offers functionality that allows users to identify objects through photos — but does not let them make a purchase.

“The first insight while developing Samsung Mall was that consumers may be looking to find the price, the colour, delivery options and a lot of other things. Indian consumers want to find the best deals first. They aren’t tied up with one particular portal as well,” Sanjay Razdan, director of Samsung India told local outlet India Today at the time of the launch.

Samsung partnered with Amazon, ShopClues and TataCLiQ to show relevant results from these retailers on its “one-stop online experience” app. Users were also able to compare prices to see which website was offering them the item at the lowest cost.

The Samsung Mall app had been downloaded about five million times from the Google Play Store in India since March 2018, Randy Nelson, head of Mobile Insights at analytics firm SensorTower, told TechCrunch. The app had begun to lose its popularity in recent months, though, and Samsung has now pulled it from the app store.

“Downloads in May totaled 275,000 — which was down 38% year-over-year from 476,000 in May 2018. It was ranked No. 1,055 by downloads in India’s Google Play store in May — down from 487 a year ago,” said Nelson.

Once the top smartphone vendor in India, Samsung has lost that crown to Xiaomi. The Chinese smartphone maker has held the top position in India for two straight years now, according to research firm IDC.

A Samsung spokesperson in India, contacted by TechCrunch on Monday, has yet to comment on the story.

Powered by WPeMatico