Facebook acquires the VR game studio behind one of the Rift’s best titles

Facebook is aiming to build on its VR hardware launches of 2019 with an investment in virtual reality software.

Facebook announced today that it has acquired Bay Area VR studio Sanzaru Games, the developer of “Asgard’s Wrath,” considered by many enthusiasts to be one of the Oculus Rift’s best games. Terms of the deal weren’t disclosed, but the studio will continue to operate its offices in the U.S. and Canada with “the vast majority” of employees coming aboard following the acquisition, Facebook says.

The 13-year-old game studio has created a total of four titles for the Oculus Rift, including “Asgard’s Wrath” and “Marvel Powers United VR,” both of which were at least partially funded by Oculus Studios. Sanzaru has also made a number of titles for console and mobile systems, releasing games built around its own IP alongside licensed titles for properties like Sonic and Spyro.

Following Facebook’s acquisition of Beat Games in November, the Sanzaru Games purchase showcases Facebook’s continued interest in propping up VR game studios and aligning them with its own interests while allowing them to operate independently. While Beat Games’ “Beat Saber” was considered a more mass-market title, Sanzaru’s “Asgard’s Wrath” represented a play toward courting serious gamers with a lengthier first-person adventure title.

Facebook has already injected billions of dollars into its VR ambitions and, as the company hopes to build out the content ecosystems of the hardware it released last year (including the Oculus Quest and Oculus Rift S), there is little to suggest that its rate of investment will slow in the near future.

Forensic Architecture redeploys surveillance-state tech to combat state-sponsored violence

The specter of constant surveillance hangs over all of us in ways we don’t even fully understand, but it is also possible to turn the tools of the watchers against them. Forensic Architecture is exhibiting several long-term projects at the Museum of Art and Design in Miami that use the omnipresence of technology as a way to expose crimes and violence by oppressive states.

Over seven years Eyal Weizman and his team have performed dozens of investigations into instances of state-sponsored violence, from drone strikes to police brutality. Often these events are minimized at all levels by the state actors involved, denied or no-commented until the media cycle moves on. But sometimes technology provides ways to prove a crime was committed and occasionally even cause the perpetrator to admit it — hoisted by their own electronic petard.

Sometimes the evidence comes from state-deployed kit, like body cameras or public records, but the team also draws on private information co-opted by state authorities to track individuals, like digital metadata from messages and location services.

For instance, when Chicago police shot and killed Harith Augustus in 2018, the department released some footage of the incident, saying that it “speaks for itself.” But Forensic Architecture’s close inspection of the body cam footage, cross-referenced with other materials, made it obvious that the police violated numerous rules (including in the operation of the body cams) in their interaction with him, escalating the situation and ultimately killing a man who by all indications — except the official account — was attempting to comply. The investigation also helped bring to light additional footage that was either mistakenly or deliberately left out of a FOIA release.

In another situation, analysis of their WhatsApp messages, images, and location and time stamps showed that a trio of Turkish migrants seeking asylum in Greece had entered the country and been detained by Greek authorities before being “pushed back” by unidentified masked escorts, having been afforded no legal recourse to asylum processes or the like. This is one of several recent examples of what appear to be private actors working in concert with the state to deprive people of their rights.
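
Forensic Architecture hasn’t published the tooling behind this kind of analysis, but the core move is mundane: pull timestamps and GPS coordinates out of messages and photos, then sort the fragments into a single timeline that can be checked against the official account. The sketch below is a purely illustrative, minimal version of that idea; every field name and sample record is invented.

```python
# Hypothetical sketch of cross-referencing chat and photo metadata into one timeline.
# Field names and sample records are invented; this is not Forensic Architecture's tooling.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Event:
    timestamp: datetime        # when a message was sent or a photo was taken
    source: str                # e.g. "whatsapp" or "photo_exif"
    lat: Optional[float]       # GPS latitude, if the item carries location data
    lon: Optional[float]       # GPS longitude, if the item carries location data
    note: str                  # short description of the fragment

events = [
    Event(datetime(2019, 4, 28, 14, 2, tzinfo=timezone.utc), "whatsapp", 41.00, 26.30, "message: 'we crossed the river'"),
    Event(datetime(2019, 4, 28, 15, 40, tzinfo=timezone.utc), "photo_exif", 40.98, 26.35, "photo taken near detention site"),
    Event(datetime(2019, 4, 28, 19, 5, tzinfo=timezone.utc), "whatsapp", None, None, "message: 'being moved, no idea where'"),
]

# Ordering the fragments by time is the core move: scattered metadata becomes a
# sequence of events that can be checked against the official account.
for e in sorted(events, key=lambda ev: ev.timestamp):
    where = f"({e.lat}, {e.lon})" if e.lat is not None else "(no location)"
    print(f"{e.timestamp.isoformat()}  {where}  [{e.source}]  {e.note}")
```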

Situated testimony for survivors

I spoke with Weizman before the opening of this exhibition in Miami, where some of the latest investigations are being shown off. (Shortly after our interview he would be denied entry to the U.S. to attend the opening, with a border agent explaining that this denial was algorithmically determined; we’ll come back to this.)

The original motive for creating Forensic Architecture, he explained, was to elicit testimony from those who had experienced state violence.

“We started using this technique when in 2013 we met a drone survivor, a German woman who had survived a drone strike in Pakistan that killed several relatives of hers,” Weizman explained. “She has wanted to deliver testimony in a trial regarding the drone strike, but like many survivors her memory was affected by the trauma she has experienced. The memory of the event was scattered, it had lacunae and repetitions, as you often have with trauma. And her condition is like many who have to speak out in human rights work: The closer you get to the core of the testimony, the description of the event itself, the more it escapes you.”

The approach they took to help this woman, and later many others, jog her memory is something called “situated testimony.” Essentially, it amounts to exposing the person to media from the experience, allowing them to “situate” themselves in that moment. This is not without its own risks.

“Of course you must have the appropriate trauma professionals present,” Weizman said. “We only bring people who are willing to participate and perform the experience of being again at the scene as it happened. Sometimes details that would not occur to someone to be important come out.”

A digital reconstruction of a drone strike’s explosion was recreated physically for another exhibition.

But it’s surprising how effective it can be, he explained. One case exposed previously undisclosed American involvement.

“We were researching a Cameroon special forces detention center, [where] torture and death in custody occurred, for Amnesty International,” he explained. “We asked detainees to describe to us simply what was outside the window. How many trees, or what else they could see.” Such testimony could help place their exact location and orientation in the building and lead to more evidence, such as cameras across the street facing that room.

“And sitting in a room based on a satellite image of the area, one told us: ‘yes, there were two trees, and one was over by the fence where the American soldiers were jogging.’ We said, ‘wait, what, can you repeat that?’ They had been interviewed many times and never mentioned American soldiers,” Weizman recalled. “When we heard there were American personnel, we found Facebook posts from service personnel who were there, and were able to force the transfer of prisoners there to another prison.”

Weizman noted that the organization only goes where help is requested, and does not pursue what might be called private injustices, as opposed to public.

“We require an invitation, to be invited into this by communities that invite state violence. We’re not a forensic agency, we’re a counter-forensic agency. We only investigate crimes by state authorities.”

Using virtual reality: “Unparalleled. It’s almost tactile.”

In the latest of these investigations, making its public debut at MOAD, the team used virtual reality in its situated testimony work for the first time. While VR has proven to be somewhat less compelling than most would like on the entertainment front, it turns out to work quite well in this context.

“We worked with an Israeli whistleblower soldier regarding testimony of violence he committed against Palestinians,” Weizman said. “It has been denied by the Israeli prime minister and others, but we have been able to find Palestinian witnesses to that case, and put them in VR so we could cross reference them. We had victim and perpetrator testifying to the same crime in the same space, and their testimonies can be overlaid on each other.”

Dean Issacharoff — the soldier accused by Israel of giving false testimony — describes the moment he illegally beat a Palestinian civilian. (Caption and image courtesy of Forensic Architecture)

One thing about VR is that the sense of space is very real; if the environment is built accurately, things like sight-lines and positional audio can be extremely true to life. If someone says they saw the event occur here, but the state says it was here, and a camera this far away saw it at this angle… these incomplete accounts can be added together to form something more factual, and assembled into a virtual environment.
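
The geometry behind combining those accounts is essentially triangulation: two independent sight-lines from known positions cross at a candidate location for the event. The snippet below is a generic illustration of that principle, not Forensic Architecture’s reconstruction software; the positions and bearings are invented.

```python
# Generic 2D sight-line intersection: two witnesses, each at a known position,
# each pointing toward the event; the crossing point is a candidate location.
# Purely illustrative -- not Forensic Architecture's actual reconstruction code.
import math

def intersect_sightlines(p1, bearing1_deg, p2, bearing2_deg):
    """Each sight-line is (observer position, compass bearing in degrees, x=east, y=north).
    Returns the intersection point, or None if the lines are parallel."""
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    # Solve p1 + t*d1 == p2 + s*d2 for t using the 2x2 cross-product form.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel sight-lines never cross
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Witness A stands at the origin looking north-east; a camera 100 m east looks north-west.
print(intersect_sightlines((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))  # -> approximately (50.0, 50.0)
```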

“That project is the first use of VR interviews we have done — it’s still in a very experimental stage. But it didn’t involve fatalities, so the level of trauma was a bit more controlled,” Weizman explained. “We have learned that the level and precision we can arrive at in reconstructing an incident is unparalleled. It’s almost tactile; you can walk through the space, you can see every object: guns, cars, civilians. And you can populate it until the witness is satisfied that this is what they experienced. I think this is a first, definitely in forensic terms, as far as uses of VR.”

A photogrammetry-based reconstruction of the area of Hebron where the incident took place.

In video of the situated testimony, you can see witnesses describing locations more exactly than they likely or even possibly could have without the virtual reconstruction. “I stood with the men at exactly that point,” says one, gesturing toward an object he recognized, then pointing upwards: “There were soldiers on the roof of this building, where the writing is.”

Of course it is not the digital recreation itself that forces the hand of those involved, but the incontrovertible facts it exposes. No one would ever have known that the U.S. had a presence at that detainment facility, and the country had no reason to say it did. The testimony wouldn’t even have been enough, except that it put the investigators onto a line of inquiry that produced data. And in the case of the Israeli whistleblower, the situated testimony defies official accounts that the organization he represented had lied about the incident.

Avoiding “product placement” and tech incursion

Sophie Landres, MOAD’s curator of Public Programs and Education, was eager to add that the museum is not hosting this exhibit as a way to highlight how wonderful technology is. It’s important to put the technology and its uses in context rather than try to dazzle people with its capabilities. You may find yourself playing into someone else’s agenda that way.

“For museum audiences, this might be one of their first encounters with VR deployed in this way. The companies that manufacture these technologies know that people will have their first experiences with this tech in a cultural or entertainment context, and they’re looking for us to put a friendly face on these technologies that have been created to enable war and surveillance capitalism,” she told me. “But we’re not interested in having our museum be a showcase for product placement without having a serious conversation about it. It’s a place where artists embrace new technologies, but also where they can turn it towards existing power structures.”

Boots on backs mean this is not an advertisement for VR headsets or 3D modeling tools.

She cited a tongue-in-cheek definition of “mixed reality” referring to both digital crossover into the real world and the deliberate obfuscation of the truth at a greater scale.

“On the one hand you have mixing the digital world and the real, and on the other you have the mixed reality of the media environment, where there’s no agreement on reality and all these misinformation campaigns. What’s important about Forensic Architecture is they’re not just presenting evidence of the facts, but also the process used to arrive at these truth claims, and that’s extremely important.”

In openly presenting the means as well as the ends, Weizman and his team avoid succumbing to what he calls the “dark epistemology” of the present post-truth era.

“The arbitrary logic of the border”

As mentioned earlier, Weizman was denied entry to the U.S. for reasons unknown, but possibly related to the network of politically active people with whom he has associated for the sake of his work. Disturbingly, his wife and children were also stopped while entering the States a day before him and separated at the airport for questioning.

In a statement issued publicly afterwards, Weizman dissected the event.

In my interview the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled… I was asked to supply the Embassy with additional information, including fifteen years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.

This much we know: we are being electronically monitored for a set of connections – the network of associations, people, places, calls, and transactions – that make up our lives. Such network analysis poses many problems, some of which are well known. Working in human rights means being in contact with vulnerable communities, activists and experts, and being entrusted with sensitive information. These networks are the lifeline of any investigative work. I am alarmed that relations among our colleagues, stakeholders, and staff are being targeted by the US government as security threats.

This incident exemplifies – albeit in a far less intense manner and at a much less drastic scale – critical aspects of the “arbitrary logic of the border” that our exhibition seeks to expose. The racialized violations of the rights of migrants at the US southern border are of course much more serious and brutal than the procedural difficulties a UK national may experience, and these migrants have very limited avenues for accountability when contesting the violence of the US border.

The works being exhibited, he said, “seek to demonstrate that we can invert the forensic gaze and turn it against the actors — police, militaries, secret services, border agencies — that usually seek to monopolize information. But in employing the counter-forensic gaze one is also exposed to higher-level monitoring by the very state agencies investigated.”

Forensic Architecture’s investigations are ongoing; you can keep up with them at the organization’s website. And if you’re in Miami, drop by MOAD to see some of the work firsthand.

Venmo prototypes a debit card for teenagers

Allowance is going digital. Venmo has been spotted prototyping a new feature that would allow adult users to create for their teenage children a debit card connected to their account. That could potentially let parents set spending notifications and limits while giving kids more flexibility in urgent situations than a few dollars stuffed in a pocket.

Delving into children’s banking could establish a new reason for adults to sign up for Venmo, get them saving more in Venmo debit accounts where the company can earn interest on the cash, and drive purchase frequency that racks up interchange fees for Venmo’s owner, PayPal.

But Venmo is arriving late to the teen debit card market. Startups like Greenlight and Step let parents manage teen spending on dedicated debit cards. More companies like Kard and neo banking giant Revolut have announced plans to launch their own versions. And Venmo’s prototype uses very similar terminology to that of Current, a frontrunner in the children’s banking space with over 500,000 accounts that raised a $20 million Series B late last year.

The first signs of Venmo’s debit card were spotted by reverse engineering specialist Jane Manchun Wong, who has provided slews of accurate tips to TechCrunch in the past. Hidden in Venmo’s Android app is code revealing a “delegate card” feature, designed to let users create a debit card that’s connected to their account but has limited privileges.

A screenshot generated from hidden code in Venmo’s app, via Jane Manchun Wong

A set-up screen Wong was able to generate from the code shows the option to “Enter your teen’s info,” because “We’ll use this to set up the debit card.” It asks parents to enter their child’s name, birth date and “What does your teen call you?” That’s almost identical to the “What does [your child’s name] call you?” set-up screen for Current’s teen debit card.

When TechCrunch asked about the teen debit feature and when it might launch, a Venmo spokesperson gave a cagey response that implies it’s indeed internally testing the option, writing “Venmo is constantly working to identify ways to refine and enhance the user experience. We frequently test product offerings to understand the value it could have for our users, and I don’t have anything further to share right now.”

Typically, tech companies’ product development flow sees them come up with ideas, mock them up, prototype them in their real apps as internal-only features, test them externally with small percentages of real users, then launch them officially if feedback and data are positive throughout. It’s unclear when Venmo might launch teen debit cards, though the product could always be scrapped. It’d need to move fast to beat Revolut and Kard to market.

Current’s teen debit card

A teen card would build upon the June 2018 launch of Venmo’s branded Mastercard debit card, which is monetized through interchange fees and interest on savings. It offers payment receipts with options to split charges with friends within Venmo, free withdrawals at MoneyPass ATMs, rewards and in-app features for resetting your PIN or disabling a stolen card. Venmo also plans to launch a credit card issued by Synchrony this year.

Venmo might look to equip its teen debit card with popular features from competitors, like automatic weekly allowance deposits, notifications of all purchases or the ability to block spending at certain merchants. It’s unclear if it will charge a fee like the $36 per year subscription for Current.

Current offers these features for parents who set up a teen debit card
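
Venmo hasn’t shared any implementation details, so the sketch below is purely hypothetical: a guess at how the “limited privileges” of a delegate card (a parent-set spending limit, blocked merchant categories, purchase notifications) might be modeled. Every name and rule here is invented for illustration.

```python
# Hypothetical model of a "delegate" debit card with parental limits.
# Nothing here reflects Venmo's actual code or API -- it is an illustration only.
from dataclasses import dataclass, field

@dataclass
class DelegateCardPolicy:
    weekly_allowance_cents: int = 2000        # e.g. an automatic $20/week deposit
    per_purchase_limit_cents: int = 5000      # decline anything over $50
    blocked_merchant_categories: set = field(default_factory=lambda: {"gambling", "alcohol"})
    notify_parent_on_every_purchase: bool = True

def authorize(policy: DelegateCardPolicy, amount_cents: int, merchant_category: str) -> bool:
    """Return True if the teen's purchase should be approved under the parent's policy."""
    if merchant_category in policy.blocked_merchant_categories:
        return False
    if amount_cents > policy.per_purchase_limit_cents:
        return False
    return True

policy = DelegateCardPolicy()
print(authorize(policy, 1250, "restaurants"))  # True  -- a $12.50 lunch is fine
print(authorize(policy, 9900, "electronics"))  # False -- over the per-purchase limit
```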

Tech startups are increasingly pushing to offer a broad range of financial services where margins are high. It’s an easy way to earn cheap money at a time when unit economics are coming under scrutiny in the wake of the WeWork implosion. Investors are pinning their hopes on efficient financial services too, pouring $34 billion into fintech startups during 2019.

Venmo’s already become a popular way for younger people to split the bill for Uber rides or dinner. Bringing social banking to a teen demographic probably should have been its plan all along.

In Fortnite’s new spy-themed season, more is more

The new season of Fortnite’s second chapter finally landed last week, shaking up a reimagined map that burst dramatically out of a black hole in the game last year. Over the weekend, we scoped out what’s changed in a game now sprinkled with secret agents, laser beams and all manner of things dipped in gold. Happily, we can report that Epic returns the game to its true colors in season 2, with some innovative ideas that deepen the game for casual players.

The black hole event and subsequent total map makeover were exciting at the time, but as the months ticked by, Epic’s decision to pare down the game’s excesses left it feeling bare. In season 2, Epic piles a lot of new ideas onto that foundation, and the game feels weirder and more chaotic, with a map that’s much more alive as a result. And bananas in suits. Did we mention bananas in suits?

The Island has been taken over by covert operatives – members of Ghost and Shadow. Will you join the fight? pic.twitter.com/dmUiUyxWM2

— Fortnite (@FortniteGame) February 20, 2020

In season 2, Fortnite takes its most committed stab yet at a coherent theme, with spies, secret societies, dapper bananas, bulky henchmen and… a really swole cat for some reason. It’s a fun vibe and well-executed so far. That theme plays out everywhere, from a revamped battle pass menu designed as a spy headquarters to some very dynamic new high-risk/high-reward map hotspots chock full of special new weapons, locked vaults and laser beams.

Even better, the new locations are stocked with NPC versions of the boss-like characters the season introduces us to right off the bat, making for a fun and reasonably challenging way to mix up gameplay when you need a break from the sometimes lonely intensity of battle royale play.

Suit up, it’s time to drop in, secure intel and take back the Island. The Agency is calling, whose side are you on? pic.twitter.com/kHw6LcDSnT

— Fortnite (@FortniteGame) February 20, 2020

The new season keeps the old map mostly intact while adding five main new locations, all heavily guarded, loot-rich fortresses. That means a new point of interest near each corner of the map, and one right on the central island (a spot inevitably destined for something more interesting than a suburban home). The rest of the map doesn’t have many visual changes, but a handful of smaller, old locations scattered around the map have been co-opted by spy organizations and staffed with henchmen, which makes for a chaotic surprise when you come across them in the heat of gameplay. Even Pleasant Park has its own underground spy hub now.

Down the line, the new season will also introduce two competing factions for players to join, Ghost and Shadow. Depending on which faction they choose, players can unlock some pretty cool variants on battle pass skins, including Meowscles, a shirtless, muscle-bound catman with a pec-flexing animation that might be the best thing to ever happen to Fortnite. Well, except for the new teleporting port-a-potties. You’ll find those soon enough.

⚠ Attention Operatives: Your choices will impact each Chapter 2 – Season 2 Battle Pass Agent’s future… permanently.

No matter what side you turn them to – GHOST or SHADOW, their allegiance cannot be reversed. Choose wisely! pic.twitter.com/k88IXZAEjl

— Fortnite (@FortniteGame) February 20, 2020

As far as changes that will affect gameplay, there are many, many unvaulted weapons mixing things up relative to last season’s stripped-down arsenal. Traps are gone, chests no longer shower you with fishing rods (thankfully) and heavy assault rifles and all manner of silenced guns have made a comeback. And if you really want to be treated to the best weapons in the game, you can raid one of the five new spy headquarters to take down bosses, including an explosive-happy rocker named TNTina, a sharply dressed guy calling himself Midas and Meowscles (oh Meowscles!), who hangs out on his own gigantic, laser-guarded yacht.

As you work through the battle pass, you’ll also unlock these boss characters as skins. It’s a fun way to drape some light narrative over a game loved mostly for its incoherent, total cartoon chaos rather than for the character-centric, light-and-fluffy approach of a multiplayer shooter like Overwatch. And because Epic is tasked with the impossible — maintaining momentum on a game with such historic success it basically became a mainstream social network at its peak — carving out a deeper game under Fortnite’s candy-colored shell can’t hurt.

Petnet’s smart pet feeder system is back after a week-long outage, but customers are still waiting for answers

Petnet, the smart pet feeder backed by investors including Petco, recently experienced a week-long system outage affecting its second-generation SmartFeeders. While the startup’s customer service tweeted over the weekend that its SmartFeeders and app’s functionality have been restored, Petnet’s lack of responsiveness continues to leave many customers frustrated and confused.

Petnet first announced on Feb. 14 that it was investigating a system outage affecting its second-generation SmartFeeders that made the feeders appear to be offline. The company said in a tweet that the SmartFeeders were still able to dispense on schedule, but several customers replied that their devices had also stopped dispensing food or weren’t dispensing it on schedule.

On Feb. 19, the company said that it was “working closely with our third-party service provider in regards to the outage,” before announcing on Feb. 22 that the SmartFeeders were coming back online.

During that time, customers voiced frustration at the company’s lack of responses to their questions on Twitter and Facebook. Messages to the company’s support email and CEO Carlos Herrera were undeliverable.

TechCrunch’s emails to the company likewise bounced back with delivery failure notices, and a message sent to its Twitter account went unanswered. We have contacted the company again for comment.

 

Petnet also experienced a similar system outage last month.

According to Crunchbase, Petnet.io has raised $14.9 million since it was founded in 2012, including a Series A led by Petco.

In a statement sent to TechCrunch over the weekend before Petnet said the outage was resolved, a Petco representative said, “Petco is a minor and passive investor in Petnet, but we do not have any involvement in the company’s operations nor insight into the system outage they are currently experiencing.”

Do phones need to fold?

As Samsung (re)unveiled its clamshell folding phone last week, I kept seeing the same question pop up amongst my social circles: why?

I was wondering the same thing myself, to be honest. I’m not sure even Samsung knows; they’d win me over by the end, but only somewhat. The halfway-folded, laptop-style “Flex Mode” allows you to place the phone on a table for hands-free video calling. That’s pretty neat, I guess. But… is that it?

The best answer to “why?” I’ve come up with so far isn’t a very satisfying one: Because they can (maybe). And because they sort of need to do something.

Let’s time-travel back to the early 2000s. Phones were weird, varied and no manufacturers really knew what was going to work. We had basic flip phones and Nokia’s indestructible bricks, but we also had phones that swiveled, slid and included chunky physical keyboards that seemed absolutely crucial. The Sidekick! LG Chocolate! BlackBerry Pearl! Most were pretty bad by today’s standards, but it was at least easy to tell one model from the next.

(Photo by Kim Kulish/Corbis via Getty Images)

Then came the iPhone in 2007; a rectangular glass slab defined less by physical buttons and switches and more by the software that powered it. The device itself, a silhouette. There was hesitation about this formula initially; the first Android phones shipped with swiveling keyboards, trackballs and various sliding pads. As iPhone sales grew, everyone else’s buttons, sliders and keyboards were boiled away as designers emulated the iPhone’s form factor. The best answer, it seemed, was a simple one.

Twelve years later, everything has become the same. Phones have become… boring. When everyone is trying to build a better rectangle, the battle becomes one of hardware specs. Which one has the fastest CPU? The best camera?

Shopify joins Facebook’s cryptocurrency Libra Association

After eBay, Visa, Stripe and other high-profile partners ditched the Facebook-backed cryptocurrency collective, Libra scored a win today with the addition of Shopify. The e-commerce platform will become a member of the Libra Association, contributing at least $10 million and operating a node that processes transactions for the Facebook-originated stablecoin.

If Libra manages to assuage international regulators’ concerns, which are currently blocking its rollout, Shopify could gain a way to process transactions without paying credit card fees. Libra is designed to move between wallets with zero or nearly zero fees. That could save money for Shopify and the 1 million merchants running online shops on its platform.

Shopify cited helping merchants reduce fees and bringing commerce opportunities to developing nations as reasons it’s joining the Libra Association. “Much of the world’s financial infrastructure was not built to handle the scale and needs of internet commerce,” Shopify writes. Here are the most critical parts of its announcement:

Our mission is to make commerce better for everyone and to do that, we spend a lot of our time thinking about how to make commerce better in parts of the world where money and banking could be far better . . . As a member of the Libra Association, we will work collectively to build a payment network that makes money easier to access and supports merchants and consumers everywhere . . . Our mission has always been to support the entrepreneurial journey of the more than one million merchants on our platform. That means advocating for transparent fees and easy access to capital, and ensuring the security and privacy of our merchants’ customer data. We want to create an infrastructure that empowers more entrepreneurs around the world.

As part of the Libra Association, Shopify will become a validator node operator, gain one vote on the Libra Association council and can earn dividends from interest earned on the Libra reserve in proportion to its investment, which is $10 million at a minimum.
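
The dividend mechanics are simple pro-rata arithmetic. As a back-of-the-envelope illustration, the sketch below makes up the reserve interest and total member investment; only Shopify’s $10 million minimum and the proportional rule come from the announcement.

```python
# Back-of-the-envelope pro-rata dividend math. Every number here is invented
# for illustration; only the "in proportion to its investment" rule and the
# $10 million minimum come from the article.
shopify_investment = 10_000_000          # Shopify's stated minimum contribution, in USD
total_member_investment = 250_000_000    # hypothetical total across all members
annual_reserve_interest = 30_000_000     # hypothetical interest earned by the Libra reserve

shopify_share = shopify_investment / total_member_investment
shopify_dividend = shopify_share * annual_reserve_interest
print(f"Share: {shopify_share:.1%}, dividend: ${shopify_dividend:,.0f}")
# -> Share: 4.0%, dividend: $1,200,000
```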

The Libra Association had lost much of its e-commerce expertise when a string of members abandoned the project in October amidst regulatory scrutiny. That included traditional payment processors like Visa and Mastercard, online processors like Stripe and PayPal and marketplaces like eBay. That threw into question whether Libra would have the right partners to make the cryptocurrency accepted in enough places to be useful to people.

As it works to convince regulators Libra is safe, Facebook has been working on its other payment plays, including Facebook Pay and WhatsApp Pay, that rely on traditional bank transfers or credit cards.

Shopify’s CEO Tobi Lutke tweeted that “Shopify spends a lot of time thinking about how to make commerce better in parts of the world where money and banking could be far better. That’s why we decided to become a member of the Libra Association.”

“We are proud to welcome Shopify, Inc. (SHOP) to the Libra Association. As a multinational commerce platform with over one million businesses in approximately 175 countries, Shopify, Inc. brings a wealth of knowledge and expertise to the Libra project,” writes Dante Disparte, the Libra Association’s head of Policy and Communications. “Shopify joins an active group of Libra Association members committed to achieving a safe, transparent, and consumer-friendly implementation of a global payment system that breaks down financial barriers for billions of people.”

A recent hire further tied the two companies together. Facebook’s former lead product manager for its payment platform and billing teams, Kaz Nejatian, in September became Shopify’s VP and GM of money.

Operating an e-commerce store can be difficult or impossible without a traditional bank account, which can be tough to attain in some developing countries. Libra could allow these merchants to establish a Libra Wallet where payments are sent instantly, without steep credit card fees, and in theory could be cashed out at local brick-and-mortar establishments or ATMs for local fiat currency.

Shopify’s credit card readers

But for any of that to happen, the Libra Association will have to convince the U.S. government, the EU and more that it won’t help terrorists launder money, hurt people’s privacy or weaken nations’ power in the global financial system. French Finance Minister Bruno Le Maire said, “the monetary sovereignty of countries is at stake from a possible privatisation of money . . . we cannot authorise the development of Libra on European soil.”

Libra was initially slated to launch in 2020. We’ll see.

Here’s the full list of Libra Association members:

Current

Facebook’s Calibra, Shopify, PayU, Farfetch, Lyft, Spotify, Uber, Illiad SA, Anchorage, Bison Trails, Coinbase, Xapo, Andreessen Horowitz, Union Square Ventures, Breakthrough Initiatives, Ribbit Capital, Thrive Capital, Creative Destruction Lab, Kiva, Mercy Corps, Women’s World Banking.

Former members

Vodafone, Visa, Mastercard, Stripe, PayPal, Mercado Pago, Booking Holdings, eBay.

One year later, the future of foldables remains uncertain

Yesterday, Samsung announced that the Galaxy Z Flip sold out online. What, precisely, that means is hard to say, of course, without specific numbers from the company. But it’s probably enough to make the company bullish about its latest wade into the foldable waters, in the wake of last year’s Fold — let’s just say “troubles.”

Response to the device has been positive. I wrote mostly nice things about the Flip, with the caveat that the company only loaned out the product for 24 hours (I won’t complain here about heading into the city on a Saturday in 20-degree weather to return the device. I’m mostly not that petty).

Heck, the product even scored a (slightly) better score on iFixit’s repairability meter than the Razr. Keep in mind, it got a 2/10 to Motorola’s 1/10 (the lowest score), but in 2020, we’re all taking victories where we can get them.

There’s been some negative coverage mixed in, as well, of course; iFixit noted that the Flip could have some long-term dust problems due to its hinge, writing, “it seems like dust might be this phone’s Kryptonite.” Also, the $1,400 phone’s new, improved folding glass has proven to be vulnerable to fingernails, of all things — a definite downside if you have, you know, fingers.

Reports of cracked screens have also begun to surface, owing, perhaps, to cold weather. It’s still hard to say how widespread these concerns are. Samsung’s saving grace, however, could well be the Razr. First, the device made it through only a fraction of the folds of Samsung’s first-gen product. Then reviewers and users alike complained of a noisy fold mechanism and build quality that might be… lacking.

A review at Input had some major issues with a screen that appeared to fall apart at the seams (again, perhaps due to cold weather). Motorola went on the defensive, issuing the following statement:

We have full confidence in razr’s display, and do not expect consumers to experience display peeling as a result of normal use. As part of its development process, razr underwent extreme temperature testing. As with any mobile phone, Motorola recommends not storing (e.g., in a car) your phone in temperatures below -4 degrees Fahrenheit and above 140 degrees Fahrenheit. If consumers experience device failure related to weather during normal use, and not as a result of abuse or misuse, it will be covered under our standard warranty.

Consensus among reviews is to wait. The Flip is certainly a strong indication that the category is heading in the right direction. And Samsung is licensing its folding glass technology, which should help competitors get a bit of a jump start and hopefully avoid some of the pitfalls of the first-gen Fold and Razr.

A new survey from PCMag shows that 82% of consumers don’t plan to purchase such a device, with things like snapping hinges, fragile screens and creases populating the list of concerns. Which, honestly, fair enough on all accounts.

The rush to get to market has surely done the category a disservice. Those who consider themselves early adopters are exactly the people who regularly read tech reviews, and widespread issues are likely enough to make many reconsider pulling the trigger on a $1,500-$2,000 device. Not even early adopters are thrilled about the idea of beta testing for that much money.

Two steps forward, one step back, perhaps? Let’s check back in a generation or two from now and talk.

How ‘The Mandalorian’ and ILM invisibly reinvented film and TV production

“The Mandalorian” was a pretty good show. On that most people seem to agree. But while a successful live-action Star Wars TV series is important in its own right, the way this particular show was made represents a far greater change, perhaps the most important since the green screen. The cutting edge tech (literally) behind “The Mandalorian” creates a new standard and paradigm for media — and the audience will be none the wiser.

What is this magical new technology? It’s an evolution of a technique that’s been in use for nearly a century in one form or another: displaying a live image behind the actors. The advance is not in the idea but the execution: a confluence of technologies that redefines “virtual production” and will empower a new generation of creators.

As detailed in an extensive report in American Cinematographer Magazine (I’ve been chasing this story for some time, but suspected this venerable trade publication would get the drop on me), the production process of “The Mandalorian” is completely unlike any before, and it’s hard to imagine any major film production not using the technology going forward.

“So what the hell is it?” I hear you asking.

Meet “the Volume.”

Formally called Stagecraft, it’s 20 feet tall, 270 degrees around, and 75 feet across — the largest and most sophisticated virtual filmmaking environment yet made. ILM just today publicly released a behind-the-scenes video of the system in use, as well as a number of new details about it.

It’s not easy being green

In filmmaking terms, a “volume” generally refers to a space where motion capture and compositing take place. Some volumes are big and built into sets, as you might have seen in behind-the-scenes footage of Marvel or Star Wars movies. Some are smaller, plainer affairs, where the motions of the actors behind CG characters play out their roles.

But they generally have one thing in common: They’re static. Giant, bright green, blank expanses.

Does that look like fun to shoot in?

One of the most difficult things for an actor in modern filmmaking is getting into character while surrounded by green walls, foam blocks indicating obstacles to be painted in later and people with mocap dots on their face and suits with ping-pong balls attached. Not to mention everything has green reflections that need to be lit or colored out.

Advances some time ago (think prequels-era Star Wars) enabled cameras to display a rough pre-visualization of what the final film would look like, instantly substituting CG backgrounds and characters onto monitors. Sure, that helps with composition and camera movement, but the world of the film isn’t there, the way it is with practical sets and on-site shoots.

Practical effects were a deliberate choice for “The Child” (AKA Baby Yoda) as well.

What’s more, because of the limitations in rendering CG content, the movements of the camera are often restricted to a dolly track or a few pre-selected shots for which the content (and lighting, as we’ll see) has been prepared.

This particular volume, called Stagecraft by ILM, the company that put it together, is not static. The background is a set of enormous LED screens such as you might have seen onstage at conferences and concerts. The Stagecraft volume is bigger than any of those — but more importantly, it’s smarter.

See, it’s not enough to just show an image behind the actors. Filmmakers have been doing that with projected backgrounds since the silent era! And that’s fine if you just want to have a fake view out of a studio window or fake a location behind a static shot. The problem arises when you want to do anything more fancy than that, like move the camera. Because when the camera moves, it immediately becomes clear that the background is a flat image.

The innovation in Stagecraft and other, smaller LED walls (the more general term for these backgrounds) is not only that the image shown is generated live in photorealistic 3D by powerful GPUs, but that 3D scene is directly affected by the movements and settings of the camera. If the camera moves to the right, the image alters just as if it were a real scene.

This is remarkably hard to achieve. In order for it to work, the camera must send its real-time position and orientation to, essentially, a beast of a gaming PC, because this and other setups like it generally run on the Unreal engine (Epic does its own breakdown of the process here). That PC must take the camera’s movement and render it exactly in the 3D environment, with attendant changes to perspective, lighting, distortion, depth of field and so on — all fast enough that those changes can be shown on the giant wall nearly instantly. After all, if the movement of the background lagged the camera by more than a handful of frames, it would be noticeable to even the most naive viewer.
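
Stagecraft’s real pipeline runs on Unreal Engine with dedicated tracking hardware, so the sketch below is only a schematic of the control loop described above: tracked camera pose in, perspective-correct frame out, under a hard latency budget. Every function here is a placeholder, not part of any real system.

```python
# Illustrative control loop for an LED-wall "in-camera VFX" setup.
# The real system runs on Unreal Engine with dedicated tracking hardware;
# every function below is a placeholder standing in for that stack.
import time

FRAME_BUDGET_S = 1.0 / 24          # at 24 fps, the wall has ~42 ms to respond
MAX_LAG_FRAMES = 3                 # more than a few frames of lag becomes visible

def read_camera_pose():
    """Placeholder: position + orientation from the tracking system."""
    return {"pos": (0.0, 1.7, -4.0), "rot": (0.0, 0.0, 0.0)}

def render_background(pose):
    """Placeholder: ask the engine to render the 3D scene from this pose."""
    return f"frame rendered for pose {pose['pos']}"

def push_to_led_wall(frame):
    """Placeholder: send the rendered frame to the LED panels."""
    pass

lag_frames = 0
for _ in range(240):                     # ~10 seconds of simulated frames
    start = time.monotonic()
    pose = read_camera_pose()            # where the physical camera is right now
    frame = render_background(pose)      # perspective-correct background for that pose
    push_to_led_wall(frame)
    elapsed = time.monotonic() - start
    # Track how far behind the wall is falling; sustained lag breaks the illusion.
    lag_frames = lag_frames + 1 if elapsed > FRAME_BUDGET_S else 0
    if lag_frames > MAX_LAG_FRAMES:
        print("warning: background is lagging the camera noticeably")
    time.sleep(max(0.0, FRAME_BUDGET_S - elapsed))
```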

Yet fully half of the scenes in “The Mandalorian” were shot within Stagecraft, and my guess is no one had any idea. Interior, exterior, alien worlds or spaceship cockpits, all used this giant volume for one purpose or another.

There are innumerable technological advances that have contributed to this; “The Mandalorian” could not have been made as it was five years ago. The walls weren’t ready; the rendering tech wasn’t ready; the tracking wasn’t ready — nothing was ready. But it’s ready now.

It must be mentioned that Jon Favreau has been a driving force behind this filmmaking method for years now; films like the remake of “The Lion King” were in some ways tech tryouts for “The Mandalorian.” Combined with advances made by James Cameron in virtual filmmaking, and, of course, the indefatigable Andy Serkis’s work in motion capture, this kind of production is only just now becoming realistic due to a confluence of circumstances.

Not just for SFX

Of course, Stagecraft is probably also one of the most expensive and complex production environments ever used. But what it adds in technological overhead (and there’s a lot) it more than pays back in all kinds of benefits.

For one thing, it nearly eliminates on-location shooting, which is phenomenally expensive and time-consuming. Instead of going to Tunisia to get those wide-open desert shots, you can build a sandy set and put a photorealistic desert behind the actors. You can even combine these ideas for the best of both worlds: Send a team to scout locations in Tunisia and capture them in high-definition 3D to be used as a virtual background.

This last option produces an amazing secondary benefit: Reshoots are way easier. If you filmed at a bar in Santa Monica and changes to the dialogue mean you have to shoot the scene over again, no need to wrangle permits and painstakingly light the bar again. Instead, the first time you’re there, you carefully capture the whole scene with the exact lighting and props you had there the first time and use that as a virtual background for the reshoots.

The fact that many effects and backgrounds can be rendered ahead of time and shot in-camera rather than composited in later saves a lot of time and money. It also streamlines the creative process, with decisions able to be made on the spot by the filmmakers and actors, since the volume is reactive to their needs, not vice versa.

Lighting is another thing that is vastly simplified, in some ways at least, by something like Stagecraft. The bright LED wall can provide a ton of illumination, and because it actually represents the scene, that illumination is accurate to the needs of that scene. A red-lit space station interior, with the usual falling sparks and so on, casts red on the actors’ faces and, of course, on the highly reflective helmet of the Mandalorian himself. Yet the team can also tweak it, for instance by sticking a bright white line high on the LED wall, out of sight of the camera, that creates a pleasing highlight on the helmet.

Naturally there are some trade-offs. At 20 feet tall, the volume is large but not so large that wide shots won’t capture the top of it, above which you’d see cameras and a different type of LED (the ceiling is also a display, though not as powerful). This necessitates some rotoscoping and post-production, or limits the angles and lenses one can shoot with — but that’s true of any soundstage or volume.

A shot like this would need a little massaging in post, obviously.

The size of the LEDs, that is of the pixels themselves, also limits how close the camera can get to them, and of course you can’t zoom in on an object for closer inspection. If you’re not careful, you’ll end up with Moiré patterns, those stripes you often see on images of screens.

Stagecraft is not the first application of LED walls — they’ve been used for years at smaller scales — but it is certainly by far the most high-profile, and “The Mandalorian” is the first real demonstration of what’s possible using this technology. And believe me, it’s not a one-off.

I’ve been told that nearly every production house is building or experimenting with LED walls of various sizes and types — the benefits are that obvious. TV productions can save money but look just as good. Movies can be shot on more flexible schedules. Actors who hate working in front of green screens may find this more palatable. And you better believe commercials are going to find a way to use these as well.

In short, a few years from now it’s going to be uncommon to find a production that doesn’t use an LED wall in some form or another. This is the new standard.

This is only a general overview of the technology that ILM, Disney and their many partners and suppliers are working on. In a follow-up article I’ll be sharing more detailed technical information directly from the production team and technologists who created Stagecraft and its attendant systems.

As Morgan Stanley buys E-Trade, Robinhood preps social trading

Before it was worth $7.6 billion, the original idea for Robinhood was a stock-trading social network. At my kitchen table in San Francisco in 2013, the founders envisioned an app for sharing hot tips to a feed complete with a leaderboard of whose predictions were most accurate. Once they had SEC approval, they pivoted toward the real money maker: letting people buy and sell stocks in the app, and pay to borrow cash to do so.

Now, seven years later, Robinhood is subtly taking the first steps back to its start. Today it’s launching Profiles. For now, they let users see analytics about their portfolio, like how concentrated their holdings are in stocks versus options versus cryptocurrency, as well as across different business sectors. Complete with usernames and a photo, Profiles let you follow self-made or Robinhood-provided lists of stocks and other assets.
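
Robinhood hasn’t said how Profiles computes these breakdowns, but the underlying arithmetic is straightforward: group holdings by asset class or sector and report each group’s share of total value. Below is a minimal sketch of that calculation, with invented tickers, sectors and values; it is not Robinhood’s implementation.

```python
# Minimal portfolio-concentration sketch. Holdings, sectors and values are
# invented; this is not Robinhood's implementation, just the arithmetic it implies.
from collections import defaultdict

holdings = [
    {"symbol": "AAPL",      "asset_class": "stock",  "sector": "tech",         "value": 3200.0},
    {"symbol": "DIS",       "asset_class": "stock",  "sector": "media",        "value": 1100.0},
    {"symbol": "BTC",       "asset_class": "crypto", "sector": "crypto",       "value": 900.0},
    {"symbol": "SPY calls", "asset_class": "option", "sector": "broad market", "value": 300.0},
]

def concentration(holdings, key):
    """Return each group's share of total portfolio value, as a fraction."""
    totals = defaultdict(float)
    for h in holdings:
        totals[h[key]] += h["value"]
    grand_total = sum(totals.values())
    return {group: value / grand_total for group, value in totals.items()}

print(concentration(holdings, "asset_class"))  # e.g. stocks ~78%, crypto ~16%, options ~5%
print(concentration(holdings, "sector"))       # flags a portfolio too centered on tech/media
```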

Profiles could give Robinhood’s customers the confidence to trade more, and create a sense of lock-in that stops them from straying to other brokerages that have dropped their per-trade fees to zero to match the startup, like Charles Schwab, Ameritrade and E-Trade, which was acquired for $13 billion today by Morgan Stanley, as reported by The Wall Street Journal.

The Profile features certainly sound helpful. They could reveal that your portfolio is too centered around tech, media and telecom stocks, or that you’re ignoring cryptocurrency or corporations from your home state. Lists also make it easier to track specific business verticals, save stocks to buy when you have the cash or set aside some for deeper research. Robinhood pulls info from FactSet, Morningstar and other trusted sources to figure out which stocks and ETFs go into sector lists, or you can make and name your own. Profiles and lists begin to roll out to all users next week.

But what’s most interesting is how profiles lay the foundation for Robinhood as a social network. It’s easy to imagine letting users follow other accounts or lists they create. The original Robinhood app let users make predictions like “17% increase in Facebook share price over the next 11 weeks,” with comments to explain why. It showed users’ prediction accuracy, their average holding time for assets, a point score for smart foresight and community BUY or SELL ratings on stocks.

If Robinhood rebuilt some of these features, it might lessen the need for an expensive financial advisor or having enough cash to qualify for one with a different brokerage. Robinhood could let you crowdsource advice. “We understand the connotation of taking something from the rich and giving it to the poor. Robinhood is liberating information that’s locked up with professionals and giving it to the people,” Robinhood co-founder and co-CEO Vlad Tenev told me back in 2013.

Robinhood would certainly need to be careful about scammy tips going viral. Improper safeguards could lead to pump and dump schemes where those late to buy in get screwed when prices snap back to reality.

But embracing social could leverage some of its strongest assets: the youthfulness of its user base and the depth of connection to its users. The median age of a Robinhood customer is 30, and half say they’re first-time investors. Being able to turn to friends or experts within the app might convince them to pull the trigger on trades.

Most online brokerages are somewhat undifferentiated beyond differences in pricing, while their clunky, unstylized products don’t generate the same brand affinity as people have for Robinhood. Unsatisfied users could bail for a competitor at any time. Robinhood’s users are accustomed to social networking and the way it locks in users, because they don’t want to abandon their community.

When I asked Robinhood Profiles’ product manager Shanthi Shanmugam directly about whether this was the start of more social trading features, they suspiciously dodged the question, telling me, “When thinking about how to reflect who you are as an investor, we looked at how other apps represent you and it felt natural to leverage a design that felt more like a profile. When helping people group their investment ideas, it was easy to envision this as a playlist you might find on your favorite music app.”

That’s far from a denial. Offering social validation for trading could help Robinhood earn more from its customers despite their small total account balances. While Robinhood might have more than 10 million accounts versus E-Trade’s 5.2 million and Morgan Stanley’s 3 million, E-Trade’s average account size is $69,230 and Morgan Stanley’s is $900,000, while a survey found most Robinhood accounts held $1,000 to $5,000.
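
A rough back-of-the-envelope calculation shows what those figures imply for total client assets. The Robinhood average below is a guess within the survey’s $1,000-$5,000 range, so treat the outputs as order-of-magnitude only.

```python
# Rough comparison implied by the figures above. Account counts and averages
# come from the article; the Robinhood average is an assumed midpoint of the
# survey's $1,000-$5,000 range, so the totals are order-of-magnitude only.
brokerages = {
    "Robinhood":      {"accounts": 10_000_000, "avg_balance": 3_000},      # assumed midpoint
    "E-Trade":        {"accounts": 5_200_000,  "avg_balance": 69_230},
    "Morgan Stanley": {"accounts": 3_000_000,  "avg_balance": 900_000},
}

for name, b in brokerages.items():
    total = b["accounts"] * b["avg_balance"]
    print(f"{name}: ~${total / 1e9:,.0f}B in client assets")
# Robinhood: ~$30B, E-Trade: ~$360B, Morgan Stanley: ~$2,700B,
# which is why Robinhood leans on order flow and Gold subscriptions rather than interest.
```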

That all means that Robinhood earns less on interest sitting in users’ accounts than the old incumbents. But Robinhood earns the majority of its money on selling order flow and through its subscription Robinhood Gold feature that lets users pay monthly so they can borrow cash to trade with. Profiles and lists, and then eventually more social features, could get Robinhood’s users trading more so there’s more order flow to sell and more reason for them to buy subscriptions.

“Democratizing access is about lowering fees, minimums and other barriers people face — like confidence. Profiles and lists make finance easier to understand and more familiar for people,” says Shanmugam. More social features built safely, more reassurance, more trading, more revenue. Robinhood has raised $910 million. But to outgun larger competitors like the newly assembled Morgan Stanley/E-Trade that’s matched its zero-fee pricing, Robinhood will have to win with product.
