Pro gamer Tfue files lawsuit against esports org over ‘grossly oppressive’ contract

Turner “Tfue” Tenney, one of the world’s premier streamers and esports pros, has filed a lawsuit against esports organization Faze Clan over a “grossly oppressive, onerous and one-sided” contract, according to THR.

The complaint alleges that Faze Clan’s Gamer Agreement entitles the organization to up to 80% of the streamer’s earnings from branded content (sponsored videos), and that the contract hinders Tfue from pursuing and earning money from sponsorship deals that Faze Clan hasn’t approved.

Tfue’s lawyer, Bryan Freedman of Freedman + Taitelman, took the complaint to the California Labor Commissioner, raising issues that go far beyond the financial terms. Freedman wrote that Faze Clan takes advantage of young artists and actually jeopardizes their health and safety, citing an incident in which Tfue was allegedly pressured to skateboard in a video and injured his arm. Freedman also wrote that Faze Clan pressured Tfue to live in one of its houses, where he was given alcohol before he turned 21 and encouraged to gamble illegally.

From the complaint:

In one instance, Tenney suffered an injury (a deep wound that likely required stitches) which resulted in permanent disfigurement. Faze Clan also encourages underage drinking and gambling in Faze Clan’s so-called Clout House and FaZe House, where Faze Clan talent live and frequently party. It is also widely publicized that Faze Clan has attempted to exploit at least one artist who is a minor.

Faze Clan issued the following statement on Twitter following the news:

A follow-up from FaZe Clan on today’s unfortunate situation. pic.twitter.com/qm6sK8v88B

— FaZe Clan (@FaZeClan) May 21, 2019

Faze Clan claims that it has taken no more than 20% of Tfue’s earnings from sponsored content, which amounts to a total of $60,000. The owner of Faze Clan, Ricky Banks, took to Twitter to make his case, showing the incredible growth of Tfue’s popularity across Twitch and YouTube since signing with Faze Clan.

I recruited Tfue to FaZe Clan in April of 2018. These are graphs from both his YouTube & Twitch channels following the mark of our relationship. pic.twitter.com/c7m3QwsoTZ

— FaZe Banks (@Banks) May 20, 2019

As it stands now, Tfue boasts more than 120 million views on Twitch, more than 10 million YouTube subscribers and 5.5 million followers on Instagram.

Banks also reiterated Faze Clan’s official statement saying that the company has taken 20% of Tfue’s earnings from branded deals, totaling $60,000.

OK LAST TWEET – To clarify Turners contract does outline splits in prizes, ad revenue, stuff like that. But again we’ve collected absolutely none of it with no plans to and that was very clear to him. We have collected a total of $60,000 from 300k in brand deals (20%). That’s it

— FaZe Banks (@Banks) May 20, 2019

Tfue’s claim, however, seems to take issue with the content of the agreement itself, not necessarily its execution, as well as with the general legality of these types of gamer agreements across the esports landscape. Moreover, the complaint alleges that Tfue lost potential earnings because of his agreement with Faze Clan and the org’s own conflicts of interest with various brands interested in sponsorships.

Adobe brings its Premiere Rush video editing app to Android

Adobe launched Premiere Rush, its newest all-in-one video editing tool and essentially a pared-down version of its flagship Premiere Pro and Audition tools for professional video editors, in late 2018. At the time, it was only available on iOS, macOS and Windows. Now, however, the app is finally coming to Android as well.

There is a caveat here, though: it’ll only run on relatively new phones, including the Samsung Galaxy S9 and S10 series, Google’s Pixel 2 and 3 phones and the OnePlus 6T.

The idea behind Premiere Rush is to give enthusiasts — and the occasional YouTuber who needs to quickly get a video out — all of the necessary tools to create a video without having to know the ins and outs of a complex tool like Premiere Pro. It’s based on the same technologies as its professional counterpart, but it’s significantly easier to use. What you lose in flexibility, you gain in efficiency.

Premiere Rush is available for free for those who want to give it a try, though this “Starter Plan” only lets you export up to three projects. For full access, you either need to subscribe to Adobe’s Creative Cloud or buy a $9.99/month plan to access Rush, with team and enterprise plans costing $19.99/month and $29.99/month respectively.

Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often carry over to, or open new areas of inquiry in, the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy,” the hexapod robot

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen plenty of interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
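
To make that loop concrete, here is a minimal sketch of reward-driven gait search using a generic evolution-strategies update (a relative of the evolutionary algorithms mentioned above, and explicitly not Facebook’s published method). The toy environment, its dimensions and the forward-progress proxy are all invented for illustration.

```python
# Hedged sketch: reward-only gait learning with a generic evolution-strategies
# update. ToyHexapodEnv is a made-up stand-in for a real simulator; the only
# supervision is a scalar "forward progress" signal, as described above.
import numpy as np

class ToyHexapodEnv:
    def __init__(self, n_joints=18):
        self.n = n_joints
    def reset(self):
        # Fixed small random pose so every rollout starts identically.
        self.state = np.random.default_rng(0).normal(0.0, 0.1, self.n)
        return self.state
    def step(self, action):
        self.state = np.clip(self.state + 0.1 * action, -1.0, 1.0)
        forward_progress = float(np.mean(action))  # crude stand-in for velocity
        return self.state, forward_progress

def rollout(env, theta, horizon=100):
    obs, total = env.reset(), 0.0
    for _ in range(horizon):
        action = np.tanh(theta @ obs)      # simple linear policy, no gait priors
        obs, progress = env.step(action)
        total += progress                  # "rewarded" for moving forward
    return total

def train(iters=200, pop=16, sigma=0.1, lr=0.02, n=18):
    env, theta = ToyHexapodEnv(n), np.zeros((n, n))
    for _ in range(iters):
        noise = [np.random.randn(n, n) for _ in range(pop)]
        scores = np.array([rollout(env, theta + sigma * eps) for eps in noise])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)
        # Pull parameters toward perturbations that produced more forward motion.
        theta += lr * sum(s * eps for s, eps in zip(scores, noise)) / (pop * sigma)
    return theta
```

The robot is never told how to move its legs; policies that happen to travel farther simply pull the parameters toward themselves, which is the zero-to-locomotion dynamic the researchers want to compress from weeks into hours.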

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the auto-didactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
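
As a rough illustration of that drive to reduce uncertainty, here is a sketch in which an agent scores candidate camera adjustments by the disagreement of a small model ensemble and picks the view that leaves it least uncertain. The ensemble trick, the function names and the toy numbers are all assumptions made for illustration, not the formulation used in Facebook’s work.

```python
# Hedged sketch: pick the "curious" action (a camera tweak) that most reduces
# uncertainty, measured here as disagreement across an ensemble of models.
import numpy as np

rng = np.random.default_rng(0)

def ensemble_uncertainty(predictions):
    # predictions: (n_models, n_classes) softmax outputs.
    # High variance across models means the system is unsure what it sees.
    return float(predictions.var(axis=0).sum())

def choose_view(candidate_views, predict_fn):
    # Prefer the camera adjustment that leaves the least uncertainty.
    scores = [ensemble_uncertainty(predict_fn(v)) for v in candidate_views]
    return int(np.argmin(scores))

def fake_predict(view):
    # Toy ensemble of 5 models over 4 object classes; noisier views
    # produce noisier logits and therefore more disagreement.
    logits = rng.normal(view["signal"], view["noise"], size=(5, 4))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

views = [{"signal": 0.2, "noise": 1.0},   # head-on, glare
         {"signal": 1.0, "noise": 0.3},   # slight twist, clearer look
         {"signal": 0.5, "noise": 0.8}]   # double-checking the target area
print("best view:", choose_view(views, fake_predict))  # typically 1, the clearer look
```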

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, gadget or robot left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image and all kinds of things the user or systems engineer doesn’t want. But if it does them constantly, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
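
Here is a tiny sketch of that idea, under the assumption that the tactile sensor reports an H x W grid of pressures: normalize the grid the way you would pixel intensities, then push it through the same pattern-extraction machinery (a bare-bones 2D convolution below) that a camera frame would get. The shapes, kernel and helper names are illustrative, not taken from the research.

```python
# Hedged sketch: treat a tactile pressure grid as an image so the same
# pattern-extraction machinery (convolution) applies to touch and vision.
import numpy as np

def pressure_to_image(pressure):
    # Map raw pressures to [0, 1], like pixel intensities.
    p = pressure.astype(np.float32)
    return (p - p.min()) / (p.max() - p.min() + 1e-8)

def conv2d(img, kernel):
    # Minimal valid-mode 2D convolution, just to show the shared machinery.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = float((img[i:i + kh, j:j + kw] * kernel).sum())
    return out

# Toy 16x16 tactile frame with a ridge pressed across rows 6-7.
tactile = np.zeros((16, 16))
tactile[6:8, :] = 5.0
edge_kernel = np.array([[1.0], [-1.0]])           # responds to pressure edges
features = conv2d(pressure_to_image(tactile), edge_kernel)
print(features.shape)  # (15, 16): the same kind of feature map a photo yields
```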

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.

Instagram’s IGTV copies TikTok’s AI, Snapchat’s design

Instagram conquered Stories, but it’s losing the battle for the next video formats. TikTok is blowing up with an algorithmically suggested vertical one-at-a-time feed featuring videos of users remixing each other’s clips. Snapchat Discover’s 2 x infinity grid has grown into a canvas for multi-media magazines, themed video collections and premium mobile TV shows.

Instagram’s IGTV…feels like a flop in comparison. Launched a year ago, it’s full of crudely cropped and imported viral trash from around the web. The long-form video hub that lives inside both a homescreen button in Instagram as well as a standalone app has failed to host lengthier must-see original vertical content. Sensor Tower estimates that the IGTV app has just 4.2 million installs worldwide, with just 7,700 new ones per day — implying less than half a percent of Instagram’s billion-plus users have downloaded it. IGTV doesn’t rank on the overall charts and hangs low at No. 191 on the US – Photo & Video app charts, according to App Annie.

Now Instagram has quietly overhauled the design of IGTV’s space inside its main app to crib what’s working from its two top competitors. The new design showed up in last week’s announcements for Instagram Explore’s new Shopping and IGTV discovery experiences. At the time, Instagram’s product lead on Explore, Will Ruben, told us that with the redesign, “the idea is this is more immersive and helps you to see the breadth of videos in IGTV rather than the horizontal scrolling interface that used to exist,” but the company declined to answer follow-up questions about it.

IGTV has ditched its category-based navigation system’s tabs like “For You”, “Following”, “Popular”, and “Continue Watching” for just one central feed of algorithmically suggested videos — much like TikTok. This affords a more lean-back, ‘just show me something fun’ experience that relies on Instagram’s AI to analyze your behavior and recommend content instead of putting the burden of choice on the viewer.

IGTV has also ditched its awkward horizontal scrolling design that always kept a clip playing in the top half of the screen. Now you’ll scroll vertically through a 2 x infinity grid of recommended clips in what looks just like a Snapchat Discover feed. Once you get past a first video that auto-plays up top, you’ll find a full-screen grid of things to watch. You’ll only see the horizontal scroller in the standalone IGTV app, or if you tap into an IGTV video, and then tap the Browse button for finding a next clip while the last one plays up top.

Instagram seems to be trying to straddle the designs of its two competitors. The problem is that TikTok’s one-at-a-time feed works great for punchy, short videos that get right to the point. If you’re bored after five seconds, you swipe to the next. IGTV’s focus on long-form means its videos might start too slowly to grab your attention if they were auto-played full-screen in the feed rather than being chosen by a viewer. But Snapchat makes the most of the two-previews-per-row design IGTV has adopted because professional publishers take the time to make compelling cover thumbnail images promoting their content. IGTV’s focus on independent creators means fewer have labored to make great cover images, so viewers have to rely on a screenshot and caption.

Instagram is prototyping a number of other features to boost engagement across its app, as discovered by reverse-engineering specialist and frequent TechCrunch tipster Jane Manchun Wong. Those include options to blast a direct message to all your Close Friends at once but in individual message threads, see a divider between notifications and likes you have or haven’t seen, or post a Chat sticker to Stories that lets friends join a group message thread about that content. And to better compete with TikTok, it may let you add lyrics stickers to Stories that appear word-by-word in sync with Instagram’s licensed music soundtrack feature, and share Music Stories to Facebook. What we haven’t seen is any cropping tool for IGTV that would help users reformat landscape videos. The vertical-only restriction keeps lots of great content stuck outside IGTV, or letterboxed with black, color-matched backgrounds, or meme-style captions with the video as just a tiny slice in the middle.

When I spoke with Instagram co-founder and ex-CEO Kevin Systrom last year a few months after IGTV’s launch, he told me, “It’s a new format. It’s different. We have to wait for people to adopt it and that takes time . . . Everything that is great starts small.”

But to grow large, IGTV needs to demonstrate how long-form portrait mode video can give us a deeper look at the nuances of the influencers and topics we care about. The company has rightfully prioritized other drives like safety and well-being with features that hide bullies and deter overuse. But my advice from August still stands despite all the ground Instagram has lost in the meantime. “Concentrate on teaching creators how to find what works on the format and incentivizing them with cash and traffic. Develop some must-see IGTV and stoke a viral blockbuster. Prove the gravity of extended, personality-driven vertical video.” Until the content is right, it won’t matter how IGTV surfaces it.

This clever transforming robot flies and rolls on its rotating arms

There’s great potential in using both drones and ground-based robots for situations like disaster response, but generally these platforms either fly or creep along the ground. Not the “Flying STAR,” which does both quite well, and through a mechanism so clever and simple you’ll wish you’d thought of it.

Conceived by researchers at Ben-Gurion University in Israel, the “flying sprawl-tuned autonomous robot” is based on the elementary observation that both rotors and wheels spin. So why shouldn’t a vehicle have both?

Well, there are lots of good reasons why it’s difficult to create such a hybrid, but the team, led by David Zarrouk, overcame them with the help of today’s high-powered, lightweight drone components. The result is a robot that can easily fly when it needs to, then land softly and, by tilting the rotor arms downwards, direct that same motive force into four wheels.

Of course you could have a drone that simply has a couple of wheels on the bottom that let it roll along. But this improves on that idea in several ways. In the first place, it’s mechanically more efficient because the same motor drives the rotors and wheels at the same time — though when rolling, the RPMs are of course considerably lower. But the rotating arms also give the robot a flexible stance, large wheelbase and high clearance that make it much more capable on rough terrain.

You can watch FSTAR fly, roll, transform, flatten and so on in the following video, prepared for presentation at the IEEE International Conference on Robotics and Automation in Montreal:

The ability to roll along at up to 8 feet per second using comparatively little energy, while also being able to leap over obstacles, scale stairs or simply ascend and fly to a new location, gives FSTAR considerable adaptability.

“We plan to develop larger and smaller versions to expand this family of sprawling robots for different applications, as well as algorithms that will help exploit speed and cost of transport for these flying/driving robots,” said Zarrouk in a press release.

Obviously, at present this is a mere prototype, and it will need further work to reach a state where it could be useful for rescue teams, commercial operations and the military.

Minecraft Earth makes the whole real world your very own blocky realm

When your game tops 100 million players, your thoughts naturally turn to doubling that number. That’s the case with the creators, or rather stewards, of Minecraft at Microsoft, where the game has become a product category unto itself. And now it is making its biggest leap yet — to a real-world augmented reality game in the vein of Pokémon GO, called Minecraft Earth.

Announced today but not playable until summer (on iOS and Android) or later, MCE (as I’ll call it) is full-on Minecraft, reimagined to be mobile and AR-first. So what is it? As executive producer Jesse Merriam put it succinctly: “Everywhere you go, you see Minecraft. And everywhere you go, you can play Minecraft.”

Yes, yes — but what is it? Less succinctly put, MCE is like other real-world-based AR games in that it lets you travel around a virtual version of your area, collecting items and participating in mini-games. Where it’s unlike other such games is that it’s built on top of Minecraft: Bedrock Edition, meaning it’s not some offshoot or mobile cash-in; this is straight-up Minecraft, with all the blocks, monsters and redstone switches you desire, but in AR format. You collect stuff so you can build with it and share your tiny, blocky worlds with friends.

That introduces some fun opportunities and a few non-trivial limitations. Let’s run down what MCE looks like — verbally, at least, as Microsoft is being exceedingly stingy with real in-game assets.

There’s a map, of course

Because it’s Minecraft Earth, you’ll inhabit a special Minecraftified version of the real world, just as Pokémon GO and Harry Potter: Wizards Unite put a layer atop existing streets and landmarks.

The look is blocky to be sure, but not so far off the normal look that you won’t recognize it. It uses OpenStreetMap data, including annotated and inferred information about districts, private property, safe and unsafe places and so on — which will be important later.

The fantasy map is filled with things to tap on, unsurprisingly called tappables. These can be a number of things: resources in the form of treasure chests, mobs and adventures.

Chests are filled with blocks, naturally, adding to your reserves of cobblestone, brick and so on, all the different varieties appearing with appropriate rarity.

A pig from Minecraft showing in the real world via augmented reality

Mobs are animals like those you might normally run across in the Minecraft wilderness: pigs, chickens, squid and so on. You snag them like items, and they too have rarities, and not just cosmetic ones. The team highlighted a favorite of theirs, the muddy pig, which when placed down will stop at nothing to get to mud and never wants to leave, or a cave chicken that lays mushrooms instead of eggs. Yes, you can breed them.

Last are adventures, which are tiny AR instances that let you collect a resource, fight some monsters and so on. For example you might find a crack in the ground that, when mined, vomits forth a volume of lava you’ll have to get away from, and then inside the resulting cave are some skeletons guarding a treasure chest. The team said they’re designing a huge number of these encounters.

Importantly, all these things — chests, mobs and encounters — are shared between friends. If I see a chest, you see a chest — and the chest will have the same items. And in an AR encounter, all nearby players are brought in, and can contribute and collect the reward in shared fashion.
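
Microsoft hasn’t said how that consistency is implemented. One speculative way to guarantee that every player computes the same chest contents, without a server round-trip per chest, is to derive the loot deterministically from a shared seed, the map tile and a time window; the sketch below is an illustration of the idea, not the game’s actual mechanism.

```python
# Speculative sketch: identical loot for every player via deterministic
# derivation from (world seed, map tile, time window). Not Microsoft's
# actual implementation; the names and weights are invented.
import hashlib
import random

LOOT_TABLE = ["cobblestone", "brick", "oak planks", "redstone", "gold block"]
RARITY_WEIGHTS = [50, 30, 12, 6, 2]  # common blocks dominate, rare ones trickle in

def chest_contents(world_seed, tile_x, tile_y, hour_bucket, n_items=3):
    key = f"{world_seed}:{tile_x}:{tile_y}:{hour_bucket}".encode()
    digest = hashlib.sha256(key).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return rng.choices(LOOT_TABLE, weights=RARITY_WEIGHTS, k=n_items)

# Two players evaluating the same chest independently get identical items.
assert chest_contents(42, 10, -3, 437000) == chest_contents(42, 10, -3, 437000)
```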

And it’s in these AR experiences, and the “build plates” you’re collecting it all for, that the game really shines.

The AR part

“If you want to play Minecraft Earth without AR, you have to turn it off,” said Torfi Olafsson, the game’s director. This is not AR-optional, as with Niantic’s games. This is AR-native, and for good and ill the only way you can really play is by using your phone as a window into another world. Fortunately it works really well.

First, though, let me explain the whole build plate thing. You may have been wondering how these collectibles and mini-games amount to Minecraft. They don’t — they’re just the raw materials for it.

Whenever you feel like it, you can bring out what the team calls a build plate, which is a special item, a flat square that you virtually put down somewhere in the real world — on a surface like the table or floor, for instance — and it transforms into a small, but totally functional, Minecraft world.

In this little world you can build whatever you want, or dig into the ground, build an inverted palace for your cave chickens or create a paradise for your mud-loving pigs — whatever you want. Like Minecraft itself, each build plate is completely open-ended. Well, perhaps that’s the wrong phrase — they’re actually quite closely bounded, as the world only exists out to the edge of the plate. But they’re certainly yours to play with however you want.

Notably all the usual Minecraft rules are present — this isn’t Minecraft Lite, just a small game world. Water and lava flow how they should, blocks have all the qualities they should and mobs all act as they normally would.

The magic part comes when you find that you can instantly convert your build plate from miniature to life-size. Now the castle you’ve been building on the table is three stories tall in the park. Your pigs regard you silently as you walk through the halls and admire the care and attention to detail with which you no doubt assembled them. It really is a trip.

It doesn’t really look like this, but you get the idea

In the demo, I played with a few other members of the press; we got to experience a couple of build plates and adventures at life-size (technically 3/4 life size — the one-block-to-one-meter scale turned out to be a little daunting in testing). It was absolute chaos, really, everyone placing blocks and destroying them and flooding the area and putting down chickens. But it totally worked.

The system uses Microsoft’s new Azure Spatial Anchors system, which quickly and continuously fixed our locations in virtual space. It updated remarkably quickly, with no lag, showing the location and orientation of the other players in real time. Meanwhile the game world itself was rock-solid in space, smooth to enter and explore, and rarely bugged out (and then only in understandable circumstances). That’s great news considering how heavily the game leans on the multiplayer experience.

The team said they’d tested up to 10 players at once in an AR instance, and while there’s technically no limit, there’s sort of a physical limit in how many people can fit in the small space allocated to an adventure or around a tabletop. Don’t expect any giant 64-player raids, but do expect to take down hordes of spiders with three or four friends.

Pick(ax)ing their battles

In choosing to make the game the way they’ve made it, the team naturally created certain limitations and risks. You wouldn’t want, for example, an adventure icon to pop up in the middle of the highway.

For exactly that reason the team put a lot of work into making the map metadata extremely robust. Adventures won’t spawn in areas like private residences or yards, though of course simple collectibles might. But because you’re able to reach things up to 70 meters away, it’s unlikely you’ll have to knock on someone’s door and say there’s a cave chicken in their pool and you’d like to touch it, please.

Furthermore, adventures won’t spawn in streets or other spots that are hard or unsafe to reach. The team said they worked very hard making it possible for the engine to recognize places that are not only publicly accessible, but safe and easy to access. Think sidewalks and parks.

Another limitation is that, as an AR game, you move around the real world. But in Minecraft, verticality is an important part of the gameplay. Unfortunately, the simple truth is that in the real world you can’t climb virtual stairs or descend into a virtual cave. You as a player exist on a 2D plane, and can interact with but not visit places above and below that plane. (An exception of course is on a build plate, where in miniature you can fly around it freely by moving your phone.)

That’s a shame for people who can’t move around easily, though you can pick up and rotate the build plate to access different sides. Weapons and tools also have infinite range, eliminating a potential barrier to fun and accessibility.

What will keep people playing?

In Pokémon GO, there’s the drive to catch ’em all. In Wizards Unite, you’ll want to advance the story and your skills. What’s the draw with Minecraft Earth? Well, what’s the draw in Minecraft? You can build stuff. And now you can build stuff in AR on your phone.

The game isn’t narrative-driven, and although there is some (unspecified) character progression, for the most part the focus is on just having fun doing and making stuff in Minecraft. Like a set of LEGO blocks, a build plate and your persistent inventory simply make for a lively sandbox.

Admittedly that doesn’t sound like it carries the same addictive draw of Pokémon, but the truth is Minecraft kind of breaks the rules like that. Millions of people play this game all the time just to make stuff and show that stuff to other people. Although you’ll be limited in how you can share to start, there will surely be ways to explore popular builds in the future.

And how will it make money? The team basically punted on that question — they’re fortunately in a position where they don’t have to worry about that yet. Minecraft is one of the biggest games of all time and a big money-maker — it’s probably worth the cost just to keep people engaged with the world and community.

MCE seems to me like a delightful thing, but one that must be appreciated on its own merits. A lack of screenshots and gameplay video isn’t doing a lot to help you here, I admit. Trust me when I say it looks great, plays well and seems fundamentally like a good time for all ages.

A few other stray facts I picked up:

  • Regions will roll out gradually, but at launch the game will be available in all the same languages as vanilla Minecraft
  • Yes, there will be skins (and they’ll carry over from your existing account)
  • There will be different sizes and types of build plates
  • There’s crafting, but no 3×3 crafting grid (?!)
  • You can report griefers and so on, but the way the game is structured it shouldn’t be an issue
  • The AR engine creates and uses a point cloud but doesn’t, like, take pictures of your bedroom
  • Content is added to the map dynamically, and there will be hot spots but emptier areas will fill up if you’re there
  • It leverages AR Core and AR Kit, naturally
  • The HoloLens version of Minecraft we saw a while back is a predecessor “more spiritually than technically”
  • Adventures that could be scary to kids have a special sign
  • “Friends” can steal blocks from your build plate if you’re playing together (or donate them)

Sound fun? Sign up for the beta here.

Xprize names two grand prize winners in $15 million Global Learning Challenge

Xprize, the nonprofit organization developing and managing competitions to find solutions to social challenges, has named two grand prize winners in the Elon Musk-backed Global Learning Xprize.

The companies, Kitkit School out of South Korea and the U.S., and onebillion, operating in Kenya and the U.K., were announced at an awards ceremony hosted at the Google Spruce Goose Hangar in Playa Vista, Calif.

Xprize set each of the competing teams the task of developing scalable services that could enable children to teach themselves basic reading, writing and arithmetic skills within 15 months.

Musk himself was on hand to award $5 million checks to each of the winning teams.

Each of the five finalists received $1 million to continue developing their projects: New York-based CCI, which developed lesson plans and a development language so non-coders could create lessons; Chimple, a Bangalore-based learning platform enabling children to learn reading, writing and math on a tablet; RoboTutor, a Pittsburgh-based company that used Carnegie Mellon research to develop an Android tablet app teaching reading and writing with speech recognition, machine learning and human-computer interaction; and the two grand prize winners.

The final round required each product to be field-tested in Swahili, reaching nearly 3,000 children in 170 villages across Tanzania.

All of the final solutions from each of the five teams that made it to the final round of competition have been open-sourced so anyone can improve on and develop local solutions using the toolkits developed by each team in competition.

Kitkit School, with a team from Berkeley, Calif. and Seoul, developed a program with a game-based core and flexible learning architecture to help kids learn independently, while onebillion merged numeracy content with literacy material to provide directed learning and activities alongside monitoring to personalize responses to children’s needs.

Both teams are going home with $5 million to continue their work.

The problem of access to basic education affects more than 250 million children around the world who can’t read or write, and one in five children worldwide isn’t in school, according to data from UNESCO.

The problem of access is compounded by a shortage of teachers at the primary and secondary school levels. Some research cited by Xprize indicates that the world needs to recruit another 68.8 million teachers to provide every child with a primary and secondary education by 2040.

Before the Global Learning Xprize field test, 74% of the children who participated were reported as never having attended school; 80% were never read to at home; and 90% couldn’t read a single word of Swahili.

After the 15-month program, in which children worked on donated Google Pixel C tablets pre-loaded with the software, those numbers were cut in half.

“Education is a fundamental human right, and we are so proud of all the teams and their dedication and hard work to ensure every single child has the opportunity to take learning into their own hands,” said Anousheh Ansari, CEO of Xprize, in a statement. “Learning how to read, write and demonstrate basic math are essential building blocks for those who want to live free from poverty and its limitations, and we believe that this competition clearly demonstrated the accelerated learning made possible through the educational applications developed by our teams, and ultimately hope that this movement spurs a revolution in education, worldwide.”

After the grand prize announcement, Xprize said it will work to secure and load the software onto tablets; localize the software; and deliver preloaded hardware and charging stations to remote locations so all finalist teams can scale their learning software across the world.

Steam Link now lets you beam Steam games to your iOS devices

About a year ago, Valve announced that it was building an application called Steam Link. It’d let you play Steam games built for Mac/Windows/Linux on your iOS or Android devices through the magic of streaming, with a computer on your local network doing all the actual heavy lifting.

Then Valve submitted it to the iOS App Store and… Apple rejected it. At the time, Valve said that Apple pinned the rejection on “business conflicts.”

A year later, it seems said conflicts have finally been resolved. Steam Link for iOS just hit the App Store.

Because there’s no way most PC games would be fun on a touchscreen, you’ll probably want a controller — Valve says that Made for iPhone-certified controllers should work, as will its own Steam-branded controller. The company also notes that for best performance, the computer doing the streaming should be hardwired to your router, and your iOS device should be running on your Wi-Fi network’s 5GHz band.

ObjectiveEd is building a better digital curriculum for vision-impaired kids

Children with vision impairments struggle to get a solid K-12 education for a lot of reasons — so the more tools their teachers have to impart basic skills and concepts, the better. ObjectiveEd is a startup that aims to empower teachers and kids with a suite of learning games accessible to all vision levels, along with tools to track and promote progress.

Some of the reasons why vision-impaired kids don’t get the education they deserve are obvious: reading and writing, for example, are slower and more difficult for them than for sighted kids. Other reasons are less obvious: teachers have limited time and resources to dedicate to these special-needs students when their overcrowded classrooms are already demanding more than they can provide.

Technology isn’t the solution, but it has to be part of the solution, because technology is so empowering and kids take to it naturally. There’s no reason a blind 8-year-old can’t also be a digital native like her peers, and that presents an opportunity for teachers and parents both.

This opportunity is being pursued by Marty Schultz, who has spent the last few years as head of a company that makes games targeted at the visually impaired audience, and in the process saw the potential for adapting that work for more directly educational purposes.

“Children don’t like studying and don’t like doing their homework,” he told me. “They just want to play video games.”

It’s hard to argue with that. True of many adults too, for that matter. But as Schultz points out, this is something educators have realized in recent years and turned to everyone’s benefit.

“Almost all regular education teachers use educational digital games in their classrooms and about 20% use it every day,” he explained. “Most teachers report an increase in student engagement when using educational video games. Gamification works because students own their learning. They have the freedom to fail, and try again, until they succeed. By doing this, students discover intrinsic motivation and learn without realizing it.”

Having learned to type, point and click, do geometry and identify countries via games, I’m a product of this same process, and many of you likely are as well. It’s a great way for kids to teach themselves. But how many of those games would be playable by a kid with vision impairment or blindness? Practically none.

Held back

It turns out that these kids, like others with disabilities, are frequently left behind as the rising technology tide lifts everyone else’s boats. The fact is it’s difficult and time-consuming to create accessible games that target things like Braille literacy and blind navigation of rooms and streets, so developers haven’t been able to do so profitably and teachers are left to themselves to figure out how to jury-rig existing resources or, more likely, fall back on tried and true methods like printed worksheets, in-person instruction and spoken testing.

And because teacher time is limited and instructors trained in vision-impaired learning are thin on the ground, it’s also difficult to tailor these outdated methods to an individual student’s needs. For example, a kid may be great at math but lack directionality skills. You need to draw up an “individual education plan” (IEP) explaining (among other things) this and what steps need to be taken to improve, then track those improvements. It’s time-consuming and hard! The idea behind ObjectiveEd is to create both games that teach these basic skills and a platform to track and document progress as well as adjust the lessons to the individual.

How this might work can be seen in a game like Barnyard, which like all of ObjectiveEd’s games has been designed to be playable by blind, low-vision or fully sighted kids. The game has the student finding an animal in a big pen, then dragging it in a specified direction. The easiest levels might be left and right, then move on to cardinal directions, then up to clock directions or even degrees.

“If the IEP objective is ‘Child will understand left versus right and succeed at performing this task 90% of the time,’ the teacher will first introduce these concepts and work with the child during their weekly session,” Schultz said. That’s the kind of hands-on instruction they already get. “The child plays Barnyard in school and at home, swiping left and right, winning points and getting encouragement, all week long. The dashboard shows how much time each child is playing, how often, and their level of success.”

That’s great for documentation for the mandated IEP paperwork, and difficulty can be changed on the fly as well:

“The teacher can set the game to get harder or faster automatically, or move onto the next level of complexity automatically (such as never repeating the prompt when the child hesitates). Or the teacher can maintain the child at the current level and advance the child when she thinks it’s appropriate.”
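
In code, that auto-advancement policy could be as simple as a rolling success window checked against the IEP target. The sketch below is a hypothetical reading of the behavior described above, not ObjectiveEd’s actual implementation; the class, level names and defaults are invented.

```python
# Hypothetical sketch of IEP-driven difficulty progression: advance to the
# next level once the child sustains the target success rate over a window.
from collections import deque

LEVELS = ["left/right", "cardinal directions", "clock directions", "degrees"]

class SkillTracker:
    def __init__(self, target=0.90, window=20, auto_advance=True):
        self.target = target
        self.results = deque(maxlen=window)   # last N trials, True/False
        self.level = 0
        self.auto_advance = auto_advance      # teacher may switch to manual

    def success_rate(self):
        return sum(self.results) / len(self.results) if self.results else 0.0

    def advance(self):                        # teacher can also call this manually
        if self.level < len(LEVELS) - 1:
            self.level += 1
            self.results.clear()              # fresh window at the new level

    def record(self, success):
        self.results.append(success)
        full = len(self.results) == self.results.maxlen
        if self.auto_advance and full and self.success_rate() >= self.target:
            self.advance()

tracker = SkillTracker()
for _ in range(20):
    tracker.record(True)          # the child nails 20 trials in a row
print(LEVELS[tracker.level])      # -> "cardinal directions"
```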

This isn’t meant to be a full-on K-12 education in a tablet app. But it helps close the gap between kids who can play Mavis Beacon or whatever on school computers and vision-impaired kids who can’t.

Practical measures

Importantly, the platform is not being developed without expert help — or, as is actually very important, without a business plan.

“We’ve developed relationships with several schools for the blind as well as leaders in the community to build educational games that tackle important skills,” Schultz said. “We work with both university researchers and experienced Teachers of Visually Impaired students, and Certified Orientation and Mobility specialists. We were surprised at how many different skills and curriculum subjects that teachers really need.”

Based on their suggestions, for instance, the company has built two games to teach iPhone gestures and the VoiceOver accessibility rotor. This may be a proprietary technology from Apple, but it’s something these kids need to know how to use, just like they need to know how to run a Google search, use a mouse without being able to see the screen, and other common computing tasks. Why not learn it in a game like the other stuff?

Making technological advances is all well and good, but doing so while building a sustainable business is another thing many education startups have failed to address. Fortunately, public school systems actually have significant money set aside specifically for students with special needs, and products that improve education outcomes are actively sought and paid for. These state and federal funds can’t be siphoned off to use on the rest of the class, so if there’s nothing to spend them on, they go unused.

ObjectiveEd has the benefit of being easily deployed without much specialty hardware or software. It runs on iPads, which are fairly common in schools and homes, and the dashboard is a simple web one. Although it may eventually interface with specialty hardware like Braille readers, it’s not necessary for many of the games and lessons, so that lowers the deployment bar as well.

The plan for now is to finalize and test the interface and build out the games library — ObjectiveEd isn’t quite ready to launch, but it’s important to build it with constant feedback from students, teachers and experts. With luck, in a year or two the visually impaired youngsters at a school near you might have a fun new platform to learn and play with.

“ObjectiveEd exists to help teachers, parents and schools adapt to this new era of gamified learning for students with disabilities, starting with blind and visually impaired students,” Schultz said. “We firmly believe that well-designed software combined with ‘off-the-shelf’ technology makes all this possible. The low cost of technology has truly revolutionized the possibilities for improving education.”
