The camera clip and bag company has made a portable, packable, easy-to-setup professional travel tripod.
Video Producers: Yashad Kulkarni, Gregory S. Manalo
Shooter / Editor: Gregory S. Manalo
In a launch blog post, Oculus touts the new hardware's "all-in-one, fully immersive 6DOF VR," writing: "We're bringing the magic of presence to more people than ever before — and we're doing it with the freedom of fully untethered movement."
For a less varnished view of what it's like to stick a face-computer on your head, you can check out our reviews at the links below…
TC: “The headset may not be the most powerful, but it is doubtlessly the new flagship VR product from Facebook”
TC: “It still doesn’t feel like a proper upgrade to a flagship headset that’s already three years old, but it is a more fine-tuned system that feels more evolved and dependable”
The Oculus blog contains no detail on pre-order sales for the headsets — beyond a few fine-sounding words.
Meanwhile, Facebook has, for months, been running native ads for Oculus via its eponymous and omnipresent social network — although there’s no explicit mention of the Oculus brand unless you click through to “learn more.”
Instead, it’s pushing the generic notion of “all-in-one VR,” shrinking the Oculus brand stamp on the headset to an indecipherable micro-scribble.
One of Facebook's ads targeted me in Europe back in March, for example.
For those wanting to partake of Facebook-flavored face gaming (and/or immersive movie watching), the Oculus Quest and Rift S are available to buy via oculus.com and retail partners including Amazon, Best Buy, Newegg, Walmart and GameStop in the U.S.; Currys PC World, FNAC, MediaMarkt and more in the EU and U.K.; and Amazon in Japan.
Just remember to keep your mouth shut.
Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.
Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically its Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, while keeping costs and custom parts to a minimum.
The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved; by sampling the forces on the legs 8,000 times per second and responding just as quickly, the motors can act like virtual springs.
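For the technically inclined, here's a rough sketch of what such a virtual spring loop can look like in software. The gains and the `motor` interface below are invented for illustration; only the 8,000-samples-per-second figure comes from the team.

```python
import time

# Gains are illustrative; real values would be tuned on hardware.
K_SPRING = 120.0   # virtual stiffness
B_DAMPER = 4.0     # virtual damping

def virtual_spring_torque(angle, velocity, setpoint):
    """Command torque as if a physical spring-damper were attached to the leg."""
    return -K_SPRING * (angle - setpoint) - B_DAMPER * velocity

def compliance_loop(motor, setpoint, rate_hz=8000):
    """Sample leg state and respond at roughly 8 kHz, per the figure above."""
    period = 1.0 / rate_hz
    while True:
        angle, velocity = motor.read_state()   # hypothetical encoder interface
        motor.apply_torque(virtual_spring_torque(angle, velocity, setpoint))
        time.sleep(period)                     # real firmware would use a hard real-time timer
```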
It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.
“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”
In the meantime, the Extreme Mobility team will be improving Doggo's capabilities in collaboration with the university's Robotic Exploration Lab, and working on a similar robot twice the size, called Woofer.
It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.
Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.
AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often do the same, or open new areas of inquiry, in the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.
What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.
Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.
This isn't a new line of research; lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we've already seen interesting papers along these lines.
By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
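To make that concrete, here's a toy sketch of such a reward-driven loop. The `env` and `policy` objects are hypothetical stand-ins, not anything from Facebook's codebase; the only idea taken from the description above is that forward progress is what gets rewarded.

```python
def reward(state_before, state_after):
    """Reward forward progress; the small energy penalty is an assumed shaping term."""
    forward_progress = state_after["x"] - state_before["x"]
    energy_cost = 0.01 * sum(abs(t) for t in state_after["torques"])
    return forward_progress - energy_cost

def run_episode(env, policy, steps=1000):
    """Collect experience so the locomotion policy can be refined."""
    trajectory = []
    state = env.reset()
    for _ in range(steps):
        action = policy.act(state)           # early on, this is near-random flailing
        next_state = env.step(action)
        trajectory.append((state, action, reward(state, next_state)))
        state = next_state
    return trajectory                        # fed to whatever learner updates the policy
```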
What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.
Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff, is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the auto-didactic efficiencies that turn up here.
This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.
Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.
That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
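A common way to formalize this in the research literature (and plausibly something like what's happening here, though Facebook hasn't published the code) is an intrinsic reward tied to the agent's own prediction error, so it favors actions that resolve uncertainty. The `forward_model` interface below is assumed:

```python
import numpy as np

class CuriosityBonus:
    """Intrinsic reward proportional to the agent's own prediction error."""

    def __init__(self, forward_model, scale=0.1):
        self.model = forward_model   # predicts the next observation from (obs, action)
        self.scale = scale

    def intrinsic_reward(self, obs, action, next_obs):
        predicted = self.model.predict(obs, action)
        surprise = float(np.mean((predicted - next_obs) ** 2))
        return self.scale * surprise   # highest where the agent is most uncertain
```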
What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.
Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?
If the camera, gadget or robot leaves these tasks to be accomplished "just in time," it will produce CPU usage spikes, visible latency in the image and all kinds of things the user or system engineer doesn't want. But doing them constantly is just as bad. If instead the AI agent exerts curiosity to check these things when it senses too much uncertainty about the scene, that's a happy medium. This is just one way it could be used, but given Facebook's priorities it seems like an important one.
Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.
If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.
Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
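Here's a minimal sketch of that substitution. Since a tactile array is just a 2-D grid of pressure values, the same convolutional encoder that consumes camera frames can consume it unchanged; the network and tensor shapes below are assumptions for illustration, not the researchers' actual model.

```python
import torch
import torch.nn as nn

# Any small image encoder works; nothing here is modality-specific.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
)

camera_frame = torch.rand(1, 1, 64, 64)   # grayscale camera image
pressure_map = torch.rand(1, 1, 64, 64)   # tactile readings laid out on the same grid

# The encoder only ever sees a 2-D grid of values, whichever sensor produced it.
vision_features = encoder(camera_frame)
touch_features = encoder(pressure_map)
```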
What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.
Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.
Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.
So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:
We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.
As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.
There’s great potential in using both drones and ground-based robots for situations like disaster response, but generally these platforms either fly or creep along the ground. Not the “Flying STAR,” which does both quite well, and through a mechanism so clever and simple you’ll wish you’d thought of it.
Conceived by researchers at Ben-Gurion University in Israel, the “flying sprawl-tuned autonomous robot” is based on the elementary observation that both rotors and wheels spin. So why shouldn’t a vehicle have both?
Well, there are lots of good reasons why it’s difficult to create such a hybrid, but the team, led by David Zarrouk, overcame them with the help of today’s high-powered, lightweight drone components. The result is a robot that can easily fly when it needs to, then land softly and, by tilting the rotor arms downwards, direct that same motive force into four wheels.
Of course you could have a drone that simply has a couple of wheels on the bottom that let it roll along. But this improves on that idea in several ways. In the first place, it’s mechanically more efficient because the same motor drives the rotors and wheels at the same time — though when rolling, the RPMs are of course considerably lower. But the rotating arms also give the robot a flexible stance, large wheelbase and high clearance that make it much more capable on rough terrain.
You can watch FSTAR fly, roll, transform, flatten and so on in the following video, prepared for presentation at the IEEE International Conference on Robotics and Automation in Montreal:
The ability to roll along at up to 8 feet per second using comparatively little energy, while also being able to leap over obstacles, scale stairs or simply ascend and fly to a new location, gives FSTAR considerable adaptability.
“We plan to develop larger and smaller versions to expand this family of sprawling robots for different applications, as well as algorithms that will help exploit speed and cost of transport for these flying/driving robots,” said Zarrouk in a press release.
Obviously at present this is a mere prototype, and will need further work to bring it to a state where it could be useful for rescue teams, commercial operations and the military.
Huawei has finally gone on the record about a ban on its use of Android, but the company’s long-term strategy on mobile still remains unclear.
In an effort to appease its worried customer base, the embattled Chinese company said today that it will continue to provide security updates and after-sales support to its existing lineup of smartphones, but it’s what the company didn’t say that will spark concerns.
Huawei was unable to make guarantees about whether existing customers will continue to receive Android software updates, while its statement is bereft of any mention of whether future phones will ship with the current flavor of Android or something else.
The company, which is the world's second-largest smartphone vendor by shipments, said it will continue to develop a safe software ecosystem for its customers across the globe. Huawei will also extend the support to Honor, a smartphone brand it owns. Nearly 50 percent of Huawei's sales come from outside China, research firm Counterpoint told TechCrunch.
Here’s the statement in full:
Huawei has made substantial contributions to the development and growth of Android around the world. As one of Android’s key global partners, we have worked closely with their open-source platform to develop an ecosystem that has benefitted both users and the industry,
Huawei will continue to provide security updates and after sales services to all existing Huawei and Honor smartphone and tablet products covering those have been sold or still in stock globally. We will continue to build a safe and sustainable software ecosystem, in order to provide the best experience for all users globally.
In addition, the company said it plans to launch the Honor 20 as planned. The device is set to be unveiled at an event in London tomorrow. While Honor is a sub-brand, any sanctions levied on Huawei will likely be reflected in its business, too.
Huawei’s lukewarm response isn’t unexpected. Earlier, Google issued a similarly non-committal statement that indicated that owners of Huawei phones will continue to be able to access the Google Play Store and Google Play Protect, but — like the Chinese firm — it made no mention of the future, and that really is the key question.
Indeed, sources within both Google and Huawei have told TechCrunch that the immediate plan of action for what happens next remains unclear.
It could turn out that Huawei is forced to use the open-source version of Android, AOSP, which comes stripped of Google Mobile Services, the suite of Google services that includes the Google Play Store, Gmail and YouTube. That's unless it plumps for its own homespun alternative, which media reports have claimed it has built in case of just such an emergency.
Huawei's response comes a day after Reuters reported that Google had suspended some of its business with the Chinese technology giant. The Android maker is complying with a U.S. Commerce Department directive that placed Huawei and 70 of its affiliates on an "entity list," requiring any U.S. company to gain government approval before doing business with them.
In the meantime, the troubles are mounting for Huawei. In addition to Android, the U.S. government’s move has seen Intel, Qualcomm, Xilinx, and Broadcom reportedly pause supplying chips to Huawei until a resolution has been reached.
Google said today that existing users of Huawei Android devices will continue to have access to the Google Play app store, offering some relief to tens of millions of users worldwide even as it remains unclear whether the Chinese tech giant will be able to use the fully functioning version of Android in its future phones.
Existing Huawei phone users will also be able to enjoy security protections delivered through Google Play Protect, the company said in a statement to TechCrunch. Google Play Protect is a built-in malware detector that uses machine learning to detect and weed out rogue apps. Google did not specify whether Huawei devices will receive future Android updates.
The statement comes after Reuters reported on Sunday that Google is suspending some businesses with Huawei, the world’s second largest smartphone maker that shipped over 200 million handsets last year. The report claimed, a point not addressed by Google, that future Android devices from Huawei will not run Google Mobile Services, a host of services offered by Google including Google Play Store, and email client Gmail. A Huawei spokesperson said the company is looking into the situation but has nothing to share beyond this.
For Huawei users’ questions regarding our steps to comply w/ the recent US government actions: We assure you while we are complying with all US gov’t requirements, services like Google Play & security from Google Play Protect will keep functioning on your existing Huawei device.
— Android (@Android) May 20, 2019
It's a major setback for Huawei; unless the situation is resolved in the next few weeks, it could significantly disrupt the company's phone business outside of China. The smartphone giant, already grappling with controversy over security concerns, will have to rethink its software strategy for future phones if there is no resolution. A dearth, or delayed delivery, of future Android updates would also hurt the company's reputation among its customers around the globe.
“We are complying with the order and reviewing the implications,” a company spokesperson said in a statement.
The two tech companies find themselves in this awkward situation as a result of the latest development in the ongoing U.S.-China trade war. Huawei and 70 of its affiliates have been put on an "entity list" by the U.S. Commerce Department over national security concerns, requiring American giants such as Google and Intel to obtain approval from the government before conducting business with the Chinese firm.
Huawei may have already foreseen this. A company executive revealed recently that Huawei had built its own Android-based operating system in case a future event prevented it from using existing systems. Per Reuters, Huawei can also continue to use AOSP, the open-source version of Android that ships stripped of Google Mobile Services. And on paper, it could probably run an app store of its own. But convincing enough stakeholders to make their apps available on Huawei's store, and to continually push updates, could prove incredibly challenging.
Sometimes you need scrambled eggs. And with that thought, today at the Overland Expo in Flagstaff, Arizona, Rivian announced a major accessory for its electric pickup: a camp kitchen. The unit slides out from the Rivian R1T's so-called gear tunnel, which lives between the bed and cab. The kitchen includes storage and a stove powered by the R1T's 180 kWh battery pack.
This kitchen unit is the first significant concept Rivian has unveiled for the pickup's unusual gear tunnel. The space provides another locked storage compartment, but why have it at all, many asked when it was revealed. Now, with this kitchen unit, Rivian is answering that question. It seems Rivian wants to make its vehicles the center of an ecosystem of add-ons. The company has already revealed racks, vehicle-mounted tents and even a flashlight that hides in the side of the driver's door. Expect more camping and outdoor gear as Rivian cements its brand image around adventurers.
Rivian is positioning its products for a particular lifestyle. Think Patagonia-wearing, Range Rover-driving, outdoorsy types or at least those who aspire to have that image. It’s a smart play, and so far, Rivian has stayed true to this image. All of its advertisements, social media posts, and appearances make it clear that Rivian is carefully aligning its brand image.
Trucks and SUVs are generally marketed to workers and families. TV commercials feature dusty men hauling bales of hay and women unloading groceries and closing the rear tailgate with their foot. But not Rivian.
So far Rivian has shown its products in the backwoods, running trails and sitting next to campfires. The people in the commercials are on an adventure, wearing coats by The North Face and sleeping in REI tents. With the kitchen from today’s announcements, they can pull a kitchen out of their pickup and make some coffee.
Rivian tells TechCrunch this is just a concept, but the company intends to bring this unit to production. There are likely to be other units for the gear tunnel. I, for one, would love to have a slide-out dog washing and drying station because there’s nothing worse than putting a muddy dog in a truck.
When your game tops 100 million players, your thoughts naturally turn to doubling that number. That’s the case with the creators, or rather stewards, of Minecraft at Microsoft, where the game has become a product category unto itself. And now it is making its biggest leap yet — to a real-world augmented reality game in the vein of Pokémon GO, called Minecraft Earth.
Announced today but not playable until summer (on iOS and Android) or later, MCE (as I’ll call it) is full-on Minecraft, reimagined to be mobile and AR-first. So what is it? As executive producer Jesse Merriam put it succinctly: “Everywhere you go, you see Minecraft. And everywhere you go, you can play Minecraft.”
Yes, yes — but what is it? Less succinctly put, MCE is like other real-world-based AR games in that it lets you travel around a virtual version of your area, collecting items and participating in mini-games. Where it’s unlike other such games is that it’s built on top of Minecraft: Bedrock Edition, meaning it’s not some offshoot or mobile cash-in; this is straight-up Minecraft, with all the blocks, monsters and redstone switches you desire, but in AR format. You collect stuff so you can build with it and share your tiny, blocky worlds with friends.
That introduces some fun opportunities and a few non-trivial limitations. Let’s run down what MCE looks like — verbally, at least, as Microsoft is being exceedingly stingy with real in-game assets.
Because it’s Minecraft Earth, you’ll inhabit a special Minecraftified version of the real world, just as Pokémon GO and Harry Potter: Wizards Unite put a layer atop existing streets and landmarks.
The look is blocky, to be sure, but not so far off the normal look that you won't recognize it. It uses OpenStreetMap data, including annotated and inferred information about districts, private property, safe and unsafe places and so on, which will be important later.
The fantasy map is filled with things to tap on, unsurprisingly called tappables. These can be a number of things: resources in the form of treasure chests, mobs and adventures.
Chests are filled with blocks, naturally, adding to your reserves of cobblestone, brick and so on, all the different varieties appearing with appropriate rarity.
Mobs are animals like those you might normally run across in the Minecraft wilderness: pigs, chickens, squid and so on. You snag them like items, and they too have rarities, and not just cosmetic ones. The team highlighted a couple of favorites: the muddy pig, which when placed down will stop at nothing to get to mud and never wants to leave, and the cave chicken, which lays mushrooms instead of eggs. Yes, you can breed them.
Last are adventures, which are tiny AR instances that let you collect a resource, fight some monsters and so on. For example you might find a crack in the ground that, when mined, vomits forth a volume of lava you’ll have to get away from, and then inside the resulting cave are some skeletons guarding a treasure chest. The team said they’re designing a huge number of these encounters.
Importantly, all these things — chests, mobs and encounters — are shared between friends. If I see a chest, you see a chest — and the chest will have the same items. And in an AR encounter, all nearby players are brought in, and can contribute and collect the reward in shared fashion.
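One plausible way to implement that kind of shared determinism (an illustration of the idea, not Mojang's actual design) is to derive a chest's contents from a world seed plus the chest's identity, so every client computes identical loot without syncing per-player state:

```python
import hashlib
import random

LOOT_TABLE = ["cobblestone", "brick", "oak planks", "redstone", "mob: cave chicken"]

def chest_contents(world_seed, chest_id, n_items=3):
    """Same seed in, same loot out, on every player's phone."""
    digest = hashlib.sha256(f"{world_seed}:{chest_id}".encode()).hexdigest()
    rng = random.Random(int(digest, 16))
    return [rng.choice(LOOT_TABLE) for _ in range(n_items)]

# Two players resolving the same chest see identical contents:
assert chest_contents(42, "chest@47.61,-122.33") == chest_contents(42, "chest@47.61,-122.33")
```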
And it’s in these AR experiences and the “build plates” you’re doing it all for that the game really shines.
“If you want to play Minecraft Earth without AR, you have to turn it off,” said Torfi Olafsson, the game’s director. This is not AR-optional, as with Niantic’s games. This is AR-native, and for good and ill the only way you can really play is by using your phone as a window into another world. Fortunately it works really well.
First, though, let me explain the whole build plate thing. You may have been wondering how these collectibles and mini-games amount to Minecraft. They don’t — they’re just the raw materials for it.
Whenever you feel like it, you can bring out what the team calls a build plate, which is a special item, a flat square that you virtually put down somewhere in the real world — on a surface like the table or floor, for instance — and it transforms into a small, but totally functional, Minecraft world.
In this little world you can build whatever you want, or dig into the ground, build an inverted palace for your cave chickens or create a paradise for your mud-loving pigs — whatever you want. Like Minecraft itself, each build plate is completely open-ended. Well, perhaps that’s the wrong phrase — they’re actually quite closely bounded, as the world only exists out to the edge of the plate. But they’re certainly yours to play with however you want.
Notably all the usual Minecraft rules are present — this isn’t Minecraft Lite, just a small game world. Water and lava flow how they should, blocks have all the qualities they should and mobs all act as they normally would.
The magic part comes when you find that you can instantly convert your build plate from miniature to life-size. Now the castle you’ve been building on the table is three stories tall in the park. Your pigs regard you silently as you walk through the halls and admire the care and attention to detail with which you no doubt assembled them. It really is a trip.
In the demo, I played with a few other members of the press; we got to experience a couple of build plates and adventures at life-size (technically actually 3/4 life size — the 1 block to 1 meter scale turned out to be a little daunting in testing). It was absolute chaos, really, everyone placing blocks and destroying them and flooding the area and putting down chickens. But it totally worked.
The system uses Microsoft’s new Azure Spatial Anchor system, which quickly and continuously fixed our locations in virtual space. It updated remarkably quickly, with no lag, showing the location and orientation of the other players in real time. Meanwhile the game world itself was rock-solid in space, smooth to enter and explore, and rarely bugging out (and that only in understandable circumstances). That’s great news considering how heavily the game leans on the multiplayer experience.
The team said they’d tested up to 10 players at once in an AR instance, and while there’s technically no limit, there’s sort of a physical limit in how many people can fit in the small space allocated to an adventure or around a tabletop. Don’t expect any giant 64-player raids, but do expect to take down hordes of spiders with three or four friends.
In choosing to make the game the way they've made it, the team naturally created certain limitations and risks. You wouldn't want, for example, an adventure icon to pop up in the middle of the highway.
For exactly that reason the team put a lot of work into making the map metadata extremely robust. Adventures won't spawn in areas like private residences or yards, though of course simple collectibles might. But because you're able to reach things up to 70 meters away, it's unlikely you'll have to knock on someone's door and say there's a cave chicken in their pool and you'd like to touch it, please.
Furthermore, adventures will not spawn in streets or other hard-to-reach spots. The team said they worked very hard to make the engine recognize places that are not only publicly accessible, but safe and easy to access. Think sidewalks and parks.
Another limitation is that, as an AR game, you move around the real world. But in Minecraft, verticality is an important part of the gameplay. Unfortunately, the simple truth is that in the real world you can’t climb virtual stairs or descend into a virtual cave. You as a player exist on a 2D plane, and can interact with but not visit places above and below that plane. (An exception of course is on a build plate, where in miniature you can fly around it freely by moving your phone.)
That’s a shame for people who can’t move around easily, though you can pick up and rotate the build plate to access different sides. Weapons and tools also have infinite range, eliminating a potential barrier to fun and accessibility.
In Pokémon GO, there’s the drive to catch ’em all. In Wizards Unite, you’ll want to advance the story and your skills. What’s the draw with Minecraft Earth? Well, what’s the draw in Minecraft? You can build stuff. And now you can build stuff in AR on your phone.
The game isn’t narrative-driven, and although there is some (unspecified) character progression, for the most part the focus is on just having fun doing and making stuff in Minecraft. Like a set of LEGO blocks, a build plate and your persistent inventory simply make for a lively sandbox.
Admittedly that doesn’t sound like it carries the same addictive draw of Pokémon, but the truth is Minecraft kind of breaks the rules like that. Millions of people play this game all the time just to make stuff and show that stuff to other people. Although you’ll be limited in how you can share to start, there will surely be ways to explore popular builds in the future.
And how will it make money? The team basically punted on that question — they’re fortunately in a position where they don’t have to worry about that yet. Minecraft is one of the biggest games of all time and a big money-maker — it’s probably worth the cost just to keep people engaged with the world and community.
MCE seems to me like a delightful thing, but one that must be appreciated on its own merits. A lack of screenshots and gameplay video isn’t doing a lot to help you here, I admit. Trust me when I say it looks great, plays well and seems fundamentally like a good time for all ages.
Sound fun? Sign up for the beta here.
Children with vision impairments struggle to get a solid K-12 education for a lot of reasons — so the more tools their teachers have to impart basic skills and concepts, the better. ObjectiveEd is a startup that aims to empower teachers and kids with a suite of learning games accessible to all vision levels, along with tools to track and promote progress.
Some of the reasons why vision-impaired kids don’t get the education they deserve are obvious, for example that reading and writing are slower and more difficult for them than for sighted kids. But other reasons are less obvious, for example that teachers have limited time and resources to dedicate to these special needs students when their overcrowded classrooms are already demanding more than they can provide.
Technology isn’t the solution, but it has to be part of the solution, because technology is so empowering and kids take to it naturally. There’s no reason a blind 8-year-old can’t also be a digital native like her peers, and that presents an opportunity for teachers and parents both.
This opportunity is being pursued by Marty Schultz, who has spent the last few years as head of a company that makes games targeted at the visually impaired audience, and in the process saw the potential for adapting that work for more directly educational purposes.
It’s hard to argue with that. True of many adults too, for that matter. But as Schultz points out, this is something educators have realized in recent years and turned to everyone’s benefit.
“Almost all regular education teachers use educational digital games in their classrooms and about 20% use it every day,” he explained. “Most teachers report an increase in student engagement when using educational video games. Gamification works because students own their learning. They have the freedom to fail, and try again, until they succeed. By doing this, students discover intrinsic motivation and learn without realizing it.”
Having learned to type, point and click, do geometry and identify countries via games, I’m a product of this same process, and many of you likely are as well. It’s a great way for kids to teach themselves. But how many of those games would be playable by a kid with vision impairment or blindness? Practically none.
It turns out that these kids, like others with disabilities, are frequently left behind as the rising technology tide lifts everyone else’s boats. The fact is it’s difficult and time-consuming to create accessible games that target things like Braille literacy and blind navigation of rooms and streets, so developers haven’t been able to do so profitably and teachers are left to themselves to figure out how to jury-rig existing resources or, more likely, fall back on tried and true methods like printed worksheets, in-person instruction and spoken testing.
And because teacher time is limited and instructors trained in vision-impaired learning are thin on the ground, it's also hard to tailor these outdated methods to an individual student's needs. For example, a kid may be great at math but lack directionality skills. You need to draw up an "individual education plan" (IEP) explaining (among other things) this and what steps need to be taken to improve, then track those improvements. It's time-consuming and hard! The idea behind ObjectiveEd is to create both games that teach these basic skills and a platform to track and document progress as well as adjust the lessons to the individual.
How this might work can be seen in a game like Barnyard, which like all of ObjectiveEd’s games has been designed to be playable by blind, low-vision or fully sighted kids. The game has the student finding an animal in a big pen, then dragging it in a specified direction. The easiest levels might be left and right, then move on to cardinal directions, then up to clock directions or even degrees.
“If the IEP objective is ‘Child will understand left versus right and succeed at performing this task 90% of the time,’ the teacher will first introduce these concepts and work with the child during their weekly session,” Schultz said. That’s the kind of hands-on instruction they already get. “The child plays Barnyard in school and at home, swiping left and right, winning points and getting encouragement, all week long. The dashboard shows how much time each child is playing, how often, and their level of success.”
That's great for the mandated IEP paperwork, and difficulty can be changed on the fly as well:
“The teacher can set the game to get harder or faster automatically, or move onto the next level of complexity automatically (such as never repeating the prompt when the child hesitates). Or the teacher can maintain the child at the current level and advance the child when she thinks it’s appropriate.”
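As a rough illustration of that sort of mastery rule (the thresholds, level names and class below are invented, not ObjectiveEd's code), a tracker might advance the level automatically once a rolling success rate clears the IEP's 90 percent target:

```python
from collections import deque

LEVELS = ["left/right", "cardinal directions", "clock directions", "degrees"]

class SkillTracker:
    def __init__(self, target=0.90, window=20, auto_advance=True):
        self.target = target                  # e.g. the 90% success objective from the IEP
        self.results = deque(maxlen=window)   # rolling window of recent attempts
        self.level = 0
        self.auto_advance = auto_advance      # teacher can hold the level instead

    def record(self, success):
        self.results.append(success)
        window_full = len(self.results) == self.results.maxlen
        success_rate = sum(self.results) / len(self.results)
        if (self.auto_advance and window_full and success_rate >= self.target
                and self.level < len(LEVELS) - 1):
            self.level += 1        # e.g. from left/right up to cardinal directions
            self.results.clear()
```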
This isn’t meant to be a full-on K-12 education in a tablet app. But it helps close the gap between kids who can play Mavis Beacon or whatever on school computers and vision-impaired kids who can’t.
Importantly, the platform is not being developed without expert help — or, as is actually very important, without a business plan.
“We’ve developed relationships with several schools for the blind as well as leaders in the community to build educational games that tackle important skills,” Schultz said. “We work with both university researchers and experienced Teachers of Visually Impaired students, and Certified Orientation and Mobility specialists. We were surprised at how many different skills and curriculum subjects that teachers really need.”
Based on their suggestions, for instance, the company has built two games to teach iPhone gestures and the VoiceOver accessibility rotor. These may be proprietary Apple technologies, but they're things these kids need to know how to use, just like they need to know how to run a Google search, use a mouse without being able to see the screen, and handle other common computing tasks. Why not learn them in a game like the other stuff?
Making technological advances is all well and good, but doing so while building a sustainable business is another thing many education startups have failed to address. Fortunately, public school systems actually have significant money set aside specifically for students with special needs, and products that improve education outcomes are actively sought and paid for. These state and federal funds can’t be siphoned off to use on the rest of the class, so if there’s nothing to spend them on, they go unused.
ObjectiveEd has the benefit of being easily deployed without much specialty hardware or software. It runs on iPads, which are fairly common in schools and homes, and the dashboard is a simple web one. Although it may eventually interface with specialty hardware like Braille readers, it’s not necessary for many of the games and lessons, so that lowers the deployment bar as well.
The plan for now is to finalize and test the interface and build out the games library — ObjectiveEd isn’t quite ready to launch, but it’s important to build it with constant feedback from students, teachers and experts. With luck, in a year or two the visually-impaired youngsters at a school near you might have a fun new platform to learn and play with.
“ObjectiveEd exists to help teachers, parents and schools adapt to this new era of gamified learning for students with disabilities, starting with blind and visually impaired students,” Schultz said. “We firmly believe that well-designed software combined with ‘off-the-shelf’ technology makes all this possible. The low cost of technology has truly revolutionized the possibilities for improving education.”