Logitech accessory kit makes the Xbox Adaptive Controller even more accessible

Microsoft’s Xbox Adaptive Controller was a breath of fresh air in a gaming world that has largely failed to consider the needs of people with disabilities. Now Logitech has joined the effort to empower this diverse population with an expanded set of XAC-compatible buttons and triggers.

Logitech’s $100 Adaptive Gaming Kit comes with a dozen buttons in a variety of sizes, two large analog levers to control the triggers, and a Velcro-style pad to which they can all be securely attached. It’s hopefully the start of a hardware ecosystem that will eventually offer at least a significant fraction of the variety available to able-bodied players.

The visibility of gamers with disabilities has grown both as the communities have organized and communicated their needs, and as gaming itself has moved towards the mainstream. Turns out there are millions of people who, for one reason or another, can’t use a controller or mouse and keyboard the way others can — and they want to play games too.

Always one of the more reliably considerate companies when it comes to accessibility, Microsoft began developing the XAC a couple of years back, though admittedly only after years of failing, like the rest of the gaming hardware community, to accommodate disabled gamers.

Logitech was an unwitting partner, having provided joysticks for the project without being told what they were for. But when the XAC was unveiled, Logitech was stunned and chagrined.

“This is something that, shame on us, we didn’t think about,” said Mark Starrett, Logitech G’s senior global product manager. “We’ve been trying to diversify gaming, like getting more girls to play, but we totally did not think about this. But you see the videos Microsoft put out, how excited the kids are — it’s so motivating to see that, it makes you want to continue that work.”

And to their credit, the team got in contact with Microsoft soon after and said they’d like to collaborate on some accessories for the system.

In some ways this wouldn’t be particularly difficult: the XAC uses 3.5mm headphone jacks as its main input, so it can accept signals from a wide range of devices, from its own buttons and sticks to things like blow tubes, with no worries about proprietary connections. But when it comes to accessible devices and systems like this, there are often rigorous standards that need to be upheld throughout, so it’s necessary to work closely with both the platform provider (Microsoft) and, naturally, the people who will actually be using them.
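
The appeal of that approach is easy to sketch: to the controller, every 3.5mm jack is just a switch that either is or isn’t closed, and a mapping layer decides which button each jack stands for. The toy Python below illustrates the idea; the jack names and button mapping are made up for this example and are not Microsoft’s or Logitech’s actual firmware.

```python
# Illustrative sketch only: models how a generic switch on a 3.5mm jack
# could be mapped to a standard gamepad button. Names are hypothetical.

JACK_TO_BUTTON = {
    "jack_1": "A",
    "jack_2": "B",
    "jack_3": "left_trigger",
    "jack_4": "right_trigger",
}

def poll_jacks(jack_states):
    """jack_states: dict of jack name -> True if the switch circuit is closed."""
    return [JACK_TO_BUTTON[j] for j, closed in jack_states.items() if closed]

# A blow tube, foot pedal, or large button all look identical here: a closed circuit.
print(poll_jacks({"jack_1": True, "jack_2": False, "jack_3": True, "jack_4": False}))
# ['A', 'left_trigger']
```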

“This community, you can’t make anything for them without doing it with them,” said Starrett. “When we design a gaming keyboard or mouse, we engage pros, players, all that stuff, right? So with this, it’s absolutely critical to watch them with every piece.”

“The biggest takeaway is that everybody is so different: every challenge, every setup, everyone we talked to,” he continued. “We had a 70, 80 year old guy who plays Destiny and has arthritis — all we really needed to do was put a block on the back of his controller, because he couldn’t pull the trigger. Then we worked with a girl who has a quadstick, she was playing Madden like a pro with something you just puff and blow on. Another guy played everything with his feet. So we spent a lot of time on the site just watching.”

The final set of buttons they arrived at includes three very large ones, four smaller ones (though still big compared with ordinary controller buttons), four “light touch” buttons that can be easily activated by any contact, and two big triggers. Because they knew different gamers would use the sets differently, there’s a set of labels in the box that can be applied however they like.

Then there are two hook and loop (i.e. Velcro) mats to which the buttons can be attached, one rigid and the other flexible, so it can be draped over a leg, the arm of a couch, etc.

Even the packaging the buttons come in is accessible: A single strip of tape pulls out and causes the whole box to unfold, and then everything is in non-sealed reusable bags. The guide is wordless so it can be used in any country, by any player.

It’s nice to see such consideration at work, and no doubt the players who will benefit from these products will be happy to have a variety of options to choose from. I was starting to think I could use a couple of these buttons myself.

Starrett seemed very happy with the results, and also proud that the work had started something new at Logitech.

“The groups we talked to brought a lot of different things to mind for us,” he said. “We’re always updating things, but now we’re updating everything with an eye to accessibility. It’s helped Logitech as a company to learn about this stuff.”

You can pick up Logitech’s Adaptive Gaming Kit now for $100.

This tactile display lets visually impaired users feel on-screen 3D shapes

Using a computer and modern software can be a chore to begin with for the visually impaired, but fundamentally visual tasks like 3D design are even harder. This Stanford team is working on a way to display 3D information, like in a CAD or modeling program, using a “2.5D” display made up of pins that can be raised or lowered as sort of tactile pixels. Taxels!

The research project, a collaboration between graduate student Alexa Siu, Joshua Miele and lab head Sean Follmer, is intended to explore avenues by which blind and visually impaired people can accomplish visual tasks without the aid of a sighted helper. It was presented this week at SIGACCESS.

The device is essentially a 12×24 array of thin columns with rounded tops that can be individually told to rise anywhere from a fraction of an inch to several inches above the plane, taking the shape of 3D objects quickly enough to amount to real time.

“It opens up the possibility of blind people being, not just consumers of the benefits of fabrication technology, but agents in it, creating our own tools from 3D modeling environments that we would want or need – and having some hope of doing it in a timely manner,” explained Miele, who is himself blind, in a Stanford news release.

Siu calls the device “2.5D,” since of course it can’t show the entire object floating in midair. But it’s an easy way for someone who can’t see the screen to understand the shape it’s displaying. The resolution is limited, sure, but that’s a shortcoming shared by all tactile displays — which it should be noted are extremely rare to begin with and often very expensive.
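
To make the idea concrete, driving a display like this boils down to sampling a height field into a 12×24 grid and commanding each pin to the resulting height. The sketch below is a rough illustration of that sampling step, not the Stanford team’s actual software; the pin travel used is an assumption.

```python
# Illustrative sketch of how a "2.5D" display like this could be driven:
# sample a height field into a 12x24 grid and command each pin to that height.
# Not the Stanford team's actual software.

ROWS, COLS = 12, 24
MAX_HEIGHT_MM = 75.0  # assumed pin travel, on the order of "several inches"

def dome(x, y):
    """Example shape: a smooth dome over the display, with x, y in [-1, 1]."""
    return max(0.0, 1.0 - (x ** 2 + y ** 2))

def sample_pins(shape_fn):
    pins = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            x = (c / (COLS - 1)) * 2 - 1   # map column index onto [-1, 1]
            y = (r / (ROWS - 1)) * 2 - 1   # map row index onto [-1, 1]
            row.append(round(shape_fn(x, y) * MAX_HEIGHT_MM, 1))
        pins.append(row)
    return pins  # each entry is a commanded pin height in millimetres

heights = sample_pins(dome)
print(heights[6][10:14])  # a few pin heights from the middle of the display
```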

The field is moving forward, but too slowly for some, like this crew and the parents behind the BecDot, an inexpensive Braille display for kids. And other tactile displays are being pursued as possibilities for interactions in virtual environments.

Getting an intuitive understanding of a 3D object, whether one is designing or just viewing it, usually means rotating and shifting it — something that’s difficult to express in non-visual ways. But a real-time tactile display like this one can change the shape it’s showing quickly and smoothly, allowing more complex shapes, like moving cross-sections, to be expressed as well.

Joshua Miele demonstrates the device

The device is far from becoming a commercial project, though as you can see in the images (and the video below), it’s very much a working prototype, and a fairly polished one at that. The team plans on reducing the size of the pins, which would of course increase the resolution of the display. Interestingly, another grad student in the same lab is working on that very thing, albeit at a rather earlier stage.

The Shape Lab at Stanford is working on a number of projects along these lines — you can keep up with their work at the lab’s website.

Live Caption, Google’s automatic captioning technology, is now available on Pixel 4

Live Caption, Google’s automatic captioning system first introduced at its I/O developer conference this May, is now officially available, alongside the launch of the new Pixel 4. But unlike some of the other technologies highlighted at the company’s Pixel hardware event yesterday, Live Caption won’t be limited to Google’s new smartphone alone. After the initial debut on Pixel 4, the automatic captioning technology will roll out to Pixel 3, Pixel 3 XL, Pixel 3a and Pixel 3a XL before year-end, says Google, and will become more broadly available in 2020.

The company has offered automatic captions on YouTube for a decade, but that same sort of experience isn’t available across the wider web and mobile devices. For example, Google explains, you can’t read captions for things like audio messages sent by your friends, trending videos published elsewhere on social media, or content you record yourself.

There’s a significant accessibility issue with the lack of captions in all these places, but there’s a convenience issue, as well.

If you’re in a loud environment, like a commuter train, or you’re trying to watch content privately and forgot your headphones, you may need to rely on the captions. Or maybe you don’t want to blare the audio and disturb others around you. Or perhaps you want to see the words appear because you’re having trouble understanding the audio, or just want to be sure to catch every word.

With the launch of the Pixel 4, Live Caption is also available for the first time to the general public.

The technology will capture and automatically caption videos and spoken audio on your device, except for phone and video calls. This captioning all happens in real time and on your device — not in the cloud. That means it works even if your device lacks a cell signal or access to Wi-Fi. The captions also stay private and don’t leave your phone.
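
At a high level the pipeline looks something like the sketch below: chunks of whatever audio is playing are fed to a local recognizer and the partial transcripts are drawn into an overlay. The classes here are hypothetical stand-ins rather than Google APIs; the point is simply that nothing leaves the device.

```python
# Conceptual sketch of an on-device live-captioning loop. The recognizer and
# overlay classes here are hypothetical stand-ins, not Google's actual APIs;
# the point is that audio never leaves the device.

class OnDeviceRecognizer:
    """Pretend streaming speech-to-text model running locally."""
    def feed(self, audio_chunk):
        # A real model would return partial transcripts as audio streams in.
        return f"<partial transcript for {len(audio_chunk)} samples>"

class CaptionOverlay:
    """Pretend movable caption box drawn over whatever is playing."""
    def show(self, text):
        print("CAPTION:", text)

def caption_stream(audio_chunks):
    recognizer = OnDeviceRecognizer()
    overlay = CaptionOverlay()
    for chunk in audio_chunks:                # e.g. 100 ms frames of device audio
        overlay.show(recognizer.feed(chunk))  # no network round trip involved

caption_stream([[0.0] * 1600, [0.0] * 1600])  # two fake 100 ms chunks at 16 kHz
```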

This is similar to how the Pixel 4’s new Recorder app functions. It, too, will do its speech-to-text processing all on your device, in order to give you real-time transcriptions of your meetings, interviews, lectures or anything else you want to record, without compromising your privacy.

You can launch Live Caption with a tap from the volume slider, then reposition the caption box anywhere on your screen so it doesn’t get in the way of what you’re viewing.

Currently, the feature supports English only. But Google says it’s working to add more languages in the future.

After today’s launch on Pixel 4 and the rollout to the rest of the modern Pixel line of smartphones this year, it will start to show up in other new Android phones. Google says it’s working with other manufacturers to make the technology available to more people as soon as next year.

Google announces Action Blocks, a new accessibility tool for creating mobile shortcuts

Google today announced Action Blocks, a new accessibility tool that allows you to create shortcuts for common multi-step tasks with the help of the Google Assistant. In that respect, Action Blocks isn’t all that different from Shortcuts on iOS, for example, but Google is specifically looking at this as an accessibility feature for people with cognitive disabilities.

“If you’ve booked a rideshare using your phone recently, you’ve probably had to go through several steps: unlock your phone, find the right app, navigate through its screens, select appropriate options, and enter your address into the input box,” writes Google accessibility software engineer Ajit Narayanan. “At each step, the app assumes that you’re able to read and write, find things by trial-and-error, remember your selections, and focus for a sustained period of time.”

Google’s own research shows that 80% of people with severe cognitive disabilities, like advanced dementia, autism or Down syndrome, don’t use smartphones, in part because of these barriers.

An Action Block is essentially a sequence of commands for the Google Assistant, so anything the Assistant can do can be scripted with this new tool, whether that’s starting a call or playing a TV show. Once the Action Block is set up, you can create a shortcut with a custom image on your phone’s home screen.
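
Conceptually, then, an Action Block is just an ordered list of Assistant commands sitting behind a single icon. The toy sketch below illustrates that structure; the class, field names and example commands are assumptions for clarity, not Google’s implementation.

```python
# Toy model of the idea behind Action Blocks: one tap runs a fixed sequence of
# Assistant commands. Structure and names are illustrative, not Google's code.

def send_to_assistant(command):
    print("Assistant executes:", command)

class ActionBlock:
    def __init__(self, label, icon, commands):
        self.label = label        # text under the home-screen shortcut
        self.icon = icon          # custom image chosen by the user or caregiver
        self.commands = commands  # the scripted steps

    def run(self):
        for command in self.commands:
            send_to_assistant(command)

bedtime = ActionBlock(
    label="Bedtime story",
    icon="grandma.png",
    commands=["Set the bedroom lights to 20%", "Play 'Goodnight Moon' audiobook"],
)
bedtime.run()  # a single tap replaces several unlock/find/navigate/type steps
```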

For now, the only way to get access to Action Blocks is to join Google’s trusted tester program. It’s unclear when this will roll out to a wider audience. When it does, though, I’m sure a variety of users will want to make use of this feature.

Amazon’s Echo Show can now identify household pantry items held in front of its camera

Amazon is introducing a new feature to its Echo Show device designed to help blind and other low-vision customers identify common household pantry items by holding them in front of the device’s camera and asking Alexa what they are. The feature uses a combination of computer vision and machine learning techniques to recognize the objects the Echo Show sees.

The Echo Show is the version of the Alexa-powered smart speaker that tends to sit in customers’ kitchens because it helps them with other kitchen tasks, like setting timers, watching recipe videos or enjoying a little music or TV while they cook.

But for blind users, the Show will now have a new duty: helping them better identify household pantry items that are hard to distinguish by touch, like cans, boxed foods or spices.

To use the feature, customers can just say things like “Alexa, what am I holding?” or “Alexa, what’s in my hand?” Alexa will also give verbal and audio cues to help the customers place the item in front of the device’s camera.
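
Put together, the flow is roughly the one sketched below: grab a camera frame, run it through a classifier and speak the best guess back, falling back to a cue if the item isn’t recognized. The functions here are placeholders, not Amazon’s actual Show and Tell stack.

```python
# Outline of a "what am I holding?" flow: camera frame -> image classifier ->
# spoken answer. The model and speech functions are placeholders, not Amazon's
# actual Show and Tell implementation.

def capture_frame():
    return "frame_bytes"  # stand-in for an image from the Echo Show camera

def classify_pantry_item(frame):
    # A real system combines computer vision and ML trained on packaged goods;
    # here we just return a canned guess with a confidence score.
    return ("ground cinnamon, 2 oz", 0.93)

def speak(text):
    print("ALEXA SAYS:", text)

def what_am_i_holding():
    label, confidence = classify_pantry_item(capture_frame())
    if confidence < 0.5:
        speak("Try holding the item a little closer to the camera.")  # audio cue
    else:
        speak(f"It looks like {label}.")

what_am_i_holding()
```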

Amazon says the feature was developed in collaboration with blind Amazon employees, including its principal accessibility engineer, Josh Miele, who gathered feedback from both blind and low-vision customers as part of the development process. The company also worked with the Vista Center for the Blind in Santa Cruz on early research, product development and testing.

“We heard that product identification can be a challenge and something customers wanted Alexa’s help with,” explained Sarah Caplener, head of Amazon’s Alexa for Everyone team. “Whether a customer is sorting through a bag of groceries, or trying to determine what item was left out on the counter, we want to make those moments simpler by helping identify these items and giving customers the information they need in that moment,” she said.

Smart home devices and intelligent voice assistants like Alexa have made life easier for disabled individuals, as they allow them to do things like adjust the thermostat and lights, lock the doors, raise the blinds and more. With “Show and Tell,” Amazon hopes to reach the wide market of blind and low-vision customers, as well. According to the World Health Organization, there are an estimated 1.3 billion people with some form of vision impairment, Amazon says.

That being said, Echo devices aren’t globally available — and even when they are offered in a particular country, the device may not support the local language. Plus, the feature itself is U.S.-only at launch.

Amazon isn’t alone in making accessibility a selling point for its smart speakers and screens. At Google’s I/O developer conference this year, it introduced a range of accessibility projects, including Live Caption, which transcribes real-time audio; Live Relay, for helping the deaf make phone calls; Project Diva, for helping those who don’t speak use smart assistants; and Project Euphonia, which helps make voice recognition work for those with speech impairments.

Show and Tell is available now to Alexa users in the U.S. on first and second-generation Echo Show devices.

Comcast adds gaze control to its accessible remote software

The latest feature for Comcast’s X1 remote software makes the clicker more accessible to people who can’t click it the same as everyone else. People with physical disabilities will now be able to change the channel and do all the usual TV stuff using only their eyes.

TVs and cable boxes routinely have horrendous interfaces, making the most tech-savvy among us recoil in horror. And if it’s hard for an able-bodied person to do, it may well be impossible for someone who suffers from a condition like ALS, or has missing limbs or other motor impairments.

Voice control helps, as do other changes to the traditional 500-button remote we all struggled with for decades, but gaze control is now beginning to be widely accessible as well, and may prove an even better option.

Comcast’s latest accessibility move — this is one area where the company seems to be genuinely motivated to help its customers — is to bring gaze control to its Xfinity X1 web remote. You load it up on a compatible computer or tablet, sync it with your cable box once, and then the web interface acts as your primary controller.

Users will be able to do pretty much all the everyday TV stuff using gaze: change channels, search and browse the guide, set and retrieve recordings, launch a live sport-tracking app and call up and change accessibility options like closed captioning.
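
Gaze interfaces like this typically use dwell selection: if the gaze point rests on a control for long enough, that counts as a click. The generic sketch below shows that logic; the timings and button layout are invented for illustration and have nothing to do with Comcast’s actual implementation.

```python
# Generic dwell-selection logic of the kind gaze-controlled interfaces use:
# if the gaze point stays inside a button's bounds long enough, fire a click.
# Illustrative only; not Comcast's X1 implementation.

DWELL_SECONDS = 1.0

def hit_test(buttons, x, y):
    for name, (bx, by, bw, bh) in buttons.items():
        if bx <= x <= bx + bw and by <= y <= by + bh:
            return name
    return None

def run_dwell_selection(gaze_samples, buttons, sample_period=0.1):
    current, held = None, 0.0
    for x, y in gaze_samples:                 # e.g. samples from an eye tracker
        target = hit_test(buttons, x, y)
        if target == current and target is not None:
            held += sample_period
            if held >= DWELL_SECONDS:
                print("CLICK:", target)       # e.g. "channel_up", "guide"
                held = 0.0
        else:
            current, held = target, 0.0

buttons = {"channel_up": (0, 0, 100, 50), "guide": (0, 60, 100, 50)}
run_dwell_selection([(10, 20)] * 12, buttons)  # 12 samples (~1.2 s) of steady gaze -> one click
```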

A short video showing how one man finds the tech useful is worth a watch:

It’s amazing to think that among all the things Jimmy Curran has worked to make himself capable of in spite of his condition, changing the channel was not one of them. Perhaps there was some convoluted way of going about it, but it’s still an oversight on the part of TV interfaces that has limited accessibility for years.

Voice controls may also be more easily usable by people with conditions that affect their speech; Google is applying machine learning to the task with its Project Euphonia.

Users will need a gaze control setup of their own (this isn’t uncommon for folks with physical disabilities), after which they can direct the browser on it to xfin.tv/access, which will start the pairing process.

Apple’s Voice Control improves accessibility OS-wide on all its devices

Apple is known for fluid, intuitive user interfaces, but none of that matters if you can’t click, tap, or drag because you don’t have a finger to do so with. For users with disabilities the company is doubling down on voice-based accessibility with the powerful new Voice Control feature on Macs and iOS (and iPadOS) devices.

Many devices already support rich dictation, and of course Apple’s phones and computers have used voice-based commands for years (I remember talking to my Quadra). But this is a big step forward that makes voice controls close to universal — and it all works offline.

The basic idea of Voice Control is that the user has both set commands and context-specific ones. Set commands are things like “Open Garage Band” or “File menu” or “Tap send.” And of course some intelligence has gone into making sure you’re actually issuing a command and not dictating those words as text, as in that last sentence.

But that doesn’t work when you have an interface that pops up with lots of different buttons, fields, and labels. And even if every button or menu item could be called by name, it might be difficult or time-consuming to speak everything out loud.

To fix this Apple simply attaches a number to every UI item in the foreground, which a user can show by saying “show numbers.” Then they can simply speak the number or modify it with another command, like “tap 22.” You can see a basic workflow below, though of course without the audio cues it loses a bit:

Remember that these numbers may be more easily referenced by someone with little or no vocal ability, and could in fact be selected using a simpler input like a dial or blow tube. Gaze tracking is good but it has its limitations, and this is a good alternative.
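
The mechanic itself is simple enough to sketch: enumerate the interactive elements in the foreground, badge each with its index, and dispatch a spoken “tap N” to the matching element. The Python below is an illustration of that idea only, not Apple’s Voice Control code.

```python
# Illustrative sketch of the "show numbers" idea: number every interactive
# element on screen, then let a spoken "tap N" drive the matching element.
# Not Apple's actual Voice Control code.

def show_numbers(ui_elements):
    numbered = {i + 1: el for i, el in enumerate(ui_elements)}
    for n, el in numbered.items():
        print(f"[{n}] {el}")   # in the real feature, badges are drawn on screen
    return numbered

def handle_command(command, numbered):
    if command.startswith("tap "):
        n = int(command.split()[1])
        print("Activating:", numbered[n])

elements = ["File menu", "Send button", "Search field", "Compose button"]
numbered = show_numbers(elements)
handle_command("tap 2", numbered)   # -> Activating: Send button
```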

For something like maps, where you could click anywhere, there’s a grid system for selecting where to zoom in or click. Just like Blade Runner! Other gestures like scrolling and dragging are likewise supported.
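
The grid amounts to recursive subdivision: each spoken number narrows the active region to one cell, and repeating the step converges on a point to click. A rough sketch of that arithmetic follows, assuming a 3×3 grid purely for illustration; the real feature’s layout may differ.

```python
# Rough sketch of grid-based selection: the screen is divided into numbered
# cells, speaking a number narrows the region, and repeating converges on a
# point to click. The 3x3 layout here is an assumption for illustration.

def subdivide(region, cell, rows=3, cols=3):
    """region = (x, y, width, height); cell is 1..rows*cols, numbered row-major."""
    x, y, w, h = region
    r, c = divmod(cell - 1, cols)
    return (x + c * w / cols, y + r * h / rows, w / cols, h / rows)

region = (0, 0, 1920, 1080)        # full screen
for spoken_cell in (5, 9):         # "five" ... "nine"
    region = subdivide(region, spoken_cell)

x, y, w, h = region
print("Click at:", (x + w / 2, y + h / 2))   # centre of the final cell
```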

Dictation has been around for a bit but it’s been improved as well. You can select and replace entire phrases, like “Replace ‘be right back’ with ‘on my way.’ ” Other little improvements will be noted and appreciated by those who use the tool often.

All the voice processing is done offline, which makes it both quick and robust to things like signal problems or use in foreign countries where data might be hard to come by. And the intelligence built into Siri lets it recognize names and context-specific words that may not be part of the base vocabulary. Improved dictation means selecting emoji and adding dictionary items is a breeze.

Right now Voice Control is supported by all native apps, and third party apps that use Apple’s accessibility API should be able to take advantage of it easily. And even if they don’t do it specifically, numbers and grids should still work just fine, since all the OS needs to know are the locations of the UI items. These improvements should appear in accessibility options as soon as a device is updated to iOS 13 or Catalina.

Apple & Google celebrate Global Accessibility Awareness Day with featured apps, new shortcuts

With last fall’s release of iOS 12, Apple introduced Siri Shortcuts, a new app that allows iPhone users to create their own voice commands to take actions on their phone and in apps. Today, Apple is celebrating Global Accessibility Awareness Day (GAAD) by rolling out a practical collection of new accessibility-focused Siri Shortcuts, alongside related App Store features and collections.

Google is doing something similar for Android users on Google Play.

For starters, Apple’s new Siri shortcuts are available today in a featured collection at the top of the Shortcuts app. The collection includes a variety of shortcuts aimed at helping users more quickly perform everyday tasks.

For example, there’s a new “Help Message” shortcut that will send your location to an emergency contact, a “Meeting Someone New” shortcut designed to speed up non-verbal introductions and communication, a mood journal for recording thoughts and feelings, a pain report that helps to communicate to others the location and intensity of your pain and several others.

Some are designed to make communication more efficient — like one that puts a favorite contact on the user’s home screen, so they can quickly call, text or FaceTime the contact with just a tap.

Others are designed to be used with QR codes. For example, “QR Your Shortcuts” lets you create a QR code for any shortcut you regularly use, then print it out and place it where it’s needed for quick access — like the “Speak Brush Teeth Routine” shortcut that speaks step-by-step instructions for teeth brushing, which would be placed in the bathroom.
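
Producing such a code is straightforward with common tooling. As one hedged example, the snippet below uses the open-source qrcode Python package to encode a Shortcuts run URL; the shortcut name and output filename are just examples.

```python
# Example of generating a printable QR code that launches a shortcut by name,
# using the open-source "qrcode" package (pip install qrcode[pil]).
# The shortcut name and filename are just examples.
import qrcode
from urllib.parse import quote

shortcut_name = "Speak Brush Teeth Routine"
url = f"shortcuts://run-shortcut?name={quote(shortcut_name)}"

img = qrcode.make(url)                 # build the QR code image
img.save("brush_teeth_shortcut.png")   # print this and tape it by the sink
```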

In addition to the launch of the new shortcuts, Apple added an accessibility-focused collection to the App Store, highlighting apps such as Seeing AI, Microsoft’s new talking camera for the blind, plus other utilities like text-to-speech readers, audio games, sign language apps, AAC (Augmentative and Alternative Communication) solutions, eye-controlled browsers, smart home apps, fine motor skill trainers and much more.

The App Store is also today featuring several interviews with developers, athletes, musicians and a comedian who talk about how they use accessible technology.

Apple is not the only company rolling out special GAAD-themed collections today. Google also unveiled its own editorial collection of accessible apps and games on Google Play. In addition to several utilities, the collection features Live Transcribe, Google’s brand-new accessibility service for the deaf and hard of hearing that debuted earlier this month at its annual Google I/O developer conference.

Though the app’s status is “Unreleased,” users can install the early version, which listens to conversations around you, then instantly transcribes them.

Other selections include home screen replacement Nova Launcher, blind assistant app Be My Eyes, head control for the device Open Sesame, communication aid Card Talk and more.

ObjectiveEd is building a better digital curriculum for vision-impaired kids

Children with vision impairments struggle to get a solid K-12 education for a lot of reasons — so the more tools their teachers have to impart basic skills and concepts, the better. ObjectiveEd is a startup that aims to empower teachers and kids with a suite of learning games accessible to all vision levels, along with tools to track and promote progress.

Some of the reasons why vision-impaired kids don’t get the education they deserve are obvious, for example that reading and writing are slower and more difficult for them than for sighted kids. But other reasons are less obvious, for example that teachers have limited time and resources to dedicate to these special needs students when their overcrowded classrooms are already demanding more than they can provide.

Technology isn’t the solution, but it has to be part of the solution, because technology is so empowering and kids take to it naturally. There’s no reason a blind 8-year-old can’t also be a digital native like her peers, and that presents an opportunity for teachers and parents both.

This opportunity is being pursued by Marty Schultz, who has spent the last few years as head of a company that makes games targeted at the visually impaired audience, and in the process saw the potential for adapting that work for more directly educational purposes.

“Children don’t like studying and don’t like doing their homework,” he told me. “They just want to play video games.”

It’s hard to argue with that. True of many adults too, for that matter. But as Schultz points out, this is something educators have realized in recent years and turned to everyone’s benefit.

“Almost all regular education teachers use educational digital games in their classrooms and about 20% use it every day,” he explained. “Most teachers report an increase in student engagement when using educational video games. Gamification works because students own their learning. They have the freedom to fail, and try again, until they succeed. By doing this, students discover intrinsic motivation and learn without realizing it.”

Having learned to type, point and click, do geometry and identify countries via games, I’m a product of this same process, and many of you likely are as well. It’s a great way for kids to teach themselves. But how many of those games would be playable by a kid with vision impairment or blindness? Practically none.

Held back

It turns out that these kids, like others with disabilities, are frequently left behind as the rising technology tide lifts everyone else’s boats. The fact is it’s difficult and time-consuming to create accessible games that target things like Braille literacy and blind navigation of rooms and streets, so developers haven’t been able to do so profitably and teachers are left to themselves to figure out how to jury-rig existing resources or, more likely, fall back on tried and true methods like printed worksheets, in-person instruction and spoken testing.

And because teacher time is limited and instructors trained in vision-impaired learning are thin on the ground, it’s also difficult to tailor these outdated methods to an individual student’s needs. For example, a kid may be great at math but lack directionality skills. You need to draw up an “individual education plan” (IEP) explaining (among other things) this and what steps need to be taken to improve, then track those improvements. It’s time-consuming and hard! The idea behind ObjectiveEd is to create both games that teach these basic skills and a platform to track and document progress, as well as to adjust the lessons to the individual.

How this might work can be seen in a game like Barnyard, which like all of ObjectiveEd’s games has been designed to be playable by blind, low-vision or fully sighted kids. The game has the student finding an animal in a big pen, then dragging it in a specified direction. The easiest levels might be left and right, then move on to cardinal directions, then up to clock directions or even degrees.
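
One way to picture that difficulty ladder is as a progressively stricter check on the swipe angle, as in the sketch below. It’s an illustration only, with invented level names and tolerances, not ObjectiveEd’s code.

```python
# Sketch of how a direction-matching game could tighten its checks as the
# difficulty rises: left/right, then compass points, then clock directions.
# Illustrative only, not ObjectiveEd's code.

LEVELS = {
    "left_right": {"left": 180, "right": 0},
    "cardinal":   {"right": 0, "up": 90, "left": 180, "down": 270},
    "clock":      {f"{h} o'clock": (90 - h * 30) % 360 for h in range(1, 13)},
}

def angle_difference(a, b):
    return min(abs(a - b) % 360, 360 - abs(a - b) % 360)

def swipe_matches(level, prompt, swipe_angle, tolerance=30):
    """swipe_angle in degrees, measured counterclockwise from 'right'."""
    return angle_difference(LEVELS[level][prompt], swipe_angle) <= tolerance

print(swipe_matches("left_right", "left", 170))   # True: close enough to left
print(swipe_matches("clock", "3 o'clock", 40))    # False: 3 o'clock points at 0 degrees
```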

“If the IEP objective is ‘Child will understand left versus right and succeed at performing this task 90% of the time,’ the teacher will first introduce these concepts and work with the child during their weekly session,” Schultz said. That’s the kind of hands-on instruction they already get. “The child plays Barnyard in school and at home, swiping left and right, winning points and getting encouragement, all week long. The dashboard shows how much time each child is playing, how often, and their level of success.”

That’s great for documentation for the mandated IEP paperwork, and difficulty can be changed on the fly as well:

“The teacher can set the game to get harder or faster automatically, or move onto the next level of complexity automatically (such as never repeating the prompt when the child hesitates). Or the teacher can maintain the child at the current level and advance the child when she thinks it’s appropriate.”
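
A rule like that is easy to model: track the recent success rate against the objective’s target (90% in the earlier example) and step the level up, hold it, or ease it back, depending on whether auto-advance is enabled. The sketch below is illustrative, not the actual dashboard logic.

```python
# Illustrative sketch of success-rate-driven difficulty, in the spirit of an
# IEP objective like "succeeds 90% of the time." Not ObjectiveEd's actual logic.

LEVELS = ["left_right", "cardinal", "clock", "degrees"]

def next_level(current, recent_results, auto_advance=True, target=0.90):
    """recent_results: list of True/False outcomes for the last N prompts."""
    if not recent_results or not auto_advance:
        return current                      # teacher keeps manual control
    success_rate = sum(recent_results) / len(recent_results)
    index = LEVELS.index(current)
    if success_rate >= target and index + 1 < len(LEVELS):
        return LEVELS[index + 1]            # mastered: move to the next objective
    if success_rate < 0.5 and index > 0:
        return LEVELS[index - 1]            # struggling: ease back off
    return current

print(next_level("cardinal", [True] * 18 + [False] * 2))   # 90% success -> "clock"
```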

This isn’t meant to be a full-on K-12 education in a tablet app. But it helps close the gap between kids who can play Mavis Beacon or whatever on school computers and vision-impaired kids who can’t.

Practical measures

Importantly, the platform is not being developed without expert help — or, as is actually very important, without a business plan.

“We’ve developed relationships with several schools for the blind as well as leaders in the community to build educational games that tackle important skills,” Schultz said. “We work with both university researchers and experienced Teachers of Visually Impaired students, and Certified Orientation and Mobility specialists. We were surprised at how many different skills and curriculum subjects that teachers really need.”

Based on their suggestions, for instance, the company has built two games to teach iPhone gestures and the accessibility VoiceOver rotor. This may be a proprietary technology from Apple, but it’s something these kids need to know how to use, just like they need to know how to run a Google search, use a mouse without being able to see the screen, and other common computing tasks. Why not learn it in a game like the other stuff?

Making technological advances is all well and good, but doing so while building a sustainable business is another thing many education startups have failed to address. Fortunately, public school systems actually have significant money set aside specifically for students with special needs, and products that improve education outcomes are actively sought and paid for. These state and federal funds can’t be siphoned off to use on the rest of the class, so if there’s nothing to spend them on, they go unused.

ObjectiveEd has the benefit of being easily deployed without much specialty hardware or software. It runs on iPads, which are fairly common in schools and homes, and the dashboard is a simple web one. Although it may eventually interface with specialty hardware like Braille readers, it’s not necessary for many of the games and lessons, so that lowers the deployment bar as well.

The plan for now is to finalize and test the interface and build out the games library — ObjectiveEd isn’t quite ready to launch, but it’s important to build it with constant feedback from students, teachers and experts. With luck, in a year or two the visually-impaired youngsters at a school near you might have a fun new platform to learn and play with.

“ObjectiveEd exists to help teachers, parents and schools adapt to this new era of gamified learning for students with disabilities, starting with blind and visually impaired students,” Schultz said. “We firmly believe that well-designed software combined with ‘off-the-shelf’ technology makes all this possible. The low cost of technology has truly revolutionized the possibilities for improving education.”

Live transcription and captioning in Android are a boon to the hearing-impaired

A set of new features for Android could alleviate some of the difficulties of living with hearing impairment and other conditions. Live transcription, captioning and relay use speech recognition and synthesis to make content on your phone more accessible — in real time.

Announced today at Google’s I/O event in a surprisingly long segment on accessibility, the features all rely on improved speech-to-text and text-to-speech algorithms, some of which now run on-device rather than sending audio to a data center to be decoded.

The first feature to be highlighted, live transcription, was already mentioned by Google. It’s a simple but very useful tool: open the app and the device will listen to its surroundings and simply display as text on the screen any speech it recognizes.

We’ve seen this in translator apps and devices, like the One Mini, and the meeting transcription highlighted yesterday at Microsoft Build. One would think that such a straightforward tool is long overdue, but, in fact, everyday circumstances like talking to a couple of friends at a cafe can be remarkably difficult for natural language systems trained on perfectly recorded single-speaker audio. Improving the system to the point where it can track multiple speakers and display accurate transcripts quickly has no doubt been a challenge.

Another feature enabled by this improved speech recognition ability is live captioning, which essentially does the same thing as above, but for video. Now when you watch a YouTube video, listen to a voice message or even take a video call, you’ll be able to see what the person in it is saying, in real time.

That should prove incredibly useful not just for the millions of people who can’t hear what’s being said, but also those who don’t speak the language well and could use text support, or anyone watching a show on mute when they’re supposed to be going to sleep, or any number of other circumstances where hearing and understanding speech just isn’t the best option.

A phone conversation being captioned live.

Captioning phone calls is something CEO Sundar Pichai said is still under development, but the “live relay” feature they demoed onstage showed how it might work. A person who is hearing-impaired or can’t speak will certainly find an ordinary phone call to be pretty worthless. But live relay turns the call immediately into text, and immediately turns text responses into speech the person on the line can hear.
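
The relay is essentially two converters strapped to a call: incoming speech becomes text on the user’s screen, and typed replies become speech on the line. The schematic below illustrates that loop with placeholder functions standing in for Google’s on-device recognition and synthesis.

```python
# Schematic of the "live relay" idea: speech-to-text on the incoming side of a
# call, text-to-speech on the outgoing side. The recognizer and synthesizer
# below are placeholders, not Google's on-device models.

def transcribe(audio_chunk):
    return f"<caller said: {audio_chunk}>"       # stand-in for on-device ASR

def synthesize(text):
    return f"<spoken audio for: {text!r}>"       # stand-in for on-device TTS

def relay_call(incoming_audio, typed_replies):
    for audio, reply in zip(incoming_audio, typed_replies):
        print("SCREEN:", transcribe(audio))      # user reads the caller
        print("LINE:  ", synthesize(reply))      # caller hears the typed reply

relay_call(
    incoming_audio=["hello, is this Alex?", "great, see you at six"],
    typed_replies=["Yes, this is Alex.", "Six works, see you then."],
)
```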

Live captioning should be available on Android Q when it releases, with some device restrictions. Live Transcribe is available now, though a warning notes that it is still in development. Live relay is yet to come, but showing it onstage in such a complete form suggests it won’t be long before it appears.
