Accessibility

Google highlights accessible locations with new Maps feature


Google has announced a welcome and no doubt long-requested new feature for its Maps app: wheelchair accessibility info. Businesses and points of interest with accessible entrances, bathrooms and other facilities will now be prominently marked as such.

Millions of people, of course, require accommodations such as ramps or automatic doors, from those with limited mobility to those pushing strollers or other conveyances. Google has been collecting information on locations’ accessibility for a couple of years, and this new setting puts it front and center.

The company showed off the feature in a blog post for Global Accessibility Awareness Day. To turn it on, users can go to the “Settings” section of the Maps app, then “Accessibility settings,” then toggle on “Accessible places.”

This will cause any locations searched for or tapped on to display a small wheelchair icon if they have accessible facilities. Drilling down into the details where you find the address and hours will show exactly what’s available. Unfortunately it doesn’t indicate the location of those resources (helpful if someone is trying to figure out where to get dropped off, for instance), but knowing there’s an accessible entrance or restroom at all is a start.

The information isn’t automatically created or sourced from blueprints or anything — like so much on Google, it comes from you, the user. Any registered user can note the presence of accessible facilities the way they’d note things like in-store pickup or quick service. Just go to “About” in a location’s description and hit the “Describe this place” button at the bottom.


Facebook, YouTube, Netflix and more get eye-tracking apps from Tobii


Modern apps and services are a mixed bag when it comes to accessibility, and people with conditions that prevent them from using the usual smartphone or mouse and keyboard don’t often have good alternatives. Eye-tracking tech leader Tobii has engineered a solution with a set of popular apps that are built for navigation through gaze alone.

Working with a third-party developer that specializes in accessibility software, the company has built a suite of apps that includes Facebook, FB Messenger, WhatsApp, Instagram, Google, Google Calendar, Google Translate, Netflix, Spotify, YouTube, MSN and Android Messages.

These custom apps are for Tobii’s eye-tracking I-Series tablets or Windows PCs using Tobii peripherals and software.

Previously, users would generally have to use the generic web interfaces for those services, or some kind of extra layer on top of the native apps. It can work, but the buttons and menus are generally not designed for use via eye tracking, and may be small or finicky.

The new versions are still based on the web apps, but designed with gaze tracking in mind, with large, clear controls on one side and the app’s normal interface on the other. There are simple directional controls, of course, but also context- and app-specific ones, like “genre” when browsing Netflix.

The company highlights one user, Delaina Parrish (in the lead image), who relies on apps like Instagram to build her Fearless Independence brand but has been limited in how easily she could use them due to her cerebral palsy. “These accessible apps have improved my daily productivity, my channels of communicating personally and for business, and my overall independence,” she said in the Tobii press release.

It’s hard to overestimate the difference between a tool or interface that’s “good enough” and able to be used by people with disabilities, and one that’s built with accessibility as a goal from the start. The new apps should be available on compatible devices now.


Modified HoloLens helps teach kids with vision impairment to navigate the social world


Growing up with blindness or low vision can be difficult for kids, not just because they can’t read the same books or play the same games as their sighted peers; vision is also a big part of social interaction and conversation. This Microsoft research project uses augmented reality to help kids with vision impairment “see” the people they’re talking with.

The challenge people with vision impairment encounter is, of course, that they can’t see the other people around them. This can prevent them from detecting and using many of the nonverbal cues sighted people use in conversation, especially if those behaviors aren’t learned at an early age.

Project Tokyo is a new effort from Microsoft in which its researchers are looking into how technologies like AI and AR can be made useful to all people, including those with disabilities. That isn’t always how new technology plays out, though it must be said that voice-powered virtual assistants are a boon to many who can’t as easily use a touchscreen or mouse and keyboard.

The team, which started as an informal challenge to improve accessibility a few years ago, began by observing people traveling to the Special Olympics, then followed that up with workshops involving the blind and low vision community. Their primary realization was of the subtle context sight gives in nearly all situations.

“We, as humans, have this very, very nuanced and elaborate sense of social understanding of how to interact with people — getting a sense of who is in the room, what are they doing, what is their relationship to me, how do I understand if they are relevant for me or not,” said Microsoft researcher Ed Cutrell. “And for blind people a lot of the cues that we take for granted just go away.”

In children this can be especially pronounced: having perhaps never learned the relevant cues and behaviors, they may exhibit habits that read as antisocial, like resting their head on a table while conversing, or not facing a person when speaking to them.

To be clear, these behaviors aren’t “problematic” in themselves, as they are just the person doing what works best for them, but they can inhibit everyday relations with sighted people, and it’s a worthwhile goal to consider how those relations can be made easier and more natural for everyone.

The experimental solution Project Tokyo has been pursuing involves a modified HoloLens — minus the lens, of course. Even without its display, the HoloLens is a highly sophisticated imaging device that can identify objects and people when running the right software.

The user wears the device like a high-tech headband, and a custom software stack provides them with a set of contextual cues (a rough sketch of how this kind of cue logic might be structured follows the list):

  • When a person is detected, say four feet away on the right, the headset will emit a click that sounds like it is coming from that location.
  • If the face of the person is known, a second “bump” sound is made and the person’s name announced (again, audible only to the user).
  • If the face is not known or can’t be seen well, a “stretching” sound is played that modulates as the user directs their head towards the other person, ending in a click when the face is centered on the camera (which also means the user is facing them directly).
  • For those nearby, an LED strip shows a white light in the direction of a person who has been detected, and a green light if they have been identified.
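
This cue set is essentially a small mapping from face-detection results to spatialized sounds and LED states. The sketch below is only an illustration of how such logic might be organized, not Microsoft’s code; the `Detection` fields and the `play_spatial` and `set_led` helpers are hypothetical stand-ins for the headset’s audio and LED interfaces.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    angle_deg: float        # bearing relative to the wearer's head, 0 = straight ahead
    distance_m: float       # distance to the detected person
    name: Optional[str]     # None if the face is unknown or not seen clearly
    face_centered: bool     # True when the face is centered in the camera frame

def play_spatial(sound: str, angle_deg: float, distance_m: float) -> None:
    """Hypothetical helper: play `sound` so it seems to come from that direction."""
    print(f"[audio] {sound} at {angle_deg:+.0f} deg, {distance_m:.1f} m")

def set_led(angle_deg: float, color: str) -> None:
    """Hypothetical helper: light the LED strip segment facing `angle_deg`."""
    print(f"[led] {color} toward {angle_deg:+.0f} deg")

def cue_for(person: Detection) -> None:
    # Any detected person produces a localized click.
    play_spatial("click", person.angle_deg, person.distance_m)

    if person.name:
        # Known face: a second "bump" sound plus the name, and a green LED
        # so nearby people know they have been identified.
        play_spatial("bump", person.angle_deg, person.distance_m)
        print(f"[speech] {person.name}")
        set_led(person.angle_deg, "green")
    else:
        # Unknown or poorly seen face: a "stretching" tone guides the wearer's
        # head toward the person, ending in a click once the face is centered.
        sound = "click" if person.face_centered else "stretching"
        play_spatial(sound, person.angle_deg, person.distance_m)
        # White LED indicates someone has been detected but not yet identified.
        set_led(person.angle_deg, "white")
```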

Other tools are being evaluated, but this set is a start, and based on a case study with a game 12-year-old named Theo, they could be extremely helpful.

Microsoft’s post describing the system and the team’s work with Theo and others is worth reading for the details, but essentially Theo began to learn the ins and outs of the system and in turn began to manage social situations using cues mainly used by sighted people. For instance, he learned that he can deliberately direct his attention at someone by turning his head towards them, and developed his own method of scanning the room to keep tabs on those nearby — neither one possible when one’s head is on the table.

That kind of empowerment is a good start, but this is definitely a work in progress. The bulky, expensive hardware isn’t exactly something you’d want to wear all day, and naturally different users will have different needs. What about expressions and gestures? What about signs and menus? Ultimately the future of Project Tokyo will be determined, as before, by the needs of the communities who are seldom consulted when it comes to building AI systems and other modern conveniences.


Logitech accessory kit makes the Xbox Adaptive Controller even more accessible


Microsoft’s Xbox Adaptive Controller was a breath of fresh air in a gaming world that has largely failed to consider the needs of people with disabilities. Now Logitech has joined the effort to empower this diverse population with an expanded set of XAC-compatible buttons and triggers.

Logitech’s $100 Adaptive Gaming Kit comes with a dozen buttons in a variety of sizes, two large analog levers to control the triggers, and a Velcro-style pad to which they can all be securely attached. It’s hopefully the start of a hardware ecosystem that will eventually offer gamers with disabilities at least a significant fraction of the variety available to everyone else.

The visibility of gamers with disabilities has grown both as the communities have organized and communicated their needs, and as gaming itself has moved towards the mainstream. Turns out there are millions of people who, for one reason or another, can’t use a controller or mouse and keyboard the way others can — and they want to play games too.

Always one of the more reliably considerate companies when it comes to accessibility issues, Microsoft began developing the XAC a couple of years back — though admittedly only after years of failing, like the rest of the gaming hardware community, to accommodate disabled gamers.

Logitech was an unwitting partner, having provided joysticks for the project without being told what they were for. But when the XAC was unveiled, Logitech was stunned and chagrined.

“This is something that, shame on us, we didn’t think about,” said Mark Starrett, Logitech G’s senior global product manager. “We’ve been trying to diversify gaming, like getting more girls to play, but we totally did not think about this. But you see the videos Microsoft put out, how excited the kids are — it’s so motivating to see that, it makes you want to continue that work.”

And to their credit, the team got in contact with Microsoft soon after and said they’d like to collaborate on some accessories for the system.

In some ways this wouldn’t be particularly difficult: The XAC uses 3.5mm headphone jacks as its main input, so it can accept signals from a wide range of devices, from its own buttons and sticks to things like blow tubes — no proprietary connections to worry about. But accessible devices and systems like this often have to meet rigorous standards throughout, so it’s necessary to work closely with both the platform provider (Microsoft) and, naturally, the people who will actually be using them.

“This community, you can’t make anything for them without doing it with them,” said Starrett. “When we design a gaming keyboard or mouse, we engage pros, players, all that stuff, right? So with this, it’s absolutely critical to watch them with every piece.”

“The biggest takeaway is that everybody is so different: every challenge, every setup, everyone we talked to,” he continued. “We had a 70, 80 year old guy who plays Destiny and has arthritis — all we really needed to do was put a block on the back of his controller, because he couldn’t pull the trigger. Then we worked with a girl who has a quadstick, she was playing Madden like a pro with something you just puff and blow on. Another guy played everything with his feet. So we spent a lot of time on the site just watching.”

The final set of buttons they arrived at includes three very large ones, four smaller ones (though still big compared with ordinary controller buttons), four “light touch” buttons that can be easily activated by any contact, and two big triggers. Because they knew different gamers would use the sets differently, there’s a set of labels in the box that can be applied however they like.

Then there are two hook and loop (i.e. Velcro) mats to which the buttons can be attached, one rigid and the other flexible, so it can be draped over a leg, the arm of a couch, etc.

Even the packaging the buttons come in is accessible: A single strip of tape pulls out and causes the whole box to unfold, and then everything is in non-sealed reusable bags. The guide is wordless so it can be used in any country, by any player.

It’s nice to see such consideration at work, and no doubt the players who will benefit from these products will be happy to have a variety of options to choose from. I was starting to think I could use a couple of these buttons myself.

Starrett seemed very happy with the results, and also proud that the work had started something new at Logitech.

“The groups we talked to brought a lot of different things to mind for us,” he said. “We’re always updating things, but now we’re updating everything with an eye to accessibility. It’s helped Logitech as a company to learn about this stuff.”

You can pick up Logitech’s Adaptive Gaming Kit for $100.


This tactile display lets visually impaired users feel on-screen 3D shapes


Using a computer and modern software can be a chore to begin with for the visually impaired, but fundamentally visual tasks like 3D design are even harder. This Stanford team is working on a way to display 3D information, like in a CAD or modeling program, using a “2.5D” display made up of pins that can be raised or lowered to act as a sort of tactile pixel. Taxels!

The research project, a collaboration between graduate student Alexa Siu, Joshua Miele and lab head Sean Follmer, is intended to explore avenues by which blind and visually impaired people can accomplish visual tasks without the aid of a sighted helper. It was presented this week at SIGACCESS.

The device is essentially a 12×24 array of thin columns with rounded tops that can be individually told to rise anywhere from a fraction of an inch to several inches above the plane, taking the shape of 3D objects quickly enough to amount to real time.
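
Conceptually, driving such a display amounts to quantizing a depth or height map onto the pin grid. The snippet below is a minimal sketch of that idea under stated assumptions — the 75 mm travel range and the `set_pin_height` driver call are invented for illustration, not taken from the Stanford team’s actual control code.

```python
# Minimal sketch: render a normalized height map onto a 12x24 pin array.

ROWS, COLS = 12, 24
MAX_TRAVEL_MM = 75.0  # assumed maximum pin height ("several inches")

def set_pin_height(row: int, col: int, height_mm: float) -> None:
    """Hypothetical hardware driver: raise one pin to the given height."""
    pass

def render_height_map(heights: list[list[float]]) -> None:
    """Map a ROWS x COLS grid of values in 0..1 onto physical pin heights."""
    for r in range(ROWS):
        for c in range(COLS):
            # 0.0 = flush with the surface, 1.0 = fully raised
            set_pin_height(r, c, heights[r][c] * MAX_TRAVEL_MM)

# Example: a ramp rising from left to right across the display.
ramp = [[c / (COLS - 1) for c in range(COLS)] for _ in range(ROWS)]
render_height_map(ramp)
```

Refreshing that grid several times a second is what lets the display show a rotating model or a moving cross-section in something close to real time.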

“It opens up the possibility of blind people being, not just consumers of the benefits of fabrication technology, but agents in it, creating our own tools from 3D modeling environments that we would want or need – and having some hope of doing it in a timely manner,” explained Miele, who is himself blind, in a Stanford news release.

Siu calls the device “2.5D,” since of course it can’t show the entire object floating in midair. But it’s an easy way for someone who can’t see the screen to understand the shape it’s displaying. The resolution is limited, sure, but that’s a shortcoming shared by all tactile displays — which it should be noted are extremely rare to begin with and often very expensive.

The field is moving forward, but too slowly for some, like this crew and the parents behind the BecDot, an inexpensive Braille display for kids. And other tactile displays are being pursued as possibilities for interactions in virtual environments.

Getting an intuitive understanding of a 3D object, whether one is designing or just viewing it, usually means rotating and shifting it — something that’s difficult to express in non-visual ways. But a real-time tactile display like this one can change the shape it’s showing quickly and smoothly, allowing more complex shapes, like moving cross-sections, to be expressed as well.


Joshua Miele demonstrates the device

The device is far from becoming a commercial product, but it’s very much a working prototype, and a fairly polished one at that. The team plans on reducing the size of the pins, which would of course increase the resolution of the display. Interestingly, another grad student in the same lab is working on that very problem, albeit at a rather earlier stage.

The Shape Lab at Stanford is working on a number of projects along these lines — you can keep up with their work at the lab’s website.


Live Caption, Google’s automatic captioning technology, is now available on Pixel 4


Live Caption, Google’s automatic captioning system first introduced at its I/O developer conference this May, is now officially available, alongside the launch of the new Pixel 4. But unlike some of the other technologies highlighted at the company’s Pixel hardware event yesterday, Live Caption won’t be limited to Google’s new smartphone alone. After the initial debut on Pixel 4, the automatic captioning technology will roll out to Pixel 3, Pixel 3 XL, Pixel 3a and Pixel 3a XL before year-end, says Google, and will become more broadly available in 2020.

The company has offered automatic captions on YouTube for a decade, but that same sort of experience isn’t available across the wider web and mobile devices. For example, Google explains, you can’t read captions for things like audio messages sent by your friends, trending videos published elsewhere on social media, or content you record yourself.

There’s a significant accessibility issue with the lack of captions in all these places, but there’s a convenience issue, as well.

If you’re in a loud environment, like a commuter train, or are trying to watch something privately after forgetting your headphones, you may need to rely on the captions. Or maybe you don’t want to blare the audio and disturb others around you. Or perhaps you want to see the words appear because you’re having trouble understanding the audio, or just want to be sure to catch every word.

With the launch of the Pixel 4, Live Caption is also available for the first time to the general public.

The technology will capture and automatically caption videos and spoken audio on your device, except for phone and video calls. This captioning all happens in real time and on your device — not in the cloud. That means it works even if your device lacks a cell signal or access to Wi-Fi. The captions also stay private and don’t leave your phone.
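
Google hasn’t published Live Caption’s internals, but the general pattern of fully on-device captioning is straightforward to sketch: audio frames are fed to a local speech model and only text ever comes out of it. The `LocalRecognizer` class below is a hypothetical stand-in for that model, not a real Google API.

```python
# Illustration of the on-device captioning pattern described above.
import queue

class LocalRecognizer:
    """Hypothetical streaming speech-to-text model that runs entirely on device."""
    def feed(self, audio_frame: bytes) -> str:
        return ""  # would return the latest partial transcript

def caption_loop(audio_frames: "queue.Queue[bytes]", draw_caption) -> None:
    recognizer = LocalRecognizer()      # the model lives in local memory
    while True:
        frame = audio_frames.get()      # raw audio never leaves the device
        text = recognizer.feed(frame)   # incremental, real-time transcription
        if text:
            draw_caption(text)          # update the repositionable caption box
```

Because nothing in that loop touches the network, captions keep working without a cell signal or Wi-Fi, and the audio stays private.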


This is similar to how the Pixel 4’s new Recorder app functions. It, too, will do its speech-to-text processing all on your device, in order to give you real-time transcriptions of your meetings, interviews, lectures or anything else you want to record, without compromising your privacy.

You can launch the Live Caption feature with a tap from the volume slider that appears, then reposition the caption box anywhere on your screen so it doesn’t get in the way of what you’re viewing.

Currently, the feature supports English only. But Google says it’s working to add more languages in the future.

After today’s launch on Pixel 4 and the rollout to the rest of the modern Pixel line of smartphones this year, it will start to show up in other new Android phones. Google says it’s working with other manufacturers to make the technology available to more people as soon as next year.

 


Google announces Action Blocks, a new accessibility tool for creating mobile shortcuts


Google today announced Action Blocks, a new accessibility tool that allows you to create shortcuts for common multi-step tasks with the help of the Google Assistant. In that respect, Action Blocks isn’t all that different from Shortcuts on iOS, for example, but Google is specifically looking at this as an accessibility feature for people with cognitive disabilities.

“If you’ve booked a rideshare using your phone recently, you’ve probably had to go through several steps: unlock your phone, find the right app, navigate through its screens, select appropriate options, and enter your address into the input box,” writes Google accessibility software engineer Ajit Narayanan. “At each step, the app assumes that you’re able to read and write, find things by trial-and-error, remember your selections, and focus for a sustained period of time.”

Google’s own research shows that 80% of people with severe cognitive disabilities, like advanced dementia, autism or Down syndrome, don’t use smartphones, in part because of these barriers.


An Action Block is essentially a sequence of commands for the Google Assistant, so anything the Assistant can do can be scripted with this new tool, whether that’s starting a call or playing a TV show. Once the Action Block is set up, you can create a shortcut with a custom image on your phone’s home screen.
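
Google hasn’t detailed how Action Blocks are stored internally, but conceptually each one is just a labeled, icon-bearing list of Assistant commands bound to a home-screen shortcut. Here’s a hypothetical sketch of that idea; the `ActionBlock` class and the `assistant.send` call are illustrative, not Google’s API.

```python
from dataclasses import dataclass, field

@dataclass
class ActionBlock:
    """Hypothetical model of an Action Block: a home-screen shortcut that
    replays a fixed sequence of Google Assistant commands."""
    label: str
    icon_path: str
    commands: list[str] = field(default_factory=list)

    def run(self, assistant) -> None:
        # `assistant` stands in for whatever interface dispatches Assistant queries.
        for command in self.commands:
            assistant.send(command)

# Example: one tap on the home screen replaces several manual steps.
call_mom = ActionBlock(
    label="Call Mom",
    icon_path="/sdcard/Pictures/mom.png",
    commands=["video call Mom"],
)
```

The point of the abstraction is that a single tap on a large, familiar picture triggers the whole sequence, with no reading, typing or menu navigation required.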

For now, the only way to get access to Action Blocks is to join Google’s trusted tester program. It’s unclear when this will roll out to a wider audience. When it does, though, I’m sure a variety of users will want to make use of this feature.


Amazon’s Echo Show can now identify household pantry items held in front of its camera


Amazon is introducing a new feature to its Echo Show device designed to help blind and low-vision customers identify common household pantry items by holding them in front of the device’s camera and asking what they are. The feature uses a combination of computer vision and machine learning techniques in order to recognize the objects the Echo Show sees.

The Echo Show is the version of the Alexa-powered smart speaker that tends to sit in customers’ kitchens because it helps them with other kitchen tasks, like setting timers, watching recipe videos or enjoying a little music or TV while they cook.

But for blind users, the Show will now have a new duty: helping them better identify those household pantry items that are hard to distinguish by touch — like cans, boxed foods, or spices, for example. 

To use the feature, customers can just say things like “Alexa, what am I holding?” or “Alexa, what’s in my hand?” Alexa will also give verbal and audio cues to help the customers place the item in front of the device’s camera.
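
Amazon hasn’t described the pipeline in technical detail, but the flow implied here is simple: grab a frame, run it through a vision model, then either speak the best label or coach the customer to reposition the item. The sketch below is purely illustrative; `capture_frame`, `classify` and `speak` are hypothetical stand-ins, not Alexa APIs.

```python
def capture_frame() -> bytes:
    """Hypothetical: grab a still image from the Echo Show's camera."""
    return b""

def classify(image: bytes) -> tuple[str, float]:
    """Hypothetical: run a vision model and return (label, confidence)."""
    return ("a can of chopped tomatoes", 0.9)

def speak(text: str) -> None:
    """Hypothetical: have Alexa say the given text aloud."""
    print(text)

def what_am_i_holding(min_confidence: float = 0.6) -> None:
    label, confidence = classify(capture_frame())
    if confidence >= min_confidence:
        speak(f"It looks like {label}.")
    else:
        # Verbal cue to help the customer place the item in front of the camera.
        speak("Try holding the item a little closer to the camera.")

what_am_i_holding()
```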

Amazon says the feature was developed in collaboration with blind Amazon employees, including its principal accessibility engineer, Josh Miele, who gathered feedback from both blind and low-vision customers as part of the development process. The company also worked with the Vista Center for the Blind in Santa Cruz on early research, product development and testing.

“We heard that product identification can be a challenge and something customers wanted Alexa’s help with,” explained Sarah Caplener, head of Amazon’s Alexa for Everyone team. “Whether a customer is sorting through a bag of groceries, or trying to determine what item was left out on the counter, we want to make those moments simpler by helping identify these items and giving customers the information they need in that moment,” she said.

Smart home devices and intelligent voice assistants like Alexa have made life easier for disabled individuals, as they allow them to do things like adjust the thermostat and lights, lock the doors, raise the blinds and more. With “Show and Tell,” Amazon hopes to reach the wide market of blind and low-vision customers, as well. According to the World Health Organization, there are an estimated 1.3 billion people with some sort of vision impairment, Amazon says.

That being said, Echo devices aren’t globally available — and even when they are offered in a particular country, the device may not support the local language. Plus, the feature itself is U.S.-only at launch.

Amazon isn’t alone in making accessibility a selling point for its smart speakers and screens. At Google’s I/O developer conference this year, it introduced a range of accessibility projects, including Live Caption, which transcribes real-time audio; Live Relay, for helping the deaf make phone calls; Project Diva, for helping those who don’t speak use smart assistants; and Project Euphonia, which helps make voice recognition work for those with speech impairments.

Show and Tell is available now to Alexa users in the U.S. on first- and second-generation Echo Show devices.

 


Comcast adds gaze control to its accessible remote software


The latest feature for Comcast’s X1 remote software makes the clicker more accessible to people who can’t click it the same as everyone else. People with physical disabilities will now be able to change the channel and do all the usual TV stuff using only their eyes.

TVs and cable boxes routinely have horrendous interfaces, making the most tech-savvy among us recoil in horror. And if it’s hard for an able-bodied person to do, it may well be impossible for someone who suffers from a condition like ALS, or has missing limbs or other motor impairments.

Voice control helps, as do other changes to the traditional 500-button remote we all struggled with for decades, but gaze control is now beginning to be widely accessible as well, and may prove an even better option.

Comcast’s latest accessibility move — this is one area where the company seems to be genuinely motivated to help its customers — is to bring gaze control to its Xfinity X1 web remote. You load it up on a compatible computer or tablet, sync it with your cable box once, and then the web interface acts as your primary controller.

Users will be able to do pretty much all the everyday TV stuff using gaze: change channels, search and browse the guide, set and retrieve recordings, launch a live sport-tracking app and call up and change accessibility options like closed captioning.

A short video showing how one man finds the tech useful is worth a watch:

It’s amazing to think that among all the things Jimmy Curran has worked to make himself capable of in spite of his condition, changing the channel was not one of them. Perhaps there was some convoluted way of going about it, but it’s still an oversight on the part of TV interfaces that has limited accessibility for years.

Voice controls may also be more easily usable by people with conditions that affect their speech; Google is applying machine learning to the task with its Project Euphonia.

Users will need a gaze control setup of their own (this isn’t uncommon for folks with physical disabilities), after which they can direct the browser on it to xfin.tv/access, which will start the pairing process.


Apple’s Voice Control improves accessibility OS-wide on all its devices


Apple is known for fluid, intuitive user interfaces, but none of that matters if you can’t click, tap, or drag because you don’t have a finger to do so with. For users with disabilities the company is doubling down on voice-based accessibility with the powerful new Voice Control feature on Macs and iOS (and iPadOS) devices.

Many devices already support rich dictation, and of course Apple’s phones and computers have used voice-based commands for years (I remember talking to my Quadra). But this is a big step forward that makes voice controls close to universal — and it all works offline.

The basic idea of Voice Control is that the user has both set commands and context-specific ones. Set commands are things like “Open Garage Band” or “File menu” or “Tap send.” And of course some intelligence has gone into making sure you’re actually saying the command and not writing it, like in that last sentence.

But that doesn’t work when you have an interface that pops up with lots of different buttons, fields, and labels. And even if every button or menu item could be called by name, it might be difficult or time-consuming to speak everything out loud.

To fix this, Apple attaches a number to every UI item in the foreground, which the user can reveal by saying “show numbers.” Then they can speak the number on its own or combine it with another command, like “tap 22.”
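
Apple hasn’t published how Voice Control implements this internally, but the numbering-and-dispatch pattern is easy to sketch: enumerate the actionable items in the foreground, assign each a number, then translate a phrase like “tap 22” into an action on the matching element. The `UIElement` class and its `tap` method below are hypothetical, not Apple’s accessibility API.

```python
import re
from dataclasses import dataclass

@dataclass
class UIElement:
    """Hypothetical handle to one on-screen control."""
    name: str

    def tap(self) -> None:
        print(f"tapped {self.name}")

def number_elements(foreground: list[UIElement]) -> dict[int, UIElement]:
    """'Show numbers': assign 1, 2, 3... to every actionable item on screen."""
    return {i + 1: element for i, element in enumerate(foreground)}

def handle_command(phrase: str, numbered: dict[int, UIElement]) -> None:
    """Translate a spoken phrase like 'tap 22' into an action on element 22."""
    match = re.fullmatch(r"tap (\d+)", phrase.strip().lower())
    if match:
        element = numbered.get(int(match.group(1)))
        if element:
            element.tap()

# Example: three controls are on screen; the user says "show numbers", then "tap 2".
numbered = number_elements([UIElement("Send"), UIElement("Attach"), UIElement("Emoji")])
handle_command("tap 2", numbered)   # taps "Attach"
```

The same pattern extends naturally to the grid overlay mentioned below: instead of numbering discrete controls, the screen is divided into numbered cells that can be refined until the target is small enough to hit.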

Remember that these numbers may be more easily referenced by someone with little or no vocal ability, and could in fact be selected from using a simpler input like a dial or blow tube. Gaze tracking is good but it has its limitations, and this is a good alternative.

For something like maps, where you could click anywhere, there’s a grid system for selecting where to zoom in or click. Just like Blade Runner! Other gestures like scrolling and dragging are likewise supported.

Dictation has been around for a bit but it’s been improved as well. You can select and replace entire phrases, like “Replace ‘be right back’ with ‘on my way.’ ” Other little improvements will be noted and appreciated by those who use the tool often.

All the voice processing is done offline, which makes it both quick and robust to things like signal problems or use in foreign countries where data might be hard to come by. And the intelligence built into Siri lets it recognize names and context-specific words that may not be part of the base vocabulary. Improved dictation means selecting emoji and adding dictionary items is a breeze.

Right now, Voice Control is supported by all native apps, and third-party apps that use Apple’s accessibility API should be able to take advantage of it easily. Even if they don’t support it specifically, numbers and grids should still work just fine, since all the OS needs to know are the locations of the UI items. These improvements should appear in accessibility options as soon as a device is updated to iOS 13 or Catalina.
