Android Accessibility – The Missing Manual

This presentation was created for the Bangalore Accessibility Week at Intuit, October 2015. It’s a collection of hard-to-discover information on making an Android application accessible.

Testing

Devices

Testing is done both on actual devices and with automation, such as Calabash and Android Lint. Intuit’s product development also relies heavily on user testing, including users with disabilities.

Intuit also has a mobile device library that allows anyone within Intuit to check out a mobile device for testing. This has significantly lowered equipment cost and makes it much easier to test applications on an assortment of phones. This matters because manufacturer customizations can break accessibility, as has happened with the Samsung keyboard.

Android Lint

Android Lint allows you to run accessibility tests within your development environment, making it easy to find and fix issues.

AccessibilityChecks

This session, Improve your Android App’s Accessibility, from Google I/O 2015 introduces AccessibilityChecks and teases the upcoming testing app.
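
If your project uses Espresso, the checks can be switched on with one call. Below is a minimal sketch, assuming the Espresso accessibility-checks artifact is on the test classpath (the AccessibilityChecks class has moved packages between test-library releases):

import android.support.test.espresso.accessibility.AccessibilityChecks;

import org.junit.BeforeClass;

public class SignInScreenTest {

  @BeforeClass
  public static void enableAccessibilityChecks() {
    // From here on, every Espresso ViewAction also runs automated
    // accessibility checks on the views it interacts with.
    AccessibilityChecks.enable();
  }

  // ... regular Espresso tests for the screen go here ...
}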

TalkBack

TalkBack is the built-in screen reader for Android devices. ChromeVox is the screen reader used in Chrome and on ChromeOS.

Turn on TalkBack

This video shows how to enable TalkBack and how to use it as a developer. It also shows how to use the context menu for reading the entire screen and turning off TalkBack temporarily.

Two Fingered Gestures

It is easy to use custom gestures with Android. If the app depends on a single-finger gesture, a TalkBack user can perform the same gesture with two fingers, and TalkBack passes it through to the app as if it were made with one finger. This is very helpful with signatures and swipe navigation.

Android Accessibility Shortcut

Android provides a convenient shortcut for enabling TalkBack on a device. This short video shows how to use it.

Programming

accessibilityLiveRegion

Place this declaration on the container that includes data that changes dynamically. The new content will be announced when it appears.

android:accessibilityLiveRegion="polite"

This short video shows how a live region’s content is announced whenever it changes, in this case by deselecting a row within a table. The demo is an HTML page using aria-live="assertive".
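
The same behavior can be set from code with ViewCompat from the support library (or View.setAccessibilityLiveRegion on API 19+). Here is a minimal sketch inside an Activity; the view id and string resource are made-up names for illustration:

// Mark the status view as a polite live region.
TextView statusView = (TextView) findViewById(R.id.status_message); // hypothetical id
ViewCompat.setAccessibilityLiveRegion(statusView,
        ViewCompat.ACCESSIBILITY_LIVE_REGION_POLITE);

// Later, when the data changes, updating the text is enough;
// TalkBack announces the new content without moving focus.
statusView.setText(getString(R.string.row_deselected)); // hypothetical string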

AccessibilityAction

  • Swipes and other hard-to-discover actions
  • Actions are activated from the Local Context Menu
  • While this could be used to provide hints for actions, I haven’t found the documentation/examples on how this is accomplished.
Create AccessibilityAction
/**
 * @param actionId The id for this action. This should either be one of
 * the standard actions or a specific action for your app. In that case it
 * is required to use a resource identifier.
 * @param label The label for the new AccessibilityAction.
 */
public AccessibilityAction(int actionId, CharSequence label)

new AccessibilityAction(R.id.dismiss, getString(R.string.dismiss));
new AccessibilityAction(ACTION_CLICK, getString(R.string.play_song));

// Constants exist for all the standard actions, each with a default label:
AccessibilityAction.ACTION_CLICK
Handling a Custom Action
eventView.setAccessibilityDelegate(new View.AccessibilityDelegate() {
  @Override
  public void onInitializeAccessibilityNodeInfo(
      View host, AccessibilityNodeInfo info) {
    super.onInitializeAccessibilityNodeInfo(host, info);
    info.addAction(new AccessibilityAction(R.id.dismiss,
        getString(R.string.dismiss)));
  }

  @Override
  public boolean performAccessibilityAction(
      View host, int action, Bundle args) {
    if (action == R.id.dismiss) {
      // Logic for the custom dismiss action goes here.
      return true;
    }
    return super.performAccessibilityAction(host, action, args);
  }
});

android:importantForAccessibility

[Diagram: UI layers within Android, with noHideDescendants set on unused layers and auto on the current layer]

If you are using a stacked interface, where hidden layers still receive focus, you can use importantForAccessibility to mark the layers that are not important. Hamburger menus are the most common use for this technique.

Set the visible layer to “auto”, which lets Android manage its behavior. Set the “hidden” layers to “noHideDescendants”, which should remove the layer and its children from the Accessibility API. Swap these values when you hide or show layers.

android:importantForAccessibility = "auto"
android:importantForAccessibility = "yes"
android:importantForAccessibility = "no"
android:importantForAccessibility = "noHideDescendants"

ListPopupWindow

[Diagram: a modal layer within Android. Use setModal(true) to make sure it acts like a modal view]

For popups, such as a set of options, a better choice may be a ListPopupWindow or PopupWindow if you’re attempting to display information in a modal context. Just be sure to call setModal(true) to ensure that the contents of that window, and only the contents of that window, are focusable with TalkBack.
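
Here is a minimal sketch of a modal options popup; the anchor id and the option labels are assumptions for illustration:

final ListPopupWindow popup = new ListPopupWindow(this); // inside an Activity
popup.setAnchorView(findViewById(R.id.options_button));  // hypothetical id
popup.setAdapter(new ArrayAdapter<String>(this,
        android.R.layout.simple_list_item_1,
        new String[] {"Edit", "Share", "Delete"}));
popup.setModal(true); // keeps TalkBack focus inside the popup until it is dismissed
popup.setOnItemClickListener(new AdapterView.OnItemClickListener() {
  @Override
  public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
    // Handle the selected option, then close the popup.
    popup.dismiss();
  }
});
popup.show();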

Forms

Forms are critical for a great user experience and we need to make sure our users understand what each input represents.

Form Labels

Which is correct for your app?

  • android:hint
  • android:labelFor
  • android:contentDescription

android:hint

The hint is like the HTML “placeholder” attribute. It is a visible label within the input and is surfaced to the Accessibility API when the input first receives focus. However, the hint is ignored when the input has a value. SSB BART Group published a great article on this topic:
Android Accessibility Properties and TalkBack

[Image: Android form using android:hint, but no visual label]

  • This creates a placeholder text string within an input
  • This was the preferred method, but it is a hack
  • The hint is removed when a user adds a value to the input
  • Still a valid method of adding a label to an input

labelFor

The strongest method for adding a label to an input is the “labelFor” attribute. If your app has a visible text label, add “labelFor” to that TextView and point it at the form input. The user will always know what the form input represents.
[Image: Android app using visual labels]

<TextView
 android:layout_width="wrap_content"
 android:layout_height="match_parent"
 android:labelFor="@+id/edit_item_name"
 android:text="Invoice amount"/>
<EditText
 android:id="@+id/edit_item_name"
 android:layout_width="match_parent"
 android:layout_height="wrap_content"
 android:hint="Invoice Amount"/>

contentDescription

ContentDescription is much like HTML’s “aria-label”. It’s an invisible text string that is surfaced directly to the Accessibility API. Android documentation specifically warns against using the contentDescription directly on the input.

Note: For EditText fields, provide an android:hint attribute instead of a content description, to help users understand what content is expected when the text field is empty. When the field is filled, TalkBack reads the entered content to the user, instead of the hint text.
Making Applications Accessible – Android Developer Portal

  • Invisible description for TalkBack
  • Should not be used directly on an input
  • You can use it on an input’s container and combine with labelFor

TextInputLayout

It is possible to use “contentDescription” and “labelFor” to include a hidden label in your application. For instance, this pattern works with the recently introduced “TextInputLayout” for Material Design layouts. The same pattern should work with a basic container around a form input.
[Image: Material Design form inputs include the android:hint as a visual label]

<android.support.design.widget.TextInputLayout
 android:labelFor="@+id/signupemail"
 android:contentDescription="Email"
 android:accessibilityLiveRegion="polite">
  <EditText
   android:id="@+id/signupemail"
   android:hint="Email"
   android:setError="Create a valid email address"
    …/>
</android.support.design.widget.TextInputLayout>

This is the pattern suggested for Material Design in Marshmallow. It has some bugs with Android support, but these should be solved soon. More information: Accessible Android Inputs with Material Design

Checking for TalkBack

AccessibilityManager am = (AccessibilityManager)
         getSystemService(ACCESSIBILITY_SERVICE);
boolean isAccessibilityEnabled = am.isEnabled();
boolean isExploreByTouchEnabled = am.isTouchExplorationEnabled();

You can check to see if the user has TalkBack enabled and then make modifications to your application. For instance, this could be used to add continue and back buttons to a swipe-based navigation interface.
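
Here is a minimal sketch of that idea, revealing explicit navigation buttons when touch exploration is on; the button ids are assumptions for illustration:

AccessibilityManager am = (AccessibilityManager)
        getSystemService(ACCESSIBILITY_SERVICE);
if (am != null && am.isTouchExplorationEnabled()) {
  // Swipe-only navigation is hard to discover with TalkBack running,
  // so surface explicit Back/Continue buttons as an alternative.
  findViewById(R.id.button_back).setVisibility(View.VISIBLE);     // hypothetical ids
  findViewById(R.id.button_continue).setVisibility(View.VISIBLE);
}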

More Android Documentation

Girls Who Code Summer Camp at Intuit

Intuit sponsored an intensive 7-week camp for Girls Who Code during the summer of 2015. This is a fantastic opportunity for young girls to learn the basics of computer programming and move beyond to product design and development. It culminated with the young engineers presenting their final projects, which were quite impressive.

I had the opportunity to do an hour-long presentation on accessibility, and we did a follow-up session at the Usability Labs to experience the world with limited senses. These sessions had a big impact on the girls and how they perceive their role in developing inclusive products.
Continue Reading Girls Who Code Summer Camp at Intuit

Use aria-labelledby to provide context to unordered lists

[Image: VoiceOver announces the heading when the list is selected]

It’s not uncommon for a web page to include multiple sets of lists. This is especially true for a web site that aggregates information. The Yahoo! Finance home page contains at least 12 lists.

Screen readers allow the user to navigate a page via lists and announce the number of items in each list. But what if we could make this navigation more relevant? This can be done via the aria-labelledby attribute.

Continue Reading Use aria-labelledby to provide context to unordered lists

Accessible Android Inputs with Material Design

Update – August 2016

I first wrote this post shortly after TextInputLayout was introduced, and there was a distinct lack of documentation. My original code example for setting the error was incorrect. Per Victor Tsaran:

There is no such attribute as android:setError. There is a method called setError in the View class that can set the error message dynamically. That method actually works with TalkBack.
Victor Tsaran

I will be updating the code examples soon. For now, do not include the android:setError line shown below. I’m leaving it in for archival purposes.
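
In the meantime, here is a minimal sketch of the dynamic approach, calling setError on the design library’s TextInputLayout at runtime; the view ids and the empty-text check are placeholders for real validation:

EditText emailField = (EditText) findViewById(R.id.signup_email);          // hypothetical ids
TextInputLayout emailLayout =
        (TextInputLayout) findViewById(R.id.signup_email_layout);

if (TextUtils.isEmpty(emailField.getText())) {
  // TalkBack announces the error when it is set.
  emailLayout.setErrorEnabled(true);
  emailLayout.setError("Create a valid email address");
} else {
  emailLayout.setError(null);
  emailLayout.setErrorEnabled(false);
}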


Google has done an admirable job defining the Material Design style guide. They’ve also begun rolling out new APIs that make it much easier to implement the interaction designs within Android and HTML. However, there are still some gaps. This article looks at the popular Text Input for Android interaction. Please note: the code in this article is not fully documented and the best practice may change as the Google Accessibility team updates their documentation. Consider this a beta pattern and I will gladly update it as we learn better practices.
Continue Reading Accessible Android Inputs with Material Design

That Deaf Guy AKA Ken Bolingbroke

The following was written by a Yahoo! engineer to help his co-workers understand his deafness. He wrote this to answer the myriad of questions and to make meetings more productive. It was originally published on the Yahoo! Accessibility Lab’s web site.


The Background

I was diagnosed with a hearing loss when I was four years old. I’d had the hearing loss all along, they just didn’t figure it out until then. I had no hearing at all in my right ear, and a progressive loss in my left ear that gradually got worse over the years until I could no longer hear at all by the time I was 17. While I still had some hearing, I was able to hear better with a hearing aid that amplified sound, but once my hearing loss became too severe, hearing aids could no longer help me.

When I was 18, I got an early, experimental cochlear implant that enabled me to hear again, not quite as well as I could hear several years earlier with a hearing aid, but enough that I could understand speech with the aid of lipreading. This device never received FDA approval, and I stopped using it in 1994, then had it removed in 2000.

From 1994 to 2009, I had no usable hearing at all. Strictly speaking, I could still hear in my left ear, but only very loud noises. VERY loud noises, like, say, a running jet engine... if I were close enough. So for example, a few years ago, there was a fire drill at Hewlett-Packard. The alarms were loud enough that many people were covering their ears with their hands. I still could not hear that at all.

In September 2009, I got a new cochlear implant in my left ear. In February 2010, I got another one in my right ear, but this one was particularly complicated because of the prior two surgeries on that ear, so it doesn’t work nearly as well as the one in my left ear. Additionally, this last surgery messed up my sense of balance and left me with severe dizziness for several weeks, which, at this writing, still hasn’t entirely abated. I can get especially dizzy when standing up after sitting for an extended period. So if you see me staggering awkwardly down the hall, I’m not necessarily drunk.

Incidentally, the ability to locate sounds requires two ears. Up until I was 17, I could only hear in my left ear and later, only in my right ear. So until February, I could never hear in both ears, and hence, I was never able to locate the direction of sounds. Now that I can actually hear in both ears, in theory, I should be able to do that, but so far, that ability hasn’t manifested.

Why am I deaf?

When doctors finally figured out that I had a hearing loss, they could only speculate as to why. I was the eighth of ten children, and I’m the only one with a hearing loss. So they assumed that it was a result of circumstances around the time of my birth — I was born not breathing, I got sick, I was given antibiotics, etc., and doctors speculated that one of these could have caused my hearing loss. However, several years ago, one of my brothers had a deaf son. And then a deaf daughter. And after another daughter with normal hearing, a second deaf daughter came along. At some point along the way, the first two were given genetic tests and found positive for a syndrome that typically causes a progressive hearing loss, worse in one ear than the other. I only learned this last year (this article was originally written several years ago).

My deaf nephew and nieces have the same kind of cochlear implant I got. My youngest deaf niece got her implant just a few weeks ago.

So what can I hear now?

[Image: cochlear implant]

Oddly, the implant I received in 2009 is far more advanced than the one I got back in 1988. With this new implant, I’ve been hearing better than I’ve ever heard in my life, hearing things I’ve never heard before. The technology is absolutely amazing.

However! I am still severely hearing impaired. Despite hearing things I’ve never heard before, it’s still not normal hearing.

First, this is still relatively new to me. When I first turned on my left ear in September 2009, everything was just random noise, with voices sounding no different than a car engine. I had to learn how to hear all over again, and I’m still learning and over time, I may be able to understand speech better than I can now.

Second, there are fundamental limits. I have 16 electrodes implanted in my left cochlea, and only 15 of them work. In my right cochlea, there are only 12 functioning electrodes. The entire range of possible sound frequencies has to be split across these 12 or 15 electrodes. They do some innovative programming to make adjacent electrodes work together to create an extra “virtual” electrode, increasing the overall range possible. But it’s still a fairly limited range.

This means that I’m not able to hear the difference between sounds that are close on the frequency range. For example, it’s very difficult for me to hear the difference between the ‘k’, ‘t’, and ‘p’ sounds. I can barely hear the ‘s’ sound at all.

More generally, I’m not able to hear high pitch sounds as well as low pitch sounds. So I can’t understand higher female voices as well as I can understand low male voices, for example.

Because that’s a limitation in the technology, I may never be able to improve on that. Or perhaps someone will be able to improve the technology such that it will improve my ability to hear differences between similar sounds.

How does that apply to practical situations?

I generally drive with my radio set to a public radio station, so I can practice listening to speech without any visual cues. Mostly, I don’t understand anything. Occasionally, I can understand particularly long words, and then that gives me a clue, and I start picking up parts of other phrases. So for example, I might catch the word “economy”, and then I know it’s probably a news report on the current state of the economy, and that gives me a clue as to what to listen for and I start catching phrases about Congress discussing some bill, or some such. And then the topic changes, and I’m lost again.

I’m told that radio people are selected for their clear speech, so presumably that’s an ideal situation. And I can just barely understand occasional words and phrases from people with a presumably clear speaking ability. As a point of comparison, I have not yet been able to understand anything my teenage children speak, but I’m told that they mumble to each other (they normally use sign language with me and their mother).

If you’ve met me, you know that I can generally understand more with you than I do from the radio. That’s because I have visual cues handy; namely, lipreading provides extra clues to help me figure out what you’re saying. For example, if you were to say “cat” or “pat”, since I have a hard time hearing the difference between the ‘k’ and ‘p’ sounds, I would not be able to hear which word you spoke, but the way your lips move to say “cat” is very different from the way they move to say “pat”, and so I can identify what you say by the complementary combination of what I hear and what I lipread.

And no, I can’t just lipread, because lipreading is a lot more ambiguous than my hearing. For example, with lipreading, the ‘m’, ‘b’, and ‘p’ sounds all look the same (but sound very different!). So I cannot lipread the difference between “bat”, “mat”, and “pat”. But I can hear the difference, so that’s where I need to both hear and lipread to be able to understand speech.
[Image: Ken Bolingbroke at Yahoo!]

And even then, it’s still an iffy deal. If there’s extra noise, like other people speaking or a loud fan, or whatever, then that negates my ability to hear. For that matter, my ability to understand is so sensitive that I can be knocked off just by someone with a sore throat that’s changing his voice. I’m finding that just the wind blowing in my implants’ microphones is enough to drown out your voice.

Meetings are especially difficult for me. The main speaker isn’t standing face-to-face with me, so lipreading is more difficult. If the speaker turns to face the whiteboard or someone at the opposite side of the room, then I can’t lipread at all. And worse, when someone somewhere at the table starts speaking, I have to look around until I can figure out who is speaking (see above, as to why I can’t locate the direction of sound), and then hope it’s someone facing me … and in the meantime, I can’t understand anything they’re saying. And if they’re not facing me at all, then I’m still lost.

Beyond that, at this point, since listening to speech is a new thing for me, after a 15 year hiatus with no hearing at all, it’s surprisingly exhausting. It’s a strain for me to listen and try to understand speech. It seems odd to think of it this way, perhaps, but you’re hearing every day, 24 hours a day (even when you’re sleeping, you’re still hearing), all your life, so it’s ingrained for you. But I’ve only been hearing again for a few months, and I turn off my implants at least every night, sometimes longer, so it’s remarkably exhausting. These last few months with my restored hearing, any time I have a particularly heavy day of listening, I find that it wipes me right out.

Ouch! That’s bad, what do we do?

In meetings, it would greatly help me if you could provide me with a written agenda of what will be discussed. When I know the context of what is being said, that makes it much easier to follow what’s being said. Give me a heads-up when you change the topic, so I’m not trying to shoehorn your speech into the prior context.

Face me when you’re speaking. If I’m not looking at your face when you’re talking, then I’m not understanding anything you’re saying. This is especially difficult when multiple people are speaking. If you could raise your hand until you have my attention before you speak, that would ensure I have the best possible chance of understanding what you say.

If you take notes, share them with me after the meeting. Listening to speech takes my entire focus, so if I try to take notes, I lose focus and lose track of what’s being said. And if you have any action items that involve me, make sure that I have them, preferably in writing.

What about sign language?

Well, sure, I know ASL. But who else does? Not enough people around here to make it useful for me. And my pet peeve is that interpreting doesn’t do a good job of translating the technical jargon used in IT. Case in point: at the open house where I applied for this job, I had an ASL interpreter helping me out, which was a good thing, because the crowd noise pretty much negated my hearing in that situation. But as I was talking to someone in the Search group, the interpreter signed “D – B – M”, which completely confused me, because it didn’t fit the context at that moment. We paused and worked out that what was said was “Debian.” The interpreter heard “Dee bee enn”, and incidentally, ‘n’ and ‘m’ are another pair of sounds that are very close on the sound frequency scale, so she thought it was ‘m’, and that’s how you get from “Debian” to “DBM”. So ASL isn’t really useful for interpreting technical discussions, although it does come in handy for general conversation.
