Jun 23

That Deaf Guy AKA Ken Bolingbroke

The following was written by a Yahoo! engineer to help his co-workers understand his deafness. He wrote this to answer the myriad of questions and to make meetings more productive. It was originally published on the Yahoo! Accessibility Lab’s web site.


The Background

I was diagnosed with a hearing loss when I was four years old. I’d had the hearing loss all along, they just didn’t figure it out until then. I had no hearing at all in my right ear, and a progressive loss in my left ear that gradually got worse over the years until I could no longer hear at all by the time I was 17. While I still had some hearing, I was able to hear better with a hearing aid that amplified sound, but once my hearing loss became too severe, hearing aids could no longer help me.

When I was 18, I got an early, experimental cochlear implant that enabled me to hear again, not quite as well as I could hear several years earlier with a hearing aid, but enough that I could understand speech with the aid of lipreading. This device never received FDA approval, and I stopped using it in 1994, then had it removed in 2000.

From 1994 to 2009, I had no usable hearing at all. Strictly speaking, I could still hear in my left ear, but only very loud noises. VERY loud noises, like, say, a running jet engine ... if I were close enough. So for example, a few years ago, there was a fire drill at Hewlett Packard. The alarms were loud enough that many people were covering their ears with their hands. I still could not hear that at all.

In September 2009, I got a new cochlear implant in my left ear. In February 2010, I got another one in my right ear, but this one was particularly complicated because of the prior two surgeries on that ear, so it doesn't work nearly as well as the one in my left ear. Additionally, this last surgery messed up my sense of balance and left me with severe dizziness for several weeks, which at this writing still hasn't entirely abated. I can get especially dizzy when standing up after sitting for an extended period. So if you see me staggering awkwardly down the hall, I'm not necessarily drunk.

Incidentally, the ability to locate sounds requires two ears. Up until I was 17, I could only hear in my left ear and later, only in my right ear. So until February, I could never hear in both ears, and hence, I was never able to locate the direction of sounds. Now that I can actually hear in both ears, in theory, I should be able to do that, but so far, that ability hasn’t manifested.

Why am I deaf?

When doctors finally figured out that I had a hearing loss, they could only speculate as to why. I was the eighth of ten children, and I'm the only one with a hearing loss. So they assumed that it was a result of circumstances around the time of my birth: I was born not breathing, I got sick, I was given antibiotics, and so on, and doctors speculated that one of these could have caused my hearing loss. However, several years ago, one of my brothers had a deaf son. And then a deaf daughter. And after another daughter with normal hearing, a second deaf daughter came along. At some point along the way, the first two were given genetic tests and tested positive for a syndrome that typically causes a progressive hearing loss, worse in one ear than the other. I only learned this last year. (This article was originally written several years ago.)

My deaf nephew and nieces have the same kind of cochlear implant I got. My youngest deaf niece got her implant just a few weeks ago.

So what can I hear now?

cochlear implant

Oddly, the implant I received in 2009 is far more advanced than the one I got back in 1988. With this new implant, I’ve been hearing better than I’ve ever heard in my life, hearing things I’ve never heard before. The technology is absolutely amazing.

However! I am still severely hearing impaired. Despite hearing things I’ve never heard before, it’s still not normal hearing.

First, this is still relatively new to me. When I first turned on my left ear in September 2009, everything was just random noise, with voices sounding no different than a car engine. I had to learn how to hear all over again. I'm still learning, and over time I may be able to understand speech better than I can now.

Second, there are fundamental limits. I have 16 electrodes implanted in my left cochlea, and only 15 of them work. In my right cochlea, there are only 12 functioning electrodes. The entire range of possible sound frequencies has to be split across these 12 or 15 electrodes. They do some innovative programming to make adjacent electrodes work together to create an extra "virtual" electrode, increasing the overall range possible. But it's still a fairly limited range.

This means that I’m not able to hear the difference between sounds that are close on the frequency range. For example, it’s very difficult for me to hear the difference between the ‘k’, ‘t’, and ‘p’ sounds. I can barely hear the ‘s’ sound at all.

More generally, I’m not able to hear high pitch sounds as well as low pitch sounds. So I can’t understand higher female voices as well as I can understand low male voices, for example.

Because that's a limitation of the technology, I may never be able to improve on it. Or perhaps the technology itself will improve enough to help me hear the differences between similar sounds.

How does that apply to practical situations?

I generally drive with my radio set to a public radio station, so I can practice listening to speech without any visual cues. Mostly, I don’t understand anything. Occasionally, I can understand particularly long words, and then that gives me a clue, and I start picking up parts of other phrases. So for example, I might catch the word “economy”, and then I know it’s probably a news report on the current state of the economy, and that gives me a clue as to what to listen for and I start catching phrases about Congress discussing some bill, or some such. And then the topic changes, and I’m lost again.

I’m told that radio people are selected for their clear speech, so presumably that’s an ideal situation. And I can just barely understand occasional words and phrases from people with a presumably clear speaking ability. As a point of comparison, I have not yet been able to understand anything my teenage children speak, but I’m told that they mumble to each other (they normally use sign language with me and their mother).

If you've met me, you know that I can generally understand more in conversation with you than I do from the radio. That's because visual cues, namely lipreading, provide extra clues to help me figure out what you're saying. For example, if you were to say "cat" or "pat", since I have a hard time hearing the difference between the 'k' and 'p' sounds, I would not be able to hear which word you spoke, but the way your lips move to say "cat" is very different from the way they move to say "pat", and so I can identify what you say by the complementary combination of what I hear and what I lipread.

And no, I can’t just lipread, because lipreading is a lot more ambiguous than my hearing. For example, with lipreading, the ‘m’, ‘b’, and ‘p’ sounds all look the same (but sound very different!). So I cannot lipread the difference between “bat”, “mat”, and “pat”. But I can hear the difference, so that’s where I need to both hear and lipread to be able to understand speech.
Ken Bolingbroke at Yahoo!

And even then, it's still an iffy deal. If there's extra noise, like other people speaking or a loud fan, or whatever, then that negates my ability to hear. For that matter, my ability to understand is so sensitive that I can be knocked off just by someone whose voice is changed by a sore throat. I'm finding that just the wind blowing in my implants' microphones is enough to drown out your voice.

Meetings are especially difficult for me. The main speaker isn’t standing face-to-face with me, so lipreading is more difficult. If the speaker turns to face the whiteboard or someone at the opposite side of the room, then I can’t lipread at all. And worse, when someone somewhere at the table starts speaking, I have to look around until I can figure out who is speaking (see above, as to why I can’t locate the direction of sound), and then hope it’s someone facing me … and in the meantime, I can’t understand anything they’re saying. And if they’re not facing me at all, then I’m still lost.

Beyond that, at this point, since listening to speech is a new thing for me after a 15-year hiatus with no hearing at all, it's surprisingly exhausting. It's a strain for me to listen and try to understand speech. It seems odd to think of it this way, perhaps, but you're hearing every day, 24 hours a day (even when you're sleeping, you're still hearing), all your life, so it's ingrained for you. But I've only been hearing again for a few months, and I turn off my implants every night, sometimes for longer, so it's remarkably exhausting. These last few months with my restored hearing, any time I have a particularly heavy day of listening, I find that it wipes me right out.

Ouch! That’s bad, what do we do?

In meetings, it would greatly help me if you could provide me with a written agenda of what will be discussed. When I know the context of what is being said, that makes it much easier to follow what’s being said. Give me a heads-up when you change the topic, so I’m not trying to shoehorn your speech into the prior context.

Face me when you’re speaking. If I’m not looking at your face when you’re talking, then I’m not understanding anything you’re saying. This is especially difficult when multiple people are speaking. If you could raise your hand until you have my attention before you speak, that would ensure I have the best possible chance of understanding what you say.

If you take notes, share them with me after the meeting. Listening to speech takes my entire focus, so if I try to take notes, I lose focus and lose track of what's being said. And if you have any action items that involve me, make sure that I have them, preferably in writing.

What about sign language?

Well sure, I know ASL. But who else does? Not enough people around here to make it useful for me. And my pet peeve is that interpreting doesn't do a good job of translating the technical jargon used in IT. Case in point: at the open house where I applied for this job, I had an ASL interpreter helping me out, which was a good thing, because the crowd noise pretty much negated my hearing in that situation. But as I was talking to someone in the Search group, the interpreter signed "D – B – M", which completely confused me, because it didn't fit the context at that moment. We paused and worked out that what was said was "Debian." The interpreter heard "Dee bee enn" and, incidentally, since 'n' and 'm' are another pair of sounds that are very close on the sound frequency scale, she thought it was 'm', and that's how you get from "Debian" to "DBM". So ASL isn't really useful for interpreting technical discussions, although it does come in handy for general conversation.

Related articles

Mar 11

Accessible responsive images

Responsive web design, creating a single page that morphs with the viewport size, is a major feature of modern web design. There are many factors to consider for performance and accessibility. This article will touch on responsive design's impact on image accessibility.

Mar 07

Make your presentations accessible

Accessibility lecture at the University of Washington

I give a lot of presentations at conferences that include people with vision and hearing disabilities. I try to make my presentations informative for the entire audience. Here are some of my tips.

Create a vocabulary list for sign language interpreters

Before I give a presentation, I spend a few minutes going through the slides and writing down terms that may not be easy to interpret. This may include technology and coding terms, names of people or products, and terms that are not relevant to the discussion. I try to print this in advance, but sometimes I simply keep a note in Evernote, and show this to the interpreters prior to the presentation.

The response from the interpreters has always been very positive. Your efforts will be appreciated. Here is the vocabulary list I created for my CSUN 2013 presentation, Infographics: making an image speak a thousand words.

Non-standard words within presentation

Infographic
Otter

First infographic sample has “Mahatma Gandhi”

Travel infographic: MapQuest

Coding terms:
longdesc
ARIA
aria-labelledby
aria-describedby
iframe
seamless
attribute
JavaScript
CSS
Search Engine Optimization (SEO)
Contextual
describedat

People:
Jennison Assuncion
webaxe

Keep Visuals Minimal

I try to keep my slides minimal for many reasons. One is to let the blind members of the audience know they are not missing anything visually. I want people to spend less time looking at the screen and more time listening to what I have to say. Perhaps that is selfish, but I believe it provides more of an equal experience for all.

I will start the presentation by telling the audience the slides will not have significant content and that I will describe what is on the screen when it is relevant. This presentation on Mobile Accessibility is a good example. Some slides included screenshots of mobile products, and I described the products and their features. Other slides included images that did not need to be described.

Upload your presentation to Slideshare prior to the event

Keeping a minimal slide design can be frustrating for those in the audience who want to take photos or notes about resources mentioned in your presentation. We've all seen and heard people taking photos during a presentation, mainly because that moment may be their only chance to capture a link or reference.

I always post my presentation to SlideShare prior to a presentation and give the link on the first slide. I let people know in advance that they can download the slides; this lets the audience relax and listen to what I have to say. If possible, I will tweet the URL before the conference starts to give the audience a preview.

It’s all about the presenter’s notes

My philosophy is to keep the slides minimal but put the important information in the presenter’s notes. This is a feature of Keynote and PowerPoint that allows you to leave comments about each slide. Most people use this as a reminder of what to say, without making it public. I use it to publicize the resources for each slide.

Prior to uploading to Slideshare, I create a .pdf version of the presentation and make sure it includes the speaker notes. Slideshare will parse that pdf and include the speaker note resources within their transcription. Here's an example of an iOS7 Accessibility presentation I gave at the Mobile+Web conference. It helps to uncheck the option within Keynote that includes the date on slides; it gets annoying.

Make your presentation accessible

While it’s commendable that Slideshare is able to parse the pdf and create a transcript, this is not the most accessible way to view the content. I use this transcript as the basis for a blog post that combines the Slideshare version of the presentation, embeds of included videos, and a semantic representation of the slides and the relevant speaker notes. This is what I consider to be the final result of a conference presentation.

This wrap up of the presentation YUI + Accessibility includes the slides, a video recording of the talk, links to resources, and the relevant information from each slide and sample code.

Final Suggestions

  • I taught at Palomar College for 7 years and have a degree in Radio and Telecommunications, so I’m no stranger to standing in front of people and talking. Practice makes perfect and you should take any opportunity possible to speak in public. Local meetups are a great opportunity to speak in small groups about a subject you know well.
  • Watch Christian Heilmann speak whenever you have the opportunity. I am always energized and inspired by his presentations. Further, you know he’s always going to say something new. I believe it’s important to avoid canned presentations and treat each audience with respect by at least customizing the presentation for each event.
  • Christian has also created a great article that has helped me significantly: A few tricks about public speaking and stage technology. His suggestions about using technology and prep are tips you’ll only learn from constant practice.
  • Avoid coffee! This is something I've learned the hard way. I can go on some massively bizarre detours while talking on a caffeine buzz. I'll have a cup of coffee in the morning, but avoid caffeine for a few hours prior to speaking. However, hot lemon tea is your friend. This is an old radio trick, as it helps clear your throat. Also keep water handy on the podium.
  • Arrive early and watch the prior speaker. This shows respect for your fellow speaker and gives you a chance to watch the audience reactions, technology snafus, and get an idea of the knowledge level of the crowd.
  • Use social media to extend your presentation beyond the room. Announce everything on Twitter, including particularly helpful links mentioned in the presentation. Just don’t get spammy. Announce your Twitter handle on the intro slide for those live tweeting your talk.
  • Small audiences are a good thing. It's great to look out at a packed room and feel important. However, some of my best experiences have been with fewer than a dozen people in the room. I gave one presentation about building search engines in London where the questions and answers led to a patent: Creating Vertical Search Engines for Individual Search Queries. So give a small group the same energy you'd save for 100 people and take it as an opportunity to make the talk more interactive.
  • Last but not least, a cool laptop sticker helps people remember you. :-)
    Ted Drake and his dog

 

Mar 03

Accessibility + YUI – creating accessible forms

This presentation was created for the YUI Conference, November 2013, by Sarbbottam and Ted Drake. Sample code is available at GitHub. Bruce Lee toy photos courtesy of [CC] images by Shaun Wong on Flickr. Watch the full presentation (includes closed captions):


You can also view the slides:

Accessibility + YUI

Sarbbottam | Ted Drake YUI Conf 2013

“Mistakes are always forgivable, if one has the courage to admit them.”
― Bruce Lee

1:6 Medicom Bruce Lee

Inaccessible web sites are usually caused by ignorance rather than bad intentions. This presentation will introduce what is needed for accessibility and how Sarbbottam used ARIA, JavaScript, Progressive Enhancement, and semantic HTML to create a truly accessible and dynamic form. This will help you with your projects as well.

  • Perceivable
  • Operable
  • Understandable
  • Robust

The WCAG 2.0 accessibility specification focuses on the user's experience and distills it down to these four key factors. Essentially, the user needs to:

  • know what is on the page
  • be able to focus on the content
  • be able to interact with the objects
  • have a product that works with all combinations of browsers, devices, and assistive technology.

ARIA Today

Action

Now that we have the basics of accessibility, let's look at how Sarbbottam created a visually dynamic form that provides ample feedback for screen reader users. This form includes:

  • Progressive enhancement (works without JavaScript)
  • Everything is keyboard accessible
  • Works in multi-language/direction/keyboard

Let’s look at how a screen reader interprets our sample form. Watch for the following elements in this video:

  • Each form input has a clearly defined label, state, and properties, e.g. required.
  • The screen reader lets the user know how to interact with dropdown components
  • Screen changes are announced to the user.

Drop Down

This drop down button uses background images for the flag and triangle. The only text node is the country code value. But is this enough for a user? The drop down updates the button's aria-label to let the user know the button's intention. Further, after the user has chosen a country, the aria-label is updated to show its selected value.

country code dropdown button

What is this button?

This button includes a flag, a triangle, and the text “+852”. The flag and triangle are using spans with background images. What does the +852 mean? How can the user know exactly what this will do?

<a
    href="#foo"
    role="button"
    aria-haspopup="true"
    aria-label="Hong Kong (+852) Country Code for optional recovery phone number">
        <span class="flag-hk"></span>&nbsp;
        <span class="drop-down-arrow-container">
            <span class="drop-down-arrow"></span>
        </span>
        &nbsp;+852
</a>

Many times people assume their background image is providing enough information. However, background images provide no context for the screen reader or voice recognition user. This drop down button is clearly labeled with the country name, the phone number extension, and the context (optional phone number). Further, the user knows this will generate a menu via the aria-haspopup=”true” attribute. The aria-label attribute is updated when the user selects a new value.
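
The snippet below is a minimal sketch of that aria-label update, written in plain DOM JavaScript rather than the YUI Node API used in the talk; the function name and the exact label wording are illustrative assumptions, not the presentation's code.

// Hypothetical helper: refresh the button's aria-label after a country is chosen.
// The visible text node still shows only the calling code (e.g. "+852").
function updateCountryLabel(button, countryName, callingCode) {
    button.setAttribute('aria-label',
        countryName + ' (' + callingCode + ') Country Code for optional recovery phone number');
}

// Example usage, assuming the button markup shown above:
var countryButton = document.querySelector('[aria-haspopup="true"]');
if (countryButton) {
    updateCountryLabel(countryButton, 'Hong Kong', '+852');
}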

This video shows how the drop down button is announced as a popup button with the full information. This interaction uses onkeydown to grab the arrow keys; onkeypress reported the exact character code of the key pressed, which was a problem with international keyboards. The Escape key closes the drop down, and that behavior is announced as the help text. See the ARIA practices: #focus_tabindex
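
Here is a hedged sketch of that key handling in plain DOM JavaScript; the selector and the comments marking where the menu logic goes are assumptions for illustration, not the original YUI implementation.

var dropDownButton = document.querySelector('[aria-haspopup="true"]');

if (dropDownButton) {
    dropDownButton.addEventListener('keydown', function (e) {
        // keydown key codes are independent of the keyboard layout,
        // unlike the character codes reported by keypress.
        if (e.keyCode === 38 || e.keyCode === 40) { // up / down arrow
            e.preventDefault();                     // keep the page from scrolling
            // open the menu and move the highlighted suggestion here
        } else if (e.keyCode === 27) {              // Escape
            // close the drop down and return focus to the button here
        }
    });
}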

Live Regions

ARIA live regions trigger screen readers to announce content when it changes on the screen. This could be when an object is given display:block, when content is inserted via innerHTML, or similar moments.

<p
    id="password-validation-message"
    aria-live="polite"
    aria-atomic="false"
    aria-relevant="all">
</p>

The password field connects to a paragraph that displays the password's strength with aria-live=”polite”. This paragraph is empty when the page loads, but content will be inserted via JavaScript as the user creates their password. Because the region is polite, the new content will be announced after the user stops typing; use assertive instead to interrupt the user. Nothing is announced while the paragraph is empty.

<p
    id="password-validation-message"
    aria-live="polite"
    aria-atomic="false"
    aria-relevant="all">
        Password must contain 8 characters.
</p>

The paragraph now includes text. This will be announced when the user pauses. ARIA live regions can be triggered via innerHTML content changes.

<p
    id="password-validation-message"
    aria-live="polite"
    aria-atomic="false"
    aria-relevant="all">
        Not bad, but you can make it better.
</p>

Every time the content changes, the user will be notified. You are already making these presentation changes; the ARIA attributes just surface that content to the assistive technology.
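
As a rough sketch of the wiring described above (plain DOM JavaScript, with an illustrative function name rather than the presentation's actual code), updating the live region is just a matter of writing new text into it:

var strengthMessage = document.getElementById('password-validation-message');

// Hypothetical helper: called from the password field's input handler.
// Writing into the aria-live paragraph is what triggers the announcement.
function updatePasswordStrength(text) {
    strengthMessage.innerHTML = text;
}

updatePasswordStrength('Password must contain 8 characters.');
// Later, as the password improves:
updatePasswordStrength('Not bad, but you can make it better.');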

This video shows how the password strength indicator is announced as the user enters their password.

Username Suggestions

autofill suggestions for user name

The username suggestions drop down uses ARIA to define the label and possible error messages. The suggestions have the menu role. A hidden live region surfaces the suggested usernames as the user arrows through the choices.

<input
    type="text"
    id="user-name"
    autocomplete="off"
    aria-required="true"
    aria-describedby="validation"
    placeholder="Username"
    aria-labelledby="user-name-label">

The username text input turns off HTML5 autocomplete, uses aria-required to mark the required status, points aria-describedby at the potential error message, and points aria-labelledby at the label.

<p class="clipped"
    id="suggestions-read-out-container"
    aria-live="polite"
    aria-atomic="false"
    aria-relevant="all"></p>

The clipped class hides this paragraph visually. aria-live causes the changes to be announced; aria-atomic=”false” announces only the changed content, not the entire paragraph, each time; and aria-relevant=”all” announces all additions and removals.

This JS snippet shows how the content is inserted into the live region via innerHTML.

highlightSuggestion : function(suggestion) {
    if (!suggestion) {
        return;
    }

    // Read the suggestion text and highlight the hovered item.
    var readOutText = suggestion.get('innerHTML');
    suggestion.addClass('suggestions-hovered');

    // Append the "end of suggestions" message on the last item.
    if (this.selectedIndex === this.list.length - 1) {
        readOutText += this.endOfsuggestionsMessage;
    }

    // Setting innerHTML of the hidden live region triggers the announcement.
    this.suggestionsReadOutContainer.set('innerHTML', readOutText);
},
<p
    class="clipped"
    id="suggestions-read-out-container"
    aria-live="polite"
    aria-atomic="false"
    aria-relevant="all">
        bruce.ninjamaster.lee
    </p>

This video shows how the username suggestions give the user information on the available options and how to navigate them.

Validation

This form includes some basic form validation. When an input has been defined as invalid, we will add the aria-invalid=”true” attribute.

<input
    type="text"
    aria-required="true"
    aria-describedby="name-message"
    placeholder="First name"
    aria-labelledby="first-name-label">
<p
    id="name-message"
    aria-live="polite"
    aria-atomic="false"
    aria-relevant="all">
    </p>

The input is connected to the error message container via aria-describedby. The paragraph container has an aria-live attribute so the error message is announced when it is populated.

<input
    type="text"
    aria-required="true"
    aria-describedby="name-message"
    placeholder="First name"
    aria-invalid="true"
    aria-labelledby="first-name-label">
<p
    id="name-message"
    aria-live="polite"
    aria-atomic="false"
    aria-relevant="all">
        Enter Name
</p>

Add aria-invalid=”true” to the input when it is defined as invalid. The error message will be announced as soon as it is populated due to the aria-live attribute. The error message will also be announced when the user places focus in the input.
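
A minimal sketch of that flow in plain DOM JavaScript (the helper names and the lookup via aria-describedby are assumptions for illustration, not the code used in the presentation):

// Hypothetical helpers: flag the input and populate its aria-live error container.
function markInvalid(input, messageText) {
    input.setAttribute('aria-invalid', 'true');
    var messageId = input.getAttribute('aria-describedby');
    // Populating the live region is what makes the screen reader announce the error.
    document.getElementById(messageId).innerHTML = messageText;
}

function markValid(input) {
    input.removeAttribute('aria-invalid');
    document.getElementById(input.getAttribute('aria-describedby')).innerHTML = '';
}

// Example usage with the first name input shown above:
// markInvalid(firstNameInput, 'Enter Name');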

This video shows the first and last name inputs. The initial focus announces the placeholder, label, and required state. It also shows that inputs in an error state are announced as invalid, and the error message is read as the help text. NVDA and JAWS on Windows will announce the error message without the delay.

Yahoo User Interface Library

Accessibility is built into all YUI widgets: they include ARIA, keyboard accessibility, and HTML best practices. Use them with confidence. Please note: third-party components within the gallery may not be accessible.

Feb 11

QuickBooks Desktop Accessibility

Find out how QuickBooks Desktop for Windows was rebuilt to make it accessible. QuickBooks for Desktop was originally developed before Microsoft's accessibility APIs existed. The program was built upon custom-drawn elements, and its accessibility was always minimal.

However, a small group of developers and users worked together in 2013 to fix the issues within the core and added screen reader scripting to make QuickBooks 2014 accessible.

This presentation was developed for the ATIA 2014 conference in Orlando to show what is possible, even with a legacy product, when there is a commitment to making an accessible product.
