I don’t know about you, but I have a terrible time typing “accessibitily”. That right there is the typo that keeps popping up: my fingers get tangled typing the last six letters. So here’s a little tip for my fellow Mac users; I’m sure there is a similar option for PC users.
Bangalore Accessibility Week, October 6–10, 2014. Ted Drake, Intuit Accessibility
- This presentation borrows heavily from the great presentations given by the Google Accessibility team at Google I/O 2014
- Android Accessibility Videos
- Illustration by unknown: http://www.pinterest.com/pin/559501953680642594/
The following was written by a Yahoo! engineer to help his co-workers understand his deafness. He wrote this to answer the myriad of questions and to make meetings more productive. It was originally published on the Yahoo! Accessibility Lab’s web site.
I was diagnosed with a hearing loss when I was four years old. I’d had the hearing loss all along, they just didn’t figure it out until then. I had no hearing at all in my right ear, and a progressive loss in my left ear that gradually got worse over the years until I could no longer hear at all by the time I was 17. While I still had some hearing, I was able to hear better with a hearing aid that amplified sound, but once my hearing loss became too severe, hearing aids could no longer help me.
When I was 18, I got an early, experimental cochlear implant that enabled me to hear again, not quite as well as I could hear several years earlier with a hearing aid, but enough that I could understand speech with the aid of lipreading. This device never received FDA approval, and I stopped using it in 1994, then had it removed in 2000.
From 1994 to 2009, I had no usable hearing at all. Strictly speaking, I could still hear in my left ear, but only very loud noises. VERY loud noises, like, say, a running jet engine… if I were close enough. For example, a few years ago, there was a fire drill at Hewlett Packard. The alarms were loud enough that many people were covering their ears with their hands. I still could not hear that at all.
In September 2009, I got a new cochlear implant in my left ear. In February 2010, I got another one in my right ear, but this one was particularly complicated because of the prior two surgeries on that ear, so it doesn’t work nearly as well as the one in my left ear. Additionally, this last surgery messed up my sense of balance and left me with severe dizziness for several weeks, which, at this writing, still hasn’t entirely abated. I can get especially dizzy when standing up after sitting for an extended period. So if you see me staggering awkwardly down the hall, I’m not necessarily drunk.
Incidentally, the ability to locate sounds requires two ears. Up until I was 17, I could only hear in my left ear and later, only in my right ear. So until February, I could never hear in both ears, and hence, I was never able to locate the direction of sounds. Now that I can actually hear in both ears, in theory, I should be able to do that, but so far, that ability hasn’t manifested.
Why am I deaf?
When doctors finally figured out that I had a hearing loss, they could only speculate as to why. I was the eighth of ten children, and I’m the only one with a hearing loss. So they assumed that it was a result of circumstances around the time of my birth — I was born not breathing, I got sick, I was given antibiotics, etc. — and doctors speculated that one of these could have caused my hearing loss. However, several years ago, one of my brothers had a deaf son. And then a deaf daughter. And after another daughter with normal hearing, a second deaf daughter came along. At some point along the way, the first two were given genetic tests and tested positive for a syndrome that typically causes a progressive hearing loss, worse in one ear than the other. I only learned this last year.
My deaf nephew and nieces have the same kind of cochlear implant I got. My youngest deaf niece got her implant just a few weeks ago.
So what can I hear now?
Oddly, the implant I received in 2009 is far more advanced than the one I got back in 1988. With this new implant, I’ve been hearing better than I’ve ever heard in my life, hearing things I’ve never heard before. The technology is absolutely amazing.
However! I am still severely hearing impaired. Despite hearing things I’ve never heard before, it’s still not normal hearing.
First, this is still relatively new to me. When I first turned on my left ear in September 2009, everything was just random noise, with voices sounding no different than a car engine. I had to learn how to hear all over again, and I’m still learning and over time, I may be able to understand speech better than I can now.
Second, there are fundamental limits. I have 16 electrodes implanted in my left cochlea, and only 15 of them work. In my right cochlea, there are only 12 functioning electrodes. The entire range of possible sound frequencies has to be split across these 12 or 15 electrodes. They do some innovative programming to make adjacent electrodes work together to create an extra “virtual” electrode, increasing the overall range possible. But it’s still a fairly limited range.
This means that I’m not able to hear the difference between sounds that are close on the frequency range. For example, it’s very difficult for me to hear the difference between the ‘k’, ‘t’, and ‘p’ sounds. I can barely hear the ‘s’ sound at all.
More generally, I’m not able to hear high pitch sounds as well as low pitch sounds. So I can’t understand higher female voices as well as I can understand low male voices, for example.
Because that’s a limitation in the technology, I may never be able to improve on that. Or perhaps someone will be able to improve the technology such that it will improve my ability to hear differences between similar sounds.
How does that apply to practical situations?
I generally drive with my radio set to a public radio station, so I can practice listening to speech without any visual cues. Mostly, I don’t understand anything. Occasionally, I can understand particularly long words, and then that gives me a clue, and I start picking up parts of other phrases. So for example, I might catch the word “economy”, and then I know it’s probably a news report on the current state of the economy, and that gives me a clue as to what to listen for and I start catching phrases about Congress discussing some bill, or some such. And then the topic changes, and I’m lost again.
I’m told that radio people are selected for their clear speech, so presumably that’s an ideal situation. And I can just barely understand occasional words and phrases from people with a presumably clear speaking ability. As a point of comparison, I have not yet been able to understand anything my teenage children speak, but I’m told that they mumble to each other (they normally use sign language with me and their mother).
If you’ve met me, you know that I can generally understand more in a face-to-face conversation with you than I do from the radio. That’s because lipreading gives me extra visual clues to help me figure out what you’re saying. For example, if you were to say “cat” or “pat”, since I have a hard time hearing the difference between the ‘k’ and ‘p’ sounds, I would not be able to hear which word you spoke, but the way your lips move to say “cat” is very different from the way they move to say “pat”, and so I can identify what you say by the complementary combination of what I hear and what I lipread.
And no, I can’t just lipread, because lipreading is a lot more ambiguous than my hearing. For example, with lipreading, the ‘m’, ‘b’, and ‘p’ sounds all look the same (but sound very different!). So I cannot lipread the difference between “bat”, “mat”, and “pat”. But I can hear the difference, so that’s where I need to both hear and lipread to be able to understand speech.
And even then, it’s still an iffy deal. If there’s extra noise, like other people speaking or a loud fan, then that negates my ability to hear. For that matter, my ability to understand is so sensitive that I can be thrown off just by someone with a sore throat that changes his voice. I’m finding that just the wind blowing into my implants’ microphones is enough to drown out your voice.
Meetings are especially difficult for me. The main speaker isn’t standing face-to-face with me, so lipreading is more difficult. If the speaker turns to face the whiteboard or someone at the opposite side of the room, then I can’t lipread at all. And worse, when someone somewhere at the table starts speaking, I have to look around until I can figure out who is speaking (see above, as to why I can’t locate the direction of sound), and then hope it’s someone facing me … and in the meantime, I can’t understand anything they’re saying. And if they’re not facing me at all, then I’m still lost.
Beyond that, at this point, since listening to speech is a new thing for me, after a 15 year hiatus with no hearing at all, it’s surprisingly exhausting. It’s a strain for me to listen and try to understand speech. It seems odd to think of it this way, perhaps, but you’re hearing every day, 24 hours a day (even when you’re sleeping, you’re still hearing), all your life, so it’s ingrained for you. But I’ve only been hearing again for a few months, and I turn off my implants at least every night, sometimes longer, so it’s remarkably exhausting. These last few months with my restored hearing, any time I have a particularly heavy day of listening, I find that it wipes me right out.
Ouch! That’s bad, what do we do?
In meetings, it would greatly help me if you could provide me with a written agenda of what will be discussed. When I know the context of what is being said, that makes it much easier to follow what’s being said. Give me a heads-up when you change the topic, so I’m not trying to shoehorn your speech into the prior context.
Face me when you’re speaking. If I’m not looking at your face when you’re talking, then I’m not understanding anything you’re saying. This is especially difficult when multiple people are speaking. If you could raise your hand until you have my attention before you speak, that would ensure I have the best possible chance of understanding what you say.
If you take notes, share them with me after the meeting. Listening to speech takes my entire focus, so if I try to take notes, I lose focus and lose track of what’s being said. And if you have any action items that involve me, make sure that I have them, preferably in writing.
What about sign language?
Well sure, I know ASL. But who else does? Not enough people around here to make it useful for me. And my pet peeve is that interpreting doesn’t do a good job of translating the technical jargon used in IT. Case in point: at the open house where I applied for this job, I had an ASL interpreter helping me out, which was a good thing, because the crowd noise pretty much negated my hearing in that situation. But as I was talking to someone in the Search group, the interpreter signed “D – B – M”, which completely confused me, because it didn’t fit the context at that moment. We paused and worked out that what was said was “Debian.” The interpreter heard “Dee bee enn”, and incidentally, “n” and “m” are another pair of sounds that are very close on the sound frequency scale, so she thought it was “m”, and that’s how you get from “Debian” to “DBM”. So ASL isn’t really useful for interpreting technical discussions, although it does come in handy for general conversation.
Responsive web design, creating a single page that morphs with the view port size, is a major feature of modern web design. There are many factors to consider for performance and accessibility. This article will touch on responsive design’s impact on image accessibility.
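As a minimal sketch of the kind of markup at issue (the filenames, breakpoints, and alt text here are hypothetical, not taken from the article), a responsive image can serve different sources per viewport while keeping a single text alternative that reaches every user:

```html
<!-- Hypothetical sources: the browser picks one image per viewport,
     but the alt text on the <img> fallback describes all of them,
     so screen reader users get the same description at any size. -->
<picture>
  <source media="(min-width: 800px)" srcset="chart-large.png">
  <source media="(min-width: 400px)" srcset="chart-medium.png">
  <img src="chart-small.png"
       alt="Bar chart comparing page load times across three devices">
</picture>
```

Note that the accessible name lives on the `<img>` element, not on the `<source>` elements, so it does not need to be repeated for each variant.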
Create a vocabulary list for sign language interpreters
Before I give a presentation, I spend a few minutes going through the slides and writing down terms that may not be easy to interpret. This may include technology and coding terms, names of people or products, and terms that are not relevant to the discussion. I try to print this in advance, but sometimes I simply keep a note in Evernote, and show this to the interpreters prior to the presentation.
The response from the interpreters has always been very positive, and your efforts will be appreciated. Here is the vocabulary list I created for my CSUN 2013 presentation, Infographics: making an image speak a thousand words.
Non-standard words within the presentation:
- First infographic sample has “Mahatma Gandhi”
- Travel infographic: MapQuest
- Search Engine Optimization (SEO)
Keep Visuals Minimal
I try to keep my slides minimal for many reasons. One is to let the blind members of the audience know they are not missing anything visually. I want people to spend less time looking at the screen and more time listening to what I have to say. Perhaps that is selfish, but I believe it provides more of an equal experience for all.
I will start the presentation by telling the audience the slides will not have significant content and that I will describe what is on the screen when it is relevant. This presentation on Mobile Accessibility is a good example: some slides included screenshots of mobile products, and I described the products and their features; other slides included images that did not need to be described.
Upload your presentation to SlideShare prior to the event
Keeping a minimal slide design can be frustrating for those in the audience who want to take photos or notes about resources mentioned in your presentation. We’ve all seen and heard people taking photos during a presentation, mainly because that moment may be their only chance to capture a link or reference.
I always post my presentation to SlideShare before the event and give the link on the first slide. Letting people know in advance that they can download the slides lets the audience relax and listen to what I have to say. If possible, I will tweet the URL before the conference starts to give the audience a preview.
It’s all about the presenter’s notes
My philosophy is to keep the slides minimal but put the important information in the presenter’s notes. This is a feature of Keynote and PowerPoint that allows you to leave comments about each slide. Most people use this as a reminder of what to say, without making it public. I use it to publicize the resources for each slide.
Prior to uploading to SlideShare, I create a PDF version of the presentation and make sure it includes the speaker notes. SlideShare will parse that PDF and include the speaker-note resources within its transcription. Here’s an example from an iOS 7 Accessibility presentation I gave at the Mobile+Web conference. It also helps to uncheck the option within Keynote to include the date on slides, which otherwise gets annoying.
Make your presentation accessible
While it’s commendable that SlideShare is able to parse the PDF and create a transcript, this is not the most accessible way to view the content. I use this transcript as the basis for a blog post that combines the SlideShare version of the presentation, embeds of included videos, and a semantic representation of the slides and the relevant speaker notes. This is what I consider to be the final result of a conference presentation.
This wrap up of the presentation YUI + Accessibility includes the slides, a video recording of the talk, links to resources, and the relevant information from each slide and sample code.
- I taught at Palomar College for 7 years and have a degree in Radio and Telecommunications, so I’m no stranger to standing in front of people and talking. Practice makes perfect, so take any opportunity you can to speak in public. Local meetups are a great opportunity to speak to small groups about a subject you know well.
- Watch Christian Heilmann speak whenever you have the opportunity. I am always energized and inspired by his presentations. Further, you know he’s always going to say something new. I believe it’s important to avoid canned presentations and treat each audience with respect by at least customizing the presentation for each event.
- Christian has also created a great article that has helped me significantly: A few tricks about public speaking and stage technology. His suggestions about using technology and prep are tips you’ll only learn from constant practice.
- Avoid coffee! This is something I’ve learned the hard way. I can go on some massively bizarre detours while talking on a caffeine buzz. I’ll have a cup of coffee in the morning, but avoid caffeine for a few hours prior to speaking. However, hot lemon tea is your friend. This is an old radio trick, as it helps clear your throat. Also keep water handy on the podium.
- Arrive early and watch the prior speaker. This shows respect for your fellow speaker and gives you a chance to watch the audience reactions, technology snafus, and get an idea of the knowledge level of the crowd.
- Use social media to extend your presentation beyond the room. Announce everything on Twitter, including particularly helpful links mentioned in the presentation. Just don’t get spammy. Announce your Twitter handle on the intro slide for those live tweeting your talk.
- Small audiences are a good thing. It’s great to look out at a packed room and feel important, but some of my best experiences have been with fewer than a dozen people in the room. I gave one presentation about building search engines in London where the questions and answers led to a patent: Creating Vertical Search Engines for Individual Search Queries. So give a small group the same energy you’d bring to 100 people, and take it as an opportunity to make the talk more interactive.
- Last but not least, a cool laptop sticker helps people remember you.