Artificial Intelligence and Accessibility – GAAD 2020 – Hello A11y


Machine learning and artificial intelligence are terms used to describe the process of building complex interactions based on large amounts of data. While the industry has been evolving steadily since the 1950s, we are now at a point where there is universal access to the technology. This has had a great impact on assistive technology and on how we build solutions for today and strategies for the future. This presentation was created for the Hello A11y conference to celebrate Global Accessibility Awareness Day 2020.

Today’s agenda

  • Machine Learning vs. Artificial Intelligence
  • Recent evolution
  • Where do we go from here?
  • Key Trends in AI for Accessibility

Machine Learning and Artificial Intelligence

Machine Learning

The computer learns by exploring data, discovering connections, and solving a problem.

A car learns how to analyze risks at an intersection.

Artificial Intelligence

The computer takes the initiative to do something based on what it has learned.

A car takes corrective action to avoid an accident.
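The split between the two ideas can be shown with a toy sketch. This is a hypothetical illustration, not code from any real driving system: the "machine learning" half fits a risk threshold from labeled data, and the "artificial intelligence" half acts on what was learned.

```python
# Toy sketch (hypothetical, not from any real driving system): the
# ML half learns a risk threshold from data; the AI half acts on it.

def learn_risk_threshold(observations):
    """ML: find the lowest approach speed (km/h) at which a near miss occurred."""
    near_miss_speeds = [speed for speed, near_miss in observations if near_miss]
    return min(near_miss_speeds)

def choose_action(current_speed, threshold):
    """AI: take corrective action based on the learned threshold."""
    return "brake" if current_speed >= threshold else "proceed"

# Historical intersection data: (approach speed, was there a near miss?)
data = [(20, False), (35, False), (50, True), (65, True)]
threshold = learn_risk_threshold(data)   # 50

print(choose_action(60, threshold))      # brake
print(choose_action(30, threshold))      # proceed
```

The point is the division of labor: learning discovers the pattern, and intelligence takes the initiative to act on it without being asked.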

AI is not special

Anyone can work on AI.

Where do we go from here?

Make it Simple

Apple Watch Detection

The Apple Watch includes two features that require no user input: it can detect irregular heartbeats and falls. In fact, not interacting with the screen is itself an interaction; if the wearer does not respond after a fall is detected, the watch sends a request for help.

QuickBooks Mileage Tracking

The QuickBooks mobile app creates mileage reports. Simply give it location permission and it tracks the start, finish, and mileage of each trip and suggests potential expense deductions. You can be the driver or a passenger. The average user finds 37% more expense deductions.
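The core of automatic mileage tracking can be sketched in a few lines. This is an assumption about how such apps generally work, not QuickBooks' actual implementation: sum the great-circle distance between consecutive GPS fixes recorded during a trip.

```python
import math

# Minimal sketch of automatic mileage tracking (a general assumption,
# not QuickBooks' actual code): sum the great-circle distance between
# consecutive GPS fixes for a trip.

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def trip_distance_km(fixes):
    """Total distance along an ordered list of GPS fixes."""
    return sum(haversine_km(fixes[i], fixes[i + 1]) for i in range(len(fixes) - 1))

# A short drive through San Francisco, as three GPS fixes.
fixes = [(37.7749, -122.4194), (37.7793, -122.4193), (37.7849, -122.4094)]
print(round(trip_distance_km(fixes), 2))
```

The app's job beyond this arithmetic is classification: deciding automatically where one trip ends and the next begins, and whether it was business or personal.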

Make it Work

Seeing AI by Microsoft

Seeing AI is a great app from Microsoft, but much of its functionality, including object detection, barcode scanning, facial recognition, and currency identification, can also be found in OrCam, Lookout, and other applications. Seeing AI, however, has a feature that makes it easier for people to successfully scan product barcodes. It uses AI to scan the object and detect the barcode before it is completely visible, then gives audio directions to rotate the object until it can scan the code.
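The guidance loop behind that feature can be sketched as a toy function. This is a hypothetical model of the idea, not Microsoft's code: the detector reports roughly how the barcode is angled relative to the camera, and the app speaks a rotation hint until the code faces the lens.

```python
# Hypothetical sketch of the audio-guidance idea (not Microsoft's code):
# the detector estimates the barcode's angle to the camera, and the app
# speaks a hint until the code is square to the lens.

def rotation_hint(barcode_angle_deg, tolerance=15):
    """Angle 0 means the barcode faces the camera squarely."""
    if abs(barcode_angle_deg) <= tolerance:
        return "hold steady, scanning"
    return "rotate left" if barcode_angle_deg > 0 else "rotate right"

# As the user rotates the can, the spoken hints converge on a scan.
for angle in (120, 40, 10):
    print(rotation_hint(angle))  # rotate left, rotate left, hold steady, scanning
```

The accessibility win is that the feedback starts before the barcode is fully visible, so the user never has to hunt for it by trial and error.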

Safe Exit for All

Researchers at Wichita State University have developed Safe Exit For All, an emergency exit navigation system that dynamically adjusts for danger zones. It provides customized directions based on the user's disability, such as a mobility or vision impairment.
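The dynamic-rerouting idea can be illustrated with a small graph search. This is my own toy model under stated assumptions, not the Wichita State system: find the shortest exit route over a floor graph while skipping rooms flagged as danger zones. A user profile could further prune edges, for example dropping stairway edges for a wheelchair user.

```python
from collections import deque

# Toy model of danger-aware exit routing (not the Safe Exit For All
# implementation): BFS over a hallway graph, skipping danger zones.

def safest_exit_route(graph, start, exits, danger):
    """Return the shortest path from start to any exit that avoids danger zones."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] in exits:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in danger:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no safe route exists

floor = {
    "office": ["hall_a", "hall_b"],
    "hall_a": ["office", "exit_1"],
    "hall_b": ["office", "exit_2"],
    "exit_1": [], "exit_2": [],
}
# Fire reported in hall_a: the route dynamically shifts toward exit_2.
print(safest_exit_route(floor, "office", {"exit_1", "exit_2"}, {"hall_a"}))
# ['office', 'hall_b', 'exit_2']
```

Rerunning the search whenever a new danger zone is reported gives the "dynamic adjustment" the source describes.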

Make it Affordable

Detecting Dyslexia and Autism

Change Dyslexia’s Dytective game can detect a child’s risk of having dyslexia. A collaboration of researchers in Europe has created a method to diagnose autism by tracking a person’s eye movements within a web page. These AI-driven tools allow earlier diagnosis and treatment at minimal cost.

Independent Developers

Satya Panda and Dr. Bindu Sravani Nayak used open-source machine learning libraries and affordable data servers to create an application that differentiates oral diseases based on multiple factors and simplifies the process of diagnosis and treatment. This allows dentists to see more patients and increases treatment efficacy. They are an example of the democratization of ML/AI and of the niche projects that anyone can now build.

Make it for Everyone

QuickBooks Capital

Unconscious bias is a significant barrier to small business funding. QuickBooks Capital analyzes 26 billion transactions and data points to provide loans based on a business’s ability to repay. 60% of customers say they didn’t qualify for loans before QuickBooks Capital.

Android Live Transcribe

Android’s Live Transcribe brings live captioning to Android phones. It supports 70 languages and runs its neural network on the device, completing speech-to-text transcription without depending on a network connection. Google has also introduced sound detection and archived transcriptions, and recently open-sourced the project to encourage transcription integration in more products.

Make it Awesome

Indoor Navigation via computer vision

The Smith-Kettlewell Institute is researching the use of computer vision to provide indoor navigation without architectural additions such as Bluetooth beacons. The project uses a combination of building layout drawings and recognizable landmarks, such as exit signs. It uses the phone’s camera and AI to determine the person’s location and to give step-by-step directions.

Clew is an app that uses computer vision to create path tracing. A person is guided to a location, for instance from a meeting room to the restroom. The mobile device records the landmarks along the way and creates a path to lead the person back to their original starting point.
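The retrace step can be sketched as a tiny pure-Python model. This is an assumption about the general idea, not Clew's actual code: record each movement on the way out, then reverse the log and flip each turn to guide the person back.

```python
# Toy sketch of path tracing (an assumption about the general idea,
# not Clew's code): record the outbound moves, then reverse the log
# and flip each turn to lead the person back.

INVERSE = {"forward": "forward", "left": "right", "right": "left"}

def return_path(recorded_moves):
    """Reverse the recorded route and invert each turn."""
    return [INVERSE[move] for move in reversed(recorded_moves)]

# Meeting room -> restroom: forward, turn left, forward.
outbound = ["forward", "left", "forward"]
print(return_path(outbound))  # ['forward', 'right', 'forward']
```

The real system anchors these moves to visual landmarks from the camera, so drift in the recorded path can be corrected against what the phone actually sees.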

Voice Recognition for everyone

Mozilla’s Common Voice is an initiative to teach machines how real people speak. Volunteers are building a giant data set of voice recordings covering all accents, languages, and speech abilities. VoiceITT and Google’s Project Euphonia are focusing specifically on understanding dysarthric speech.

Learn More

