Web4All Conference Notes – Day 3

Crowdsourcing accessibility evaluations

2013–2016: 350 government websites and 2,000 non-government sites have been evaluated for accessibility in China

Conformance testing included:

  • Automatic Evaluation
  • Manual Assessment

Crowdsourcing can harness the power of crowds to address the manual assessment bottleneck.

Crowdsourcing was proposed in 2006 and has been used for reCAPTCHA, Spoken Wikipedia, and labeling tasks.

Current crowdsourcing approaches are not well suited to web accessibility, because the assessment tasks require a high level of expertise and experience.

Tasks were assigned to workers, and the results were compared on:

  • Total work
  • Timeouts
  • Give-ups
  • Errors detected

An algorithm was developed to compare these values and derive a cost model. This lets them use historical data to determine which rulesets a person evaluates most efficiently. For instance, a completely blind assessor may be great at form labels but not at color contrast.
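A minimal sketch of how such a cost model could use the historical metrics above to match assessors to rulesets; the record fields, weighting, and function names are assumptions for illustration, not the authors' actual algorithm:

// Hypothetical sketch in TypeScript: score each assessor per ruleset
// from historical results and pick the best fit for a new task.
interface HistoryRecord {
  assessorId: string;
  ruleset: string;        // e.g. "form-labels", "color-contrast"
  secondsSpent: number;
  timedOut: boolean;
  gaveUp: boolean;
  errorsDetected: number;
}

// Lower cost = better fit: fast, rarely abandons, finds many errors.
function cost(records: HistoryRecord[]): number {
  if (records.length === 0) return Number.POSITIVE_INFINITY;
  const avgTime = records.reduce((s, r) => s + r.secondsSpent, 0) / records.length;
  const abandonRate =
    records.filter((r) => r.timedOut || r.gaveUp).length / records.length;
  const avgErrors = records.reduce((s, r) => s + r.errorsDetected, 0) / records.length;
  return (avgTime * (1 + abandonRate)) / (1 + avgErrors);
}

// Choose the assessor with the lowest historical cost for a given ruleset.
function bestAssessor(history: HistoryRecord[], ruleset: string): string | undefined {
  const byAssessor = new Map<string, HistoryRecord[]>();
  for (const r of history) {
    if (r.ruleset !== ruleset) continue;
    const list = byAssessor.get(r.assessorId) ?? [];
    list.push(r);
    byAssessor.set(r.assessorId, list);
  }
  let best: string | undefined;
  let bestCost = Number.POSITIVE_INFINITY;
  for (const [id, records] of byAssessor) {
    const c = cost(records);
    if (c < bestCost) {
      bestCost = c;
      best = id;
    }
  }
  return best;
}

The idea is simply that an assessor who is fast, rarely abandons tasks, and detects many errors on a given ruleset gets a low cost and is preferred for future tasks on that ruleset.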

Assessment of semantic taxonomies for blind indoor navigation based on a shopping center use case

Location-based services (LBS)

  • Many LBS are available thanks to smartphones
  • They provide turn-by-turn navigation support using vocal instructions
  • We know little about which environmental elements and features are useful, such as tactile paving or braille buttons

They did a survey of existing taxonomies.

Looking at these data sets, they created a simplified taxonomy based on their similarities:

  • Pathways
  • Doorways
  • Elevators
  • Venues
  • Obstacles (not included in the previous taxonomies)

These elements are defined by their fixed positions within the floor map, and that information is used to generate vocal instructions, for example to locate tactile paving:

  • “proceed 9 meters on braille blocks, and turn right”
  • “proceed 20 meters, there are obstacles on both sides”
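A rough sketch of how instructions in this style could be assembled from annotated route segments on the floor map; the data model and field names are assumptions, though the phrasing mirrors the quoted examples:

// Illustrative data model: segments of a computed route, annotated with
// taxonomy elements (tactile paving, obstacles) found along each segment.
type Turn = "left" | "right" | "straight";

interface RouteSegment {
  meters: number;
  hasTactilePaving: boolean;      // "braille blocks" in the quoted example
  obstaclesOnBothSides: boolean;
  turnAtEnd: Turn;
}

// Build one vocal instruction per segment, in the style of the examples above.
function vocalInstruction(seg: RouteSegment): string {
  const parts: string[] = [];
  parts.push(
    seg.hasTactilePaving
      ? `proceed ${seg.meters} meters on braille blocks`
      : `proceed ${seg.meters} meters`
  );
  if (seg.obstaclesOnBothSides) {
    parts.push("there are obstacles on both sides");
  }
  if (seg.turnAtEnd !== "straight") {
    parts.push(`and turn ${seg.turnAtEnd}`);
  }
  return parts.join(", ");
}

// Reproduces the first quoted instruction:
console.log(vocalInstruction({
  meters: 9, hasTactilePaving: true, obstaclesOnBothSides: false, turnAtEnd: "right",
}));
// -> "proceed 9 meters on braille blocks, and turn right"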

Announcements of obstacles and tactile paving were confusing and unnecessary for one guide dog user.

Do web users with autism experience barriers when searching for information within web pages?

The study used eye-gaze tracking to see whether there was a difference between two groups: participants with and without autism.

Across a series of search tasks, the group with autism completed the tasks less successfully than the control group.

Eye gaze was tracked across five page elements: a, b, c, d, e. A participant's gaze path might be a-b-c-e-d.

The variance between the two groups was then compared.
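The notes don't record the exact analysis, but one common way to compare gaze paths such as a-b-c-e-d against an expected order is an edit-distance measure over the element sequences; a sketch of that (assumed) approach:

// Levenshtein distance between two gaze sequences over page elements,
// e.g. ["a","b","c","d","e"] vs ["a","b","c","e","d"].
// This is a standard scanpath comparison, not necessarily the study's method.
function scanpathDistance(p: string[], q: string[]): number {
  const d: number[][] = Array.from({ length: p.length + 1 }, (_, i) =>
    Array.from({ length: q.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= p.length; i++) {
    for (let j = 1; j <= q.length; j++) {
      const sub = p[i - 1] === q[j - 1] ? 0 : 1;
      d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub);
    }
  }
  return d[p.length][q.length];
}

// Distance between the intended reading order and one participant's gaze path.
console.log(scanpathDistance(["a", "b", "c", "d", "e"], ["a", "b", "c", "e", "d"])); // 2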

DysMusic

The #DysMusic study is creating a language independent test for detecting #dyslexia in children. #w4a2017 @luzrello

Most dyslexia detection tools are still linguistics-based, which isn't appropriate until the child is already 7–12 years old. This study tries to find a non-language-based detection method, which would allow detection at a much younger age.

There is a memory game with music elements.

Tasks

  • Find the matching sounds
  • Distinguish between sounds
  • Perceive short time intervals

Raw sound is modified by frequency, length, rise time, and rhythm; only one property is modified at a time. People with dyslexia tend to have trouble detecting rise-time changes.
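A small sketch of the one-property-at-a-time idea; the property names and values are illustrative, not the study's actual stimuli:

// Represent a sound stimulus and derive a comparison stimulus that
// differs in exactly one property.
interface Stimulus {
  frequencyHz: number;
  lengthMs: number;
  riseTimeMs: number;
  bpm: number; // rhythm / tempo
}

// Produce a (reference, modified) pair where only `property` changes.
function makePair(
  reference: Stimulus,
  property: keyof Stimulus,
  factor: number
): [Stimulus, Stimulus] {
  const modified: Stimulus = { ...reference };
  modified[property] = reference[property] * factor;
  return [reference, modified];
}

// Example: a rise-time discrimination trial, the property the notes say
// people with dyslexia tend to struggle with.
const [referenceTone, comparisonTone] = makePair(
  { frequencyHz: 440, lengthMs: 500, riseTimeMs: 15, bpm: 120 },
  "riseTimeMs",
  2
);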

Accessibility Challenge

Producing Accessible Statistics Diagrams in R

Data visualization is increasingly important, and R is an established language for statistics. Jonathan (co-author) had been using R to output printed statistical diagrams. They worked together to make R produce diagrams in an accessible SVG format.

Histograms and boxplots, which present discrete data, were the layouts targeted for the initial project. Time series and scatter plots are continuous data graphs.

The important data points are extracted, converted to an XML document, and attached to the SVG. The final experience provides easy navigation (arrow keys) and supports screen readers via ARIA live regions.
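A minimal sketch of the interaction pattern described here: arrow-key navigation over data points attached to the SVG, announced through an ARIA live region. The element IDs and data attributes are assumptions, not the project's actual output format.

// Assumes: an <svg id="chart" tabindex="0"> whose data elements carry
// data-label / data-value attributes, plus a visually hidden
// <div id="chart-live" aria-live="polite"> used for announcements.
const chart = document.getElementById("chart")!;
const live = document.getElementById("chart-live")!;
const points = Array.from(chart.querySelectorAll<SVGElement>("[data-value]"));
let index = 0;

// Put the current data point's label and value into the live region so
// screen readers announce it.
function announce(i: number): void {
  const el = points[i];
  live.textContent = `${el.dataset.label}: ${el.dataset.value}`;
}

chart.addEventListener("keydown", (e) => {
  if (e.key === "ArrowRight") {
    index = Math.min(index + 1, points.length - 1);
  } else if (e.key === "ArrowLeft") {
    index = Math.max(index - 1, 0);
  } else {
    return;
  }
  e.preventDefault(); // keep the arrow keys from scrolling the page
  announce(index);
});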

GazeTheWeb

GazeTheWeb is a simplified browser designed for eye tracking navigation. #w4a2017 #a11y

Math Melodies

Math Melodies makes math easier to learn for children who are blind or have low vision. It presents math exercises as puzzles, with audio icon maps and a variety of exercise types. It was funded via crowdfunding and has been downloaded 1,400 times.

NavCog

NavCog is a navigation project from CMU for blind individuals. It uses Bluetooth Low Energy beacons.

Installation of the beacons is not scalable across large areas. To crowdsource the task, they created a set of instructions that walks volunteers through configuring and installing the beacons.

LuzDeploy

LuzDeploy is a Facebook Messenger bot, which makes it easy to use.

VizLens

VizLens is a crowdsourced interpretation of physical interfaces, such as a microwave oven. Multiple volunteers are recruited to generate labels for the interface; the app then uses augmented reality to virtually overlay the labels.

Chatty Books

Chatty Books is an HTML5 + DAISY reader that creates an audio version of documents. It can now convert from PDF to multimedia DAISY.

  1. PDF – NiftyReader (text)
  2. Export to multimedia DAISY or EPUB3
  3. Drag and drop into Chatty Books, the DAISY player and library
  4. Upload DAISY content to the Chatty Books service (cloud) and use the Chatty Books app on iPad

Able to read my mail

A simplified email program for people with learning and intellectual disabilities: a Gmail plugin that converts messages to simplified text or pictograms.

Closed ASL Interpreting for online videos

The researchers created a framework for incorporating an interpreter: closed interpreting, instead of closed captioning.

The interpreter window needs to be flexible, allowing the user to move it around and change its size to reduce distractions. It's closed, so it can be turned on and off.

Moving the eyes back and forth for long periods can be exhausting, so the window can be moved closer to the screen's content.

Eye-gaze tracking is used to pause the video when the viewer looks away from it.

Closed Interpreting [CI]

The project provides a video interface that allows closed interpreting, analogous to closed captioning. The interface provides a second screen that includes an ASL interpreter.

The users appreciated the ability to customize the interpreter's location. They also liked that playback pauses as the gaze moves from the content to the interpreter.

Difference between :root and html

The :root selector targets the highest-level parent element, which is the <html> element in an HTML document. :root has higher specificity because it is a pseudo-class rather than a type selector.

CSS-Tricks has a great description of this: :root by Sara Cope.

In the example below, the background of the page would be red, as :root is more specific than html.

:root { background: red; }
html { background: green; }

The :root selector is supported across all major browsers.

Keyboard Accessibility with the Space Bar

Keyboard accessibility is critical for users who depend on voice recognition, onscreen keyboards, screen readers, and ergonomic accessories, and for power users who prefer to avoid grabbing the mouse for every task. Most people test to make sure every interactive element can receive focus via the tab key and that buttons and links work with the enter key. We've gotten pretty good at making sure links have href attributes and using role="button" on our pseudo-buttons.

But this doesn't cover a significant portion of keyboard accessibility: the space bar. This page includes various interactive elements to see which ones will work with the space bar.

As a developer, you don't have to worry about whether an element works correctly with the space bar if you use the proper semantic tag. But if you find yourself reassigning roles with ARIA, then you may need to listen for the space bar key.
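As a sketch (the selector and handler here are just for illustration), such a listener for an ARIA pseudo-button might look like this:

// For a pseudo-button like <span role="button" tabindex="0">Save</span>,
// native <button> behavior (activate on Space and Enter) must be added by hand.
document.querySelectorAll<HTMLElement>('[role="button"]').forEach((btn) => {
  btn.addEventListener("keydown", (e) => {
    // " " is the Space key; also handle Enter for parity with real buttons.
    if (e.key === " " || e.key === "Enter") {
      e.preventDefault(); // stop Space from scrolling the page
      btn.click();        // trigger the same handler a mouse click would
    }
  });
});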