Learn from extraordinary people working to advance accessibility through innovation. Acclaimed neuroscientist David Eagleman is using wearable technology to translate data and feed it to the brain through the skin. Haben Girma has refused to let being deafblind limit her, and believes that inclusion is a choice. Jyotsna Kaki lost her sight as a young adult and is now an Accessibility Testing Program Manager at Google. All are dedicated advocates for accessible technology.
These are part of the Google Compute Engine cloud service and are extremely fast for AI-based computing.
Visual Positioning Service, Google Lens, and other Visual/AI Combinations
Google’s Visual Positioning Service uses augmented reality to identify key visual points for indoor navigation without GPS or beacons.
Google Lens can also identify objects within photos and surface information about them. For instance, a photo of the Chicago cityscape could be analyzed to provide information about the individual buildings.
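Lens itself is a consumer product, but similar image analysis is exposed to developers through the Google Cloud Vision API. As a hedged sketch (no credentials or network call here, just the request body an application would POST to the real `images:annotate` endpoint; the image bytes are a placeholder), landmark detection on a cityscape photo could be set up like this:

```python
import base64
import json

def build_landmark_request(image_bytes, max_results=10):
    """Build a Cloud Vision images:annotate request body asking for
    landmark detection -- the kind of analysis that could pick out
    individual buildings in a cityscape photo."""
    return {
        "requests": [
            {
                # The API expects the image content as base64 text.
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                "features": [
                    {"type": "LANDMARK_DETECTION", "maxResults": max_results}
                ],
            }
        ]
    }

# Placeholder bytes stand in for a real photo of the Chicago skyline.
body = build_landmark_request(b"\x89PNG...placeholder image data")
print(json.dumps(body, indent=2))
```

In a real application, this body would be sent (with an API key or OAuth token) to `https://vision.googleapis.com/v1/images:annotate`, and the response would list detected landmarks with names, confidence scores, and coordinates.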