Equity is only the starting point

getCurrentRoute = () => ({
  roadName: "Park Boulevard, Joshua Tree Park",
  headingView: 195,
})

Historically, accessibility work for the low/no vision community has focused on simply providing access to information. From its inception, the mission of Scenic Audio was to take things further: to do more than offer equity of information. It was built to give those with low/no vision unique access to an enriching, beloved experience – the great American road trip.

A new vision for data and technology

Scenic Audio starts by leveraging over 4MM miles of Street View imagery. Then, because we know precisely where a user is, we can pull in real-time data that might currently be affecting the view outside the car window – things like current weather, seasonal variation, wind speed and direction, UV index and visibility. A user's heading also lets us understand the order in which the world is unfolding outside the car: we can tell a user what's behind them, what they're passing now and what's just up the road.
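The heading logic above can be sketched with a little compass math. This is an illustration only – the function name and the three phrase buckets are our own, not Scenic Audio's actual API – but it shows how a heading and a bearing to a point of interest are enough to say "behind", "passing" or "up the road":

```javascript
// Sketch: classify a point of interest relative to the car's direction of travel.
// headingDeg is the car's heading, bearingDeg the bearing from car to the point,
// both in degrees clockwise from north. Names and thresholds are illustrative.
function classifyByHeading(headingDeg, bearingDeg) {
  // Smallest angular difference between heading and bearing, in [0, 180].
  let diff = Math.abs((((bearingDeg - headingDeg) % 360) + 360) % 360);
  if (diff > 180) diff = 360 - diff;

  if (diff <= 45) return "just up the road";
  if (diff >= 135) return "behind you";
  return "passing now";
}
```

With the snippet's heading of 195, a point bearing 200 degrees would be announced as coming up, while one bearing 15 degrees is already behind the car.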

4MM

Miles of Street View imagery

+ 17 data sources
Create a rich context that goes beyond what is seen

Scenic Audio also draws on as many as 17 different APIs at any given moment to suss out a deeper level of contextual understanding about the passing world outside the car. This helps Scenic Audio know not just what's out there, but also what's changing as the user rolls down the road. These APIs cover everything from local bird populations and regional architecture to cultural points of interest.
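Fanning out to that many sources means some will inevitably fail or lag. A minimal sketch of the pattern – the source names here are stand-ins, since the case study only names the categories – is to query everything in parallel and keep whatever succeeds:

```javascript
// Sketch: gather context from many sources at once, tolerating failures.
// Each source is { name, fetch }, where fetch returns a promise of data.
async function gatherContext(sources) {
  const results = await Promise.allSettled(
    sources.map(({ name, fetch }) => fetch().then((data) => ({ name, data })))
  );
  // A failed or slow source should never block the narration, so drop rejections.
  return results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
}
```

The design choice here is graceful degradation: a richer description when all 17 sources answer, a thinner but still-usable one when only a few do.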

All of this technology, pulled together, results in a massive shift in accessibility. It transforms the experience of riding in a car for a low/no vision passenger, who gains a unique perspective: the ability to better "see" and comprehend what's passing by outside in a much deeper way.

Mirroring the natural road trip experience

Road trips magically transition between two states: the day-dreamy, passive state of watching the world go by, and the more engaged, conversational moments when we might note something of interest that sparks a discussion. We wanted to use technology to replicate those two states of engagement.

Scenic Audio supports the relaxed, passive state through real-time image differentiation: it knows when something new or interesting has come into view – something that would break us out of our daydream – and only then alerts the user.

To mirror the more engaged, conversational state of being on a road trip, Scenic Audio uses a simple tap gesture that generates a new scene narration on the fly. So when the driver or another passenger mentions something they've noticed outside, our user can instantly trigger an updated description in order to participate in the conversation. Or they can start a conversation of their own, at any point, about what's outside.
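The tap interaction reduces to a very small piece of logic: re-run the scene description on demand, while ignoring accidental double-taps. In this sketch, `describeScene` is a stand-in for whatever actually produces the audio narration:

```javascript
// Sketch: a tap handler that regenerates the scene narration on demand.
// A short debounce window keeps rapid repeat taps from queuing duplicates.
function makeTapHandler(describeScene, minGapMs = 1500) {
  let lastTap = -Infinity;
  return function onTap(now = Date.now()) {
    if (now - lastTap < minGapMs) return null; // ignore rapid repeat taps
    lastTap = now;
    return describeScene();
  };
}
```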

The utility of thoughtful design

We wanted the innovation of Scenic Audio to transcend data and technology. We needed an accessibility-first design foundation: a design system that expanded accessibility for an audience of over 246MM.

Design choices, like intentionally not relying on specific colors for indicators or iconography, took on a new level of importance. Multimodal usage becomes a leading design principle when optimizing for screen readers, magnifiers, gesture-based actions and text-to-speech features. Simple ideas like using toggles that are both visually and accessibly labeled became crucial.
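The "visually and accessibly labeled" toggle idea can be made concrete. The attribute names below follow WAI-ARIA conventions; the function and the shape of the returned object are our own illustration, not Scenic Audio's actual component API:

```javascript
// Sketch: a toggle described both visually and for assistive tech, so its
// state never depends on color or iconography alone.
function toggleAttributes(label, isOn) {
  return {
    role: "switch",
    "aria-checked": String(isOn),              // announced by screen readers
    "aria-label": label,                       // spoken name, independent of any icon
    text: `${label}: ${isOn ? "On" : "Off"}`,  // visible, non-color state cue
  };
}
```

Because the state is carried in text and ARIA attributes as well as in the visual design, the same control works with screen readers, magnifiers and sighted use alike.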

Even often overlooked decisions like font choice became imperative.

Scenic Audio is visually communicated with Atkinson Hyperlegible, a font created for the Braille Institute specifically for vision-impaired users. Atkinson Hyperlegible is a modernist font in which differences between similar letterforms are emphasized, allowing for better distinction and legibility.