During Google’s I/O 2019 developers conference this week, the company demonstrated an impressive new feature for its mobile operating system Android Q. Called Live Caption, the feature enables real-time transcription of any video or audio that users play on their smartphones. Whether they’re watching or listening via YouTube, Skype, Instagram, Pocket Casts, or another application, Live Caption overlays the transcribed text on top of the playing media. It also works with users’ own video and audio recordings.
“For 466 million deaf and hard of hearing people around the world, captions are more than a convenience — they make content more accessible. We worked closely with the Deaf community to develop a feature that would improve access to digital media,” Google wrote in a blog post titled “Sharing What’s New in Android Q.”
The captions rely on on-device machine learning and work offline, so no audio or user data needs to be sent to the cloud. Design-wise, the captions appear in a black box that users can move around the screen, and, notably, the feature works even when the volume is muted or turned low, reports The Verge.
“Building for everyone means ensuring that everyone can access our products. We believe technology can help us be more inclusive, and AI is providing us with new tools to dramatically improve the experience for people with disabilities,” said Google CEO Sundar Pichai during his I/O keynote.
Google has a short video available on YouTube that describes Live Caption.