Amid the many new features and tools announced at Google I/O 2019, Google placed significant emphasis on accessibility, expanding and investing in who can use its products and services. Google unveiled exciting new features, now available or coming soon to the Android ecosystem, in an effort to make technology more accessible and easier to use for everyone.
Live Caption and Live Relay
Perhaps the most impactful announcement was that Android will now support live captioning. When this feature is enabled on a device, captions are displayed for any video or phone call. No more need for sound whatsoever! Whether a user is hearing impaired or simply in a quiet area and doesn't want to disturb others, this feature promises to be incredibly useful. Although Live Caption is launching first in English, Google hopes to add support for more languages soon. Live Relay is another feature intended to help users who are deaf or hard of hearing. It can be turned on during phone calls, switching the conversation from an audio experience to a chat-like, visual experience on the device. Together, Live Caption and Live Relay will allow users to get a real-time transcription of what is said in any video or audio, in any app, across the device's entire operating system.
Google Lens Text-to-speech
Google revealed that Google Lens will be able to read text aloud as well as translate it. The feature's codebase measures around 100KB, making it accessible to users on inexpensive smartphones, including in parts of the world with low literacy rates. It has powerful and practical applications: Google showed how a mother in India who cannot read was able to go about her day without constantly relying on her school-age children for help.
Not only can this camera-based feature read text aloud in over 100 languages, but it can also overlay translated text on top of the original. With computer vision and augmented reality, the camera becomes quite an impressive tool for understanding the world. The feature is launching first in Google Go, Google's lightweight search app for entry-level smartphones.
Project Euphonia
This incredibly beneficial project, born of Google's AI for Social Good program, is intended to help people with the neurodegenerative disorder ALS, people who have had a stroke, and people with other speech impairments communicate more effectively. Google wants to use machine learning to turn difficult-to-understand speech and facial expressions into text, so that these users can communicate more easily. Project Euphonia collects data by enrolling people with ALS to build voice recognition models, assembling data sets large enough for the technology to understand people with speech difficulties. Google is also exploring personalized communication models in the hope that voice technology can better serve more people.
Google has put a ton of work into designing and developing products for people with disabilities. Often this work leads to better products for all users and a more inclusive platform. Google reiterated throughout the conference that it wants to "build a more useful Google for everyone," and with these exciting announcements of new features, it is clear that the company is following through. We at Big Nerd Ranch are incredibly excited to work with these new platform features and tools, and we hope you are too!