In honor of Global Accessibility Awareness Day on May 15, Google is rolling out a fresh wave of features aimed at making its platforms more inclusive. From enhanced screen readers to better captions and broader language support, the updates span both Android and Chrome, with many of them powered by Google's Gemini AI.
Smarter Image Descriptions with TalkBack
Android’s TalkBack screen reader just got a major boost. Using Gemini AI, it can now describe images in detail—even when no alt text is provided. This is a big win for users who are blind or have low vision.
The real shift? Interactivity.
Instead of a one-way description, users can now ask follow-up questions about an image. Curious about the brand of a guitar in a photo? Or want to know what else is in the background? You can ask, and the AI responds. This level of detail extends across the entire screen, making it easier to get context on anything from a product listing to a social media post.
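For developers curious what this conversational pattern looks like, here's a minimal sketch of image Q&A using the public Gemini API through the google-generativeai Python package. It illustrates the general technique only, not TalkBack's internal pipeline; the model name, image path, and prompts are all assumptions.

```python
# A minimal sketch of multimodal image Q&A with the public Gemini API,
# via the google-generativeai package. This is NOT TalkBack's internal
# pipeline; the model name and prompts are assumptions.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

image = Image.open("guitar_photo.jpg")  # placeholder image path

# First pass: a detailed description, the kind a screen reader would speak.
description = model.generate_content(
    [image, "Describe this image in detail for a blind user."]
)
print(description.text)

# Follow-up question about the same image, mirroring TalkBack's new Q&A flow.
follow_up = model.generate_content(
    [image, "What brand is the guitar in this photo?"]
)
print(follow_up.text)
```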

Captions That Catch More Than Just Words
Google’s updated Expressive Captions feature now picks up the little things that standard captions miss: tone, inflection, and background noise. Elongated words like “nooo” and sounds such as throat clearing or whistling now show up in captions. It might seem small, but for people who rely on captions, these nuances can make a big difference in understanding mood, emphasis, or sarcasm.

Chrome Gets Friendlier for Visual Impairments
Over on Chrome, accessibility is getting a practical upgrade.
One of the biggest changes: Optical Character Recognition (OCR) is now supported for scanned PDFs. That means screen readers can finally access and read text that was previously locked in image-based files.
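Chrome's own OCR pipeline isn't public, but the underlying idea is easy to sketch. The example below uses the open-source pdf2image and pytesseract packages as stand-ins to pull machine-readable text out of a scanned PDF, the same kind of text a screen reader needs.

```python
# A rough sketch of the OCR idea behind the Chrome change: extract
# machine-readable text from an image-only PDF so assistive tech can
# read it. pdf2image and pytesseract are stand-ins here; Chrome's own
# OCR implementation is not public.
from pdf2image import convert_from_path  # needs the poppler utilities
import pytesseract                       # needs the Tesseract binary

# Render each scanned page to an image, then run OCR over it.
pages = convert_from_path("scanned.pdf", dpi=300)  # placeholder file
for number, page in enumerate(pages, start=1):
    text = pytesseract.image_to_string(page)
    print(f"--- page {number} ---")
    print(text)
```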
Chrome for Android also gets a new Page Zoom feature. It lets users increase text size without breaking the page layout—a long-overdue improvement for those with visual impairments who’ve struggled with clunky, distorted pages.

Breaking Language Barriers with African Speech Recognition
In a push for more global accessibility, Google is investing in speech recognition for African languages. The company is releasing open-source data for 10 languages to help developers build tools for underserved communities.
This move aims to reduce the digital divide by making voice technology accessible in parts of the world often left out of mainstream AI development.
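It's not yet clear how the data will be packaged, but open speech corpora are often distributed through the Hugging Face `datasets` library. The sketch below shows that consumption pattern with a placeholder dataset identifier and assumed field names; nothing here refers to Google's actual release.

```python
# A hedged sketch of consuming an open speech corpus with the Hugging Face
# `datasets` library, a common channel for releases like this. The dataset
# identifier and field names are placeholders, not Google's actual release.
from datasets import load_dataset

corpus = load_dataset("example-org/african-asr-corpus", split="train")  # hypothetical ID

for sample in corpus.select(range(3)):
    audio = sample["audio"]  # assumed: dict with "array" and "sampling_rate"
    print(sample["sentence"], audio["sampling_rate"])  # "sentence" is assumed too
```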
Accessibility at the Core
These updates signal more than just new features—they reflect a shift in how Google designs its products. By integrating AI at the core of accessibility tools, the company is making digital spaces more usable for everyone.
For millions of users with disabilities, this means better tools to navigate, understand, and interact with the world online. And for developers, it offers new building blocks to create more inclusive tech.