Google has announced a new Chrome feature that uses the company's image-recognition technology to automatically generate alternative text descriptions, helping blind and visually impaired users.
The search giant uses machine learning to create descriptions for the millions of images that dominate the Internet, and social media in particular, where much of the content is visual.
Not everyone can see these images: blind and visually impaired users rely on screen readers or Braille displays, and those devices depend on website developers remembering to provide so-called "alternative text", a description of what is in the picture.
While many large websites include alternative text, smaller sites often don't, and on social media alt text is frequently missing because images are posted faster than some systems can keep up with.
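To illustrate the gap the feature fills, here is a minimal sketch, using Python's standard-library `html.parser`, that flags `<img>` tags with missing or empty `alt` attributes (the class and variable names are hypothetical, not part of any Google tool):

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images a screen reader cannot describe

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        if not alt or not alt.strip():
            self.missing.append(attrs.get("src", "(no src)"))

checker = MissingAltChecker()
checker.feed('<img src="cat.jpg" alt="A sleeping cat">'
             '<img src="dog.jpg" alt="">'
             '<img src="bird.jpg">')
print(checker.missing)  # the images that would leave a screen reader silent
```

Images caught by a check like this are exactly the ones where Chrome's new machine-generated descriptions would step in.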
Google’s new feature relies on the same technology that lets users search for images by keyword; the image descriptions are generated automatically.
Laura Allen, a senior program manager on the Chrome accessibility team, says she understands the problem firsthand because she has low vision: there are currently millions upon millions of unlabeled images on the web.
“When you reach one of those images using a screen reader or Braille display, you’ll hear ‘image’, ‘unlabeled graphic’, or a very long string of numbers, which is just the meaningless file name.”
The descriptions are not perfect, and if the algorithm is unsure about an image, it will not attempt to label it at all. Even so, the tool described more than 10 million images within a few months of testing.
The feature is being rolled out to users gradually, and the Chrome browser promotes it specifically to people who use screen readers to encourage them to try it.
The feature is only available to users with screen readers that output speech or Braille; the image descriptions are read aloud by the screen reader but never appear visually on the screen.