Facebook has announced an improvement to its automatic alternative text (AAT) feature. AAT was introduced in 2016 as a way to generate alternative text for photos to help people who are visually impaired understand what is going on in a picture; with today's update, the feature can recognise ten times more concepts in images and provide better, more reliable descriptions.
According to the social network, the boost in the number of concepts it recognises and the improvements in reliability mean that AAT can provide information for more pictures. Facebook said it can detect activities, landmarks, types of animals and more – an example description generated by AAT could read “May be a selfie of 2 people, outdoors, the Leaning Tower of Pisa.”
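To picture how a description like that might be assembled, here is a minimal sketch in Python, assuming the underlying model returns concept labels with confidence scores; the function name, threshold and data format are illustrative assumptions, not Facebook's actual pipeline or API:

```python
# Hypothetical sketch: composing an AAT-style caption from model detections.
# The concept labels, confidence values and threshold below are made up
# for illustration and do not reflect Facebook's real system.

def build_caption(concepts, threshold=0.8):
    """Join the concepts the model is reasonably confident about into
    a hedged, human-readable description."""
    kept = [label for label, confidence in concepts if confidence >= threshold]
    if not kept:
        return "No description available."
    # Descriptions are deliberately hedged with "May be".
    return "May be " + ", ".join(kept) + "."

print(build_caption([("a selfie of 2 people", 0.92),
                     ("outdoors", 0.95),
                     ("the Leaning Tower of Pisa", 0.87)]))
# Prints: May be a selfie of 2 people, outdoors, the Leaning Tower of Pisa.
```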
In the previous iteration of AAT, if you uploaded a picture of two friends posing for a photo with passers-by in the background, AAT would generate a message saying “May be an image of five people”, since it counted those in the background too. The new iteration is more intelligent: it takes into account the position and relative size of the elements it detects, so it would say there are two people in the centre of the photo and others scattered towards the fringes of the image.
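One way to picture that kind of positional reasoning is a simple heuristic over the bounding boxes of detected people, grouping them into central subjects and background figures. This is purely an illustrative sketch under assumed box formats and thresholds, not Facebook's implementation:

```python
# Illustrative sketch of positional reasoning over detected people.
# Bounding boxes are (x, y, width, height) in pixels; the distance and
# size thresholds are arbitrary assumptions chosen for the example.

def describe_people(boxes, image_w, image_h):
    centre_x, centre_y = image_w / 2, image_h / 2
    central, peripheral = 0, 0
    for (x, y, w, h) in boxes:
        box_cx, box_cy = x + w / 2, y + h / 2
        # Normalised squared distance from the box centre to the image centre.
        dist = ((box_cx - centre_x) / image_w) ** 2 + ((box_cy - centre_y) / image_h) ** 2
        # Fraction of the frame this person occupies.
        size = (w * h) / (image_w * image_h)
        # Near the middle and reasonably large -> treat as a main subject.
        if dist < 0.05 and size > 0.05:
            central += 1
        else:
            peripheral += 1
    if peripheral:
        return (f"May be an image of {central} people in the centre "
                f"and {peripheral} people towards the edges.")
    return f"May be an image of {central} people."

# Two large, centred subjects plus three small figures near the edges.
boxes = [(350, 200, 200, 400), (600, 210, 190, 390),
         (40, 300, 60, 150), (900, 320, 55, 140), (100, 100, 50, 120)]
print(describe_people(boxes, image_w=1024, image_h=768))
# Prints: May be an image of 2 people in the centre and 3 people towards the edges.
```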
Artificial intelligence has brought significant benefits to people who are visually impaired in recent years. When internet connections were slow, alt text was frequently used as a fallback in case photos failed to load, and it could also be read aloud by screen readers to tell visually impaired people what an image showed. As connections got faster, alt text fell out of common use, but artificial intelligence tools like AAT are now able to fill the gap and provide good descriptions of what’s shown in a shot.