As part of its “accessibility by design” philosophy, Microsoft has launched its latest accessibility application, called Seeing AI. The app can capture, recognize, and describe to the user various structural elements in documents, product labels, and other text-based visuals.
The app is an ongoing effort by Microsoft Cognitive Services to leverage the company’s expertise in deep learning and bring it into a practical product for the visually impaired.
With the Seeing AI app, users can be less reliant on others to read product labels, documents and other textual content. It also works with other visual content such as faces, describing them as accurately as possible to the user.
In a demo at the Microsoft Future of Artificial Intelligence event in Sydney, Microsoft cloud solutions architect Kenny Johar Singh, who has lost 75 percent of his vision to a retinal condition, used the app to accurately scan a product label.
The app’s use cases are tremendous, as is its potential to uplift millions of users worldwide who live with visual impairments.
The Seeing AI app is merely one of Microsoft’s many initiatives to deeply integrate accessibility into core products like Windows 10 and Cortana.
A prototype of the Seeing AI app was unveiled last year by Microsoft CEO Satya Nadella at the Build 2016 conference. Watch the video here: