
4 Areas Where AI Makes the World More Accessible

July 17, 2018

In today’s world, so much relies on information, and that reliance can make navigating everyday life even more of a barrier for people with a sensory, physical or cognitive impairment. Fortunately, the advent of Artificial Intelligence (AI) and machine learning technology is helping people with disabilities both interact with the physical world and use digital devices and services.

In our previous article, we looked at some great examples of charity projects using AI to achieve outcomes including conservation, language interpretation and disease management. Below, we explore the top ways charities and other organisations are using AI to transform the lives of disabled people and make the digital world more accessible.

Microsoft and Tech Trust are offering free, one-to-one workshops to help charities of all sizes start harnessing the benefits of AI. Charities that are just starting to think about how AI could help them are being asked to come forward with their ideas.

To get involved, submit your short application before 20th July.

Voice interaction

For people with limited sight or mobility, the advent of voice-enabled tech is opening up the Web in brand new ways. AI devices that use voice commands, such as Amazon Echo and Google Home, along with mobile virtual assistants such as Siri and Cortana, are bringing this into the mainstream.

The RNIB (Royal National Institute of Blind People) even has an advice page on its website about how blind people can use the Amazon Echo to interact with the Web hands-free and eyes-free.

As well as making the Web more accessible for people with sight or mobility issues, voice command technology is giving people a more natural way of interacting with internet services.

Amazon Echo device (Courtesy of Andres Urena)

Built on the Amazon Echo Show – an Alexa device that includes a screen – the Elderly Care pilot being developed by Age UK and Accenture uses a combination of voice activation, on-screen prompts and underlying cloud-based AI technology to help older people improve their wellbeing in a number of ways.

Just by speaking commands, they can get easy access to reading and learning materials, communicate with friends and family, find local events, and even answer the phone and the door. The pilot also includes a ‘Family and Carer’ portal that lets family members and caregivers check on the older person’s daily activities, such as whether they have taken their medication or made new requests for care.
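
The pilot’s own code is not public, but the underlying interaction pattern – a spoken request routed to a cloud handler – is the same one any custom Alexa skill uses. Below is a minimal, hypothetical sketch using the Python ASK SDK; the ‘LogMedicationIntent’ name and the response text are illustrative assumptions, not details of the Age UK pilot.

    # Minimal, hypothetical Alexa skill handler (Python ASK SDK).
    # "LogMedicationIntent" is an invented intent name for illustration.
    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_intent_name

    class LogMedicationHandler(AbstractRequestHandler):
        """Responds when the user says they have taken their medication."""

        def can_handle(self, handler_input):
            return is_intent_name("LogMedicationIntent")(handler_input)

        def handle(self, handler_input):
            # A real skill would also record the event so a family portal
            # could display it; here we only confirm by voice.
            speech = "Thanks, I have noted that you took your medication today."
            return handler_input.response_builder.speak(speech).response

    sb = SkillBuilder()
    sb.add_request_handler(LogMedicationHandler())
    handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda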

Image, text and visual recognition

AI technology is helping visually impaired and blind people interpret images as well as text. Facebook, for instance, has developed automatic captioning tools that describe photos to visually impaired users via their screen readers, and Google’s Cloud Vision API can understand the context of objects in photos.
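
To give a flavour of what such a call looks like, here is a minimal sketch using the official google-cloud-vision Python client to label the contents of a photo. The file name is a placeholder, credentials must be configured separately, and exact helper names can vary between client versions.

    # Minimal sketch: labelling a photo with the Google Cloud Vision API.
    # Assumes application credentials are already configured; "photo.jpg"
    # is a placeholder file name.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("photo.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # Ask the API to label the objects and concepts it detects.
    response = client.label_detection(image=image)

    for label in response.label_annotations:
        # e.g. "dog: 0.97" – descriptions like these can feed a screen reader
        print(f"{label.description}: {label.score:.2f}")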

And this isn’t just confined to images on the Internet. Microsoft has developed the Seeing AI app, which narrates the visual world for the blind and low-vision community, helping them do things like identify currency, read handwriting and text, recognise products from barcodes, identify colours, and recognise the people around them and their emotions – all through their mobile phone cameras.

Since its launch in July 2017, the app has been downloaded by 200,000 users and has helped them complete over 7 million tasks independently – tasks that previously would have required a sighted person’s assistance.

Speech-to-text

At the opposite end of the spectrum, AI is helping to meet the challenges faced by the deaf community. Microsoft is partnering with Rochester Institute of Technology’s National Technical Institute for the Deaf, one of the university’s nine colleges, to pilot the use of Microsoft’s AI-powered speech and language technology to support students in the classroom who are deaf or hard of hearing.

Lecture at the Rochester Institute of Technology during which live AI captioning is being trialled for deaf students (Courtesy of Microsoft).

AI is powering real-time language captioning for students during lectures, using an advanced form of automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text, so they can get information at the same time as their hearing peers and learn how to spell complex scientific terms.
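
The exact pipeline used in the pilot is not public, but the general speech-to-text pattern it relies on can be sketched with Microsoft’s Azure Speech SDK; the subscription key and region below are placeholders.

    # Minimal sketch of continuous speech-to-text captioning with the
    # Azure Speech SDK (pip install azure-cognitiveservices-speech).
    # Subscription key and region are placeholders.
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(
        subscription="YOUR_KEY", region="YOUR_REGION")
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

    def show_caption(evt):
        # The service returns display-form text, already punctuated
        # and capitalised, ready to show on screen as a caption.
        print(evt.result.text)

    # Stream captions continuously from the default microphone.
    recognizer.recognized.connect(show_caption)
    recognizer.start_continuous_recognition()
    input("Press Enter to stop captioning...\n")
    recognizer.stop_continuous_recognition()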

And AI is also being trained to lip-read: an end-to-end lip-reading system developed by Google DeepMind and researchers from the University of Oxford outperforms all other automatic lip-reading systems, the researchers say.

This technology could have many applications for deaf people, and could soon be seen in improved hearing aids, silent dictation in public spaces and speech recognition in noisy environments.

Text summarisation

Not all accessibility solutions are aimed at speech, sight and mobility – many people with cognitive impairments or learning disorders also need help navigating technology and everyday life.

Salesforce has been working on an algorithm that automatically summarises text using machine learning. This could help break down barriers for people with cognitive impairments such as dyslexia, attention deficit disorders, memory issues or low literacy, who might struggle to digest large chunks of text from the internet.

The researchers say the machine-generated summaries have been ‘surprisingly coherent and accurate’ so far, helping people of all cognitive abilities keep up with the deluge of social media, news, emails and other text they are bombarded with day to day.
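
Salesforce’s research model itself is not shown here, but the general technique – abstractive summarisation with a pretrained machine-learning model – can be sketched with the Hugging Face transformers pipeline; the example text is invented for illustration.

    # Minimal sketch of machine-learned text summarisation using a
    # generic pretrained model (not Salesforce's own research model).
    from transformers import pipeline

    summarizer = pipeline("summarization")  # downloads a default model

    article = (
        "Long passages like news stories, emails and reports can be hard "
        "to digest for readers with dyslexia, attention deficit disorders "
        "or low literacy. Machine-learned summarisers condense such text "
        "into a few key sentences while trying to preserve its meaning."
    )

    result = summarizer(article, max_length=40, min_length=10, do_sample=False)
    print(result[0]["summary_text"])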

Applications for the Microsoft AI workshops are open now until 20th July for a limited number of charities to attend. To apply, go to the application page.

Source: Charity Digital News
