Using AI Technology for Subtitles in Real-Time

January 02, 2019

Valencia’s Polytechnic University (UPV) in Spain and its Machine Learning and Language Processing (MLLP) research group have developed a subtitling system that uses AI technology to provide subtitles in real time. The aim is to make UPV events and conferences more accessible to people with hearing disabilities.

Using AI technology to improve accessibility services

According to a study by the National Confederation of Deaf People (CNSE), in Spain there are over a million people over age six with hearing disabilities. This represents 8 percent of the population.

Data from the National Statistics Institute (INE) suggests that over 97 percent of these people communicate using oral language.

Following events and teaching activities at university can therefore be difficult for people with hearing disabilities.

The real-time subtitles system

Polisubs, the real-time subtitle system

Using the real-time system, users can follow an event by reading the subtitles through an iOS or Android app, or via a website on their phone. If the event is recorded, the subtitles are automatically added to the video.

Polisubs works by capturing the ambient sound with a microphone system and sending it to the UPV’s central servers, where an Artificial Intelligence system processes the audio and produces a stream of text subtitles. Its accuracy is 97 percent of that of a human transcriber, although some users may still have comprehension issues with the system.
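The pipeline described above (audio captured in chunks, sent to a server, transcribed, and returned as timed subtitles) can be sketched as follows. This is a minimal illustration only: the article does not describe Polisubs’ actual software, so all names here (`AudioChunk`, `transcribe_chunk`, `SubtitleSegment`) are hypothetical, and the recognizer is a stand-in stub.

```python
from dataclasses import dataclass

@dataclass
class AudioChunk:
    """A short slice of ambient audio from the room microphones (hypothetical)."""
    start_s: float      # start time of the chunk, in seconds
    duration_s: float   # chunk length, in seconds
    samples: bytes      # raw audio data

@dataclass
class SubtitleSegment:
    """A timed piece of subtitle text sent back to viewers' apps."""
    start_s: float
    end_s: float
    text: str

def transcribe_chunk(chunk: AudioChunk) -> str:
    """Stand-in for the server-side speech recognizer.

    A real system would run an automatic speech recognition model here;
    this stub just returns a placeholder string.
    """
    return f"[transcript of {chunk.duration_s:.0f}s of audio]"

def subtitle_stream(chunks):
    """Turn a stream of audio chunks into timed subtitle segments."""
    for chunk in chunks:
        text = transcribe_chunk(chunk)
        yield SubtitleSegment(chunk.start_s, chunk.start_s + chunk.duration_s, text)

# Example: three 5-second chunks of a talk, subtitled in order.
chunks = [AudioChunk(i * 5.0, 5.0, b"") for i in range(3)]
for seg in subtitle_stream(chunks):
    print(f"{seg.start_s:.1f}-{seg.end_s:.1f}: {seg.text}")
```

In a deployed system the chunks would arrive continuously over the network and the recognizer would run on the central servers, but the overall chunk-in, segment-out shape of the loop would be the same.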

The future of the system

Currently, the real-time subtitle system works simultaneously in Spanish, Valencian, and English, and support for other languages is being developed.

Moving forward, the UPV will continue deploying the system throughout 2019, with the aim of making the technology available in all of its halls and classrooms.

Source: SciTech Europa