SciTransfer
aiD · Project

AI-Powered Sign Language Translation for Mobile Devices and Emergency Services

digital · Tested · TRL 5

Imagine you're deaf and need to call emergency services — right now, that's nearly impossible without a human interpreter. This project built AI that can translate between sign language and spoken language in real time, running directly on a phone. The consortium created three working demos: a news service with augmented reality sign language, an emergency relay that lets deaf people contact 112, and a digital tutor that teaches sign language interactively. The deep learning models were compressed until they were small enough to run on ordinary smartphones, not just expensive servers.

By the numbers
13
consortium partners
6
countries represented
EUR 1,587,000
EU contribution
3
pilot applications built (AR news, emergency relay, digital tutor)
5
industry partners in consortium
2
SMEs in consortium
7
total deliverables
The business problem

What needed solving

Over 70 million deaf people worldwide face daily communication barriers — from calling emergency services to watching the news to learning new skills. Human sign language interpreters are expensive, scarce, and unavailable 24/7. Businesses in telecom, media, and education face growing legal pressure under accessibility regulations to serve deaf customers, but current solutions don't scale.

The solution

What was built

The project built pilot software implementing three AI-powered applications: an augmented reality news service with virtual sign language, an automated emergency relay service for deaf callers, and an interactive digital sign language tutor. Deep learning models were compressed to run on standard mobile phones.

Audience

Who needs this

Telecom operators required to make emergency services accessible to deaf users
News broadcasters and streaming platforms needing automated sign language interpretation
EdTech companies building sign language learning apps
Government agencies implementing EU Accessibility Act compliance
Healthcare providers needing patient communication tools for deaf patients
Business applications

Who can put this to work

Telecommunications & Emergency Services
enterprise
Target: Telecom operators and emergency call centers

If you are a telecom provider or emergency services operator facing the legal requirement to make 112 accessible to deaf citizens — this project developed an Automated Relay Service pilot that translates sign language video calls into text or speech in real time. Backed by 13 consortium partners across 6 countries, the pilot was tested with actual end-users, and the models were compressed to run on mobile devices.

Media & Broadcasting
enterprise
Target: News broadcasters and streaming platforms

If you are a media company dealing with the cost of live sign language interpreters for news broadcasts — this project developed an AR news service that generates virtual sign language overlays automatically using deep learning. The pilot was built and evaluated by deaf end-users, potentially replacing or supplementing expensive human interpreters for routine content.

Education & EdTech
SME
Target: Language learning platforms and accessibility training providers

If you are an EdTech company looking to enter the sign language learning market — this project developed an Interactive Digital Tutor application that teaches sign language using AI feedback. The deep network compression techniques allow the tutor to run on ordinary mobile devices, making it accessible to the vast majority of potential users without specialized hardware.

Frequently asked

Quick answers

What would it cost to license or deploy this technology?

The project was funded with EUR 1,587,000 under the MSCA-RISE program across 13 partners. Licensing terms would need to be negotiated with the coordinator (Cyprus University of Technology). As a research mobility program, commercial licensing arrangements may require further development investment beyond the project scope.

Can this scale to industrial deployment?

The project specifically developed deep network compression techniques to scale models from servers down to commodity mobile devices. This is a strong indicator of scalability intent. However, the outputs are pilot-stage demos, not production-ready products — scaling to millions of users would require additional engineering and infrastructure.
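The project's specific compression methods are not detailed in this listing. As a rough illustration of the kind of technique involved, here is a minimal sketch of post-training 8-bit weight quantization, one standard way to shrink deep models from server size down to mobile size. It is written with NumPy; all function and variable names are hypothetical, not taken from the project's code.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine 8-bit quantization of a float32 weight tensor.

    Maps the weight range [min, max] onto int8 values [-128, 127]
    and returns the quantized tensor plus the scale and zero-point
    needed to approximately reconstruct the original weights.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    # Guard against a constant tensor (zero range would give scale 0).
    scale = (w_max - w_min) / 255.0 or 1.0
    zero_point = round(-w_min / scale) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Reconstruct approximate float32 weights from the int8 tensor."""
    return (q.astype(np.float32) - zero_point) * scale

# Demo: a 256x256 weight matrix drops from 4 bytes to 1 byte per value.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)

print(f"size: {w.nbytes} -> {q.nbytes} bytes (4x smaller)")
print(f"max reconstruction error: {np.abs(w - w_hat).max():.6f}")
```

The trade-off this sketch demonstrates is the same one the FAQ answer describes: a 4x reduction in model weight storage in exchange for a small, bounded reconstruction error (at most half the quantization step per weight). Production pipelines typically combine this with pruning, distillation, or quantization-aware training to recover accuracy.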

What is the IP situation and how can I license the results?

IP generated under MSCA-RISE projects typically belongs to the institutions that created it, with EU rules requiring open access to publications. The consortium includes 5 industry partners and 2 SMEs across 6 countries, so IP ownership may be distributed. Contact the coordinator at Cyprus University of Technology to discuss licensing.

Does this comply with EU accessibility regulations?

The European Accessibility Act (EAA) requires a wide range of digital products and services to be accessible from June 2025. This project directly addresses communication accessibility for deaf people with three pilot applications. Based on available project data, the technology was designed with end-user engagement and evaluated by deaf users, which supports regulatory compliance claims.

How mature is the technology — can I use it today?

The project delivered pilot software implementing three demos (AR news, emergency relay, digital tutor) and ran from 2019 to 2023. These are functional pilots evaluated by end-users, not market-ready products. Additional development would be needed for production deployment, but the core AI models and compression techniques are proven.

Can this integrate with our existing communication systems?

The pilots were designed around real-world use cases — emergency call systems, news broadcasts, and mobile apps. The underlying deep learning models handle multi-modal input, translating between sign language video, text, and speech. Based on available project data, integration would require adaptation work, but the mobile-first compression approach means the tech is designed for standard consumer devices.

Consortium

Who built it

The aiD consortium brings together 13 partners from 6 countries (Belgium, Cyprus, Estonia, Greece, UK, US), with a healthy 38% industry ratio — 5 industry partners including 2 SMEs alongside 4 universities and 2 research organizations. This cross-Atlantic spread covering both EU and US markets is unusual for Horizon 2020 and suggests broader commercial potential. The coordinator is Cyprus University of Technology, a higher education institution. The mix of academic AI expertise and industry implementation capacity indicates the technology has been shaped by commercial perspectives, though the MSCA-RISE funding mechanism prioritizes researcher mobility over product development.

How to reach the team

Contact the coordinator at Cyprus University of Technology (TECHNOLOGIKO PANEPISTIMIO KYPROU) through their institutional channels or the project website.

Next steps

Talk to the team behind this work.

Want to explore licensing this AI sign language technology for your platform or service? SciTransfer can connect you directly with the research team and help structure a collaboration.