If you are an online travel platform struggling with customers abandoning complex searches — this project developed a virtual travel agent with emotional intelligence that guides users through holiday selection using natural conversation, facial expression reading, and adaptive dialogue. The system was built to handle exactly this kind of multi-step decision process where traditional search interfaces fall short.
AI Virtual Assistants That Read Emotions and Talk Like Humans
Imagine talking to a computer character that actually notices when you smile, nod, or look confused — and adjusts how it responds, just like a real person would. This project built virtual assistants that use your camera and microphone to pick up on your facial expressions and tone of voice, then reply with realistic speech and a 3D animated face that shows emotions. Think of it as a customer service agent that can read the room. The team tested it on two real use cases: a virtual travel agent that helps you find holidays, and a "speaking book" character you can chat with about a novel.
What needed solving
Businesses that rely on online customer interactions — travel booking, e-commerce, support — lose conversions because web interfaces cannot read customer confusion, frustration, or interest. Traditional chatbots miss non-verbal cues entirely, leading to flat, robotic experiences that drive users away. Companies need virtual assistants that can actually hold a natural conversation and adapt to what the customer is feeling.
What was built
The project delivered a complete virtual assistant creation toolbox including: a multilingual adaptive dialogue system, emotion-aware speech analysis and synthesis, a 3D animated face with expressive behavior generation, and web/mobile integration. Two working demo applications were built — a virtual travel agent and a speaking book character — tested across 3 milestone iterations with 23 total deliverables.
Who can put this to work
If you are a contact center operator dealing with rising agent costs and inconsistent service quality — this project built multilingual virtual agents that detect customer emotions through voice and facial cues and adapt their responses accordingly. The system supports web and mobile interfaces and was delivered as a final integrated system built on documented web-standard technologies.
If you are an e-commerce company losing sales because customers cannot find what they want through filters and search bars — this project created conversational AI assistants with 3D animated faces that retrieve information through natural dialogue. The system was tested in multilingual settings across 4 countries, making it relevant for cross-border retail.
Quick answers
What would it cost to license or adopt this technology?
The project was a publicly funded Research and Innovation Action, so core results are likely available for licensing through the University of Nottingham and consortium partners. Specific licensing terms would need to be negotiated directly with the IP holders. Budget details are not available in the dataset.
Can this scale to handle thousands of simultaneous users?
The consortium delivered a final integrated system with documented support for web standard technologies, and a second system with improved real-time capabilities in distributed environments. These suggest the architecture was designed with scalability in mind, though production-scale stress testing at thousands of concurrent users would likely require additional engineering.
Who owns the intellectual property?
IP is shared among the 8 consortium partners across 4 countries (DE, FR, NL, UK), with the University of Nottingham as coordinator. Licensing would follow standard EU Horizon 2020 rules where each partner owns their contribution. Contact the coordinator for specific IP arrangements.
Does this work in multiple languages?
Yes. The project explicitly delivered multilingual capabilities, including multilingual adaptive dialogue, multilingual speech analysis, and multilingual realizations tested across milestone systems M1, M2, and M3. The consortium spanned 4 countries, supporting cross-language development.
How mature is the technology — is it ready for deployment?
The project delivered 23 deliverables including 15 demonstration-level outputs, progressing through 3 milestone iterations to a final integrated system. However, this was a research project ending in 2017, so the technology would need adaptation and updating for current deployment, particularly given how fast AI and NLP have evolved since then.
What input hardware does the system require?
The system uses standard audio and video signals as input — a webcam and microphone. It was demonstrated on web and mobile device user interfaces, meaning no specialized hardware is needed beyond what most devices already have.
Can it be customized for my specific industry?
The project was explicitly designed as a general-purpose creation tool for virtual assistants, with 2 specific industrial applications built as proof of concept (travel agent and speaking book). The adaptive dialogue toolbox and task-based dialogue system were built to support new domains, suggesting customization is architecturally possible.
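To make the "task-based dialogue" idea concrete, here is a minimal sketch of how an emotion-adaptive dialogue turn for the travel-agent use case might be structured. All names, slots, and wording below are illustrative assumptions, not the project's actual toolbox API: the real system works from audio/video emotion analysis, whereas this sketch simply takes an emotion label as input.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an emotion-adaptive, slot-filling dialogue turn.
# Slot names, prompts, and emotion labels are illustrative assumptions.

SLOTS = ("destination", "budget", "dates")  # slots the travel task must fill

PROMPTS = {
    "destination": "Where would you like to go?",
    "budget": "Roughly what budget do you have in mind?",
    "dates": "When would you like to travel?",
}

# Adaptation step: frame the prompt differently when the user appears
# confused or frustrated, rather than repeating the same flat question.
EMPATHY = {
    "confused": "No worries, let's take it one step at a time. ",
    "frustrated": "Sorry, let me make this easier. ",
    "neutral": "",
    "happy": "Great! ",
}

@dataclass
class TravelDialogue:
    filled: dict = field(default_factory=dict)

    def next_prompt(self, emotion: str = "neutral") -> str:
        # Ask for the first unfilled slot, adapted to the detected emotion.
        for slot in SLOTS:
            if slot not in self.filled:
                return EMPATHY.get(emotion, "") + PROMPTS[slot]
        return "Perfect, let me search for matching holidays."

    def answer(self, slot: str, value: str) -> None:
        self.filled[slot] = value

agent = TravelDialogue()
print(agent.next_prompt("happy"))     # asks for destination, upbeat framing
agent.answer("destination", "Lisbon")
print(agent.next_prompt("confused"))  # asks for budget, reassuring framing
```

The point of the sketch is the separation of concerns: the task logic (which slot to fill next) is independent of the adaptation layer (how the prompt is worded), which is what makes retargeting the same dialogue engine to a new domain architecturally plausible.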
Who built it
The consortium of 8 partners across 4 countries (Germany, France, Netherlands, UK) is research-heavy: 5 universities and 1 research organization make up 75% of the team. The 2 industry partners (both SMEs) provide some commercial grounding, but this is clearly an academically driven project led by the University of Nottingham. For a business looking to adopt this technology, the upside is strong academic depth in AI, speech processing, and computer vision; the downside is that bridging from research prototype to production product would require additional commercial engineering partners. The 25% industry ratio is below average for projects with near-term commercial ambitions.
- THE UNIVERSITY OF NOTTINGHAM · Coordinator · UK
- IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE · Participant · UK
- UNIVERSITEIT TWENTE · Participant · NL
- CEREPROC LTD · Participant · UK
- INSTITUT MINES-TELECOM · Third party · FR
- UNIVERSITAET AUGSBURG · Participant · DE
- CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS · Participant · FR
The University of Nottingham (UK) coordinated this project. Search for ARIA-VALUSPA project leads in the Computer Science department to find the principal investigator.
Talk to the team behind this work.
Want to explore licensing the emotion-aware virtual assistant technology from ARIA-VALUSPA for your customer-facing applications? SciTransfer can connect you with the right people in the consortium.