If you are an online retailer struggling with low conversion rates and impersonal shopping experiences — this project developed photorealistic virtual characters with emotional sensitivity and natural dialogue that can serve as AI shopping assistants. These characters read customer mood through facial cues and adapt their responses, potentially replacing static chatbots with lifelike virtual sales associates across your website or VR showroom.
Photorealistic AI Virtual Characters That See, Talk, and Respond to Emotions in Real Time
Imagine talking to a computer character that looks completely real, reads your facial expressions, and responds naturally — like a video game character that actually understands you. A team of 12 organizations across Europe, including the Oscar-winning VFX studio Framestore, built a complete toolkit for creating these lifelike digital humans. They can hold conversations, pick up on your mood, and interact through touch and movement in both virtual reality headsets and regular screens. Think of it as giving businesses their own photorealistic digital person that can greet customers, guide them, or present products — without ever needing a coffee break.
What needed solving
Businesses today rely on flat chatbots and text-based assistants that cannot read body language, show empathy, or build trust through visual presence. Customer engagement, training, and brand experiences suffer because digital interactions feel robotic and impersonal. Companies in retail, entertainment, and corporate services need lifelike virtual representatives that can hold natural conversations and respond to human emotions — but building this from scratch requires expertise in real-time graphics, AI behavior, and emotion recognition that most companies simply don't have.
What was built
The project built a complete pipeline and open API toolkit for creating photorealistic virtual characters with emotional awareness, demonstrated through 9 working prototypes. Key outputs include: a real-time facial animation system driven by interactive cues, a single-GPU character player for high-quality interactive rendering, an agent behavioral synthesis engine that puppeteers avatars from emotional input, a fully integrated sentient agent combining dialogue management with non-verbal behavior and haptic feedback, and a fast renderer enabling cutting-edge real-time interactions.
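To make the "agent behavioral synthesis" idea concrete, here is a purely illustrative sketch of how an emotional input might be mapped to avatar behavior parameters. All class and function names (`EmotionState`, `synthesize_behavior`, `agent_step`) are hypothetical; the project's actual interfaces are not described in this summary.

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    """Hypothetical output of a facial-emotion-recognition component."""
    label: str        # e.g. "happy", "frustrated"
    intensity: float  # 0.0 .. 1.0

def synthesize_behavior(emotion: EmotionState) -> dict:
    """Map a detected emotion to avatar behavior parameters (toy rules only)."""
    if emotion.label == "frustrated" and emotion.intensity > 0.5:
        # Slow down speech and signal empathy for an upset user.
        return {"expression": "concerned", "gesture": "open_palms", "speech_rate": 0.9}
    return {"expression": "smile", "gesture": "nod", "speech_rate": 1.0}

def agent_step(emotion: EmotionState) -> dict:
    """One tick of the agent loop: emotion in, behavior parameters out.
    In a real pipeline these parameters would drive the character player/renderer."""
    return synthesize_behavior(emotion)
```

The point of the sketch is the separation of concerns the project describes: perception (emotion input), behavioral synthesis (rules or models), and rendering are distinct stages connected by plain data.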
Who can put this to work
If you are a game studio or ad agency spending weeks on character animation and facial performance capture — this project built real-time tools for facial and body animation that run on a single GPU, along with an agent behavioral synthesis system that can puppeteer avatars from emotional input. With 9 demonstrated prototypes across 27 deliverables, the pipeline covers everything from character creation to interactive performance.
If you are an enterprise running call centers or onboarding hundreds of employees yearly — this project created virtual agents that combine dialogue management, socially aware non-verbal behavior, and haptic feedback into a single integrated system. These agents can serve as always-available training coaches or customer service representatives that respond to emotional states and evolve based on user behavior.
Quick answers
What would it cost to license or implement this technology?
The project received EUR 4,102,070 in EU funding across 12 partners over 3 years. Licensing terms would need to be negotiated with the coordinator (Universitat Pompeu Fabra) and with the industry partners who built specific components. Several partners, including Framestore and Cubic Motion, are commercial companies, so parts of the pipeline may already have commercial licensing paths.
Can this scale to handle thousands of simultaneous users?
The project demonstrated a single-GPU real-time character player and a fast renderer for cutting-edge interactions. The architecture uses open APIs and parallel execution of dialogue, non-verbal behavior, and haptic feedback. Scaling to thousands of concurrent users would likely require additional infrastructure engineering beyond what was demonstrated.
Who owns the intellectual property and how can I license it?
IP is distributed across the 12-partner consortium spanning 6 countries. The 6 industry partners (including Framestore, Brainstorm, Cubic Motion, IKinema, and InfoCert) likely hold commercial rights to their specific components. Contact the coordinator at Universitat Pompeu Fabra to discuss licensing arrangements.
How ready is this for production use today?
The project delivered 9 public demonstrations including a fully integrated sentient agent and a final showcase at a key industry event. The technology has been tested and validated through working prototypes, but moving to a commercial product would require productization and support infrastructure.
Does it integrate with our existing systems?
The project was specifically designed around open APIs for communication between components. The Agent Integration Demonstration shows parallel execution of dialogue management, non-verbal behavior, and haptic feedback via a defined protocol. Integration with game engines was demonstrated through the Background Agent Environment Creation system.
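The parallel-execution pattern described above can be sketched in a few lines: one user event fans out to independent dialogue, non-verbal behavior, and haptic handlers, whose results are collected for the renderer. This is an assumption-laden illustration (the handler names and the thread-pool design are mine, not the project's protocol).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-channel handlers; the project's actual component
# interfaces are not public in this summary.
def run_dialogue(event):
    return ("dialogue", f"reply to {event}")

def run_nonverbal(event):
    return ("nonverbal", "nod")

def run_haptics(event):
    return ("haptics", "short_pulse")

def dispatch(event):
    """Fan one user event out to all three channels in parallel
    and collect the per-channel results into a single dict."""
    handlers = [run_dialogue, run_nonverbal, run_haptics]
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(h, event) for h in handlers]
        return dict(f.result() for f in futures)
```

Whatever the project's concrete protocol looks like, an integration along these lines keeps each channel replaceable, which is the practical benefit of the open-API design.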
What platforms does it support — VR, AR, web?
Based on the project objectives, the technology targets AR, VR, and more traditional media interfaces. The Fast Renderer Demonstration showed cutting-edge real-time interactions, and the Virtual Character Player runs in real time on a single GPU, suggesting deployment across multiple platforms is feasible.
Is this compliant with data protection regulations for facial emotion recognition?
The consortium includes InfoCert, described as Europe's largest certification authority, which likely contributed to the trust and compliance aspects. However, emotion recognition from facial data counts as sensitive processing under the GDPR, so any deployment would require a careful privacy assessment specific to your use case.
Who built it
The 12-partner consortium across 6 countries (Belgium, Germany, Spain, France, Italy, UK) is exceptionally well-balanced for commercialization, with a 50% industry ratio. The presence of Oscar-winning VFX company Framestore signals top-tier visual quality, while technology developers Brainstorm, Cubic Motion, and IKinema bring real-time animation and motion capture expertise. InfoCert, Europe's largest certification authority, adds trust and identity verification capabilities. Three universities (UPF, Augsburg, Inria) provide deep research foundations. With 3 SMEs in the mix and 6 industry partners total, the path from research to product has active commercial players already involved — making technology transfer more straightforward than in purely academic consortia.
- UNIVERSIDAD POMPEU FABRA · Coordinator · ES
- INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE · Participant · FR
- TINEXTA INFOCERT SPA · Participant · IT
- BRAINSTORM MULTIMEDIA SL · Participant · ES
- UNIVERSITE RENNES II · Third party · FR
- UNIVERSITAET AUGSBURG · Participant · DE
- CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS · Third party · FR
Universitat Pompeu Fabra (Barcelona, Spain) coordinated this project. SciTransfer can help you reach the right person on the research team.
Talk to the team behind this work.
Want to explore licensing photorealistic virtual character technology for your business? SciTransfer can connect you with the PRESENT consortium and help you identify which components match your needs. Contact us for a tailored introduction.