If you are a VFX studio spending weeks on manual character animation and 3D modeling, this project developed 3 authoring tools that capture an actor's full appearance and motion using RGB cameras, then let you edit, restyle, and transfer those motions to other characters. The demo deliverables showed working motion adaptation and style transfer: realistic captures could be turned into cartoon-style animations automatically.
Cheaper, Faster 3D Avatar Creation Tools for Film, Games, and VR Production
Imagine you want to put a real person into a video game or VR movie. Today, that means expensive motion-capture suits, manual 3D modeling, and weeks of work. INVICTUS built tools that use regular RGB cameras to capture a person's appearance and movement all at once — shape, texture, even how their clothes move. Then editors can change that character's style, transfer their motions to another body, or drop them into a virtual scene using VR tools, cutting out most of the manual labor.
What needed solving
Creating realistic digital humans for films, games, and VR experiences is painfully slow and expensive. Traditional pipelines require motion-capture suits, manual 3D modeling, and armies of animators — pushing costs up and timelines out. Studios need faster, cheaper ways to capture real actors and turn them into editable digital characters.
What was built
The project built 3 authoring tools:
- a volumetric capture system that uses RGB cameras to record an actor's full appearance and motion simultaneously,
- a motion editing tool that transfers and adapts movements between characters with style control (demonstrated in 2 iterations), and
- a collaborative VR story authoring tool for placing and animating characters in virtual scenes.
All 12 planned deliverables were completed.
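INVICTUS has not published its source code here, so the sketch below is only an illustration of the core idea behind the motion editing tool: copy per-joint rotations from a captured performance onto a target rig, then blend each joint toward a hand-authored style pose. All names and data layouts (slerp, transfer_with_style, the quaternion dictionaries) are hypothetical, not the project's implementation.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:          # flip one quaternion to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly identical rotations: plain lerp is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def transfer_with_style(source_anim, style_pose, style_weight=0.3):
    """Copy per-joint rotations from a captured clip onto a target rig,
    then pull each joint toward a 'style' pose (e.g. a cartoon key pose).

    source_anim : dict of joint name -> (frames, 4) quaternion array
    style_pose  : dict of joint name -> (4,) quaternion
    """
    target_anim = {}
    for joint, frames in source_anim.items():
        target_anim[joint] = np.stack([
            slerp(q, style_pose.get(joint, q), style_weight)
            for q in frames
        ])
    return target_anim
```

Production retargeting layers much more on top of a per-joint blend (bone-length adaptation, foot-contact cleanup, learned style models), but this is the shape of the operation the demonstrators automate.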
Who needs this
If you are a game studio dealing with the high cost and slow turnaround of creating realistic player characters, INVICTUS built volumetric capture tools that create game-ready avatars from camera footage, with real-time rendering support. The spatio-temporal avatar manipulators demonstrated in the project let you re-animate captured characters for interactive gameplay without rebuilding models from scratch (see the skinning sketch below).
If you are an XR studio struggling to populate immersive experiences with believable human characters — this project delivered a collaborative VR story authoring tool that lets directors and designers place and animate volumetric characters directly inside virtual scenes. With 6 partners across 3 countries including 3 industry players, the consortium specifically targeted both high-end offline and real-time rendering pipelines.
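For readers wondering what "re-animate captured characters without rebuilding models" involves mechanically, the standard building block is skeletal skinning: bind the captured mesh to a skeleton once, then drive the skeleton with any new animation. Below is a minimal linear blend skinning step in numpy; the function name and data layout are assumptions chosen for illustration, not INVICTUS code.

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_mats):
    """Deform rest-pose vertices with new per-bone transforms.

    rest_verts : (V, 3) vertex positions from the captured model
    weights    : (V, B) per-vertex bone weights, rows summing to 1
    bone_mats  : (B, 4, 4) bone transforms for the new animation frame
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])         # (V, 4) homogeneous coords
    per_bone = np.einsum('bij,vj->bvi', bone_mats, homo)    # each bone moves every vertex
    blended = np.einsum('vb,bvi->vi', weights, per_bone)    # weight-average the candidates
    return blended[:, :3]
```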
Quick answers
How much would it cost to license or adopt these tools?
The project has not published pricing information. With 3 industry partners in the consortium (50% industry ratio), commercialization pathways likely exist. Contact the coordinator at Université de Rennes or the industry partners for licensing terms.
Can these tools handle production-scale work for a feature film or AAA game?
The project delivered 12 deliverables, including 3 demonstrated prototypes for volumetric capture, motion style transfer, and avatar manipulation. These targeted both high-end offline productions (film quality) and real-time rendering, which points toward production workflows. Full industrial-scale validation details are not in the public project data and would need to come from the consortium.
Who owns the intellectual property from this project?
As a Horizon 2020 Innovation Action with 6 partners across France, Germany, and Ireland, IP is typically shared among consortium members according to their grant agreement. The 3 industry partners and 1 SME likely hold commercialization rights for their contributions. Licensing inquiries should go through the coordinator.
Does this work with our existing production pipeline?
INVICTUS was designed to feed into both traditional media (film, animation) and interactive media (VR, AR, games) pipelines. The tools use standard RGB cameras for capture input, which lowers the hardware barrier. The VR authoring tool was specifically built for collaborative editing of scenes with volumetric characters.
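To make "standard RGB cameras" concrete: at the input end, volumetric capture of this kind starts with commodity video I/O that a few lines of OpenCV can drive, with the heavy lifting (calibration, reconstruction) happening downstream. The snippet below is a hedged sketch under that assumption, not project code; the device indices and synchronization approach are illustrative.

```python
import cv2

def capture_frame_set(camera_ids=(0, 1, 2)):
    """Grab one roughly synchronized frame from each commodity RGB camera."""
    caps = [cv2.VideoCapture(i) for i in camera_ids]
    try:
        for cap in caps:      # grab() on all devices first to minimize time skew
            cap.grab()
        frames = []
        for cap in caps:      # then decode the latched frames
            ok, frame = cap.retrieve()
            if not ok:
                raise RuntimeError("camera read failed")
            frames.append(frame)
        return frames
    finally:
        for cap in caps:
            cap.release()
```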
What stage of development are these tools at now?
The project closed in December 2022 after delivering all 12 planned deliverables including 3 functional demonstrations. The motion adaptation and style transfer tool went through two iterations (first and final demonstrators), showing iterative refinement. As an Innovation Action, the work targeted technology readiness levels 5-7.
Can we see a working demo?
Based on available project data, 3 demonstrator deliverables were produced: the volumetric spatio-temporal avatar manipulators and two versions of the motion adaptation and style transfer system. The project website at invictusproject.eu may have demo videos or contact information for arranging demonstrations.
Who built it
The INVICTUS consortium brings together 6 partners from 3 countries (France, Germany, Ireland) with a healthy 50% industry ratio — 3 industry players alongside 1 university and 2 research organizations. This balance signals that the tools were built with real production needs in mind, not just academic curiosity. The consortium includes 1 SME, and the coordinator is Université de Rennes in France, which anchors the research side. For a business buyer, the presence of 3 industry partners means someone in the consortium already understands your production constraints and has shaped these tools accordingly.
- UNIVERSITE DE RENNES · Coordinator · FR
- INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE · Third party · FR
- INTERDIGITAL R&D FRANCE · Participant · FR
Université de Rennes (France): reach out through the project website or the university's research department for licensing and collaboration inquiries.
Talk to the team behind this work.
Want to connect with the INVICTUS team to explore licensing or integration? SciTransfer can arrange an introduction and help you evaluate fit for your production pipeline.