If you are an automotive OEM or Tier-1 supplier developing Advanced Driver Assistance Systems (ADAS), this project developed cloud-based semi-automated video annotation tools that sharply reduce the cost and error rate of labeling petabyte-scale driving datasets. With each mid-range car expected to carry 10 cameras generating 10 TB of video per day, manual annotation is no longer viable. These tools let you train better computer vision models faster.
Cloud-Based Video Annotation Tools That Cut Costs for Self-Driving Car Development
Imagine every car on the road has 10 cameras filming everything — lanes, pedestrians, signs, other cars. Now someone has to watch all that footage and label every single object so the car's computer can learn to drive. Right now, humans do this labeling by hand, which is painfully slow and expensive. This project built cloud-based software that does most of that labeling automatically, turning months of manual work into something manageable at massive scale.
What needed solving
Car manufacturers and ADAS developers face an impossible bottleneck: every connected car with 10 cameras generates 10TB of video per day, and all of it needs to be labeled — objects, events, road scenes — before machine learning algorithms can learn from it. Human annotation at this scale is prohibitively expensive, error-prone, and simply cannot keep up with the data volume that modern connected vehicles produce.
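To put that scale in perspective, here is a back-of-the-envelope calculation. The 10 TB per car per day figure comes from the project; the fleet size and retention period are illustrative assumptions, not project data:

```python
# Back-of-the-envelope data volume estimate.
# Project figure: one mid-range car with 10 cameras ~= 10 TB of video per day.
TB_PER_CAR_PER_DAY = 10

# Illustrative assumptions (not from the project):
fleet_size = 1_000        # test vehicles in a data-collection fleet
retention_days = 30       # how long raw footage is kept before pruning

daily_tb = fleet_size * TB_PER_CAR_PER_DAY
total_pb = daily_tb * retention_days / 1000  # 1 PB = 1000 TB

print(f"Fleet generates {daily_tb:,} TB/day")
print(f"{retention_days}-day retention requires {total_pb:,.0f} PB of storage")
```

Even a modest 1,000-vehicle fleet lands in the hundreds of petabytes within a month, which is why the project targeted cloud elasticity rather than on-premise annotation farms.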
What was built
The project built cloud-based semi-automated video annotation tools for petabyte-scale automotive datasets. Key outputs include import/export interfaces with an annotation data model and storage system (delivered as a working demonstrator), plus 16 total deliverables covering the full pipeline from on-board lightweight analysis to cloud-based crowdsourcing annotation platforms.
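The deliverables do not publish the annotation schema itself, but a semi-automated video annotation record typically captures the frame range, object class, per-frame geometry, and whether a label was machine-proposed or human-verified. A minimal hypothetical sketch of such a record and its JSON export (all field names are assumptions, not the Cloud-LSVA data model):

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class BoundingBox:
    # Pixel coordinates of an axis-aligned box in one video frame.
    x: int
    y: int
    width: int
    height: int

@dataclass
class Annotation:
    # One labeled object track segment in a video clip (hypothetical schema).
    video_id: str
    start_frame: int
    end_frame: int
    label: str                 # e.g. "pedestrian", "traffic_sign"
    boxes: List[BoundingBox]   # one box per frame in the range
    source: str = "auto"       # "auto" (model-proposed) or "human" (verified)
    confidence: float = 1.0

# Serialize an annotation for a JSON-based import/export interface.
ann = Annotation(
    video_id="clip_0001",
    start_frame=120,
    end_frame=122,
    label="pedestrian",
    boxes=[BoundingBox(640, 360, 40, 90),
           BoundingBox(642, 361, 40, 90),
           BoundingBox(645, 362, 41, 91)],
)
print(json.dumps(asdict(ann), indent=2))
```

A structure along these lines is what "semi-automated" implies in practice: a model proposes `source="auto"` labels with a confidence score, and human reviewers only confirm or correct them, rather than drawing every box by hand.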
Who can put this to work
If you are a software company building autonomous driving or navigation algorithms and struggling to create large enough labeled datasets — this project built scalable annotation pipelines running on cloud infrastructure. Instead of hiring armies of manual labelers for petabyte-scale video, the semi-automated tools handle the bulk of object, event, and scene labeling across road traffic data.
If you are a mapping or geospatial company processing vehicle camera footage to build HD road maps — this project created tools for large-scale video analysis and annotation that can run both in the cloud and with lightweight on-board processing. The 14-partner consortium included cartography expertise, and the platform supports crowdsourced data collection from connected vehicles back to a central cloud.
Quick answers
What would it cost to adopt this video annotation platform?
The project did not publish pricing or licensing fee data. Since this was a Research and Innovation Action (not a commercial product launch), costs would depend on negotiation with the consortium partners. The coordinator Vicomtech (Spain) would be the starting point for licensing discussions.
Can this handle industrial-scale data volumes?
Yes — the platform was specifically designed for petabyte-scale video datasets. The project objective states that by 2020 a single mid-range car with 10 cameras would generate 10TB per day, and the system leverages cloud computing elasticity to handle that volume. This is industrial scale by design.
What about intellectual property and licensing?
The project was funded as a Research and Innovation Action (RIA) under Horizon 2020 with 14 partners across 6 countries. IP ownership typically follows EU grant rules where each partner owns their contributions. Licensing terms would need to be negotiated with the relevant consortium members.
Does this work with our existing camera hardware?
The project focused on software tools for video annotation and analysis, not specific camera hardware. Based on available project data, the platform includes import/export interfaces (documented in a dedicated demonstrator deliverable), suggesting it was designed to work with standard video formats from the CMOS image sensors used in vehicles.
How mature is this technology — is it ready for production use?
The project ran from 2016 to 2018 and produced 16 deliverables including a demonstrator for import/export interfaces and annotation data storage. This indicates a working prototype was built and tested, though there is no evidence of full commercial deployment in the available data.
Is there regulatory alignment for automotive use?
Based on available project data, the tools support the development of ADAS systems, a field in which the European automotive industry is described as the world leader. The annotation capabilities help build the training datasets required for validating ADAS algorithms, which is increasingly relevant as automotive safety regulations tighten around autonomous features.
Who built it
The Cloud-LSVA consortium is well-balanced for commercial potential: 14 partners across 6 European countries (Belgium, Germany, Spain, France, Ireland, Netherlands), with 7 industry partners making up 50% of the group. This strong industry presence — alongside 3 universities and 2 research organizations — signals that the technology was developed with real-world deployment in mind, not just academic publication. The coordinator is Vicomtech, a Spanish technology center specializing in visual interaction and communications. The consortium's geographic spread covers major European automotive markets (Germany, France, Spain), which is strategically important given that the European automotive industry leads globally in ADAS.
- FUNDACION CENTRO DE TECNOLOGIAS DE INTERACCION VISUAL Y COMUNICACIONES VICOMTECH · Coordinator · ES
- VALEO SCHALTER UND SENSOREN GMBH · Participant · DE
- DUBLIN CITY UNIVERSITY · Participant · IE
- COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES · Participant · FR
- TOMTOM GLOBAL CONTENT BV · Participant · NL
- IBM IRELAND LIMITED · Participant · IE
- INTEL DEUTSCHLAND GMBH · Participant · DE
- UNIVERSITY OF LIMERICK · Participant · IE
- TOMTOM INTERNATIONAL BV · Participant · NL
- TASS INTERNATIONAL · Participant · NL
- INTEL CORPORATION · Participant · BE
- INTEMPORA · Participant · FR
- EUROPEAN ROAD TRANSPORT TELEMATICS IMPLEMENTATION COORDINATION ORGANISATION - INTELLIGENT TRANSPORT SYSTEMS & SERVICES EUROPE · Participant · BE
- TECHNISCHE UNIVERSITEIT EINDHOVEN · Participant · NL
Talk to the team behind this work.
Want an introduction to the Cloud-LSVA team to discuss licensing their annotation platform for your ADAS or mapping pipeline? Contact SciTransfer — we connect businesses with EU research teams.