If you are a city authority dealing with slow incident detection across dozens of camera feeds that overwhelm your operators — this project built a real-time audio-visual analytics platform with automated decision-making tools that flag events such as crowd surges or traffic disruptions instantly, while built-in voice anonymization and privacy-preservation mechanisms keep citizen privacy intact.
AI-Powered City Surveillance Analytics That Protects Privacy While Improving Urban Services
Imagine a city covered in cameras and microphones — at traffic lights, in parks, on public transport. Right now, most of that audio and video data just sits there or gets watched manually. MARVEL built a system that analyzes all of it automatically, in real time, to detect things like traffic jams, noise pollution, or crowd incidents — and here's the twist: it anonymizes voices and faces as it goes, so nobody's privacy is violated. Think of it as giving a city a brain that can see and hear everything happening on its streets, but one that has been trained to forget who people are.
What needed solving
Cities generate enormous amounts of audio and video data from public cameras and sensors, but most of it goes unanalyzed because processing it all in real time is technically difficult and privacy regulations make centralizing it risky. Municipal operators are stuck choosing between manual monitoring (expensive, slow, error-prone) and automated systems that raise serious privacy and GDPR concerns. The result: slow incident response, poor situational awareness, and missed opportunities to improve urban services.
What was built
MARVEL built a complete Edge-to-Fog-to-Cloud platform for real-time audio-visual city analytics, including: a Minimum Viable Product, an integrated computing stack with security and privacy mechanisms, multimodal AI models for scene recognition and event detection with built-in voice anonymization, a federated learning system that keeps data processing distributed, a decision-making toolkit, and a data management and distribution layer — all tested through demonstrator deployments.
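To make the edge-first design concrete, here is a minimal sketch of the idea: a hypothetical edge node scores each audio-visual chunk locally and forwards only an anonymized event record upstream — raw media never leaves the sensor. The names (`EventRecord`, `detect_event`, `score_chunk`) and the toy scoring rule are illustrative assumptions, not MARVEL's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical event record: only derived, non-identifying metadata
# travels from edge to fog/cloud; raw audio and video stay on-device.
@dataclass
class EventRecord:
    sensor_id: str
    event_type: str        # e.g. "crowd_surge", "traffic_jam"
    confidence: float
    timestamp: str

def detect_event(av_chunk: bytes, sensor_id: str,
                 threshold: float = 0.8) -> Optional[EventRecord]:
    """Stand-in for an on-edge multimodal model: scores a chunk and
    emits an anonymized event record only when confidence is high."""
    confidence = score_chunk(av_chunk)          # placeholder model call
    if confidence < threshold:
        return None                             # nothing forwarded upstream
    return EventRecord(
        sensor_id=sensor_id,
        event_type="crowd_surge",
        confidence=round(confidence, 2),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

def score_chunk(av_chunk: bytes) -> float:
    # Toy stand-in: a real deployment would run an AV fusion model here.
    return min(1.0, len(av_chunk) / 1000.0)
```

The design point this illustrates is bandwidth and privacy at once: a city-scale deployment ships kilobytes of event metadata instead of gigabytes of identifiable footage.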
Who can put this to work
If you are a smart infrastructure company struggling to process massive volumes of sensor data from distributed city installations — this project developed an Edge-to-Fog-to-Cloud computing stack with federated learning that processes audio-visual data right where it's captured, cutting bandwidth costs and enabling fast time-to-insights without centralizing all raw data.
If you are a transport operator dealing with congestion monitoring and incident detection across a sprawling road or transit network — this project developed multimodal AI models that fuse audio and video streams to recognize traffic events in real time, developed and validated by a consortium of 18 partners across 12 countries, with a proven Minimum Viable Product and demonstrators executed in initial and final phases.
Quick answers
What would it cost to deploy this system in our city or facility?
The project data does not include pricing or cost-per-deployment figures. Since this was an EU Research and Innovation Action with 18 consortium partners, licensing and deployment costs would need to be negotiated directly with the technology providers in the consortium. The edge-fog-cloud architecture is designed to work with existing camera and sensor infrastructure, which could reduce hardware costs.
Can this scale to a full city deployment, not just a pilot zone?
The system was specifically designed for extreme-scale data analytics across smart city environments, processing high-volume, high-velocity audio-visual streams. The Edge-to-Fog-to-Cloud architecture distributes processing load so it scales horizontally. Demonstrators were executed in both initial and final versions across the consortium's 12-country network.
Who owns the IP and how can we license the technology?
As an EU-funded RIA project coordinated by IDRYMA TECHNOLOGIAS KAI EREVNAS (FORTH, Greece) with 18 partners, IP ownership is shared across the consortium according to EU grant agreement rules. Licensing would need to be arranged with the specific partners who developed the components you need — the 10 industry partners and 5 SMEs in the consortium are the most likely licensing contacts.
How does this handle GDPR and privacy regulations?
Privacy was a core design requirement, not an afterthought. The project delivered dedicated E2F2C Privacy Preservation Mechanisms, voice anonymization technology, and privacy-aware audio-visual intelligence modules. The federated learning approach keeps data processing local rather than centralizing personal data, which aligns with GDPR data minimization principles.
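The federated learning principle behind that data minimization can be sketched in a few lines. This is a generic federated-averaging (FedAvg-style) toy, not MARVEL's actual implementation: each site trains on its own private data and shares only model weights, and the fog/cloud layer averages those weights — raw audio-visual data never leaves the site. The function names and the toy "training" step are assumptions for illustration.

```python
def local_update(weights, site_data, lr=0.1):
    """Toy local training step: nudge each weight toward the mean of
    the site's private data. Only the resulting weights are shared."""
    target = sum(site_data) / len(site_data)
    return [w + lr * (target - w) for w in weights]

def federated_average(weight_sets):
    """Fog/cloud aggregation: element-wise mean of the sites' weights."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

# Two sites start from the same global model; only weights travel upstream.
global_weights = [0.0, 0.0]
site_updates = [
    local_update(global_weights, [1.0, 1.0]),   # site A's private data
    local_update(global_weights, [3.0, 3.0]),   # site B's private data
]
global_weights = federated_average(site_updates)
```

The aggregated model improves from every site's data, yet no site's recordings are ever centralized — which is what aligns the approach with GDPR data minimization.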
How long would integration take with our existing camera and sensor network?
The project produced a complete integrated platform (MARVEL Integrated Framework, final version) plus a Management and Distribution Toolkit for deployment. Based on available project data, the system was designed to work with standard audio-visual capture infrastructure. Integration timelines would depend on your existing setup and which modules you need.
What's the current maturity — is this ready for production?
The project delivered a Minimum Viable Product and ran demonstrators through both initial and final execution phases. AI/ML models were deployed and optimized in both initial and final versions. This puts the technology past prototype stage and into tested-and-demonstrated territory, though production hardening for a specific city deployment would still be needed.
Who built it
MARVEL assembled a commercially oriented consortium of 18 partners spanning 12 European countries, with a 56% industry ratio — well above the typical EU research project. The mix of 10 industry partners (including 5 SMEs), 3 universities, and 4 research organizations means this wasn't just an academic exercise. The strong industry presence, combined with the Greek research institute FORTH coordinating, suggests mature technology transfer pathways. For a business looking to adopt this technology, the 10 industry partners are the most likely route to commercial licensing or deployment partnerships, while the geographic spread across 12 countries means the system was tested under diverse regulatory and infrastructure conditions.
- IDRYMA TECHNOLOGIAS KAI EREVNAS — coordinator · EL
- AUDEERING GMBH — participant · DE
- AARHUS UNIVERSITET — participant · DK
- UNIVERZITET U NOVOM SADU FAKULTET TEHNICKIH NAUKA — participant · RS
- COMUNE DI TRENTO — participant · IT
- INFORMATION TECHNOLOGY FOR MARKET LEADERSHIP — participant · EL
- PRIVANOVA SAS — participant · FR
- INFINEON TECHNOLOGIES AG — participant · DE
- INSTYTUT CHEMII BIOORGANICZNEJ POLSKIEJ AKADEMII NAUK — participant · PL
- ATOS SPAIN SA — participant · ES
- TAMPEREEN KORKEAKOULUSAATIO SR — participant · FI
- CONSIGLIO NAZIONALE DELLE RICERCHE — participant · IT
- FONDAZIONE BRUNO KESSLER — participant · IT
- SPHYNX TECHNOLOGY SOLUTIONS AG — participant · CH
- ATOS IT SOLUTIONS AND SERVICES IBERIA SL — third party · ES
- NETCOMPANY S.A. — participant · LU
- ZELUS IKE — participant · EL
The coordinator is FORTH (Foundation for Research and Technology - Hellas) in Greece. SciTransfer can facilitate an introduction to the right technical contact within the consortium.
Talk to the team behind this work.
Want to explore how MARVEL's privacy-aware city analytics could work for your municipality or infrastructure? SciTransfer can connect you directly with the right consortium partner for your use case — contact us for a tailored one-page brief.