If you are a border security agency dealing with unpredictable migration surges fueled by online misinformation, this project developed an integrated analysis platform, matured across 3 prototype releases, that scans social media, news, and multimedia to detect perception gaps and disinformation campaigns targeting potential migrants. The tools were validated with border agencies across a 14-partner consortium spanning 7 countries.
AI Platform That Detects Misinformation and Media Manipulation Across Multiple Channels
Imagine someone abroad sees a flashy social media post claiming life in Europe is completely different from what it actually is — maybe much better, maybe much worse. Those misconceptions can be weaponized to trigger security threats or manipulate migration flows. MIRROR built a set of AI-powered tools that scan social media, news, and multimedia to spot where the image of Europe diverges sharply from reality. Think of it as a fact-checking radar that works across languages and media types, helping security agencies and policymakers see manipulation campaigns before they spiral out of control.
What needed solving
Government security agencies and border authorities face growing waves of online misinformation that distort how potential migrants perceive Europe, sometimes deliberately weaponized as hybrid threats. Detecting these manipulation campaigns across social media, news, and multimedia in multiple languages is beyond what standard monitoring tools can do. Without specialized cross-media analysis, agencies are left reacting to crises instead of spotting disinformation patterns early.
What was built
An integrated analysis platform that progressed through 3 prototype releases — from initial component integration to a final release with all analysis tools connected. The platform combines automated text analysis, multimedia analysis, and social network analysis tools, backed by a systematic methodology for detecting perception-reality discrepancies across media types. The project produced 39 deliverables in total.
Who can put this to work
If you are a media monitoring company struggling to track cross-platform disinformation campaigns for your clients — this project developed automated text, multimedia, and social network analysis tools that detect discrepancies between online narratives and ground truth. The platform integrates analysis across multiple media types including social media, giving you a single view of how narratives evolve and where manipulation occurs.
If you are a security consultancy helping governments or corporations assess hybrid threats — this project built a methodology and toolset for systematic intermedia analysis that identifies coordinated disinformation and its potential security impact. Developed with input from 3 industry partners and validated through pilot exercises, the tools can detect patterns across social networks, news outlets, and multimedia content.
Quick answers
What would it cost to license or deploy this platform?
The project was a publicly funded Research and Innovation Action, so the platform itself was developed with EU funding. Licensing terms would need to be negotiated directly with the consortium coordinator (Leibniz Universität Hannover) and relevant partners. Based on available project data, no commercial pricing has been published.
Can this scale to monitor millions of social media posts in real time?
The platform progressed through 3 prototype releases, with the final release integrating all analysis tools. The objective describes automated text, multimedia, and social network analysis capabilities. However, specific throughput benchmarks or real-time processing volumes are not detailed in the available project data.
Who owns the intellectual property and how is it licensed?
IP is distributed across the 14-partner consortium spanning 7 countries, governed by the EU grant agreement. With 3 SMEs and 3 industry partners in the consortium, some components may be available for commercial licensing. Specific IP arrangements would need to be clarified with the coordinator.
Has this been tested with real security agencies?
Yes. The project objective explicitly states that solutions were validated with border agencies and policymakers via pilots. The consortium includes civil society organizations and border agencies as end users, ensuring the tools were tested against real operational needs.
What languages and media types does the platform cover?
The platform combines automated text analysis, multimedia analysis, and social network analysis across various media types including social media. The consortium spans 7 countries (AT, DE, EL, IT, MT, NL, SE), suggesting multi-language capability. Specific language coverage details would need to be confirmed with the project team.
How does this differ from existing social media monitoring tools?
Unlike commercial monitoring tools that track brand mentions or sentiment, MIRROR specifically detects discrepancies between how Europe is perceived abroad and the actual reality — a perception-reality gap analysis. It combines threat analysis methodology with cross-media technical analysis, which standard monitoring tools do not offer.
What is the timeline to deploy this in a new organization?
The project ran from 2019 to 2022 and produced a final prototype release with all integrated components. Based on available project data, deployment timelines for new organizations have not been specified. Integration would likely require customization discussions with the development partners.
Who built it
The 14-partner consortium across 7 countries (Austria, Germany, Greece, Italy, Malta, Netherlands, Sweden) is well-suited for a cross-border security challenge. With 4 universities and 2 research organizations providing technical depth in AI and media analysis, plus 3 industry partners and 3 SMEs (21% industry ratio) bringing commercial perspective, the mix balances research rigor with practical deployment needs. The inclusion of border agencies and civil society organizations as consortium members — not just advisors — means the tools were built alongside actual end users. The coordinator, Leibniz Universität Hannover, is a major German research university with strong credentials in AI and information systems.
- GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER · Coordinator · DE
- BUNDESMINISTERIUM FUR LANDESVERTEIDIGUNG · Participant · AT
- TOTALFORSVARETS FORSKNINGSINSTITUT · Participant · SE
- HENSOLDT ANALYTICS GMBH · Participant · AT
- ETHNIKO KENTRO EREVNAS KAI TECHNOLOGIKIS ANAPTYXIS · Participant · EL
- UNIVERSITAT WIEN · Participant · AT
- UNIVERSITA TA MALTA · Participant · MT
- RIJKSUNIVERSITEIT GRONINGEN · Participant · NL
- CONOSCENZA E INNOVAZIONE SOCIETA A RESPONSABILITA LIMITATA SEMPLIFICATA · Participant · IT
- FONDAZIONE AGENFOR INTERNATIONAL-IMPRESA SOCIALE · Participant · IT
- Malta Police Force · Participant · MT
- POLISMYNDIGHETEN SWEDISH POLICE AUTHORITY · Participant · SE
Leibniz Universität Hannover (Germany): reach out via their research partnerships office or the project website contact page.
Talk to the team behind this work.
Want to explore how MIRROR's misinformation detection tools could strengthen your security operations? SciTransfer can connect you directly with the right consortium partner for your needs.