SciTransfer
FANDANGO · Project

AI-Powered Fake News Detection Platform for Media Companies and Public Institutions

Digital · Tested · TRL 6

Imagine you're reading the news and can't tell what's real anymore — about climate change, immigration, or politics. FANDANGO built an AI-powered platform that cross-checks news stories against trusted data sources, analyzes images and videos for manipulation, and scores how credible a source really is. Think of it like a fact-checking assistant that works across multiple languages and media types at once. It was tested on three real-world topics where misinformation hits hardest: climate, immigration, and European political affairs.

By the numbers
EUR 2,879,250
EU funding for platform development
8
consortium partners across the project
5
countries represented in the consortium
3
real-world validation domains (Climate, Immigration, European Context)
27
total project deliverables produced
7
demo deliverables with working prototypes
3
SMEs in the consortium
The business problem

What needed solving

Misinformation is costing media companies credibility, forcing governments into reactive crisis communications, and exposing tech platforms to regulatory penalties. There is no unified way to cross-check news content across text, images, video, and social media sources simultaneously — especially across multiple languages. Organizations need automated, scalable tools that can flag false or manipulated content before it spreads.

The solution

What was built

FANDANGO built an integrated big data platform with 7 working prototypes: copy-move detection for audio-visual content, spatio-temporal analytics for out-of-context markers, multilingual text analytics for misleading messages, source credibility scoring with social graph analytics, machine-learnable fake news scoring, lightweight data shipping components, and pre-processing tools. All were validated across 3 real-world domains.
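Copy-move detection, the first prototype listed above, generally works by finding regions of an image that are duplicates of one another — a common signature of splice edits. The code below is not from the FANDANGO platform; it is a minimal sketch of the idea using fixed, non-overlapping 8-pixel blocks (real detectors use overlapping blocks, robust features, and near-match tolerance):

```python
import numpy as np

def copy_move_candidates(img: np.ndarray, block: int = 8):
    """Flag pairs of byte-identical, non-overlapping blocks --
    a crude copy-move signal. `block` is the patch size in pixels."""
    h, w = img.shape
    seen = {}      # block bytes -> first (row, col) where it appeared
    matches = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            key = img[r:r + block, c:c + block].tobytes()
            if key in seen:
                matches.append((seen[key], (r, c)))
            else:
                seen[key] = (r, c)
    return matches

# Tiny demo: paste a patch from one corner of a random image into another.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
img[16:24, 16:24] = img[0:8, 0:8]      # simulate a copy-move edit
print(copy_move_candidates(img))       # -> [((0, 0), (16, 16))]
```

A production detector would also survive recompression and scaling; this grid-aligned exact match is only the simplest possible instance of the technique.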

Audience

Who needs this

News agencies and broadcasters needing automated fact-checking at scale
Government communications departments combating misinformation campaigns
Social media platforms building trust and safety / content moderation systems
Corporate communications teams monitoring brand-related disinformation
Election monitoring organizations and democracy-focused NGOs

Business applications

Who can put this to work

Media and Publishing
enterprise
Target: News agencies, broadcasters, and digital publishers

If you are a media company dealing with the flood of unverified content reaching your newsroom — this project developed multilingual text analytics and copy-move detection prototypes that can flag manipulated audio-visual content and misleading text before you publish. The platform was validated across 3 real-world domains and built by a consortium of 8 partners including 4 industry players.

Government and Public Administration
enterprise
Target: National or EU-level government communications departments

If you are a government institution dealing with misinformation campaigns that distort public perception on sensitive topics like immigration or climate policy — this project built source credibility scoring and social graph analytics prototypes that trace how false narratives spread. The tools can help your team monitor information flows and respond with verified data from trusted sources.
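Source credibility scoring over a social graph can be sketched as a PageRank-style iteration: sources that are endorsed (cited, retweeted, linked) by many other sources accumulate score. The function and the toy endorsement graph below are illustrative assumptions, not the project's actual model:

```python
def credibility_scores(links, damping=0.85, iters=50):
    """PageRank-style scores over an endorsement graph.
    `links` maps each source to the sources it endorses."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * score[src] / len(targets)
                for t in targets:
                    new[t] += share
        # sources with no outgoing endorsements spread their mass evenly
        dangling = damping * sum(score[v] for v in nodes if not links.get(v))
        for v in nodes:
            new[v] += dangling / n
        score = new
    return score

# Hypothetical graph: outlets A and B endorse wire service W; fringe site F endorses A.
endorsements = {"A": ["W"], "B": ["W"], "W": [], "F": ["A"]}
scores = credibility_scores(endorsements)
print(max(scores, key=scores.get))  # W accumulates the most endorsement mass
```

Tracing how a false narrative spreads is the reverse view of the same graph: following outgoing edges from a low-scoring source shows which accounts amplify it.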

Social Media and Tech Platforms
enterprise
Target: Content moderation and trust & safety teams at social platforms

If you are a tech platform struggling with automated misinformation at scale — this project developed machine-learnable scoring for fake news decision-making and spatio-temporal analytics that detect out-of-context content markers. The big data platform integrates multiple data sources and was designed to break interoperability barriers between different content types.
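A "machine-learnable" fake news score typically fuses the per-modality signals (text analytics, image forensics, source credibility) into one number via a trained classifier. The sketch below uses a logistic model with hand-set weights purely for illustration; the platform's real features and training data are not public in this brief:

```python
import math

def fakeness_score(signals, weights, bias=0.0):
    """Combine per-check signals (each in [0, 1]) into one
    probability-like score via a logistic model."""
    z = bias + sum(weights[k] * signals[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

WEIGHTS = {                      # hypothetical, hand-set for the sketch;
    "text_misleading": 2.0,      # in practice these would be learned
    "image_manipulated": 3.0,
    "source_distrust": 1.5,
}

suspect = {"text_misleading": 0.9, "image_manipulated": 0.8, "source_distrust": 0.7}
clean   = {"text_misleading": 0.1, "image_manipulated": 0.0, "source_distrust": 0.2}
print(fakeness_score(suspect, WEIGHTS, bias=-3.0))  # well above 0.5
print(fakeness_score(clean, WEIGHTS, bias=-3.0))    # well below 0.5
```

The design point is that each detector stays independently auditable, while the fusion layer gives moderation teams a single threshold to act on.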

Frequently asked

Quick answers

What would it cost to license or adopt this technology?

The project received EUR 2,879,250 in EU funding and was coordinated by Engineering Ingegneria Informatica SPA, a major Italian IT company. Licensing terms would need to be negotiated directly with the consortium. Based on available project data, no public pricing model has been disclosed.

Can this scale to handle enterprise-level content volumes?

The platform was designed as a big data solution with lightweight data shipping components and pre-processing tools specifically built to handle large volumes of news data, social media, and open data sources. It was tested across 3 distinct domains (Climate, Immigration, European Context), suggesting it can process diverse content types at scale.

What is the IP situation — can we use this commercially?

The project was funded as an Innovation Action (IA) under Horizon 2020, where consortium members typically retain IP rights to their contributions. With 8 partners across 5 countries and 3 SMEs involved, licensing would likely require agreements with specific technology owners. Contact the coordinator for IP terms.

Does this work for languages other than English?

Yes. The project explicitly developed multilingual text analytics for misleading message detection. The consortium spans 5 countries (Belgium, Greece, Spain, Ireland, Italy), indicating multi-language capability was a core requirement, not an afterthought.

How mature is this technology — is it ready for deployment?

The project produced 7 demo deliverables including working prototypes for copy-move detection, multilingual text analytics, source credibility scoring, and machine-learnable fake news scoring. These were validated in 3 real-world scenarios. The technology is at prototype-to-pilot stage, not yet a turnkey product.

Does it handle video and image manipulation, not just text?

Yes. Dedicated prototypes were built for copy-move detection on audio-visual content and spatio-temporal analytics that identify out-of-context fakeness markers in visual media. These complement the text-based detection tools.

What regulatory requirements does this help with?

With the EU's Digital Services Act and AI Act imposing obligations on platforms to combat disinformation, this technology supports compliance efforts. The project was built under a Responsible Research and Innovation approach, aligning with European transparency standards.

Consortium

Who built it

The FANDANGO consortium of 8 partners across 5 European countries (Belgium, Greece, Spain, Ireland, Italy) is led by Engineering Ingegneria Informatica SPA, one of Italy's largest IT companies — a strong signal of commercial intent. With 4 industry partners (50% industry ratio) and 3 SMEs, the consortium leans heavily toward market application rather than pure research. The mix of 1 university and 1 research organization provides the scientific backbone, while the industry majority suggests the technology was built with real-world deployment in mind. For a business buyer, the coordinator's size and reputation reduce adoption risk compared to a university-led project.

How to reach the team

Engineering Ingegneria Informatica SPA (Italy) — major IT services company, reachable through corporate channels

Next steps

Talk to the team behind this work.

Want a tailored briefing on how FANDANGO's fake news detection tools could fit your media verification or content moderation workflow? Contact SciTransfer for a one-page solution brief and introduction to the right consortium partner.