SciTransfer
WeVerify · Project

AI-Powered Fake News Detection Platform for Media and Corporate Brand Protection

digital · Piloted · TRL 7

Imagine someone sends you a shocking video or news article — how do you know it's real? WeVerify built a toolkit that acts like a digital forensics lab for online content. It checks images, videos, and social media posts using AI to spot fakes, tracks how disinformation spreads across networks, and keeps a shared blockchain ledger of confirmed fakes so nobody falls for the same trick twice. Think of it as a "reverse image search on steroids" combined with a community of journalists who cross-check each other's findings.

By the numbers
2,700+
existing users of the predecessor InVID verification plugin
9
consortium partners across the project
6
countries represented in consortium
EUR 2,499,450
EU contribution to the project
3
minimum partner nodes in blockchain verification network
20
total project deliverables produced
The business problem

What needed solving

Online disinformation and AI-generated fake content (deepfakes) are causing real financial and reputational damage to businesses, media organizations, and governments. Even experienced journalists struggle to verify whether images, videos, and social media posts are authentic. Companies need automated, scalable tools to detect fakes before they spread — but building this in-house requires AI expertise most organizations don't have.

The solution

What was built

The project delivered a full verification platform (reaching v3.0) with cross-modal content verification tools, a blockchain-based public database of confirmed fakes, disinformation flow analysis with an early warning system, and both open source and premium commercial versions designed for newsroom integration.

Audience

Who needs this

News agencies and digital newsrooms needing fast content verification
Corporate brand protection teams fighting disinformation campaigns
Social media platforms required to moderate synthetic content
Election monitoring and democracy protection organizations
Insurance and financial firms investigating fraudulent visual evidence
Business applications

Who can put this to work

Media & Broadcasting
Company size: any
Target: News agencies, TV broadcasters, and digital newsrooms

If you are a news organization dealing with the growing flood of manipulated images, deepfake videos, and fabricated social media posts — this project developed a cross-modal verification platform tested by journalists at Deutsche Welle and AFP. It integrates into existing content management systems and builds on a community of more than 2,700 users of its predecessor, the InVID verification plugin.

Corporate Communications & Brand Management
Company size: mid-size
Target: PR firms, brand protection agencies, and corporate communications departments

If you are a brand protection team dealing with disinformation campaigns that damage your company's reputation — this project built social network analysis tools and a disinformation flow analysis system that tracks how false narratives spread online. The early warning system can detect fabricated content before it goes viral, giving your team time to respond.
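To illustrate the kind of signal an early warning system relies on, here is a minimal sketch under stated assumptions: it simply watches how fast a tracked narrative is being shared and flags it when the rate crosses a threshold. The class name, threshold value, and window size are hypothetical and not the project's actual flow-analysis models.

```python
# Hypothetical early-warning heuristic on content spread (illustration only,
# not the WeVerify implementation): alert when the share rate of a tracked
# narrative exceeds a threshold within a sliding time window.
from collections import deque
from datetime import datetime, timedelta

ALERT_THRESHOLD = 500          # shares per hour; illustrative value only
WINDOW = timedelta(hours=1)

class SpreadMonitor:
    def __init__(self) -> None:
        self.events: deque[datetime] = deque()  # timestamps of observed shares

    def record_share(self, ts: datetime) -> None:
        self.events.append(ts)
        # Drop events that have fallen out of the sliding window
        while self.events and ts - self.events[0] > WINDOW:
            self.events.popleft()

    def should_alert(self) -> bool:
        # Fire an alert when the windowed share count crosses the threshold
        return len(self.events) >= ALERT_THRESHOLD

# Usage sketch (hypothetical):
#   monitor = SpreadMonitor()
#   monitor.record_share(datetime.utcnow())
#   if monitor.should_alert():
#       ...  # notify the brand protection team
```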

Government & Public Sector
Company size: enterprise
Target: Election monitoring bodies, intelligence agencies, and public safety organizations

If you are a government agency tasked with protecting democratic processes from online manipulation — this project delivered a blockchain-based public database of known fakes verified by at least 3 independent nodes. The platform supports human rights organizations and emergency response teams with micro-targeted debunking tools validated across 6 countries.

Frequently asked

Quick answers

What does the platform cost and is there a free version?

The WeVerify platform core is open source, designed to engage citizen journalists and communities at no cost. A premium version is also offered for newsrooms needing advanced features and integration with in-house content management systems. Specific pricing for the premium tier is not disclosed in the project data.

Can this handle the volume of content a large news organization processes daily?

The platform went through multiple development cycles (v2.0 and v3.0) with dedicated scalability, throughput, and robustness evaluations documented in each release. It was validated by professional journalists at major organizations including Deutsche Welle and AFP, as well as a community of more than 2,700 users of the predecessor InVID plugin.

Who owns the intellectual property and can we license the technology?

The core platform and algorithms are open source, meaning they can be freely used and adapted. The premium version with advanced newsroom integrations was developed by the 9-partner consortium led by Ontotext AD (Bulgaria). Licensing terms for premium features would need to be discussed with the consortium.

How does the blockchain database of known fakes actually work?

The system uses a distributed ledger where at least 3 partner nodes independently verify and record confirmed fakes. Hashing functions for text, images, and media are combined with the identity of the verification agent — such as a journalist or fact-checker — creating a tamper-proof record that any platform participant can check against.
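As an illustration of the mechanics described above, here is a minimal sketch assuming SHA-256 hashing and a simple endorsement counter; the class and function names are hypothetical and do not represent the WeVerify API.

```python
# Sketch of a "known fakes" ledger entry: a hash of the flagged content is
# combined with the verifying agent's identity, and the entry counts as
# confirmed only once at least 3 partner nodes have endorsed it.
import hashlib
import json
from dataclasses import dataclass, field

MIN_NODES = 3  # minimum independent partner nodes, per the project description

def hash_media(data: bytes) -> str:
    """Placeholder for the platform's media hashing; plain SHA-256 here."""
    return hashlib.sha256(data).hexdigest()

@dataclass
class FakeRecord:
    content_hash: str                 # hash of the text/image/video being flagged
    verifier_id: str                  # identity of the journalist or fact-checker
    node_signatures: set = field(default_factory=set)

    def record_id(self) -> str:
        # Combine content hash and verifier identity into a tamper-evident ID
        payload = json.dumps({"content": self.content_hash,
                              "verifier": self.verifier_id}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def endorse(self, node_id: str) -> None:
        self.node_signatures.add(node_id)

    def is_confirmed(self) -> bool:
        # Only entries verified by at least MIN_NODES partner nodes count
        return len(self.node_signatures) >= MIN_NODES

if __name__ == "__main__":
    record = FakeRecord(hash_media(b"suspicious-video-bytes"), "journalist@example.org")
    for node in ("node-a", "node-b", "node-c"):
        record.endorse(node)
    print(record.record_id(), record.is_confirmed())  # confirmed after 3 endorsements
```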

What types of fake content can the system detect?

The platform handles cross-modal verification — meaning it can analyze text, images, video, and social media posts together. It specifically addresses AI-generated synthetic multimedia content (deepfakes) using deep learning, and tracks disinformation flow across social networks with an early warning system.
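To make the "cross-modal" idea concrete, here is a hedged sketch of one common fusion approach: separate per-modality manipulation scores combined into a single verdict. The weights, function name, and scoring scale are assumptions for illustration, not the project's actual models.

```python
# Illustrative cross-modal score fusion (not the project's pipeline): each
# modality gets its own detector score in [0, 1], and the fused score is a
# weighted average over the modalities actually present in the post.
from typing import Dict, Optional

# Hypothetical per-modality weights; a real system would learn these.
WEIGHTS = {"text": 0.2, "image": 0.4, "video": 0.4}

def fuse_scores(scores: Dict[str, Optional[float]]) -> float:
    """Combine per-modality manipulation scores (None = modality absent)."""
    present = {m: s for m, s in scores.items() if s is not None}
    if not present:
        return 0.0
    total_weight = sum(WEIGHTS[m] for m in present)
    return sum(WEIGHTS[m] * s for m, s in present.items()) / total_weight

# Example: a post with a suspicious video but benign text
print(fuse_scores({"text": 0.1, "image": None, "video": 0.9}))  # ~0.63
```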

Is this still actively maintained after the project ended in 2021?

The project closed in November 2021. Based on available project data, the open source components and the community of 2,700+ users from the InVID plugin suggest ongoing usage. The project website at weverify.eu and consortium partners like AFP and Deutsche Welle indicate continued interest, but current maintenance status should be confirmed directly.

Does this comply with EU regulations on content moderation?

The platform was developed under EU Horizon 2020 funding with a specific focus on democracy protection and human rights. Its participatory verification approach with human-in-the-loop machine learning aligns with EU principles on AI transparency. Specific compliance with the Digital Services Act or AI Act should be verified with the consortium.

Consortium

Who built it

The 9-partner consortium across 6 countries (Belgium, Bulgaria, Germany, Greece, France, UK) is strongly industry-oriented with 4 industrial partners and a 44% industry ratio. The coordinator Ontotext AD from Bulgaria is a private company specializing in knowledge graph and semantic technology. Major media partners include Deutsche Welle (Germany's international broadcaster) and Agence France-Presse, providing real-world validation environments. EU DisinfoLab adds policy and research credibility. With 1 SME and a EUR 2,499,450 budget, the consortium balances technical AI development with immediate deployment into working newsrooms — a strong indicator of market relevance.

How to reach the team

Ontotext AD (Bulgaria) — semantic technology and knowledge graph company, reachable through weverify.eu

Next steps

Talk to the team behind this work.

Want to integrate disinformation detection into your newsroom or brand protection workflow? SciTransfer can connect you with the WeVerify consortium partners who built and tested this platform with AFP and Deutsche Welle.