Core contributor to SIMARGL (malware/stegomalware recognition), AIDA (AI for law enforcement), STARLIGHT (AI against high-priority threats), and AssureMOSS (ML for vulnerability detection).
PLURIBUS ONE SRL
Italian cybersecurity SME applying machine learning to threat detection, law enforcement analytics, and secure software testing.
Their core work
Pluribus One is a Sardinia-based cybersecurity SME specializing in AI and machine learning applied to security challenges — from malware detection and web application testing to AI-driven analytics for law enforcement. They build intelligent tools for threat recognition, software vulnerability analysis, and secure deployment of deep learning models. Beyond security, they apply their ML expertise to digital health, developing algorithms for monitoring brain diseases and chronic conditions through digital endpoints.
What they specialise in
- TESTABLE focused on security and privacy testing patterns for web applications; AssureMOSS addressed secure open-source software certification.
- ALOHA developed runtime-adaptive deep learning frameworks for heterogeneous architectures, including IoT devices.
- IDEA-FAST develops digital endpoints for fatigue and sleep assessment; ALAMEDA applies ML to Parkinson's, MS, and stroke monitoring.
- AIDA, STARLIGHT, and LETS-CROWD all serve law enforcement with predictive analytics, crowd security, and counter-terrorism tools.
How they've shifted over time
Pluribus One started in 2017-2018 with a focus on human-centered security technologies and deep learning for IoT — projects like LETS-CROWD (crowd protection) and ALOHA (deep learning on edge devices) defined their early profile. From 2019 onward, they shifted decisively toward AI-powered cybersecurity and law enforcement analytics, with projects like AIDA, STARLIGHT, and SIMARGL applying machine learning to malware detection, dark web monitoring, and predictive policing. They also expanded into digital health, applying their ML capabilities to brain disease monitoring and chronic condition assessment.
Pluribus One is converging on AI-powered security operations — expect them to pursue projects combining autonomous threat detection, secure AI deployment, and privacy-preserving analytics.
How they like to work
Pluribus One operates exclusively as a consortium participant, never as coordinator — they join projects to contribute specialized AI/ML and security testing capabilities rather than to lead large-scale research agendas. With 188 unique partners across 26 countries, they integrate easily into diverse consortia and clearly prefer breadth of collaboration over repeated partnerships. This makes them a flexible, low-friction partner who can slot into security or AI work packages without needing to drive the overall project direction.
Extensive European network spanning 188 unique consortium partners across 26 countries, built through consistent participation in mid-to-large security and digital consortia. Their reach is pan-European with no single dominant geographic cluster beyond their Italian base.
What sets them apart
Pluribus One sits at a rare intersection: they combine deep ML/AI research capability with hands-on cybersecurity engineering and law enforcement domain knowledge — a combination few SMEs can credibly offer. Their experience spans both the defensive side (malware detection, software security testing) and the analytical side (predictive policing, dark web intelligence), making them a versatile security AI partner. Their additional track record in digital health shows they can transfer ML expertise across domains, which is valuable for interdisciplinary consortia.
Highlights from their portfolio
- AIDA: Their highest-funded project (€341K), applying AI and big data analytics to law enforcement including deep web and dark net monitoring — represents their core competency.
- IDEA-FAST: Unusual cross-domain move into digital health (running until 2026), showing their ML skills transfer beyond security into clinical endpoint development for neurodegenerative diseases.
- STARLIGHT: Large-scale security project (until 2026) focused on AI autonomy and resilience for law enforcement, positioning them at the frontier of trusted AI for public safety.