If you are a computing center struggling to fully utilize your heterogeneous hardware — mixing CPUs, GPUs, and accelerators from different vendors — this project developed a complete software stack with dynamic resource allocation and application malleability that lets workloads automatically shift to the best-suited hardware. Tested across 14 partner organizations in 8 countries, the tools come with final releases, user documentation, and installation guides.
Ready-to-Use Software That Makes Europe's Biggest Supercomputers Run Faster
Imagine you have a kitchen with 10 different appliances — oven, blender, sous vide, air fryer — but they all speak different languages and you have to manually figure out which one to use for each step of your recipe. DEEP-SEA built the "smart kitchen manager" for supercomputers: software that automatically figures out which processor or memory type is best for each part of a calculation, then assigns work accordingly. It handles Europe's next-generation exascale machines, which can perform a billion billion calculations per second. The result is that scientists and engineers running massive simulations get answers faster while the supercomputer wastes less energy.
What needed solving
Companies and research organizations running large-scale simulations on European supercomputers face a growing hardware diversity problem: modern machines mix CPUs, GPUs, and specialized accelerators with different memory types, but existing software cannot efficiently use all of them at once. This means expensive compute time is wasted, simulations take longer than they should, and porting code between systems requires costly manual rework.
What was built
The project delivered a complete software stack for exascale supercomputers: final releases of programming tools with documentation and installation guides, resource management interfaces for MPI libraries, a system-level programming environment with malleability support (letting applications dynamically grow or shrink their resource use), and interoperability layers connecting MPI, GASPI, GPI-2, and OmpSs-2 programming models. In total, 24 deliverables were produced.
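To make "malleability" concrete, here is a minimal sketch of the idea: an application that periodically asks a scheduler for a target size and grows or shrinks its worker pool to match. The `ResourceManager` and `MalleableApp` classes are invented for illustration only; in the actual DEEP-SEA stack this negotiation happens through MPI libraries and resource-manager interfaces, not a Python API.

```python
# Hypothetical sketch of application malleability: a worker pool that
# tracks a target size handed down by a resource manager. All names and
# interfaces here are invented for the example.

class ResourceManager:
    """Stand-in for a malleability-aware scheduler."""
    def __init__(self, targets):
        self._targets = iter(targets)
        self._current = None

    def target_size(self):
        # Return the next scheduling decision (or repeat the last one).
        self._current = next(self._targets, self._current)
        return self._current

class MalleableApp:
    def __init__(self):
        self.workers = []

    def resize(self, target):
        # Grow: start new workers; shrink: retire surplus ones.
        while len(self.workers) < target:
            self.workers.append(f"worker-{len(self.workers)}")
        while len(self.workers) > target:
            self.workers.pop()
        return len(self.workers)

rm = ResourceManager(targets=[4, 8, 2])
app = MalleableApp()
sizes = [app.resize(rm.target_size()) for _ in range(3)]
print(sizes)  # the pool follows the scheduler's targets: [4, 8, 2]
```

The point of malleability is exactly this feedback loop: the scheduler can reclaim or grant nodes mid-run, and the application adapts instead of holding a fixed allocation for its whole lifetime.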
Who can put this to work
If you are an engineering company paying for expensive supercomputer time to run crash simulations or fluid dynamics but your jobs waste cycles because the software cannot efficiently use mixed processor types — DEEP-SEA delivered programming tools and resource management software that optimize code across heterogeneous architectures. This means your simulation jobs finish faster on the same hardware, cutting your HPC costs.
If you are a pharma company running molecular simulations on European supercomputers and hitting bottlenecks when scaling across thousands of processors — this project built MPI libraries, memory placement policies for deep memory hierarchies, and scalability tools specifically designed for the European Processor Initiative hardware. With 24 deliverables including final tool releases, the software is ready for integration into existing HPC workflows.
Quick answers
What would this software cost us to adopt?
DEEP-SEA builds on international open-source packages widely used in the HPC community. The software stack — including MPI libraries, resource managers, and programming tools — is developed as open-source extensions, so licensing costs are expected to be zero or minimal. Integration and customization costs would depend on your existing infrastructure.
Can this handle our industrial-scale workloads?
Yes, the software is explicitly designed for exascale systems — machines capable of a billion billion calculations per second. The project co-designed the tools with real EU applications and tested them across a consortium of 14 partners including 7 research organizations operating major compute centers. The Modular Supercomputer Architecture support means it scales from single-node to full system level.
What about intellectual property and licensing?
The project extends existing open-source packages used by the HPC community. As a Horizon 2020 RIA (Research and Innovation Action), results are typically made available under open-source licenses. Specific licensing terms for each tool component should be confirmed with the coordinator at Forschungszentrum Jülich.
How does this integrate with our existing HPC setup?
DEEP-SEA was specifically designed for interoperability. The deliverables include interfaces between MPI libraries and resource managers, support for both MPI and GASPI communication, and OmpSs-2 programming extensions. The software supports standard HPC stacks and was tested on real European compute centre infrastructure.
Is the project still active and who supports it?
The project ran from April 2021 to March 2024 and is now closed. However, the software components are built on actively maintained open-source projects. Forschungszentrum Jülich, one of Europe's leading supercomputing centers, coordinated the work and continues to maintain the Modular Supercomputer Architecture.
What hardware does this actually support?
The software targets European Processor Initiative (EPI) chips that combine general-purpose CPUs with accelerators, plus DDR and HBM memory types. It also supports GPU accelerators and follows the Modular Supercomputer Architecture. Based on the deliverables, the tools handle both node-level heterogeneity and system-level resource management.
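As a rough illustration of what a memory placement policy for such a DDR/HBM hierarchy does, here is a toy sketch: bandwidth-hungry allocations go to the small, fast HBM tier until it fills up, then spill to the large DDR tier. The capacities, the `bandwidth_bound` flag, and the `place` function are all invented for this example and are not the project's actual API.

```python
# Illustrative sketch (not the project's actual interface): a greedy
# placement policy for a two-tier memory hierarchy. Capacities are
# assumed values for the example.

HBM_CAPACITY_GB = 16   # small, high-bandwidth tier
DDR_CAPACITY_GB = 256  # large, lower-bandwidth tier

def place(allocations):
    """allocations: list of (name, size_gb, bandwidth_bound) tuples."""
    free_hbm, placement = HBM_CAPACITY_GB, {}
    # Consider bandwidth-bound allocations first, since they benefit
    # most from HBM; everything else (or whatever no longer fits)
    # lands in DDR.
    for name, size, bw_bound in sorted(allocations, key=lambda a: not a[2]):
        if bw_bound and size <= free_hbm:
            placement[name] = "HBM"
            free_hbm -= size
        else:
            placement[name] = "DDR"
    return placement

plan = place([("grid", 12, True), ("halo", 6, True), ("checkpoint", 40, False)])
print(plan)  # {'grid': 'HBM', 'halo': 'DDR', 'checkpoint': 'DDR'}
```

Real policies are considerably more sophisticated (they can profile access patterns and migrate data at runtime), but the core trade-off is the same: a scarce fast tier reserved for the data that is most sensitive to bandwidth.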
Who built it
The 14-partner consortium across 8 countries is heavily research-oriented, with 7 research organizations and 5 universities forming the core — typical for an infrastructure-level HPC project. The 2 industry partners (14% industry ratio) and 1 SME indicate limited but present commercial interest. Forschungszentrum Jülich as coordinator brings credibility as one of Europe's top supercomputing institutions. The multi-country spread (Belgium, Switzerland, Germany, Greece, Spain, France, Sweden, UK) ensures the software works across Europe's diverse computing landscape, but the low industry participation means commercial adoption will require deliberate outreach beyond the consortium.
- FORSCHUNGSZENTRUM JULICH GMBH (coordinator) · DE
- KUNGLIGA TEKNISKA HOEGSKOLAN (participant) · SE
- TECHNISCHE UNIVERSITAET MUENCHEN (participant) · DE
- COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES (participant) · FR
- IDRYMA TECHNOLOGIAS KAI EREVNAS (participant) · EL
- BAYERISCHE AKADEMIE DER WISSENSCHAFTEN (participant) · DE
- EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH (participant) · CH
- EUROPEAN CENTRE FOR MEDIUM-RANGE WEATHER FORECASTS (participant) · UK
- BULL SAS (participant) · FR
- TECHNISCHE UNIVERSITAT DARMSTADT (participant) · DE
- PARTEC AG (third party) · DE
- KATHOLIEKE UNIVERSITEIT LEUVEN (participant) · BE
- BARCELONA SUPERCOMPUTING CENTER CENTRO NACIONAL DE SUPERCOMPUTACION (participant) · ES
Forschungszentrum Jülich GmbH (Germany) — one of Europe's leading supercomputing centers. Contact through their HPC division.
Talk to the team behind this work.
Want to explore how exascale-ready software tools can cut your HPC simulation costs? SciTransfer can connect you with the DEEP-SEA team and help evaluate fit for your infrastructure.