SciTransfer
MAESTRO · Project

Smart Middleware That Speeds Up Data-Heavy Computing by Eliminating Memory Bottlenecks

digital · Tested · TRL 5 · Thin data (2/5)

Imagine your computer is a factory where the machines (processors) are incredibly fast, but the conveyor belts moving parts between them (memory and storage) are painfully slow. Everything jams up waiting for data to arrive. MAESTRO built a smart traffic controller that sits between your software and the hardware, automatically figuring out the fastest route to move data through increasingly complex memory systems. It's like upgrading from a single-lane road to an intelligent highway system that knows where every truck needs to go before it even starts moving.

By the numbers
8
consortium partners
5
countries involved (CH, DE, ES, FR, UK)
4
industry partners in consortium
50%
industry participation ratio
28
total project deliverables
5
demonstration deliverables with working software
The business problem

What needed solving

Modern high-performance computing systems waste enormous amounts of time and energy moving data through complex memory layers, even though the actual processors are fast enough. The software tools that companies rely on for HPC were designed decades ago when raw computing power — not data movement — was the constraint. This mismatch means businesses running simulations, analytics, and data-intensive workloads are paying for hardware capacity they can never fully use.

The solution

What was built

The project delivered 28 total deliverables including 5 demonstrated components: a Storage and Data Mapping implementation with executables and source code, a Workflow Management and Optimization demonstrator, telemetry monitoring tools, an Execution Framework Prototype for high-end HPC platforms, and a full Adaptive Transport Release for optimized data movement.

Audience

Who needs this

HPC infrastructure providers and cloud computing companies managing large-scale data center operations
Automotive and aerospace firms running computationally intensive simulations (CFD, crash testing, structural analysis)
Financial services companies performing large-scale risk modeling and quantitative analysis on HPC clusters
Weather and climate modeling organizations processing massive observational datasets
Pharmaceutical companies running molecular dynamics and drug discovery simulations
Business applications

Who can put this to work

Financial Services & Quantitative Trading
enterprise
Target: Banks and hedge funds running large-scale risk simulations and real-time analytics on HPC clusters

If you are a financial services firm whose overnight risk calculations take too long because the HPC cluster spends most of its time shuffling data between memory layers, this project developed a middleware layer with adaptive data transport and telemetry tools that can optimize how your simulation data flows through complex memory hierarchies, potentially cutting compute time on existing hardware.

Automotive & Aerospace Engineering
enterprise
Target: Engineering firms running large computational fluid dynamics (CFD) and crash simulations

If you are an engineering company whose simulation jobs underperform because data movement, not raw computing power, is the real bottleneck, this project developed an execution framework and workflow optimization tools proven across 5 demonstrated components. The memory-aware approach means your existing HPC investment works harder without buying more hardware.

Cloud & HPC Infrastructure Providers
enterprise
Target: Data center operators and managed HPC service providers

If you are an HPC service provider whose customers complain about poor utilization of expensive hardware, this project built a telemetry system and adaptive transport mechanisms that give visibility into data movement patterns. With 4 industry partners in the consortium, the tools were designed with real infrastructure needs in mind.

Frequently asked

Quick answers

What would this cost to implement in our data center?

The project produced open-source middleware components (executables and source code are listed among deliverables). Implementation costs would primarily be integration engineering time rather than licensing fees. Contact the coordinator at Forschungszentrum Jülich to discuss specific deployment scenarios.

Can this handle our production-scale workloads?

MAESTRO was designed for high-end HPC platforms — the Execution Framework Prototype was specifically built for deployment on high-end HPC infrastructure, and the Adaptive Transport Release is a full release of the optimized transport mechanisms for production data movement. However, this was tested in research HPC environments, not commercial production settings.

What is the IP situation — can we license this technology?

The consortium of 8 partners across 5 countries includes 4 industry partners and 1 SME. Deliverables include both executables and source code. As an EU-funded RIA project, results typically follow open access principles, but specific licensing terms should be discussed with the coordinator at Forschungszentrum Jülich.

How does this integrate with our existing HPC software stack?

MAESTRO was specifically designed as middleware — it sits between existing applications and hardware. The project includes object data mapping, workflow management tools, and containerized data models, suggesting it was built to layer onto existing infrastructure rather than replace it.
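To make the layering idea concrete, here is a minimal sketch of how a data-mapping middleware of this general kind works. Everything in it (the class names, the tier sizes, the placement policy) is a hypothetical illustration, not the actual MAESTRO API: the application hands data objects to the middleware, and the middleware, rather than the application, decides which memory or storage tier each object lives in.

```python
# Hypothetical sketch of a data-mapping middleware shim (NOT the MAESTRO API).
# The application declares objects and their access patterns; the middleware
# picks a memory/storage tier instead of the application hard-coding one.
from dataclasses import dataclass, field

@dataclass
class DataObject:
    name: str
    size_bytes: int
    access_pattern: str  # "hot" (reused often) or "cold" (write-once)

@dataclass
class Middleware:
    # Tiers ordered fastest-first; capacities are purely illustrative.
    tiers: dict = field(default_factory=lambda: {
        "hbm": 16 * 2**30, "dram": 256 * 2**30, "nvme": 4 * 2**40})
    placement: dict = field(default_factory=dict)

    def offer(self, obj: DataObject) -> str:
        """Place the object on the first tier (fast-first for hot data,
        slow-first for cold data) with enough remaining capacity."""
        order = (["hbm", "dram", "nvme"] if obj.access_pattern == "hot"
                 else ["nvme", "dram", "hbm"])
        for tier in order:
            if obj.size_bytes <= self.tiers[tier]:
                self.tiers[tier] -= obj.size_bytes
                self.placement[obj.name] = tier
                return tier
        raise MemoryError(f"no tier can hold {obj.name}")

mw = Middleware()
print(mw.offer(DataObject("stencil_halo", 1 * 2**30, "hot")))  # small + hot -> hbm
print(mw.offer(DataObject("checkpoint", 2 * 2**40, "cold")))   # large + cold -> nvme
```

Because the placement decision lives in the shim rather than the application, existing codes keep their read/write calls and only the mapping layer changes — which is the sense in which middleware of this type layers onto, rather than replaces, an existing stack.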

Is there ongoing support and development?

The project ended in November 2021. The consortium included major research institutions and 4 industry partners, which suggests continued interest. Based on available project data, ongoing maintenance would depend on the individual partners. The project website at maestro-data.eu may have current status.

What kind of performance improvement can we expect?

The project addresses data movement bottlenecks that dominate modern HPC performance. Based on available project data, specific benchmark numbers are not included in the public descriptions. The telemetry demonstration deliverable provides tools to measure your specific improvement, which will vary by workload type.
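As a rough illustration of the kind of measurement involved (this is a toy sketch, not the MAESTRO telemetry tooling), the relevant number is the share of wall time a workload spends moving data versus computing — that share bounds how much a data-aware middleware can help:

```python
# Toy profiler (illustrative only, not the MAESTRO telemetry tools):
# splits a workload's wall time into data-movement ("io") and "compute"
# fractions. A high io fraction signals a data-movement bottleneck.
import time

def profile(steps):
    """steps: list of (label, kind, fn) where kind is 'io' or 'compute'."""
    totals = {"io": 0.0, "compute": 0.0}
    for label, kind, fn in steps:
        t0 = time.perf_counter()
        fn()
        totals[kind] += time.perf_counter() - t0
    wall = sum(totals.values())
    return {kind: t / wall for kind, t in totals.items()}

# Toy workload where data movement dominates, mimicking a memory-bound job.
shares = profile([
    ("load",  "io",      lambda: time.sleep(0.05)),
    ("solve", "compute", lambda: sum(i * i for i in range(10**5))),
    ("store", "io",      lambda: time.sleep(0.05)),
])
print({kind: round(frac, 2) for kind, frac in shares.items()})
```

If the io fraction is small, faster data transport cannot buy much; if it dominates, as in many modern HPC workloads, the potential gain is correspondingly large — which is why the answer to "what improvement can we expect" is measurement-dependent.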

Consortium

Who built it

The MAESTRO consortium of 8 partners across 5 countries (Germany, France, Spain, UK, Switzerland) has a strong balance between research depth and industry relevance, with a 50% industry ratio — 4 industry partners alongside 3 research organizations and 1 university. The project is coordinated by Forschungszentrum Jülich, one of Europe's largest and most respected research centers with deep HPC expertise. The presence of 1 SME suggests some commercial pathway interest, though the consortium leans toward large research infrastructure rather than commercial product development. For a business looking to adopt this technology, the industry partners would be natural integration partners, while Jülich provides long-term research credibility.

How to reach the team

Forschungszentrum Jülich GmbH (Germany) — use SciTransfer's coordinator lookup to find the project lead's direct contact

Next steps

Talk to the team behind this work.

Want to explore how MAESTRO's data-aware middleware could optimize your HPC infrastructure? SciTransfer can arrange a direct introduction to the development team and help assess fit for your specific use case.