SciTransfer
CloudLightning · Project

Smart Cloud System That Automatically Manages Mixed Computing Hardware for Peak Performance

digital · Prototype · TRL 4 · Thin data (2/5)

Imagine you run a workshop with power drills, laser cutters, and 3D printers — but every time you get a new job, you have to manually figure out which machine handles which part. CloudLightning built a cloud computing system that does this automatically. It takes a mix of different processors — standard CPUs, graphics cards, specialized chips — and lets the system organize itself so each task lands on the best hardware without anyone having to configure it manually. Think of it as a self-sorting warehouse where packages route themselves to the right shelf.

By the numbers
EUR 3,934,425
EU research funding invested
8
consortium partners involved
5
countries in the research consortium
26
total deliverables produced
3
heterogeneous resource types supported (GPU, MIC, DFE)
The business problem

What needed solving

Managing cloud data centers with mixed hardware — GPUs, specialized processors, standard servers — is a manual, error-prone process that wastes computing power and engineering time. As companies add more diverse hardware to handle AI, simulation, and data-heavy workloads, traditional centralized management breaks down. Someone needs to constantly decide which workload goes on which machine, and getting it wrong means paying for capacity you are not using.

The solution

What was built

The project delivered a self-organising cloud management architecture with 26 deliverables, including a state-of-the-art architecture report, use case requirements documentation, and integrated use case applications demonstrating the system across heterogeneous hardware (GPUs, many integrated cores, and data flow engines). A testbed was built to gather data used in hyperscale simulations.
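To make the core idea concrete, here is a purely illustrative sketch (not CloudLightning's actual code) of what self-organising placement looks like: each hardware kind carries a hypothetical suitability score per workload type, and the scheduler routes every task to the best-suited free resource without manual configuration. The workload names, affinity scores, and `place` function are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str          # e.g. "gpu-node-1"
    kind: str          # "GPU", "MIC", or "DFE" -- the project's three resource types
    free_slots: int    # remaining capacity on this resource

# Hypothetical affinity table: how well each hardware kind suits a workload type.
# These numbers are invented for illustration, not taken from the project.
AFFINITY = {
    "dense-linear-algebra": {"GPU": 0.9, "MIC": 0.7, "DFE": 0.4},
    "streaming-dataflow":   {"GPU": 0.5, "MIC": 0.4, "DFE": 0.9},
    "branchy-simulation":   {"GPU": 0.3, "MIC": 0.8, "DFE": 0.2},
}

def place(task_kind, pool):
    """Route a task to the free resource whose hardware best suits it."""
    candidates = [r for r in pool if r.free_slots > 0]
    if not candidates:
        return None  # no capacity left anywhere
    best = max(candidates, key=lambda r: AFFINITY[task_kind][r.kind])
    best.free_slots -= 1
    return best

pool = [
    Resource("gpu-1", "GPU", free_slots=2),
    Resource("mic-1", "MIC", free_slots=1),
    Resource("dfe-1", "DFE", free_slots=1),
]
print(place("streaming-dataflow", pool).name)  # the dataflow task lands on the DFE node
```

The point of the sketch is the shift in responsibility: the affinity knowledge lives in the software stack, so adding a new task or resource type does not require an engineer to hand-assign workloads.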

Audience

Who needs this

Cloud service providers managing mixed GPU/CPU data centers
HPC-as-a-service companies offering scientific computing
AI training platform operators needing efficient GPU cluster management
Large enterprises running private heterogeneous clouds
Managed service providers looking to reduce cloud operations overhead
Business applications

Who can put this to work

Cloud Infrastructure & Data Centers
enterprise
Target: Cloud service providers and data center operators running mixed hardware

If you are a cloud provider struggling to efficiently allocate workloads across GPUs, CPUs, and specialized processors — this project developed a self-organising management layer that automatically matches tasks to the right hardware. Built with 8 partners across 5 countries, the system shifts the optimization burden from your engineering team to the software stack itself, reducing manual configuration overhead.

High-Performance Computing & Scientific Simulation
enterprise
Target: HPC service companies and research computing centers

If you are an HPC provider dealing with customers who need GPU acceleration but lack the expertise to configure heterogeneous clusters — this project built a delivery model where the cloud infrastructure self-optimizes resource allocation. With 26 deliverables including integrated use cases, the system was designed to make heterogeneous resources accessible without specialized deployment knowledge.

Media, Rendering & AI Training
mid-size
Target: Companies running GPU-intensive workloads like video rendering or machine learning pipelines

If you are a media or AI company paying for GPU cloud instances but only using a fraction of their capacity due to poor workload distribution — this project created a self-managing cloud layer that maximizes performance across mixed processor types including GPUs, many integrated cores, and data flow engines. The EUR 3,934,425 research effort focused specifically on reducing wasted compute resources.

Frequently asked

Quick answers

What would it cost to implement this kind of self-managing cloud system?

The project received EUR 3,934,425 in EU funding across 8 partners over 3 years to develop the research and testbed. Implementation costs for a commercial version would depend on your existing cloud infrastructure scale. As this is research-stage technology, no commercial pricing exists yet.

Can this scale to production-level data centers?

The project acknowledged that empirical experimentation on hyperscale cloud infrastructures is prohibitively expensive. Data gathered on their testbed was used to simulate hyperscale scenarios and evaluate the self-organisation approach at that scale. Full production validation at hyperscale has not been demonstrated.

What is the IP situation — can we license this technology?

The project was funded as an RIA (Research and Innovation Action) coordinated by University College Cork, Ireland. IP rights are typically shared among the 8 consortium partners. Licensing discussions would need to go through the coordinator and relevant partners.

What specific hardware types does this support?

Based on project data, the system was designed to manage three types of heterogeneous resources: graphics processing units (GPUs), many integrated cores (MIC), and data flow engines (DFE). These cover the most common accelerator types used in modern data centers.

How mature is this technology — is it ready to deploy?

The project produced 26 deliverables including architecture reports and integrated use cases, but relied on testbed data and simulation rather than live deployment at scale. This places it at the research-to-prototype stage, suitable for technology evaluation but not drop-in production use.

Does it work with existing cloud platforms like AWS or Azure?

Based on available project data, the system was designed as a new cloud management and delivery model rather than a plugin for existing platforms. It proposes a different approach to the standard IaaS, PaaS, and SaaS delivery models. Integration with commercial platforms would require additional engineering work.

Consortium

Who built it

The CloudLightning consortium brings together 8 partners from 5 countries (Greece, Ireland, Norway, Romania, UK), with a strong academic lean — 4 universities and 2 research organisations versus just 2 industry partners (25% industry ratio) and only 1 SME. Coordinated by University College Cork in Ireland, this is primarily a research-driven project. For a business buyer, the low industry involvement means the technology was developed more in lab settings than in real-world data center environments. Any commercial adoption would benefit from additional industry validation partners.

How to reach the team

University College Cork, National University of Ireland — contact through the university's research office or the project website

Next steps

Talk to the team behind this work.

Want to explore how self-managing cloud technology could reduce your infrastructure costs? SciTransfer can connect you directly with the research team behind CloudLightning and assess fit for your operations.