If you are a data center operator dealing with skyrocketing energy costs and cooling demands, this project is relevant to you: it developed a highly integrated compute element combining low-power processors with accelerators (GPGPU, FPGA), targeting TRL 5 readiness. The prototype integrates advanced thermal management into a high-density design, meaning more computing per watt per rack. With 13 partners across 6 countries validating the approach, this could reshape how you plan your next facility expansion.
Ultra-Efficient Computing Chips That Cut Data Center Power Bills by Design
Imagine if your computer's brain could do a billion billion calculations per second without needing a power plant to run it. ExaNoDe built the building blocks for next-generation supercomputers by combining low-power processors, graphics chips, and programmable hardware into one dense, energy-efficient unit. Think of it like packing an entire server room's worth of computing power into something the size of a shoebox, with smart memory access so nothing gets bottlenecked. They also built the software layer, the operating system and communication libraries, so these units can actually talk to each other and run real scientific and industrial workloads.
What needed solving
Data centers and HPC facilities face a power wall: current computing architectures cannot scale to exascale performance without consuming unsustainable amounts of energy. Companies running large-scale simulations, AI training, or data analytics are stuck choosing between massive electricity bills and slower results. The industry needs fundamentally more efficient compute elements that pack more performance per watt.
What was built
ExaNoDe built a prototype compute element integrating low-power processors with GPGPU and FPGA accelerators using 3D integration and nanotechnology, along with firmware and a dedicated operating system. They also delivered tuned runtime systems (OmpSs, OpenStream) and communication libraries (GPI, MPI) optimized for the prototype, plus a hardware emulation of the interconnect for multi-node evaluation — 21 deliverables in total.
Who can put this to work
If you are an automotive or aerospace company running crash simulations, aerodynamics modeling, or materials testing that take days on current HPC systems — ExaNoDe built runtime systems (OmpSs, OpenStream) and communication libraries (GPI, MPI) optimized for their prototype hardware. These tools let your simulation codes scale better across heterogeneous processors. The final tuned implementation was validated on real mini-applications that mirror industrial workloads.
If you are a pharma company where molecular dynamics simulations or genomic sequencing jobs queue for weeks on shared HPC clusters — ExaNoDe designed compute elements that integrate scalar, SIMD, GPGPU, and FPGA processing on a single node with low-latency memory access scalable to exabyte levels. The firmware and operating system were purpose-built for these heterogeneous workloads. With 5 industry partners involved in development, the technology was shaped by real computing demands.
Quick answers
What would it cost to adopt ExaNoDe technology in our infrastructure?
The project did not publish pricing or cost-per-unit data. ExaNoDe delivered TRL 5 building blocks — validated components, not commercial products. Licensing or integration costs would need to be negotiated directly with the consortium members who hold the IP, particularly the 5 industry partners.
Can this scale to production data center deployments?
ExaNoDe explicitly targeted technology readiness level 5 — validated in a relevant environment but not yet production-scale. They built a hardware emulation of the interconnect to evaluate multi-node deployment. Scaling to full data center production would require further engineering and industrialization beyond what the project delivered.
Who owns the intellectual property, and can we license it?
IP is distributed among the 13 consortium partners across 6 countries, coordinated by CEA (France). The 5 industry partners and 3 SMEs likely hold commercially relevant IP on specific components. Licensing terms would need to be discussed with individual partners depending on which building blocks you need.
What specific hardware was actually built and tested?
The project delivered a final prototype compute element integrating low-power processors with GPGPU and FPGA accelerators, plus firmware and an operating system tailored for it. They also delivered final tuned runtime systems (OmpSs, OpenStream) and communication libraries (GPI, MPI) optimized for the prototype machine. A total of 21 deliverables were produced.
How does this compare to current commercial HPC solutions?
ExaNoDe focused on future exascale computing goals, combining European low-power processor designs with 3D integration and nanotechnology. Based on available project data, the design draws on the Unimem memory architecture from the earlier EUROSERVER project. It targets a different power-performance envelope than current commercial offerings from major chip vendors.
What is the timeline from current state to something we could deploy?
The project closed in June 2019 with TRL 5 components. Moving from TRL 5 to deployable products (TRL 8-9) typically requires significant additional development, testing, and certification. Some component technologies may have been carried forward by consortium members into follow-on EU projects or commercial roadmaps.
Is there regulatory compliance needed for adopting this technology?
Based on available project data, no specific regulatory requirements were highlighted. However, data center deployments involving new processor architectures typically need to meet energy efficiency standards and possibly export control regulations for high-performance computing equipment, depending on the target market.
Who built it
The ExaNoDe consortium of 13 partners from 6 countries (France, Germany, Greece, Spain, Switzerland, UK) is led by CEA, France's major atomic and alternative energy research body. With 5 industry partners, 3 of them SMEs, making up 38% of the consortium, there is solid commercial involvement alongside 6 research organizations and 2 universities. This mix suggests the technology was developed with industrial relevance in mind, not just academic interest. For a business looking to adopt or license specific components, the SMEs in the consortium are often the most accessible entry points: they tend to be more agile in licensing discussions than large research institutions.
- COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES · Coordinator · FR
- KALRAY SA · participant · FR
- SCAPOS AG · participant · DE
- THE UNIVERSITY OF MANCHESTER · participant · UK
- FORSCHUNGSZENTRUM JULICH GMBH · participant · DE
- IDRYMA TECHNOLOGIAS KAI EREVNAS · participant · EL
- VIRTUAL OPEN SYSTEMS · participant · FR
- EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH · participant · CH
- ARM LIMITED · participant · UK
- BULL SAS · participant · FR
- CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS · participant · FR
- BARCELONA SUPERCOMPUTING CENTER CENTRO NACIONAL DE SUPERCOMPUTACION · participant · ES
CEA (Commissariat à l'énergie atomique et aux énergies alternatives) in France coordinates — reach out through their technology transfer office for licensing discussions
Talk to the team behind this work.
Want to explore how ExaNoDe's low-power HPC building blocks could fit your computing infrastructure? SciTransfer can connect you with the right consortium partner for your specific needs.