SciTransfer
Fun-COMP · Project

Brain-Inspired Photonic Chips That Process Data Faster With Far Less Power

Digital prototype · TRL 4 · Thin data (2/5)

Today's computers waste enormous energy shuttling data back and forth between the processor and memory — like a chef running between two kitchens to cook one meal. Fun-COMP built tiny chips that use light instead of electricity and combine processing and memory in one place, mimicking how the human brain works. The result is hardware that can learn on its own and crunch complex data — think IoT sensor streams or big-data analytics — at much higher speed and much lower power consumption. They proved it works with two physical hardware demonstrators, not just simulations.

By the numbers
7 consortium partners across Europe
5 countries represented (BE, CH, DE, FR, UK)
2 working hardware demonstrators delivered
18 total deliverables completed
29% industry participation ratio in consortium
2 industrial partners in the consortium
The business problem

What needed solving

Data centers and IoT networks face a growing crisis: conventional computing architectures waste massive energy moving data between processors and memory, and cannot keep pace with exploding data volumes from connected devices. Current chips are hitting physical scaling limits, meaning more transistors no longer equals proportionally better performance. Companies need fundamentally different computing hardware that processes data where it is stored, learns autonomously, and runs on a fraction of today's power budget.

The solution

What was built

The project built 2 hardware demonstrators: a photonic reservoir computing network that uses light and phase-change materials with self-learning Hebbian capabilities (D3.2, delivered at month 42), and a computing-in-memory hardware demonstrator using non-von Neumann device arrays (D4.2, delivered at month 40). In total, 18 deliverables were completed covering devices, architectures, and algorithms.
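The reservoir computing and Hebbian learning concepts behind demonstrator D3.2 can be illustrated with a minimal software sketch. This is a conceptual analogue only, not the project's photonic implementation: all sizes, rates, signals, and variable names here are illustrative assumptions, while the real system realizes these dynamics in light and phase-change materials.

```python
import numpy as np

# Conceptual software analogue of a self-learning reservoir computer.
# A fixed random recurrent network (the "reservoir") transforms an
# input stream; only a readout layer adapts, via a Hebbian-style rule.
rng = np.random.default_rng(0)

n_in, n_res = 1, 50
W_in = rng.normal(0.0, 0.5, (n_res, n_in))        # fixed input coupling
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius < 1
W_out = np.zeros((1, n_res))                      # plastic readout weights
eta = 0.01                                        # learning rate

x = np.zeros(n_res)
for t in range(500):
    u = np.array([np.sin(0.1 * t)])               # toy sensor stream
    x = np.tanh(W_in @ u + W @ x)                 # reservoir state update
    target = np.sin(0.1 * (t + 1))                # predict the next sample
    # Hebbian correlation rule: weight change proportional to the product
    # of target ("post-synaptic") and reservoir ("pre-synaptic") activity.
    W_out += eta * target * x

pred = float(W_out @ x)
print(W_out.shape)
```

The design point the sketch captures is that training touches only the readout; the reservoir itself stays fixed, which is what makes a physical (here, photonic) substrate a viable reservoir.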

Audience

Who needs this

Data center operators looking to reduce energy consumption per computation
IoT platform companies needing faster edge processing without cloud dependency
Semiconductor companies scouting next-generation chip architectures beyond Moore's Law
AI hardware startups building dedicated inference accelerators
Telecom operators processing massive network data streams in real time
Business applications

Who can put this to work

Data Center & Cloud Computing
enterprise
Target: Data center operators and cloud infrastructure providers

If you are a data center operator dealing with skyrocketing energy bills and cooling costs — this project developed photonic computing hardware that fuses processing and memory into one chip, dramatically cutting power consumption. Their hardware demonstrator for computing-in-memory using non-von Neumann device arrays shows a path to servers that process data where it is stored, eliminating the energy-hungry data shuttle between CPU and RAM. With 7 partners across 5 countries validating the technology, this is a serious alternative to conventional architectures.

Industrial IoT & Edge Computing
mid-size
Target: IoT platform companies and edge computing hardware manufacturers

If you are an IoT platform company struggling to process massive sensor data streams at the edge without cloud latency — this project built a photonic reservoir computing network with self-learning capabilities. Their hardware demonstrator incorporates phase-change elements that adapt and learn autonomously, meaning edge devices could analyze patterns in real time without sending data to the cloud. This is purpose-built for the Internet of Things processing challenges the project specifically targeted.

Semiconductor & Chip Design
enterprise
Target: Semiconductor companies and photonic integrated circuit manufacturers

If you are a chip company looking for the next computing architecture beyond conventional silicon scaling — this project delivered reconfigurable integrated processing networks based on silicon photonics. They built working spiking neural networks and autonomous reservoir computing in hardware, not software. With 2 industrial partners already in the consortium and 18 deliverables completed, the IP portfolio covers devices from nanoscale phase-change elements to full photonic circuits.

Frequently asked

Quick answers

What would it cost to license or adopt this technology?

The project was a Research and Innovation Action (RIA) with public EU funding, so core IP is held by the consortium of 7 partners. Licensing terms would need to be negotiated directly with the consortium, likely led by the University of Exeter as coordinator. Costs would depend on the specific components — photonic devices, phase-change materials, or the computing-in-memory architecture.

Can this scale to industrial production?

The project demonstrated hardware prototypes — a photonic reservoir computing network and a computing-in-memory device array — but these are lab-scale demonstrators, not production-ready chips. Scaling to industrial volumes would require further engineering, likely in partnership with silicon photonics foundries. The consortium included 2 industrial partners, which could support a commercialization pathway.

What intellectual property exists and how is it protected?

With 18 deliverables completed across photonic devices, neuromorphic architectures, and computing-in-memory systems, significant IP was generated. The consortium of 7 partners across 5 countries holds the rights. Specific patent filings would need to be confirmed with the University of Exeter coordinator.

How does this compare to existing neuromorphic chips like Intel Loihi or IBM TrueNorth?

Fun-COMP differentiates itself by using photonic (light-based) rather than electronic approaches, which promise higher speed and bandwidth at lower power. Its self-learning Hebbian photonic network is a combination not offered by Intel's or IBM's purely electronic neuromorphic chips. Based on available project data, the photonic approach is earlier-stage but potentially faster.

What real-world problems has this been tested on?

The project specifically targeted big data analysis and Internet of Things computing challenges. Its 2 hardware demonstrators — the photonic reservoir computing network and the computing-in-memory array — were designed to address complex real-world computational problems. Specific benchmark results are not included in the available project data and would need to be obtained from the final project reports.

Is regulatory approval needed to deploy this technology?

As computing hardware, this technology does not require medical or environmental regulatory approval. Standard semiconductor industry certifications and compliance with electronics regulations would apply. Its silicon photonics foundation makes the technology compatible with existing chip manufacturing infrastructure.

Consortium

Who built it

The Fun-COMP consortium brings together 7 partners from 5 European countries (Belgium, Switzerland, Germany, France, UK), led by the University of Exeter. The mix includes 3 universities, 2 research organizations, and 2 industrial partners, giving a 29% industry ratio. While the absence of SMEs and the relatively low industry ratio suggest this is research-heavy, the presence of 2 industrial partners signals some commercial interest. The multi-country spread across major European tech hubs (UK, Germany, France) provides access to leading photonics and semiconductor ecosystems. For a business looking to adopt this technology, the University of Exeter would be the primary contact, but industrial partners may offer more direct commercialization pathways.

How to reach the team

The University of Exeter (UK) coordinates the project. SciTransfer can facilitate an introduction to the research team.

Next steps

Talk to the team behind this work.

Want to explore how brain-inspired photonic computing could cut your data processing energy costs? SciTransfer can connect you with the Fun-COMP team and help evaluate fit for your infrastructure. Contact us for a confidential briefing.