Energy-Efficient AI: Models, Algorithms, and Hardware for Sustainable Intelligence
As artificial intelligence systems grow in scale and capability, their energy demands are increasing at an unsustainable pace. This workshop addresses the urgent need for energy-efficient AI, a field that seeks to develop environmentally responsible and computationally efficient approaches to machine learning. The goal is to explore innovations that reduce energy consumption across the AI stack, from models and learning algorithms to hardware implementations, while maintaining or even enhancing performance. One source of inspiration is the brain, which delivers intelligence on a power budget of roughly 20 W.
The workshop will be inclusive and accessible to a broad audience across machine learning, computer architecture, and systems engineering. Topics include but are not limited to:
- Model compression and pruning techniques that reduce inference and training costs
- Quantization and low-precision arithmetic for efficient deployment (a minimal sketch of pruning and quantization follows this list)
- New learning algorithms designed for reduced computational and memory overhead
- Architectures and model designs tailored for energy-aware operation
- Neuromorphic computing and spiking neural networks inspired by biological efficiency
- Hardware accelerators, including FPGAs, photonic processors, and neuromorphic chips, that offer novel energy-performance trade-offs
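To make the first two topics concrete, here is a minimal NumPy sketch of unstructured magnitude pruning followed by symmetric per-tensor int8 post-training quantization. The function names, the 50% sparsity target, and the scaling scheme are illustrative assumptions for this page, not methods presented by any speaker or poster.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Unstructured pruning: zero the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8."""
    scale = np.abs(w).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in for a weight matrix

w_sparse = magnitude_prune(w, sparsity=0.5)  # half the entries become zero
q, scale = quantize_int8(w_sparse)           # 8-bit integers plus one float scale
w_restored = q.astype(np.float32) * scale    # dequantize to inspect the error

print("max abs reconstruction error:", np.abs(w_sparse - w_restored).max())
```

The energy argument behind both steps is the same: zeroed weights can be skipped by sparse kernels, and int8 arithmetic costs substantially less energy per operation than float32, at the price of a small, measurable reconstruction error.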
This workshop aims to bring together researchers and practitioners from diverse backgrounds to foster collaboration and cross-pollination of ideas in making AI more sustainable without compromising its transformative potential.
Organizers: Wolfgang Maass, Marcel van Gerven
In case of questions, contact Randi Goertz.
Workshop Agenda
The ELLIS UnConference workshop is co-located with EurIPS Copenhagen and will take place on December 2, 2025, at the Bella Center Copenhagen, Center Blvd. 5, 2300 København S, Denmark, in room 20.
| Time | Session |
|---|---|
| 7:00 - 10:00 | Registration |
| 9:00 - 10:30 | Workshop talks |
| | Angeliki Pantazi (IBM Zurich): Designing Energy-Efficient AI: Insights from Neural Systems |
| | Emre Neftci (Forschungszentrum Jülich): Neuroscience-guided Learning Rules Discovery for Efficient AI |
| | Wilfred van der Wiel (University of Twente): Reconfigurable nonlinear processing in silicon |
| 10:30 - 11:00 | Coffee break & Posters |
| 11:00 - 12:30 | Workshop talks |
| | Shih-Chii Liu (ETH and University of Zurich): Brain-inspired dynamic sparsity for neuromorphic AI |
| | Iason Chalas (ETH and IBM Zurich): Analog In-Memory Computing for Efficient Large Language Model Deployment |
| | Nasir Ahmad (Radboud University Nijmegen): Two steps forward and no steps back: Training neural networks in noisy hardware without backward passes |
| 12:30 - 13:30 | Lunch & Posters |
| 13:30 - 15:00 | Workshop talks |
| | Yukun Yang (TU Graz): A brain-inspired method for context-aware and explainable planning that is suitable for implementation in energy-efficient AI |
| | Panel discussion on the future of energy-efficient AI, both in general and within ELLIS |
| 15:00 - 15:30 | Coffee break & Posters |
| 15:30 - 16:00 | ELLIS Unconference Welcoming Remarks |
| 16:00 - 18:00 | ELLIS Unconference Poster Session |
| 18:00 - 20:00 | ELLIS Unconference Reception |
Accepted Posters
| Authors | Title |
|---|---|
| Alireza Olama, Andreas Lundell, Jerker Björkqvist, Johan Lilius | PRUNEX: A Hierarchical Communication-Efficient System for Distributed CNN Pruning |
| Dominik Kuczkowski, Sara Pyykölä, Klavdiya Bochenina, Keijo Heljanko, Laura Ruotsalainen | Benchmarking Green Supercomputing for Low-Emission AI: Reinforcement Learning as a Use Case |
| Dong Wang, Haris Šikić, Lothar Thiele, Olga Saukh | Model Folding: A Unified Approach to Post-training Compression and Efficient Pre-training |
| Fabian Kresse, Thomas A. Henzinger, Christoph H. Lampert | Boolean Logic Circuits for Continuous Control |
| Jan Stenkamp, Nina Herrmann, Benjamin Karic, Stefan Oehmcke, Fabian Gieseke | Boosted Trees on a Diet: Compact Models for Resource-Constrained Devices |
| Karsten Schrödter, Jan Stenkamp, Nina Herrmann, Fabian Gieseke | Learn to Think in Boxes: Trainable Bitwise Quantization for Input Feature Compression |
| Roberto Neglia, Andrea Cini, Michael M. Bronstein, Filippo Maria Bianchi | Reservoir Conformal Prediction for Time Series |
| Shalini Mukhopadhyay, Urmi Jana, Swarnava Dey | Towards On-Device Stress Detection on Tiny Edge Platforms |
| Weilun Feng, Haotong Qin, Mingqiang Wu, Chuanguang Yang, Yuqi Li, Xiangqi Li, Zhulin An, Libo Huang, Yulun Zhang, Michele Magno, Yongjun Xu | Quantized Visual Geometry Grounded Transformer |