Resource-efficient Deep Learning

Resource-efficient machine learning focuses on developing models and training methods that achieve strong predictive performance while minimizing the use of computational, memory, energy, and data resources. As machine learning systems grow in scale and complexity, the cost of training and deploying models has increased substantially. Large models require extensive computational infrastructure and consume significant energy during training and inference. At the same time, many applications, such as mobile devices and embedded systems, operate under strict hardware and energy constraints. Resource-efficient machine learning addresses these challenges by designing algorithms and systems that maintain high performance while reducing resource consumption.

Approaches:
- Efficient model architectures
- Model compression
- Data-efficient training methods
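As a concrete illustration of model compression, the sketch below shows symmetric post-training int8 quantization of a weight tensor: weights are stored as 8-bit integers plus a single float scale, cutting storage fourfold at the cost of a small rounding error. This is a minimal, self-contained example for intuition, not a description of any specific method used by the group; the function names and the NumPy-based setup are illustrative assumptions.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a weight array (illustrative sketch)."""
    scale = np.abs(w).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 codes and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; rounding error is bounded by the scale
print(q.nbytes / w.nbytes)                      # 0.25
print(float(np.abs(w - w_hat).max()) < scale)   # True
```

In practice, quantization is often applied per channel rather than per tensor, and may be combined with fine-tuning (quantization-aware training) to recover accuracy, but the storage-versus-precision trade-off shown here is the core idea.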

Overall, resource-efficient machine learning aims to make effective use of limited computational and energy resources while maintaining high model quality. Advances in efficient architectures, compression methods, and data-efficient training will play a key role in making machine learning systems more scalable, sustainable, and widely accessible.

Involved researchers: Wolfgang Maass, Robert Legenstein, Franz Pernkopf, Olga Saukh, Ozan Özdenizci