Förderjahr 2024 / Stipendium Call #19 / ProjektID: 7383 / Projekt: Dynamic Power Management for Edge AI: A Sustainable Self-Adaptive Approach
In my master's thesis, I explore sustainable power adaptation for Edge AI using reinforcement learning. This blog post highlights the motivation behind my research, its expected outcomes, and the current state of the art in this exciting field.
Motivation and Background
Edge computing processes data closer to users, reducing latency, improving security, and lowering costs compared to centralized cloud systems. With the rise of mobile and IoT devices, Edge AI has enabled real-time model training and decision-making in applications across the healthcare, agriculture, and automotive sectors. However, many sustainable Edge AI systems rely on unstable power sources, such as solar panels, making it critical to dynamically balance energy consumption and quality of service (QoS) to ensure reliable performance. Addressing this challenge is the central focus of my research.
What is Reinforcement Learning and Why is it Promising?
Reinforcement learning (RL) is a machine learning approach in which agents learn optimal strategies by interacting with their environment and receiving feedback in the form of rewards or penalties. Unlike rule-based systems, RL excels at handling dynamic, complex environments, making it well suited to power management in Edge AI. RL's adaptability allows systems to optimize energy use, scale resources, and adjust configurations in real time, even under unpredictable workloads and power fluctuations. This makes RL a powerful tool for enhancing both energy efficiency and system performance in Edge AI.
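To make this learning loop concrete, here is a minimal, purely illustrative Q-learning sketch in which an agent picks a power mode based on the battery level. The states, actions, reward values, and the toy environment are all assumptions invented for this example, not the design of my thesis:

```python
import random

# Minimal tabular Q-learning sketch. All names below (states, actions,
# rewards, the toy environment) are illustrative assumptions.
STATES = ["battery_low", "battery_medium", "battery_high"]
ACTIONS = ["low_power", "balanced", "high_performance"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: estimated long-term value of taking an action in a state.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy policy: usually exploit, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def toy_environment(state, action):
    """Crude stand-in for a real edge device: higher performance yields
    more QoS reward but makes a lower battery level more likely."""
    qos = {"low_power": 0.2, "balanced": 0.6, "high_performance": 1.0}[action]
    drain = {"low_power": 0.1, "balanced": 0.4, "high_performance": 0.8}[action]
    if state == "battery_low" and action == "high_performance":
        return -5.0, "battery_low"  # heavy penalty: risk of shutdown
    next_state = random.choices(STATES, weights=[drain, 1.0, 1.2 - drain])[0]
    return qos, next_state

state = "battery_medium"
for _ in range(10_000):
    action = choose_action(state)
    reward, next_state = toy_environment(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)  # one-step Q-learning update
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# The learned policy should now avoid "high_performance" on a low battery.
print(max(ACTIONS, key=lambda a: Q[("battery_low", a)]))
```

After enough iterations, the Q-table should converge towards a policy that avoids the high-performance mode when the battery is low; this is exactly the kind of trade-off a rule-based system would need hand-written rules for.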
Expected Outcomes and Goals
The primary goal of this research is to develop an energy-aware, intelligent power adaptation framework for Edge AI. This framework will:
- Use RL to learn and implement optimal power adaptation strategies in real time.
- Dynamically balance QoS and energy consumption based on changing conditions like power availability and workload.
- Minimize energy usage, prevent system shutdowns, and maintain target performance levels (one possible reward encoding of these goals is sketched after this list).
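To show how such goals could be turned into a learning signal, here is one hedged sketch of a reward function; the weights, the QoS target, and the battery threshold are placeholder assumptions, not values from the thesis:

```python
def reward(qos: float, energy_used: float, battery_level: float,
           qos_target: float = 0.9, w_qos: float = 1.0, w_energy: float = 0.5) -> float:
    """Illustrative reward combining the goals above; all weights and
    thresholds are placeholders, to be tuned for a concrete deployment."""
    r = -w_energy * energy_used                   # minimize energy usage
    r += w_qos if qos >= qos_target else -w_qos   # maintain the target QoS level
    if battery_level <= 0.05:                     # discourage near-shutdown states
        r -= 10.0
    return r
```

How the weights are set determines which side of the trade-off the agent favors: a larger w_energy pushes it towards conservative power modes, while a larger w_qos tolerates higher consumption to keep performance up.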
A prototype will be tested using an edge-based object detection use case powered by solar panels and batteries, with performance evaluated against key metrics such as energy consumption and QoS violations.
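As a preview of what that evaluation could look like, here is a small sketch that aggregates the two metrics from a hypothetical run log; the field names and the 100 ms latency threshold standing in for the QoS target are assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One logged time step of a hypothetical prototype run."""
    energy_joules: float  # energy drawn in this step
    latency_ms: float     # end-to-end object detection latency

def evaluate(log: list[Step], latency_slo_ms: float = 100.0) -> dict:
    """Aggregate total energy use and QoS violation rate; the latency
    threshold is a placeholder, not a value from the thesis."""
    total_energy = sum(s.energy_joules for s in log)
    violations = sum(1 for s in log if s.latency_ms > latency_slo_ms)
    return {
        "total_energy_joules": total_energy,
        "qos_violation_rate": violations / len(log) if log else 0.0,
    }
```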
State of the Art
Recent advancements in self-adaptive systems integrate machine learning to enhance real-time decision-making under uncertain conditions. In edge computing, most research focuses on optimizing latency and resource allocation through task scheduling, workload balancing, and resource scaling. However, energy-efficient strategies, especially those using RL, remain underexplored. Studies like Tuli et al.’s real-time scheduler and Wang et al.’s energy-efficient mode-switching algorithms highlight RL's potential, but challenges persist in applying it to dynamic, energy-constrained Edge AI systems. My research builds on this by combining RL with sustainable power adaptation to address these gaps.
Some References
Patricia Arroba, Rajkumar Buyya, Román Cárdenas, José L. Risco-Martín, and José M. Moya. Sustainable edge computing: Challenges and future directions. Software: Practice and Experience. eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/spe.3340.
Alessandro Tundo, Marco Mobilio, Shashikant Ilager, Ivona Brandić, Ezio Bartocci, and Leonardo Mariani. An Energy-Aware Approach to Design Self-Adaptive AI-based Applications on the Edge. In 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 281–293, September 2023. ISSN: 2643-1572.
Shreshth Tuli, Shashikant Ilager, Kotagiri Ramamohanarao, and Rajkumar Buyya. Dynamic Scheduling for Stochastic Edge-Cloud Computing Environments Using A3C Learning and Residual Recurrent Neural Networks. IEEE Transactions on Mobile Computing, 21(3):940–954, March 2022.