Electronics, Vol. 14, Pages 3184: Dependent Task Graph Offloading Model Based on Deep Reinforcement Learning in Mobile Edge Computing


Electronics doi: 10.3390/electronics14163184

Authors: Ruxin Guo, Lunyu Zhou, Linzhi Li, Yuhui Song, Xiaolan Xie

Mobile edge computing (MEC) has emerged as a promising solution for enabling resource-constrained user devices to run large-scale, complex applications by offloading their computational tasks to edge servers. One of the most critical challenges in MEC is designing efficient task offloading strategies. Traditional approaches either rely on non-intelligent algorithms that cannot adapt to the dynamic edge environment or use learning-based methods that ignore task dependencies within applications. To address these limitations, this study investigates task offloading for mobile applications with interdependent tasks in an MEC system, employing a deep reinforcement learning framework. Specifically, we model task dependencies as a Directed Acyclic Graph (DAG), where nodes represent subtasks and directed edges indicate their dependency relationships. Based on task priorities, the DAG is transformed into a topological sequence of task vectors. We propose a novel graph-based offloading model that combines an attention-based network with a Proximal Policy Optimization (PPO) algorithm to learn optimal offloading decisions. Our method leverages offline reinforcement learning through the attention network to capture intrinsic task dependencies within applications. Experimental results show that the proposed model exhibits strong decision-making capability and outperforms existing baseline algorithms.
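To make the DAG-to-sequence step concrete, the Python sketch below shows one way a dependency graph can be flattened into a priority-aware topological sequence of the kind the model consumes. It is not taken from the paper: the abstract does not specify the priority metric, so the priority scores here are a placeholder assumption, and dag_to_sequence is an illustrative name.

    import heapq

    def dag_to_sequence(num_tasks, edges, priority):
        """Order DAG subtasks into a schedulable sequence.

        edges: list of (u, v) meaning task u must finish before task v.
        priority: per-task score used to pick among ready tasks
                  (the paper's actual priority metric is not given here).
        """
        succ = [[] for _ in range(num_tasks)]
        indeg = [0] * num_tasks
        for u, v in edges:
            succ[u].append(v)
            indeg[v] += 1

        # Max-heap on priority: among all ready tasks, pop the highest-priority one.
        ready = [(-priority[t], t) for t in range(num_tasks) if indeg[t] == 0]
        heapq.heapify(ready)

        order = []
        while ready:
            _, t = heapq.heappop(ready)
            order.append(t)
            for s in succ[t]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    heapq.heappush(ready, (-priority[s], s))

        if len(order) != num_tasks:
            raise ValueError("graph contains a cycle; not a DAG")
        return order

    # Example: a 5-subtask application; edges encode dependencies.
    edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
    print(dag_to_sequence(5, edges, priority=[1.0, 0.8, 0.9, 0.5, 0.2]))  # [0, 2, 1, 3, 4]

Selecting the highest-priority ready task at each step keeps the output a valid topological order while respecting task priorities; the resulting sequence of task vectors can then be fed, one per decision step, to an attention-based policy such as the one the paper trains with PPO.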



Source: www.mdpi.com