Fast AI Model Partition for Split Learning over Edge Networks
arXiv:2507.01041v4 Announce Type: replace-cross
Abstract: Split learning (SL) is a distributed learning paradigm that enables computation-intensive artificial intelligence (AI) applications by partitioning AI models between mobile devices and edge servers. However, model partitioning in SL is challenging due to the diverse and complex architectures of AI models. In this paper, we formulate an optimal model partitioning problem to minimize the training delay in SL. To solve it, we represent an arbitrary AI model as a directed acyclic graph (DAG), where the model's layers and inter-layer connections map to vertices and edges, and training delays are captured as edge weights. We then propose a general model partitioning algorithm that transforms the problem into a minimum \textit{s-t} cut problem on the DAG. Theoretical analysis shows that the two problems are equivalent, so the optimal model partition can be obtained via a maximum-flow method. Furthermore, for AI models with block structures, we design a low-complexity block-wise model partitioning algorithm that determines the optimal partition by abstracting each block (i.e., a repeating component comprising multiple layers in an AI model) into a single vertex, thereby simplifying the DAG. Extensive experimental results on a hardware testbed equipped with NVIDIA Jetson devices demonstrate that the proposed solution reduces algorithm running time by up to 13.0$\times$ and training delay by up to 38.95\%, compared to state-of-the-art baselines.
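The min \textit{s-t} cut reduction can be illustrated with a toy sketch. All layer names, delay values, and the particular edge construction below are invented for illustration and may differ from the paper's actual DAG construction: the source $s$ stands for the device side and the sink $t$ for the server side; cutting edge $s \to v$ assigns layer $v$ to the server (paying its server-side delay), cutting $v \to t$ assigns it to the device (paying its device-side delay), and cutting an inter-layer edge pays the activation-transfer delay. A standard maximum-flow routine (Edmonds-Karp here) then yields the minimum cut and hence a delay-minimizing partition:

```python
from collections import defaultdict, deque

def min_st_cut(edges, s, t):
    """Edmonds-Karp max flow; returns (cut value, set of vertices on s's side)."""
    cap = defaultdict(float)   # residual capacities
    adj = defaultdict(set)     # adjacency including residual (reverse) arcs
    for u, v, w in edges:
        cap[(u, v)] += w
        adj[u].add(v)
        adj[v].add(u)
    flow = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= push
            cap[(v, u)] += push
        flow += push
    # vertices still reachable from s form the source side of the min cut
    side, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in side and cap[(u, v)] > 1e-12:
                side.add(v)
                queue.append(v)
    return flow, side

# Toy 4-layer chain model; all delay numbers are invented.
device = {"l1": 0.2, "l2": 0.5, "l3": 4.0, "l4": 8.0}  # per-layer delay on device
server = {"l1": 1.0, "l2": 1.0, "l3": 1.0, "l4": 1.0}  # per-layer delay on server
comm = {("l1", "l2"): 0.3, ("l2", "l3"): 0.4, ("l3", "l4"): 2.0}  # transfer delays

edges = []
for v in device:
    edges.append(("s", v, server[v]))  # cut s->v: layer v runs on the server
    edges.append((v, "t", device[v]))  # cut v->t: layer v runs on the device
for (u, v), w in comm.items():
    edges.append((u, v, w))            # cut u->v: activations cross the network

delay, device_side = min_st_cut(edges, "s", "t")
# Cheap early layers stay on the device; heavy later layers go to the server.
print(delay, sorted(device_side - {"s"}))
```

With these numbers the cut keeps `l1` and `l2` on the device and offloads `l3` and `l4`, with total delay 3.1. The same construction extends unchanged to DAGs with skip connections, since any inter-layer edge simply becomes another capacity-weighted arc.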