Agentic AI-Based Joint Computing and Networking via Mixture of Experts and Large Language Models
arXiv:2605.02911v1 Announce Type: new
Abstract: Future sixth-generation (6G) mobile networks are envisioned to host a diverse set of powerful yet highly specialized optimization experts. Realizing this vision will require scalable mechanisms that can select, combine, and orchestrate such experts based on high-level intent and uncertainty descriptions. In this paper, we propose an agentic artificial intelligence (AI)-based network optimization framework that integrates mixture-of-experts (MoE) architectures with large language models (LLMs). Under the proposed framework, the LLM acts as a semantic gate that reasons over operator objectives and dynamically composes suitable optimization agents. The framework is formulated in a model-agnostic manner and bridges human-readable network intents with low-level resource allocation decisions, enabling flexible optimization across heterogeneous objectives and operating conditions. As a representative instantiation, we apply the framework to a joint communication and computing network and design a library of specialized optimization experts covering throughput, fairness, and delay-driven objectives under both regular and robust conditions. Numerical simulations demonstrate that the proposed agentic MoE framework consistently achieves near-optimal performance relative to exhaustive expert combinations, while outperforming individual experts across diverse objectives, including delay minimization and throughput maximization.
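The abstract's core architecture can be sketched in a few lines: a semantic gate maps a high-level intent to mixture weights over a library of specialized experts, whose resource-allocation outputs are then combined. The sketch below is purely illustrative, not the paper's implementation; the LLM gate is stubbed with a keyword matcher, and all names (`semantic_gate`, `throughput_expert`, etc.) are hypothetical.

```python
# Illustrative sketch of an agentic MoE gate (NOT the paper's implementation).
# An LLM would normally play the role of semantic_gate; here a keyword
# matcher stands in so the example is self-contained and runnable.
from typing import Callable, Dict, List

Allocation = List[float]  # per-user resource shares, summing to 1


def throughput_expert(demands: List[float]) -> Allocation:
    # Allocate proportionally to demand (throughput-oriented objective).
    total = sum(demands)
    return [d / total for d in demands]


def fairness_expert(demands: List[float]) -> Allocation:
    # Split resources equally regardless of demand (fairness objective).
    n = len(demands)
    return [1.0 / n] * n


# Library of specialized optimization experts, keyed by objective name.
EXPERTS: Dict[str, Callable[[List[float]], Allocation]] = {
    "throughput": throughput_expert,
    "fairness": fairness_expert,
}


def semantic_gate(intent: str) -> Dict[str, float]:
    # Stand-in for the LLM gate: score each expert by whether its
    # objective appears in the intent, then normalize into weights.
    scores = {name: float(name in intent.lower()) for name in EXPERTS}
    if not any(scores.values()):
        scores = {name: 1.0 for name in EXPERTS}  # fall back to uniform mix
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}


def compose(intent: str, demands: List[float]) -> Allocation:
    # MoE-style output: weighted combination of the experts' allocations.
    weights = semantic_gate(intent)
    out = [0.0] * len(demands)
    for name, w in weights.items():
        alloc = EXPERTS[name](demands)
        out = [o + w * a for o, a in zip(out, alloc)]
    return out


print(compose("maximize throughput", [3.0, 1.0]))  # → [0.75, 0.25]
```

In the paper's setting the gate would instead prompt an LLM with the operator's intent and the expert descriptions, and could blend several experts (e.g., delay plus fairness) rather than selecting a single one.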