cs.AI

TUR-DPO: Topology- and Uncertainty-Aware Direct Preference Optimization

arXiv:2605.00224v1 Announce Type: new
Abstract: Aligning large language models (LLMs) with human preferences is commonly done via reinforcement learning from human feedback (RLHF) with Proximal Policy Optimization (PPO) or, more simply, via Direct Preference Optimization (DPO)…
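
For context, the sketch below shows the standard DPO loss of Rafailov et al. (2023), which the abstract names as the simpler alternative to PPO-based RLHF. This is not the paper's TUR-DPO objective (the truncated abstract does not specify it); the function name, argument names, and the beta value are illustrative assumptions.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over a batch of (chosen, rejected) pairs.

    Each argument is a 1-D tensor of summed per-token log-probabilities
    of a response under the trainable policy or the frozen reference.
    """
    # Implicit reward for each response: log-prob ratio vs. the reference.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Maximize sigmoid(beta * (chosen margin - rejected margin)).
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()

# Toy usage with random log-probabilities for a batch of 4 pairs:
if __name__ == "__main__":
    b = 4
    loss = dpo_loss(torch.randn(b), torch.randn(b),
                    torch.randn(b), torch.randn(b))
    print(loss.item())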