A Forensic Analysis of Synthetic Data in RL: Diagnosing and Solving Algorithmic Failures in Model-Based Policy Optimization
arXiv:2510.01457v4 Announce Type: replace
Abstract: Synthetic data is central to data-efficient Dyna-style model-based reinforcement learning, but it can also degrade performance. We study this failure in Model-Based Policy Optimization (MBPO), which performs actor-critic updates using model-generated synthetic state transitions. Although MBPO reports strong sample-efficiency gains on OpenAI Gym, recent work shows that it often underperforms Soft Actor-Critic (SAC), the model-free algorithm it builds on, in the DeepMind Control Suite (DMC), despite both suites involving MuJoCo-based proprioceptive continuous control. We identify two coupled causes of this collapse: a scale mismatch between dynamics and reward targets, which suppresses reward learning and induces critic underestimation, and residual next-state prediction, which inflates model variance and produces unreliable synthetic transitions. We introduce Fixing That Free Lunch (FTFL), a minimal repair that combines independent target normalization with direct next-state prediction. FTFL outperforms SAC on five of the seven previously failing DMC tasks while preserving MBPO's strong Gym performance. We further show that MBPO-lineage algorithms, including uncertainty-aware variants that filter, penalize, or reject synthetic transitions, still inherit these failures unless FTFL is applied to their shared learned-model backbone. More broadly, our results show how benchmark-limited evaluation can encode environment-specific assumptions into algorithm design, motivating taxonomies that map MDP structure to algorithmic failure modes.
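
The two repairs named in the abstract are concrete enough to sketch. Below is a minimal NumPy illustration of the target-construction choices being contrasted: residual versus direct next-state prediction, and joint versus independent normalization of dynamics and reward targets. The names (make_targets, normalize), the normalization details, and the toy data scales are assumptions chosen for illustration, not the paper's actual FTFL implementation.

import numpy as np

def normalize(x, eps=1e-8):
    # Standardize each target dimension independently (zero mean, unit std).
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

def make_targets(s, r, s_next, *, residual: bool, joint_norm: bool):
    # Residual parameterization regresses the delta s' - s, which the
    # abstract links to inflated model variance; direct prediction
    # regresses s' itself (the FTFL choice).
    state_target = (s_next - s) if residual else s_next

    if joint_norm:
        # Illustrative joint scheme (an assumption, not MBPO's exact code):
        # a single shared statistic normalizes the concatenated
        # [state, reward] target, so large state scales dominate and the
        # 1-D reward column contributes almost nothing to a shared
        # regression loss.
        stacked = np.concatenate([state_target, r[:, None]], axis=1)
        stacked = (stacked - stacked.mean()) / (stacked.std() + 1e-8)
        return stacked[:, :-1], stacked[:, -1]

    # FTFL-style repair: normalize state and reward targets independently,
    # so reward learning is not suppressed by dynamics scales.
    return normalize(state_target), normalize(r[:, None])[:, 0]

# Toy data with a deliberate scale mismatch: large proprioceptive states,
# small-magnitude rewards.
rng = np.random.default_rng(0)
s = rng.normal(scale=50.0, size=(256, 17))
s_next = s + rng.normal(scale=0.5, size=s.shape)
r = rng.normal(scale=0.01, size=256)

state_t, reward_t = make_targets(s, r, s_next, residual=False, joint_norm=False)

Under the joint scheme, the reward column is squashed by the state dimensions' far larger standard deviation, so a shared squared-error loss barely penalizes reward errors; normalizing each target independently restores the reward's gradient signal, which matches the suppression mechanism the abstract describes.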