Koopman-Assisted Reinforcement Learning
arXiv:2403.02290v2
Abstract: The Bellman equation and its continuous form, the Hamilton-Jacobi-Bellman equation, are ubiquitous in reinforcement learning and control theory. However, these equations become intractable for high-dimensional or nonlinear systems. This paper develops two new reinforcement learning algorithms based on the data-driven Koopman operator, which lifts a nonlinear system into new coordinates where the dynamics become approximately linear, and where Hamilton-Jacobi-Bellman-based methods are more tractable. In particular, the Koopman operator captures the expectation of the time evolution of the value function via linear dynamics in the lifted coordinates. By parameterizing the Koopman operator with the control actions, we construct a "controlled Koopman tensor" that facilitates the estimation of the optimal value function. This enables us to reformulate two max-entropy RL algorithms: soft value iteration and soft actor-critic. This flexible and interpretable framework includes deterministic and stochastic systems, as well as discrete and continuous dynamics.
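To make the lifting idea concrete, the following Python sketch fits a finite-dimensional Koopman matrix by EDMD-style least squares on a toy system and uses it to propagate a value function that is linear in the lifted coordinates. This is a minimal illustration, not the paper's implementation: the polynomial dictionary `lift`, the helper `fit_koopman`, the toy dynamics, and the value-function weights `w` are all assumptions for exposition, and the action-dependence that would turn `K` into the controlled Koopman tensor is noted only in a comment.

```python
import numpy as np

def lift(x):
    """Lift a 2-D state into a small polynomial dictionary of observables
    (an illustrative choice; the paper's dictionary may differ)."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def fit_koopman(X, Y):
    """Least-squares (EDMD-style) estimate of a matrix K such that
    lift(x_next) ~= K @ lift(x), from snapshot pairs (X, Y)."""
    Phi_X = np.stack([lift(x) for x in X], axis=1)  # (n_obs, n_samples)
    Phi_Y = np.stack([lift(y) for y in Y], axis=1)
    return Phi_Y @ np.linalg.pinv(Phi_X)            # K = Phi_Y Phi_X^+

# Toy snapshot data from a weakly nonlinear map (purely illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
Y = np.stack([np.array([0.9 * x1, 0.8 * x2 + 0.1 * x1**2]) for x1, x2 in X])

K = fit_koopman(X, Y)

# If V is linear in the lifted coordinates, V(x) = w @ lift(x), then its
# expected one-step evolution is also linear: E[V(x')] ~= w @ K @ lift(x).
# A controlled Koopman tensor would make K depend on the action, e.g.
# K(u) = K0 + sum_j u_j K_j (hedged reading of the abstract's construction).
w = rng.standard_normal(K.shape[0])  # illustrative value-function weights
x = np.array([0.3, -0.5])
expected_next_value = w @ (K @ lift(x))
print(expected_next_value)
```

The design point the sketch shows is that once the dynamics are (approximately) linear in the lifted coordinates, the expectation of the value function's time evolution reduces to a matrix-vector product, which is what makes the Bellman backup tractable in this framework.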
Koopman-assisted reinforcement learning attains state-of-the-art performance relative to traditional neural-network-based soft actor-critic baselines on four environments: a linear state-space system, the Lorenz system, fluid flow past a cylinder, and a double-well potential with non-isotropic stochastic forcing.