Analyzing Adversarial Inputs in Deep Reinforcement Learning

arXiv:2402.05284v2

Abstract: In recent years, Deep Reinforcement Learning (DRL) has become a popular paradigm in machine learning due to its successful applications to real-world and complex systems. However, even state-of-the-art DRL models have been shown to suffer from reliability concerns -- for example, susceptibility to adversarial inputs, i.e., small and abundant input perturbations that can fool a model into making unpredictable and potentially dangerous decisions. This drawback limits the deployment of DRL systems in safety-critical contexts, where even a small error cannot be tolerated. In this work, we present a comprehensive characterization of adversarial inputs through the lens of formal verification. Specifically, we present the Adversarial Rate, a metric adapted from the ProVe family, for the systematic evaluation of adversarial inputs in DRL: it partitions the input domain into subregions, enabling both the quantification and the spatial visualization of adversarial inputs, and we provide a set of tools and algorithms for its computation. The main contribution of this work is a comprehensive framework for evaluating the effect of adversarial inputs on DRL policies. Our analysis empirically demonstrates how adversarial inputs can compromise the safety of a given DRL system, and we further analyze the behavior of these adversarial configurations to suggest several practices and guidelines that help mitigate the vulnerability of trained DRL networks.
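To make the partition-based idea concrete, below is a minimal, illustrative Python sketch of how a subregion-based adversarial rate could be estimated. This is not the paper's ProVe-based procedure: the formal verification of each subregion is replaced here with random sampling, and every name and parameter (compute_adversarial_rate, policy, epsilon, splits_per_dim) is a hypothetical choice for this sketch.

```python
import numpy as np

def compute_adversarial_rate(policy, lower, upper, splits_per_dim=10,
                             epsilon=0.05, samples_per_region=64, seed=0):
    """Estimate the fraction of the input domain whose points can be
    perturbed (within an L-inf ball of radius epsilon) into a different
    discrete action. Sampling stands in for formal verification here;
    with a uniform grid, the fraction of flagged cells equals the
    fraction of the domain's volume they cover."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    # Partition the input domain into a grid of hyperrectangular subregions.
    edges = [np.linspace(lower[d], upper[d], splits_per_dim + 1)
             for d in range(dim)]
    adversarial_cells = 0
    total_cells = splits_per_dim ** dim
    for idx in np.ndindex(*([splits_per_dim] * dim)):
        lo = np.array([edges[d][i] for d, i in enumerate(idx)])
        hi = np.array([edges[d][i + 1] for d, i in enumerate(idx)])
        # Sample states in the cell and perturbed copies in the eps-ball.
        states = rng.uniform(lo, hi, size=(samples_per_region, dim))
        noise = rng.uniform(-epsilon, epsilon, size=states.shape)
        perturbed = np.clip(states + noise, lower, upper)
        # Flag the subregion if any sampled perturbation flips the action.
        if np.any(policy(states) != policy(perturbed)):
            adversarial_cells += 1
    return adversarial_cells / total_cells

# Toy usage: a hand-coded 2-D "policy" with a sharp decision boundary.
toy_policy = lambda s: (s[:, 0] + s[:, 1] > 1.0).astype(int)
rate = compute_adversarial_rate(toy_policy, lower=[0, 0], upper=[1, 1])
print(f"estimated adversarial rate: {rate:.2%}")
```

Because the flagged subregions are axis-aligned cells, the same data structure supports the spatial visualization mentioned in the abstract, e.g., rendering the grid as a heatmap of adversarial versus safe regions.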
