Model-Free Inference of Investor Preferences: A Relative Entropy IRL Approach

arXiv:2604.24280v1 Announce Type: new

Abstract: We present a framework using Relative Entropy Inverse Reinforcement Learning (RE-IRL) to recover investor reward functions from observed investment actions and market conditions. Unlike traditional IRL algorithms, RE-IRL does not require knowledge of the environment's transition probabilities, making it applicable when these dynamics are unknown or inaccessible. To address data sparsity, we estimate the observed behavior policy with a $K$-nearest neighbor approach. We further propose a statistical testing framework to evaluate the validity and robustness of the estimated results.
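The abstract does not specify how the $K$-nearest neighbor policy estimate is constructed; one plausible minimal sketch is to estimate the action distribution at a query state from the empirical action frequencies among its $k$ nearest observed states. The function name, smoothing choice, and discrete action space below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def knn_behavior_policy(states, actions, query_state, k=5, n_actions=3):
    """Illustrative KNN estimate of a discrete behavior policy.

    states  : (N, d) array of observed market-condition features
    actions : (N,) array of discrete investment actions taken in those states
    Returns a length-n_actions probability vector for query_state.
    """
    # Euclidean distance from the query state to every observed state
    dists = np.linalg.norm(states - query_state, axis=1)
    nearest = np.argsort(dists)[:k]
    # Empirical action frequencies among the k nearest neighbors,
    # with add-one (Laplace) smoothing so no action gets zero probability
    counts = np.bincount(actions[nearest], minlength=n_actions) + 1.0
    return counts / counts.sum()
```

Such an estimate can then serve as the behavior policy inside an RE-IRL objective; in practice the distance metric and the choice of $k$ would need tuning to the feature scales of the market data.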
