Trading off rewards and errors in multi-armed bandits

arXiv:2605.00488v1 Announce Type: new

Abstract: In multi-armed bandits, the most-explored arms yield the most accurate mean estimates, while reward maximization concentrates pulls on the single best arm. We study the tradeoff between identifying arm means accurately and accumulating reward, and present an algorithm whose regret guarantees interpolate between the two objectives. We provide both upper and lower bounds and validate the algorithm empirically.
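The abstract does not specify the paper's algorithm, but the tradeoff it describes can be illustrated with a simple, hypothetical mixing scheme: with probability `epsilon` pull an arm uniformly at random (which spreads samples across arms and improves mean estimation), otherwise pull the empirically best arm (which accumulates reward). The parameter `epsilon` then interpolates between the two objectives. All names and parameters below are illustrative, not from the paper.

```python
import random

def mixed_bandit(means, horizon, epsilon, seed=0):
    """Hypothetical epsilon-mixing bandit: epsilon=1 is pure uniform
    exploration (best for estimating all arm means), epsilon=0 is pure
    greedy exploitation (best for reward). Not the paper's algorithm."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k          # pulls per arm
    sums = [0.0] * k          # total reward per arm
    total_reward = 0.0
    for t in range(horizon):
        if t < k:
            arm = t           # pull each arm once to initialize
        elif rng.random() < epsilon:
            arm = rng.randrange(k)  # explore: uniform over arms
        else:
            # exploit: empirically best arm so far
            arm = max(range(k), key=lambda i: sums[i] / counts[i])
        r = 1.0 if rng.random() < means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += r
        total_reward += r
    estimates = [sums[i] / counts[i] for i in range(k)]
    return estimates, total_reward
```

Running this with a small `epsilon` earns more reward but estimates the suboptimal arms' means from far fewer samples, while `epsilon = 1` estimates every mean equally well at the cost of reward, which is exactly the tension the paper's regret bounds quantify.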
