cs.AI, cs.HC, cs.LG

Influencing Humans to Conform to Preference Models for RLHF

arXiv:2501.06416v3 Announce Type: replace
Abstract: Designing a reinforcement learning from human feedback (RLHF) algorithm to approximate a human’s unobservable reward function requires assuming, implicitly or explicitly, a model of human preferences…
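The abstract is truncated here, but the "model of human preferences" it refers to is, in much of the RLHF literature, a Bradley-Terry-style logistic model over segment returns. As a hedged illustration (the paper may assume a different model), a minimal sketch of that standard assumption:

```python
import math

def bradley_terry_preference(return_a: float, return_b: float) -> float:
    """Probability a labeler prefers segment A over segment B under the
    commonly assumed Bradley-Terry preference model:
        P(A > B) = sigmoid(return_A - return_B).
    This is an illustrative assumption, not necessarily the model
    studied in the paper above."""
    return 1.0 / (1.0 + math.exp(-(return_a - return_b)))

# Equal returns imply indifference: probability 0.5
print(bradley_terry_preference(1.0, 1.0))

# A higher-return segment is preferred with probability > 0.5
print(bradley_terry_preference(2.0, 0.0))
```

RLHF reward-learning methods typically fit a reward model by maximizing the likelihood of observed human choices under a model of this form, which is why the choice of preference model matters for approximating the underlying reward function.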