cs.CL

An Empirical Investigation of Practical LLM-as-a-Judge Improvement Techniques on RewardBench 2

arXiv:2604.13717v1 Announce Type: new
Abstract: LLM-as-a-judge, using a language model to score or rank candidate responses, is widely used as a scalable alternative to human evaluation in RLHF pipelines, benchmarking, and application-layer evaluation…