Researchers may have found a way to stop AI models from intentionally playing dumb during safety evaluations

A study by researchers from the MATS program, Redwood Research, the University of Oxford, and Anthropic examines a safety problem that grows more pressing as AI systems become more capable: "sandbagging," where a model deliberately hides its true abilities and delivers work that looks adequate but is intentionally subpar.

The article Researchers may have found a way to stop AI models from intentionally playing dumb during safety evaluations appeared first on The Decoder.
