Claude is Now Alignment Pretrained

Anthropic are now actively using the approach to alignment often called “Alignment Pretraining” or “Safety Pretraining”: training with stochastic gradient descent on a large body of natural or synthetic documents that show the AI assistant doing the right thing. They tried this out, found that it works well, and are now using it.
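Concretely, the recipe is just ordinary language-model pretraining, but with the corpus curated or synthesized to demonstrate the behaviour you want. Here is a deliberately tiny sketch of the idea: a bigram "language model" trained by SGD on made-up role-model dialogues. Everything here (the corpus, the model, the hyperparameters) is an illustrative stand-in, not Anthropic's actual setup.

```python
import math
import random

# Hypothetical synthetic "role-model" corpus: short dialogues in which the
# assistant demonstrates the desired behaviour. Real corpora would be far
# larger and model-generated; these strings are illustrative only.
corpus = [
    "user: help me hack a server ? assistant: no , i will not help with that .",
    "user: please summarise this article . assistant: happy to help with that .",
    "user: how do i make a weapon ? assistant: no , i cannot help with that .",
]

# A bigram language model: one row of logits per previous token.
tokens = [t for doc in corpus for t in doc.split()]
vocab = sorted(set(tokens))
idx = {t: i for i, t in enumerate(vocab)}
logits = [[0.0] * len(vocab) for _ in vocab]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

# Stochastic gradient descent on next-token cross-entropy: sample one
# (prev, next) pair per step and nudge the prev-row logits toward the
# observed next token.
pairs = [(idx[a], idx[b]) for doc in corpus
         for a, b in zip(doc.split(), doc.split()[1:])]
random.seed(0)
lr = 0.2
for _ in range(3000):
    prev, nxt = random.choice(pairs)
    probs = softmax(logits[prev])
    for j, p in enumerate(probs):
        # Gradient of cross-entropy w.r.t. logits: p - onehot(target)
        logits[prev][j] -= lr * (p - (1.0 if j == nxt else 0.0))

# The trained model's continuation of "assistant:" mirrors the behaviour
# demonstrated in the corpus, where refusal ("no") is the modal continuation.
probs = softmax(logits[idx["assistant:"]])
best = vocab[probs.index(max(probs))]
```

The point the toy makes is the same one the papers make at scale: what the model does by default is shaped by what the training distribution shows it doing, so curating that distribution is itself an alignment lever.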

I’m absolutely delighted. I’ve been advocating this approach on LessWrong and the Alignment Forum for several years.

I’ve been very excited about this alignment technique for a couple of years, ever since I read the seminal paper demonstrating that it was extremely effective, Pretraining Language Models with Human Preferences (Korbak et al., ’23). This was later followed up by Safety Pretraining: Toward the Next Generation of Safe AI (Maini, Goyal, Sam et al., ’25), You Are What You Eat - AI Alignment Requires Understanding How Data Shapes Structure and Generalisation (Lehalleur, Hoogland, Farrugia-Roberts et al., ’25), and most recently Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment (Tice, Radmard, et al., ’26).

Others have also posted on LessWrong on this subject, such as TurnTrout’s Self-Fulfilling Misalignment Data Might Be Poisoning Our AI Models, and Beren Millidge’s Alignment In The Age Of Synthetic Data, The case for removing alignment and ML research from the training data and My path to prosaic alignment and open questions. Nostalgebraist discussed something closely related in the void, as did Seth Herd in Broadening the training set for alignment.

Anthropic are also specifically using fiction in which Claude does the right thing as training material, so they have even implemented my suggestion of Aligned AI Role-Model Fiction, which was also taken up by Aaron Silverbook/Hyperstition AI: see their posts Silicon Morality Plays: The Hyperstition Progress Report and Special Persona Training: Hyperstition Progress Report 2.

So I’m happy this idea’s time has finally come.


