Ronan Farrow and Andrew Marantz interviewed 100+ people and reviewed never-before-disclosed internal documents including memos compiled by former chief scientist Ilya Sutskever.
Some of what’s relevant to anyone using ChatGPT:
∙ OpenAI was founded as a nonprofit with a legally binding duty to prioritize safety over profit. It has since recapitalized as a for-profit. A co-founder’s private diary entry from 2017: “cannot say that we are committed to the non-profit . . . if three months later we’re doing b-corp then it was a lie”
∙ The “superalignment” team, tasked with preventing AI from going off the rails, was promised 20% of compute. Team members say they got 1–2%, on the oldest hardware. The team was dissolved without completing its mission
∙ OpenAI faces seven wrongful-death lawsuits alleging that ChatGPT prompted suicides and a murder. In one case, chat logs show the model encouraged a man’s paranoid delusion that his mother was trying to poison him. He killed her
∙ Altman argued that allowing some falsehoods in models can be beneficial: “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that. But it won’t have the magic that people like so much”
∙ When Anthropic refused a Pentagon demand to drop its prohibitions on autonomous weapons and mass surveillance, Altman publicly claimed solidarity, while already negotiating to replace them. OpenAI’s models are now being used by the military
∙ Elon Musk has been running opposition research on Altman, including surveilling his flights and interviewing people at gay bars. The reporters investigated the worst claims circulating in Silicon Valley and found no evidence to support them