OpenAI was founded as a nonprofit for one specific reason — to ensure AI development couldn't be hijacked by profit motives.
Their original charter had a clause that legally required safety to come before profits, and gave the board the power to shut everything down if AI became too dangerous.
That clause is gone. The board has been restructured to answer to investors instead.
We just removed the emergency brake from the most powerful technology in human history because it was bad for business.
What happens the next time something goes wrong?