OpenAI’s Approach to Frontier Risk
An Update for the UK AI Safety Summit
To support the safety of highly capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge.
Together with Anthropic, Google, and Microsoft, we’re announcing the new Executive Director of the Frontier Model Forum and a new $10 million AI Safety Fund.