Feature-wise transformations
A simple and surprisingly effective family of conditioning mechanisms.
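As a concrete illustration of what a feature-wise transformation looks like, here is a minimal sketch of a feature-wise linear modulation (FiLM) layer in PyTorch. The class name, shapes, and the single-linear-layer conditioning network are illustrative assumptions, not code from the article.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation: scale and shift each feature map
    of `x` using parameters predicted from a conditioning vector `z`."""
    def __init__(self, num_features, cond_dim):
        super().__init__()
        # One (gamma, beta) pair per feature channel.
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_features)

    def forward(self, x, z):
        # x: (batch, channels, height, width); z: (batch, cond_dim)
        gamma, beta = self.to_gamma_beta(z).chunk(2, dim=-1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # broadcast over H, W
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * x + beta

# Example: condition a 64-channel feature map on a 10-dim vector.
film = FiLM(num_features=64, cond_dim=10)
x = torch.randn(8, 64, 16, 16)
z = torch.randn(8, 10)
out = film(x, z)  # same shape as x
```

The appeal of the mechanism is this simplicity: conditioning enters as a per-feature scale and shift, so it composes with almost any architecture.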
We introduce Glow, a reversible generative model which uses invertible 1×1 convolutions. It extends previous work on reversible generative models and simplifies the architecture. Our model can generate realistic high resolution images, supports efficie…
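As a hedged sketch of the mechanism named above, the block below implements an invertible 1×1 convolution: a learned square matrix applied at every spatial position, whose log-determinant enters the flow's change-of-variables objective. Names and the orthogonal initialization are assumptions, not Glow's reference implementation.

```python
import torch
import torch.nn as nn

class Invertible1x1Conv(nn.Module):
    """1x1 convolution with a square channel-mixing matrix, so it can be
    inverted exactly and its log-determinant computed in closed form."""
    def __init__(self, channels):
        super().__init__()
        # Start from a random rotation so the matrix begins invertible.
        w, _ = torch.linalg.qr(torch.randn(channels, channels))
        self.weight = nn.Parameter(w)

    def forward(self, x):
        # x: (batch, channels, height, width)
        b, c, h, w = x.shape
        y = torch.einsum('ij,bjhw->bihw', self.weight, x)
        # Each of the h*w positions contributes log|det W| to the
        # flow's log-likelihood.
        logdet = h * w * torch.slogdet(self.weight)[1]
        return y, logdet

    def inverse(self, y):
        w_inv = torch.inverse(self.weight)
        return torch.einsum('ij,bjhw->bihw', w_inv, y)
```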
We’ve trained an agent to achieve a high score of 74,500 on Montezuma’s Revenge from a single human demonstration, better than any previously published result. Our algorithm is simple: the agent plays a sequence of games starting from carefully chosen …
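The teaser above is cut off, but the post's method restarts training episodes from states along the human demonstration, backing the start point up as the agent improves. Below is a heavily hedged sketch of that restart curriculum; `demo_states`, `train_from`, and `success_score` are hypothetical stand-ins, not the post's actual interfaces.

```python
def demo_curriculum(demo_states, train_from, success_score):
    """Train backward along a demonstration: master the end of the game
    first, then move the episode start point earlier and earlier.

    demo_states: emulator snapshots taken along the human demo.
    train_from(state): runs RL episodes starting from `state` and
    returns the mean episode score (hypothetical helper).
    """
    for start in reversed(range(len(demo_states))):
        # Keep training from this snapshot until the agent reliably
        # finishes the game from here, then back up along the demo.
        score = train_from(demo_states[start])
        while score < success_score:
            score = train_from(demo_states[start])
```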
Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2.
[Updated on 2018-10-28: Add Pointer Network and the link to my implementation of Transformer.]
[Updated on 2018-11-06: Add a link to the implementation of the Transformer model.]
[Updated on 2018-11-18: Add Neural Turing Machines.]
[Updated on 2019-07-18:…
The first run of our Retro Contest—exploring the development of algorithms that can generalize from previous experience—is now complete.
We’ve obtained state-of-the-art results on a suite of diverse language tasks with a scalable, task-agnostic system, which we’re also releasing. Our approach is a combination of two existing ideas: transformers and unsupervised pre-training. These resul…
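To make the two-ingredient recipe concrete, here is a small sketch of the training pattern: unsupervised next-token pre-training of a transformer, then supervised fine-tuning with a task head on top of the same network. The tiny model, placeholder data, and hyperparameters are illustrative assumptions, not the released system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def causal_mask(n):
    # Upper-triangular -inf mask: each position attends only to the past.
    return torch.triu(torch.full((n, n), float('-inf')), diagonal=1)

class TinyLM(nn.Module):
    # Stand-in for a deep decoder-only transformer language model.
    def __init__(self, vocab_size, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        h = self.blocks(self.embed(tokens), mask=causal_mask(tokens.size(1)))
        return self.lm_head(h), h

vocab = 1000
model = TinyLM(vocab)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

# 1) Unsupervised pre-training: next-token prediction on unlabeled text.
tokens = torch.randint(vocab, (4, 32))  # stand-in for a text batch
logits, _ = model(tokens[:, :-1])
lm_loss = F.cross_entropy(logits.reshape(-1, vocab),
                          tokens[:, 1:].reshape(-1))
lm_loss.backward()
opt.step()
opt.zero_grad()

# 2) Supervised fine-tuning: a small task head on the pretrained features.
head = nn.Linear(128, 2)                # e.g. a two-class task
labels = torch.randint(2, (4,))
_, h = model(tokens)
task_loss = F.cross_entropy(head(h[:, -1]), labels)
task_loss.backward()                    # fine-tunes model and head together
```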
In this article I will give step-by-step instructions for reproducing the experiments in the World Models article (pdf). The reference TensorFlow implementation is on GitHub.
Other people have implemented World Models independently. The…