Safety & Alignment

AI safety needs social scientists

We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems with human values requires resolving many uncertain…

Research

Better language models and their implications

We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, questi…

AI, Deep Learning, Machine Learning

Thoughts on the BagNet Paper

Some thoughts on the interesting BagNet paper (accepted at ICLR 2019), currently circulating in the machine-learning Twitter community.

Disclaimer: I wasn’t a reviewer of this paper for ICLR. I think it was worthy of acceptance to the c…

Uncategorised

Generalized Language Models

[Updated on 2019-02-14: add ULMFiT and GPT-2.]
[Updated on 2020-02-29: add ALBERT.]
[Updated on 2020-10-25: add RoBERTa.]
[Updated on 2020-12-13: add T5.]
[Updated on 2020-12-30: add GPT-3.]
[Updated on 2021-11-13: add XLNet, BART and ELECTRA; Also up…
