We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision.
We find that, just as a large transformer model trained on language can generate coherent text, the same model trained on pixel sequences can generate coherent image completions and samples.
We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We’re releasing the model weights and code, along with a tool to explore the generated samples.
We’ve created OpenAI LP, a new “capped-profit” company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission.
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization.
We’ve found that self-play allows simulated AIs to discover physical skills like tackling, ducking, faking, kicking, catching, and diving for the ball, without explicitly designing an environment with these skills in mind.
We’ve created a robotics system, trained entirely in simulation and deployed on a physical robot, which can learn a new task after seeing it done once.
We’ve developed an unsupervised system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews.
We’ve discovered that evolution strategies (ES), an optimization technique that’s been known for decades, rivals the performance of standard reinforcement learning (RL) techniques on modern RL benchmarks, while overcoming many of RL’s inconveniences.
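The core idea behind ES is simple enough to sketch in a few lines: perturb the parameters with Gaussian noise, score each perturbed copy, and move the parameters toward the better-scoring perturbations. Here is a minimal illustrative sketch on a toy objective; the function names and hyperparameters are this sketch's own assumptions, not OpenAI's released implementation:

```python
import numpy as np

np.random.seed(0)

def evolution_strategies(f, theta, sigma=0.1, alpha=0.01, npop=50, iters=300):
    """Minimal ES: estimate an ascent direction for f by sampling Gaussian
    perturbations of theta and weighting each by its (normalized) score."""
    for _ in range(iters):
        noise = np.random.randn(npop, theta.size)          # one perturbation per population member
        rewards = np.array([f(theta + sigma * n) for n in noise])
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize to reduce variance
        theta = theta + alpha / (npop * sigma) * noise.T @ adv     # reward-weighted update
    return theta

# Toy objective: maximize -||theta - target||^2, i.e. recover `target`.
target = np.array([0.5, -0.3, 1.2])
f = lambda th: -np.sum((th - target) ** 2)
theta = evolution_strategies(f, np.zeros(3))
```

Note that the objective is treated as a black box: no gradients flow through `f`, which is what lets ES sidestep many of RL's inconveniences, such as backpropagation through time and sensitivity to sparse rewards.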
Deep learning is an empirical science, and the quality of a group’s infrastructure is a multiplier on progress. Fortunately, today’s open-source ecosystem makes it possible for anyone to build great deep learning infrastructure.
OpenAI’s mission is to build safe AI and to ensure that AI’s benefits are distributed as widely and evenly as possible. We’re trying to build AI as part of a larger community, and we want to share our plans and capabilities along the way.
This post describes four projects that share a common theme of enhancing or using generative models, a class of unsupervised learning techniques.