DALL·E: Creating Images from Text

We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language. | Continue reading


@openai.com | 3 years ago

Organizational Update from OpenAI

It’s been a year of dramatic change and growth at OpenAI. In May, we introduced GPT-3—the most powerful language model to date—and soon afterward launched our first commercial product, an API to safely access artificial intelligence models using simple, natural-language prompts. … | Continue reading


@openai.com | 3 years ago

OpenAI: Gym Retro (2018)

We're releasing the full version of Gym Retro, a platform for reinforcement learning research on games. This brings our publicly-released game count from around 70 Atari games and 30 Sega games to over 1,000 games across a variety of backing emulators. We're also releasing the to … | Continue reading


@openai.com | 3 years ago
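
A minimal usage sketch of the Gym Retro interface, assuming the retro package and its bundled Airstriker-Genesis ROM; any installed game name works the same way:

    # Run a random agent in a Gym Retro environment (classic Gym step API).
    import retro

    env = retro.make(game="Airstriker-Genesis")
    obs = env.reset()
    done = False
    while not done:
        # A real agent would pick actions from a policy instead of sampling.
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()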

OpenAI Licenses GPT-3 Technology to Microsoft

OpenAI released its first commercial product back in June: an API for developers to access advanced technologies for building new applications and services. The API features a powerful general purpose language model, GPT-3, and has received tens of thousands of applications to da … | Continue reading


@openai.com | 3 years ago

Learning to Summarize with Human Feedback

We've applied reinforcement learning from human feedback to train language models that are better at summarization. Our models generate summaries that are better than summaries from 10x larger models trained only with supervised learning. Even though we train our models on the Re … | Continue reading


@openai.com | 3 years ago
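
The core of the approach is a reward model trained on human comparisons between pairs of summaries. A minimal sketch of that preference loss, where reward_model is a hypothetical scalar-output network (real pipelines initialize it from the pretrained language model):

    import torch.nn.functional as F

    def preference_loss(reward_model, preferred, rejected):
        # Train the reward model to score the human-preferred summary higher.
        r_pref = reward_model(preferred)  # shape: (batch,)
        r_rej = reward_model(rejected)    # shape: (batch,)
        # Maximize the log-sigmoid of the margin between the two scores.
        return -F.logsigmoid(r_pref - r_rej).mean()

The trained reward model then serves as the objective for fine-tuning the summarization policy with reinforcement learning.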

OpenAI API

We’re releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems which are designed for one use-case, the API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task. You can now … | Continue reading


@openai.com | 3 years ago
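
A sketch of the "text in, text out" interface as exposed by the Python bindings at launch (the Completion endpoint; later versions of the openai package changed the client API):

    import openai

    openai.api_key = "sk-..."  # your secret key

    response = openai.Completion.create(
        engine="davinci",
        prompt="Translate this into French: Hello, world!",
        max_tokens=32,
    )
    print(response["choices"][0]["text"])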

Attacking Machine Learning with Adversarial Examples (2017)

Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines. In this post we'll show how adversarial examples work across different mediums, and will discu … | Continue reading


@openai.com | 3 years ago
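
One standard construction in this area is the fast gradient sign method (FGSM), sketched below for any differentiable PyTorch classifier; model, x, and label are placeholders:

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, eps=0.01):
        # Take one small step in the direction that most increases the loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()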

Image GPT

We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples | Continue reading


@openai.com | 3 years ago
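
A conceptual sketch of the idea, with illustrative dimensions rather than Image GPT's: flatten an image into a sequence of 8-bit values and train the usual next-token objective on it.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    seq_len, vocab, dim = 32 * 32, 256, 128   # one 8-bit value per position
    emb = nn.Embedding(vocab, dim)
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
    model = nn.TransformerEncoder(layer, num_layers=2)
    head = nn.Linear(dim, vocab)

    pixels = torch.randint(0, vocab, (1, seq_len))  # stand-in for a real image
    n = seq_len - 1
    causal = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
    hidden = model(emb(pixels[:, :-1]), mask=causal)  # predict pixel t+1 from <=t
    loss = F.cross_entropy(head(hidden).transpose(1, 2), pixels[:, 1:])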

AI and Efficiency

We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a … | Continue reading


@openai.com | 4 years ago
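
A quick consistency check on the stated figures: a halving time of 16 months and a 44x reduction together imply a span of about seven years.

    import math

    halvings = math.log2(44)   # a 44x reduction is ~5.5 halvings
    months = 16 * halvings     # at one halving per 16 months
    print(f"{months:.0f} months (~{months / 12:.1f} years)")  # ~87, ~7.3 years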

OpenAI trained an AI to generate music, with singing, in the style of various artists and genres

We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We’re releasing the model weights and code, along with a tool to explore the generated samples. | Continue reading


@openai.com | 4 years ago

OpenAI Microscope

We’re introducing OpenAI Microscope, a collection of visualizations of every significant layer and neuron of eight vision “model organisms” which are often studied in interpretability. | Continue reading


@openai.com | 4 years ago

OpenAI→PyTorch

We are standardizing OpenAI’s deep learning framework on PyTorch. In the past, we implemented projects in many frameworks depending on their relative strengths. We’ve now chosen to standardize to make it easier for our team to create and share optimized implementations of our mod … | Continue reading


@openai.com | 4 years ago

OpenAI Five

At OpenAI, we’ve used the multiplayer video game Dota 2 as a research platform for general-purpose AI systems. Our Dota 2 AI, called OpenAI Five, learned by playing over 10,000 years of games against itself. It demonstrated the ability to achieve expert-level performance, learn h … | Continue reading


@openai.com | 4 years ago

Deep Double Descent

Contrary to conventional wisdom, we find that the performance of CNNs, ResNets, and transformers is non-monotonic: it first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful reg … | Continue reading


@openai.com | 4 years ago
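
The model-size version of the effect can be reproduced in a toy setting: minimum-norm regression on random ReLU features typically shows test error falling, spiking near the interpolation threshold (features ≈ training points), then falling again. A sketch under those assumptions, not the paper's setup:

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, d = 40, 500, 5
    X, Xt = rng.normal(size=(n_train, d)), rng.normal(size=(n_test, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n_train)
    yt = Xt @ w_true

    for n_feat in [5, 10, 20, 40, 80, 160, 640]:
        V = rng.normal(size=(d, n_feat))           # random projection
        F, Ft = np.maximum(X @ V, 0), np.maximum(Xt @ V, 0)
        w = np.linalg.pinv(F) @ y                  # minimum-norm least squares
        print(n_feat, "features -> test MSE", np.mean((Ft @ w - yt) ** 2))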

OpenAI Procgen Benchmark

We’re releasing Procgen Benchmark, 16 simple-to-use procedurally-generated environments which provide a direct measure of how quickly a reinforcement learning agent learns generalizable skills. | Continue reading


@openai.com | 4 years ago
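
A minimal usage sketch; the environments register under the standard Gym interface, and num_levels=0 requests unlimited procedurally generated levels:

    import gym

    env = gym.make("procgen:procgen-coinrun-v0", num_levels=0)
    obs = env.reset()
    obs, reward, done, info = env.step(env.action_space.sample())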

Safety Gym

We're releasing Safety Gym, a suite of environments and tools for measuring progress towards reinforcement learning agents which respect safety constraints while training. | Continue reading


@openai.com | 4 years ago
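
A minimal usage sketch: alongside the usual reward, Safety Gym environments report constraint violations through a separate cost signal in the step info dict.

    import gym
    import safety_gym  # noqa: F401 -- importing registers the environments

    env = gym.make("Safexp-PointGoal1-v0")
    obs = env.reset()
    obs, reward, done, info = env.step(env.action_space.sample())
    print(reward, info["cost"])  # agents are judged on reward and cost together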

AI and Compute: 3.4-Month Doubling Time Since 2012

We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period). | Continue reading


@openai.com | 4 years ago
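
Pure arithmetic on the two stated doubling times shows how different the growth rates are on a per-year basis:

    ai_per_year = 2 ** (12 / 3.4)    # 3.4-month doubling time
    moore_per_year = 2 ** (12 / 24)  # 2-year doubling time (Moore's Law)
    print(f"{ai_per_year:.1f}x vs {moore_per_year:.2f}x per year")  # ~11.5x vs ~1.41x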

OpenAI Releases Largest GPT-2 Text Generation Model

As the final model release of GPT-2 [/blog/better-language-models/]’s staged release [/blog/gpt-2-6-month-follow-up/], we’re releasing the largest version (1.5B parameters) of GPT-2 along with code and model weights [https://github.com/openai/gpt-2-output-dataset] to facilitate dete … | Continue reading


@openai.com | 4 years ago
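
A sampling sketch using the Hugging Face mirror of the 1.5B weights ("gpt2-xl"); the official release itself shipped TensorFlow code in the openai/gpt-2 repository.

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2-xl")
    model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
    ids = tok("Machine-generated text can be", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40, do_sample=True, top_k=40)
    print(tok.decode(out[0], skip_special_tokens=True))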

Solving the Rubik’s cube with a robot hand

We've trained a pair of neural networks to solve the Rubik’s Cube with a human-like robot hand. | Continue reading


@openai.com | 4 years ago

Fine-Tuning GPT-2 from Human Preferences

We’ve fine-tuned the 774M parameter GPT-2 language model using human feedback for various tasks, successfully matching the preferences of the external human labelers, though those preferences did not always match our own. Specifically, for summarization tasks the labelers preferred … | Continue reading


@openai.com | 4 years ago
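
The policy fine-tuning stage maximizes the learned reward while staying close to the original model. A sketch of the per-sample objective with a KL penalty; beta and the log-probability inputs are placeholders:

    def rl_objective(reward, logp_policy, logp_pretrained, beta=0.1):
        # Reward from the learned preference model, minus a penalty for
        # drifting away from the pretrained language model's distribution.
        kl = logp_policy - logp_pretrained
        return reward - beta * kl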

Emergent Tool Use from Multi-Agent Interaction

We've observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. | Continue reading


@openai.com | 4 years ago

Testing Robustness Against Unforeseen Adversaries

We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. | Continue reading


@openai.com | 4 years ago

OpenAI is releasing the 774M GPT-2 model

We’re releasing the 774 million parameter GPT-2 language model after the release of our small 124M model in February, staged release of our medium 355M model in May, and subsequent research with partners and the AI community into the model’s potential for misuse and societal bene … | Continue reading


@openai.com | 4 years ago

Learning Day

At OpenAI, each Thursday is Learning Day: a day where employees have the option to self-study technical skills that will make them better at their job but which aren’t being learned from daily work. | Continue reading


@openai.com | 4 years ago

OpenAI Raises $1B from Microsoft

Microsoft is investing $1 billion in OpenAI to support us building artificial general intelligence (AGI) with widely distributed [https://openai.com/charter/] economic benefits. We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to … | Continue reading


@openai.com | 4 years ago

Why Responsible AI Development Needs Cooperation on Safety

We've written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and benefits, technical collaboration, increased transparency, and incentivizing standards … | Continue reading


@openai.com | 4 years ago

Learning Dexterity

We've trained a human-like robot hand to manipulate physical objects with unprecedented dexterity. | Continue reading


@openai.com | 5 years ago

MuseNet

We’ve created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments and can combine styles from country to Mozart to the Beatles. | Continue reading


@openai.com | 5 years ago

Generative Modeling with Sparse Transformers

We've developed the Sparse Transformer, a deep neural network which sets new records at predicting what comes next in a sequence—whether text, images, or sound. It uses an algorithmic improvement of the attention mechanism to extract patterns from sequences 30x longer than possible … | Continue reading


@openai.com | 5 years ago
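
One variant of the factorized attention pattern can be sketched as a mask: each position attends to a local window plus a strided set of earlier positions, so cost grows roughly as O(n·sqrt(n)) when the stride is near sqrt(n). Illustrative only, not the paper's exact kernels:

    import torch

    def strided_mask(n, stride):
        i = torch.arange(n).unsqueeze(1)        # query positions
        j = torch.arange(n).unsqueeze(0)        # key positions
        causal = j <= i
        local = (i - j) < stride                # the previous `stride` positions
        summary = (j % stride) == stride - 1    # every stride-th position
        return causal & (local | summary)       # True where attention is allowed

    mask = strided_mask(16, stride=4)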

How to Train Your OpenAI Five

OpenAI Five is the first AI to beat the world champions in an esports game, having won two back-to-back games versus the world champion Dota 2 team, OG [https://twitter.com/OGesports], at Finals [https://openai.com/blog/openai-five-finals/] this weekend. Both OpenAI Five and DeepMi … | Continue reading


@openai.com | 5 years ago

Better Language Models and Their Implications

We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarizat … | Continue reading


@openai.com | 5 years ago

Activation Atlases

We’ve created activation atlases (in collaboration with researchers from Google Brain), a new technique for visualizing interactions between neurons. | Continue reading


@openai.com | 5 years ago

OpenAI Five Finals

We’ll be holding our final live event for OpenAI Five at 11:30am PT on April 13th. | Continue reading


@openai.com | 5 years ago

OpenAI Five: Goals and Progress

OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence. | Continue reading


@openai.com | 5 years ago

OpenAI Research

OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence. | Continue reading


@openai.com | 6 years ago