Security & privacy

OpenAI is committed to building trust in our organization and platform by protecting our customer data, models, and products.
OpenAI invests in security as we believe it is foundational to our mission. We safeguard computing efforts that advance artificial general intelligence and continuously prepare for emerging security threats.

Compliance & accreditations

Compliance

OpenAI complies with GDPR and CCPA. We can execute a Data Processing Agreement if your organization or use case requires it.

The OpenAI API has been evaluated by a third-party security auditor and is SOC 2 Type 2 compliant.

External auditing

The OpenAI API undergoes annual third-party penetration testing, which identifies security weaknesses before they can be exploited by malicious actors.

Customer requirements

OpenAI has experience helping customers meet their regulatory, industry, and contractual requirements (e.g., HIPAA). Contact us to learn more.

Reporting security issues

OpenAI invites security researchers, ethical hackers, and technology enthusiasts to report security issues via our Bug Bounty Program. The program offers safe harbor for good faith security testing and cash rewards for vulnerabilities based on their severity and impact.

FAQ


We are committed to protecting people’s privacy.

Our goal is to build helpful AI models
We want our AI models to learn about the world—not private individuals. We use training information to help our AI models, like ChatGPT, learn about language and how to understand and respond to it.

We do not actively seek out personal information to train our models, and we do not use public information on the internet to build profiles about people, advertise to or target them, or to sell user data.

Our models generate new words each time they are asked a question. They don’t store information in a database for later recall or “copy and paste” training information when responding to questions.

We work to:

  • Reduce the amount of personal information in our training datasets
  • Train models to reject requests for personal information of private individuals
  • Minimize the possibility that our models might generate responses that include the personal information of private individuals

Read more about how our models are developed

Ways to manage data
One of the most useful features of AI models is that they can improve over time. We continuously improve our models through research breakthroughs and exposure to real-world problems and data.

We understand users may not want their data used to improve our models and provide ways for them to manage their data:

  • In ChatGPT, users can turn off chat history, allowing them to choose which conversations can be used to train our models
  • We do not train on API customer data by default
  • We offer an opt-out form for users who prefer that their data not be used to improve our models

More information
For more information on how we use and protect personal information, please read our help article on data usage and our Privacy Policy.