OpenAI details ChatGPT privacy safeguards and user controls
OpenAI has published a plain-language guide explaining how ChatGPT learns from data while protecting privacy. The company trains its models on a mix of publicly available information, data from partnerships, and user-provided content.

To reduce personal information in training datasets, OpenAI developed Privacy Filter, an internal tool that identifies and masks personal details in text; the company claims it outperforms comparable tools. Privacy Filter is applied at multiple stages, including to public datasets and to user conversations when the 'Improve the model for everyone' setting is enabled. OpenAI has also made the tool freely available to other developers.

Users can opt out of training by turning off that setting in Data Controls, or use Temporary Chat, which does not appear in chat history, does not create memories, and does not contribute to model improvement; such conversations are retained for 30 days for safety purposes and then deleted. The Memory feature is optional, and stored memories can be reviewed, edited, or turned off entirely.

ChatGPT is designed to reject requests for private information about individuals, and users can submit privacy requests through OpenAI's portal if errors occur. OpenAI emphasizes its commitment to balancing privacy with safety, including detecting credible threats of violence.
Key facts
- OpenAI uses publicly available internet content, partnership data, and user-generated information for training.
- Privacy Filter is an internal tool that masks personal information in text.
- Privacy Filter is applied to public datasets and user conversations with 'Improve the model for everyone' enabled.
- OpenAI offers Privacy Filter to other developers for free.
- Users can disable training by turning off 'Improve the model for everyone' in Settings > Data Controls.
- Temporary Chats are not used for training and are deleted after 30 days.
- Memory feature is optional and can be turned off.
- ChatGPT rejects requests for private information; users can submit privacy requests.
- OpenAI states it balances privacy with detecting credible threats of violence.
- The guide was published on May 6, 2026.
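To make the masking idea concrete: OpenAI has not published how Privacy Filter works, so the sketch below is purely illustrative. It shows the general technique of identifying and masking personal details in text using simple regular expressions; the pattern set, mask tokens, and function name are all assumptions, not OpenAI's implementation.

```python
import re

# Illustrative PII patterns only -- a production system would use far more
# sophisticated detection than these hypothetical regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_personal_info(text: str) -> str:
    """Replace each matched span with a category placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example: mask_personal_info("Email jane@example.com now")
# -> "Email [EMAIL] now"
```

A real filter of this kind would typically be run over every dataset before training, so that masked placeholders rather than personal details reach the model.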
Entities
Institutions
- OpenAI