OpenAI Launches Trusted Contact Safety Feature in ChatGPT
OpenAI has begun rolling out Trusted Contact, an optional safety feature in ChatGPT that lets adults designate a trusted person who will be alerted if conversations about self-harm raise serious safety concerns. The feature builds on existing parental controls for linked teen accounts and is intended to foster social connection, which is crucial to reducing suicide risk. Users can designate one adult (18+ worldwide, 19+ in South Korea) through settings, and the chosen contact must accept the invitation within one week. If a concerning conversation is detected, ChatGPT first notifies the user and encourages them to reach out to their trusted contact. A trained team reviews the situation, and if the risk is confirmed, the contact receives a brief alert. OpenAI developed the feature with input from its Global Physicians Network and the American Psychological Association, beginning the rollout on May 5, 2026.
Key facts
- Trusted Contact is an optional safety feature for adult ChatGPT users.
- Users nominate a trusted person to be notified if a high-risk self-harm conversation is detected.
- Feature builds on parental controls for linked teen accounts.
- Available globally for users 18+ (19+ in South Korea).
- Contact must accept invitation within one week.
- Automated monitoring and trained human review precede any notification.
- Notification includes a general reason but no chat details.
- Developed with Global Physicians Network (260+ doctors, 60 countries) and American Psychological Association.
- Rollout began May 5, 2026.
Entities
Institutions
- OpenAI
- American Psychological Association
- Global Physicians Network
- Expert Council on Well-Being and AI
Locations
- South Korea