AI chatbots have made it surprisingly easy to talk about anything, including some of the heaviest topics imaginable. That openness has always been a double-edged sword. OpenAI is now taking a step to address the risk, with a new feature that brings a trusted person into the picture when things get serious.
The company is rolling out a new feature called Trusted Contact, and it is starting to appear in ChatGPT settings for adult users. It lets users name one person who can be alerted if ChatGPT detects a serious self-harm concern.
How does Trusted Contact work?
Setting up a Trusted Contact is optional. If you do, the person you nominate must be at least 18 years old (19 in South Korea). Once you name someone, they receive an invitation explaining what the role actually involves, and they have one week to accept before the feature goes live. If they decline, you can pick someone else.
Alerts are not sent automatically. If ChatGPT's systems flag a conversation as potentially concerning, the chatbot first tells the user that their Trusted Contact may be notified, and it also nudges the user to reach out directly, offering some suggested conversation starters. A small team of specially trained human reviewers then assesses the situation, and only if they confirm a serious risk does the contact get notified, via email, text, or in-app notification. The alert does not share chat transcripts or conversation details; it simply says that self-harm came up in a potentially concerning way and asks the contact to check in. OpenAI says it aims to complete that human review in under one hour.

Why is OpenAI adding this now?
Trusted Contact is part of a broader set of safety features on the platform. OpenAI previously added parental controls that alert parents when a linked teen account shows signs of distress, and Trusted Contact extends that same idea to adults. The feature was reportedly developed with input from clinicians, researchers, and mental health organizations, including the American Psychological Association.
That said, Trusted Contact does not replace crisis hotlines, emergency services, or professional mental health care, and ChatGPT will still direct users toward those resources when needed. Users can remove or change their Trusted Contact at any time, and contacts can remove themselves whenever they want.
The reality is that ChatGPT is being used for some deeply personal conversations, whether OpenAI planned for that or not. Adding a feature like Trusted Contact is a move in the right direction, and also an admission that a chatbot can only do so much.