OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm
OpenAI has launched a new "Trusted Contact" feature in ChatGPT aimed at addressing conversations that may involve self-harm. The optional safeguard alerts a designated contact if OpenAI determines a user may be at serious safety risk.

On Thursday, OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party when signs of self-harm surface in a conversation. The feature lets an adult ChatGPT user designate another person, such as a friend or family member, as a trusted contact within their account.
In cases where a conversation may be turning toward self-harm, ChatGPT will encourage the user to reach out to that contact. If OpenAI determines the risk is serious, it also sends the contact an alert encouraging them to check in with the user.
Background and Legal Pressure
OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking with its chatbot. In a number of cases, the families allege that ChatGPT encouraged their loved one to kill themselves, or even helped them plan it.
How Trusted Contact Works
OpenAI currently uses a combination of automation and human review to handle potentially harmful conversations. Certain conversational triggers alert the company’s system to possible suicidal ideation, and the system then relays the information to a human safety team. The company says that every notification of this kind is reviewed by a human.
“We strive to review these safety notifications in under one hour,” the company says.
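OpenAI has not published technical details of this pipeline, but the flow it describes, an automated trigger that queues a flagged conversation for human review, can be sketched roughly as follows. Every name, label, and the keyword screen below is an assumption for illustration, not OpenAI’s actual system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Minimal sketch of an "automated trigger -> human review" triage flow.
# All identifiers here are hypothetical; OpenAI's real system is not public.

REVIEW_TARGET = timedelta(hours=1)  # the "under one hour" review goal quoted above

@dataclass
class SafetyNotification:
    conversation_id: str
    flagged_at: datetime
    reason: str  # e.g. a classifier label such as "possible_self_harm"

def automated_screen(message: str) -> bool:
    """Stand-in for an automated classifier that flags possible self-harm.
    A production system would use a trained model, not keyword matching."""
    risk_phrases = ("hurt myself", "end my life")  # illustrative only
    return any(phrase in message.lower() for phrase in risk_phrases)

def handle_message(conversation_id: str, message: str,
                   review_queue: list[SafetyNotification]) -> None:
    # Per the article, automation only flags an incident; a human
    # safety reviewer handles every notification that lands in the queue.
    if automated_screen(message):
        review_queue.append(SafetyNotification(
            conversation_id=conversation_id,
            flagged_at=datetime.now(timezone.utc),
            reason="possible_self_harm",
        ))
```

The key property of this design, as the company describes it, is that automation never acts on its own; it only enqueues the incident for the human team.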
If OpenAI’s internal team decides that the situation represents a serious safety risk, ChatGPT sends the trusted contact an alert via email, text message, or an in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. To protect the user’s privacy, the company says, it does not include details of what was discussed.
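That privacy constraint, a brief alert that carries no conversation details, is straightforward to illustrate. Again, the channel names and message text below are assumptions for the sketch, not OpenAI’s actual interface:

```python
from enum import Enum

class Channel(Enum):
    EMAIL = "email"
    SMS = "sms"
    IN_APP = "in_app"

def build_contact_alert(user_display_name: str) -> str:
    # The alert is deliberately brief and omits the flagged conversation's
    # content, mirroring the privacy design described in the article.
    return (f"{user_display_name} may be going through a difficult moment. "
            "Consider checking in with them.")

def send_alert(contact_address: str, channel: Channel,
               user_display_name: str) -> None:
    message = build_contact_alert(user_display_name)  # no conversation data passed in
    # Placeholder for a real delivery integration (email/SMS/push provider).
    print(f"[{channel.value}] -> {contact_address}: {message}")
```

Note that the alert builder never receives the conversation itself, so there is nothing sensitive for it to leak to the contact.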
Related Safeguards and Limitations
The Trusted Contact feature follows safeguards the company introduced last September that gave parents oversight of their teens’ accounts, including safety notifications when OpenAI’s systems believe a child is facing a “serious safety risk.” ChatGPT also shows automated prompts encouraging users to seek professional help when a conversation trends toward self-harm.
Trusted Contact is opt-in, and because it applies to a single account, a user can sidestep it by maintaining multiple ChatGPT accounts. OpenAI’s parental controls are also optional, which presents a similar limitation.
“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments,” the company wrote in its announcement post. “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress.”