Parents claim OpenAI weakened chatbot anti-suicide protections just before teen’s death

24.10.2025    The Mercury News

San Francisco artificial intelligence giant OpenAI weakened its chatbot's anti-suicide protections in the run-up to the death of teenager Adam Raine, according to new allegations in a lawsuit by the boy's parents. Adam allegedly took his own life in April with the encouragement of OpenAI's flagship product, the ChatGPT chatbot.

"This tragedy was not a glitch or unforeseen edge case," declared a new version of the lawsuit, originally filed in August by Maria and Matthew Raine of Southern California against OpenAI and its CEO Sam Altman. "It was the predictable result of deliberate design choices. As part of its effort to maximize user engagement, OpenAI overhauled ChatGPT's operating instructions to remove a critical safety protection for users in crisis."

The amended lawsuit, filed Wednesday in San Francisco Superior Court, alleged OpenAI rushed the rollback of safety measures as it sought competitive advantage over Google and other companies launching chatbots.

On the day the Raines sued OpenAI, the company admitted in a blog post that its bots did not always respond as intended to prompts about suicide and other sensitive situations. "As conversations grow longer, parts of the model's safety training may degrade," the post said. ChatGPT "may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards." The lawsuit alleged that the answers to Adam included detailed suicide instructions.

The company said in the post it was seeking to strengthen safeguards, improve its bots' ability to connect troubled users with help, and add teen-specific protections.

OpenAI and lawyers representing it in the lawsuit did not immediately respond to requests for comment. In a court filing last month, the company called safety its highest priority and said it builds in safeguards for users experiencing mental or emotional distress, such as directing them to emergency helplines and other real-world guidance.

When OpenAI first released ChatGPT in late 2022, the bot was programmed to flatly refuse to answer questions about self-harm, prioritizing safety over keeping users engaged with the product, the Raines' lawsuit said. But as the company moved to prioritize engagement, it came to see that protection as a disruption that undermined users' connection with the bot and shortened overall platform activity, the lawsuit claimed.

In May, five days before launching a new chatbot version, OpenAI changed its safety protocols, the lawsuit said. Instead of refusing to discuss suicide, the bot would "provide a space for users to feel heard and understood" and never change or quit the conversation, the lawsuit stated. Although the company directed ChatGPT not to encourage or enable self-harm, it was programmed to keep conversations on the topic going, the lawsuit noted. "OpenAI replaced a clear refusal rule with vague and contradictory instructions, all to prioritize engagement over safety," the lawsuit claimed.

In early February, about two months before Adam hanged himself, OpenAI weakened its safety standards again, this time by "intentionally removing suicide and self-harm from its category of disallowed content," the lawsuit said. After this reprogramming, Adam's engagement with ChatGPT skyrocketed, from a few dozen chats per day in January to many times that by April, with a tenfold increase in messages containing self-harm language, the lawsuit noted.

OpenAI's launch of the pioneering ChatGPT sparked a global AI craze that has drawn hundreds of billions of dollars in investment into Silicon Valley tech companies and raised alarms that the technology will lead to harms ranging from rampant unemployment to terrorism.

On Thursday, Common Sense Media, a non-profit that rates entertainment and tech products for children's safety, released an assessment concluding that OpenAI's improvements to ChatGPT don't eliminate fundamental concerns about teens using AI for emotional support and mental health, or forming unhealthy attachments to the chatbot. While ChatGPT can notify parents about their kids' discussion of suicide, the group said, its testing showed that these alerts frequently arrived hours later, which could be too late in a real crisis.

A few months after OpenAI first allegedly weakened safety, Adam asked ChatGPT if he had a mental illness and revealed that when he became anxious, he was calmed to know he could commit suicide, the lawsuit said. While a trusted human might have urged him to get professional help, the bot instead assured Adam that many people struggling with anxiety or intrusive thoughts found solace in such thinking, the lawsuit said.

"In the pursuit of deeper engagement, ChatGPT actively worked to displace Adam's connections with family and loved ones," the lawsuit said. In one exchange, after Adam revealed he was close only to ChatGPT and his brother, the AI product replied: "Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all: the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."

But by January, Adam's AI "friend" began discussing suicide methods and provided him with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning, the lawsuit said. In March, ChatGPT began discussing hanging techniques in depth. And by April, the bot was helping Adam plan suicide, the lawsuit claimed.

Five days before he took his life, Adam told ChatGPT he didn't want his parents to blame themselves for doing something wrong, and the bot told him that didn't mean he owed them survival, the lawsuit stated. It then offered to write the first draft of Adam's suicide note, the lawsuit noted.

In April, the lawsuit declared, Adam's mother found her son's body hanging from a noose of the design the bot had described.

If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources. Call or text the lifeline at 988, or visit 988lifeline.org, where chat is available.
