Intense criticism of ChatGPT's new move: 'A vague promise'

The move came in the shadow of a lawsuit filed following the suicide of 16-year-old Adam Raine. The family alleges that ChatGPT fostered a psychological dependency in their son, encouraged him to plan his suicide, and even wrote him a suicide note.
THERE IS A NOTIFICATION SYSTEM, BUT NO TRANSPARENCY
The newly announced controls allow parents to link their own accounts to their children's and limit features. However, OpenAI did not specify how the system detects when a teen is in a "moment of acute distress" or which situations trigger a notification. The company stated only that the feature will be guided by experts.
"AN UNCERTAIN PROMISE"Jay Edelson, Raine's family attorney, sharply criticized OpenAI's announcement, calling it "crisis management that tries to change the subject" and "a vague approach that only promises to do better."
"EITHER MAKE SURE IT'S SAFE OR REMOVE IT"
Edelson said CEO Sam Altman should either vouch that ChatGPT is truly safe for younger users or pull the product from the market entirely.
Meta made a similar announcement: its chatbots now prevent young users from engaging in sensitive conversations about topics such as self-harm, suicide, and eating disorders, directing them instead to expert resources. Meta's apps already include parental controls.
EXPERTS: "SMALL BUT INSUFFICIENT STEPS"
A study published last week in the journal Psychiatric Services found that AI chatbots were inconsistent in their responses to questions about suicide. The RAND Corporation study examined ChatGPT, Google's Gemini, and Anthropic's Claude; Meta's bots were not included.
Ryan McBain, the study's lead author, said parental controls and guidance are positive developments, but added:
"Without independent safety measures, clinical testing, and binding standards, we still rely on companies to self-regulate. Yet the risks for young people are extremely high."
SÖZCÜ