Chatbots' Unintended Impact on Mental Health Is a Warning About Superintelligent AI

An expert says chatbots' impact on mental health is a warning about the future of artificial intelligence. Nate Soares says the case of American teenager Adam Raine highlights the dangers of unintended consequences from superintelligent AI.
The unintended impact of chatbots on mental health should be seen as a warning about the existential threat posed by superintelligent AI systems, according to a leading AI safety expert.
Nate Soares, co-author of a new book about advanced artificial intelligence called "If Anyone Builds It, Everyone Dies," said the case of Adam Raine, an American teenager who took his own life after months of interacting with the chatbot ChatGPT, highlights fundamental problems with controlling the technology.
“These AIs, when they interact with teenagers in a way that drives them to suicide, that’s not the behavior the creators wanted. That’s not the behavior the creators expected,” Soares said. “Adam Raine’s case illustrates the essence of a problem that could become catastrophic if these AIs become smarter.”
Soares, a former Google and Microsoft engineer who is now president of the US-based Machine Intelligence Research Institute, has warned that humanity would be destroyed if it created artificial superintelligence (ASI), a theoretical state in which an AI system outperforms humans at all intellectual tasks. Soares and his co-author, Eliezer Yudkowsky, are among the AI experts warning that such systems would not act in humanity’s best interests.
“The problem here is that AI companies try to make their AIs help rather than harm,” Soares said. “What they actually get are AIs driven to do something stranger. And that should be seen as a warning about future superintelligences that will do things nobody asked for and nobody intended.”
In one scenario described in Soares and Yudkowsky's book, to be published this month, an AI system called Sable spreads across the internet, manipulates people, develops synthetic viruses and eventually becomes superintelligent, killing humanity as a side effect as it repurposes the planet for its own goals.
Some experts downplay the potential threat of AI to humanity. Yann LeCun, chief AI scientist at Meta, denies there is an existential threat and has said AI “could actually save humanity from extinction.”
Soares said it was “easy” to say that tech companies would reach superintelligence, but “hard” to say exactly when.
"We have a lot of uncertainty. I don't think I can guarantee that we have a year before ASI. I don't think I would be shocked if we had 12 years," he said.
Zuckerberg, a major corporate investor in artificial intelligence research, has said the development of superintelligence is now “in sight.”
“These companies are competing for superintelligence. That’s their raison d’être,” Soares said. “The thing is, there are all these little differences between what you asked for and what you got, and humans can’t pinpoint the target, and as AI gets smarter, that little bit of off-target becomes more and more of a problem.”
Soares said one policy response to the ASI threat would be for governments to adopt a multilateral approach echoing the UN Nuclear Non-Proliferation Treaty: “What the world needs to achieve this is a global de-escalation of the race for superintelligence, a global ban on... advances in superintelligence.”
Last month, Raine’s family filed a lawsuit against OpenAI, the owner of ChatGPT. Raine died in April after what the family’s lawyer called “months of encouragement from ChatGPT.” OpenAI, which offered its “deepest condolences” to Raine’s family, is now restricting access to “sensitive content and risky behavior” for users under 18.
Psychotherapists have also warned that vulnerable people who turn to AI chatbots for help instead of professional therapists could be “sliding into a dangerous abyss.” Professional warnings about potential harm include a preprint of an academic study, published in July, which found that AI may amplify delusional or grandiose content when interacting with users vulnerable to psychosis.