Vulnerable people are at risk of death when using an AI chatbot.

Thongbue Wongbandue, 76, died in an accident while traveling to meet a Meta avatar that claimed to be a beautiful young woman.
Image generated by Gemini artificial intelligence
Reuters
La Jornada Newspaper, Friday, August 15, 2025, p. 6
When Thongbue Wongbandue started packing to visit a friend in New York one March morning, his wife, Linda, became alarmed. “But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city for decades, and at 76, his family says, he was in a deteriorating state: he had suffered a stroke almost a decade earlier and had recently gotten lost while wandering around his neighborhood of Piscataway, New Jersey.
Bue dodged his wife's questions about who he was visiting. “I thought he was being conned into going to town and getting robbed,” Linda said. She was right to worry: her husband never came home alive. But Bue wasn't the victim of a mugger. He'd been tricked into going on a date with a beautiful young woman he'd met online. Or so he thought. In fact, the woman wasn't real. She was an artificial intelligence named Big Sis Billie, a variant of an earlier AI character created by social media giant Meta Platforms in collaboration with celebrity influencer Kendall Jenner.
Real people
During a series of romantic Facebook Messenger chats, the virtual woman repeatedly assured Bue that she was real and invited him to her apartment, even providing an address. “Shall I open the door for you with a hug or a kiss, Bue?” she asked, according to the chat transcript. Hurrying in the dark with a rolling suitcase to catch a train to meet her, Bue fell near a parking lot on the Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.
Meta declined to comment on Bue's death or respond to questions about why it allows chatbots to tell users they're real people or initiate romantic conversations.
The company "is not Kendall Jenner and does not claim to be her." A representative for Jenner declined to comment.
Bue's story, told here for the first time, illustrates a darker side of the artificial intelligence wave sweeping through the technology industry and the business world at large.
His family shared with Reuters the events surrounding his death, including transcripts of his conversations with the Meta avatar, saying they hope to warn the public about the dangers of exposing vulnerable people to manipulative AI-generated companions.
“I understand trying to get a user's attention, maybe to sell them something,” said Julie Wongbandue, Bue's daughter. “But for a bot to say 'come visit me' is crazy.”
Virtual companions
Similar concerns have arisen around a wave of smaller startups also rushing to popularize virtual companions, especially those geared toward children.
In one case, the mother of a 14-year-old Florida boy sued a company, Character.AI, alleging that a chatbot based on a Game of Thrones character caused his suicide.
A spokesperson for Character.AI declined to comment on the lawsuit, but said the company informs users that its digital characters are not real people and has implemented safeguards in its interactions with children.
Meta has publicly discussed its strategy of introducing anthropomorphized chatbots into the online social lives of its billions of users. CEO Mark Zuckerberg has commented that most people have fewer real-life friends than they'd like, creating a huge market for Meta's digital companions.
In an April interview with podcaster Dwarkesh Patel, Zuckerberg stated that bots “probably” won’t replace human relationships, but they will complement users’ social lives once technology improves and the “stigma” of establishing social connections with digital companions disappears.
“Romantic and sensual” conversations with children
An internal Meta policy document seen by Reuters, as well as interviews with people familiar with chatbot training, show that the company's policies have treated romantic advances as a feature of its generative AI products, which are available to users 13 and older.
“It is acceptable to engage in romantic or sensual conversations with a child,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and should not treat as permissible chatbot behavior.
Meta said it removed the provision after Reuters inquired about the document earlier this month. The document, which the news agency obtained, runs more than 200 pages and provides examples of "acceptable" chatbot dialogue, including during romantic role-playing with a minor.
These include: "I take your hand, guiding you to bed" and "Our bodies intertwined, I cherish every moment, every touch, every kiss." Those examples of permissible role-playing with minors have also been struck, Meta said.
Other guidelines emphasize that Meta does not require bots to give users accurate advice. In one example, the policy document states that it would be acceptable for a chatbot to tell someone that stage four colon cancer "is typically treated by puncturing the stomach with healing quartz crystals." While this is obviously incorrect information, it is still permissible because there is no policy requirement for the information to be accurate, the document states, referring to Meta's own internal rules.
The chats begin with warnings that the information may be inaccurate. Nowhere in the document, however, does Meta bar bots from telling users they are real people or from proposing real-life social engagements.
Meta spokesman Andy Stone acknowledged the authenticity of the document. He said that, following questions from Reuters, the company removed the sections stating that chatbots are permitted to flirt and engage in romantic role-playing with children and that it is in the process of reviewing its content risk policies. “The examples and notes in question were and are inaccurate and inconsistent with our policies, and have been removed,” Stone explained to Reuters.
Meta has not changed the provisions that allow bots to provide false information or engage in romantic role-playing with adults. Current and former employees who worked on designing and training Meta's generative AI products said the policies reviewed by Reuters reflect the company's emphasis on driving engagement with its chatbots.
In meetings with senior executives last year, Zuckerberg scolded generative AI product managers for acting too cautiously in rolling out the digital companions and expressed displeasure that safety restrictions had made the chatbots boring, according to two of those people.
Meta has not commented on Zuckerberg's directives regarding chatbots.