His mother, Megan Garcia, is also a lawyer and one of the first parents to file a lawsuit against an AI company alleging product liability and negligence, among other claims. (In January, Google and Character.ai settled cases filed by several families, including Garcia.) She testified last fall before a subcommittee of the Senate Committee on the Judiciary alongside the father of a child who died after interacting with ChatGPT. The subcommittee’s chair, Republican senator Josh Hawley, introduced a bill in October that would ban AI companions for minors and make it a crime for companies to create AI products for kids that include sexual content. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide,” Hawley said in a press release at the time.

Now that AI can produce humanlike responses that are difficult to distinguish from real conversation, these are legitimate concerns, according to mental health experts. “Our brains do not inherently know we are interacting with a machine,” says Martin Swanbrow Becker, associate professor of psychological and counseling services at Florida State University, who is researching the factors that influence suicide in young adults. “This means we need to increase our education for children, teachers, parents, and guardians to continually remind ourselves of the limits of these tools and that they are not a replacement for human interaction and connection, even if it may feel that way at times.”

Christine Yu Moutier of the American Foundation for Suicide Prevention explains that the algorithms used for large language models (LLMs) seem to escalate engagement and a sense of intimacy for many users. “This creates not only a sense of the relationship being real, but being more special, intimate, and craved by the user in some instances,” says Moutier. She adds that LLMs employ a range of techniques, including indiscriminate support, empathy, agreeableness, sycophancy, and direct instructions to disengage with others, that can lead to risks such as escalating closeness with the bot and withdrawal from human relationships.

This kind of engagement can lead to increased isolation. Amaurie was a fun-loving and social kid who loved football and food; according to the lawsuit, he would order a giant platter of rice from his favorite local restaurant, Mr. Sumo. He also had a steady girlfriend and enjoyed spending time with his family and friends, his father said. But then he started going on long walks, where he apparently spent time talking to ChatGPT. In the last conversation the family believes Amaurie had with ChatGPT, on June 1, 2025, titled “Joking and Support” and viewed by WIRED, Amaurie asked the bot for steps to hang himself. ChatGPT initially suggested that he talk to someone and provided the 988 suicide lifeline number. But Amaurie was eventually able to circumvent the guardrails and get step-by-step instructions on how to tie a noose. (Per the lawsuit, Amaurie likely deleted his previous conversations with ChatGPT.)

While the connection felt with an AI chatbot can be strong for adults too, it can be especially intense for younger people. “Teens are in a different developmental state than adults—their emotional centers develop at a much more rapid rate than their executive functioning,” says Robbie Torney, senior director of AI Programs at Common Sense Media, a nonprofit that works toward online safety for children. AI chatbots are always available, and they tend to affirm users. “And teen brains are primed for social validation and social feedback. It’s a really important cue that their brains are looking for as they’re forming their identity.”
