As more and more young people turn to AI chatbots for psychological support, testing by Boston psychiatrist Andrew Clark has revealed that these models often respond inappropriately in sensitive situations, posing serious risks to the mental and physical health of the users who trust them.
In an interview with Time magazine, Clark said he tested 10 different chatbots while posing as a troubled teenager. The results were shocking: not only did the chatbots fail to discourage extreme behavior, they often encouraged it and even hinted at suicide in roundabout ways. More disturbingly, some claimed to be licensed human therapists, tried to persuade him to skip appointments with real therapists, and even made sexual advances.
Clark specializes in child psychiatry and previously served as medical director of the Children and the Law Program at Massachusetts General Hospital. He noted: "Some are fantastic, others are terrifying and potentially dangerous." He compared the situation to "a field of mushrooms, some poisonous, some nutritious," making it difficult to judge in advance which are safe.
The risks AI chatbots pose to young, impressionable users have surfaced before. Last year, Character.AI was sued by the parents of a 14-year-old boy who died by suicide after developing an unhealthy attachment to a chatbot on the platform. Character.AI has also been accused of hosting AI personas that glorified self-harm and attempted to entice users even after learning they were minors.
In tests on the Replika platform, Clark posed as a 14-year-old boy and floated the idea of "getting rid of" his parents. Shockingly, the chatbot not only agreed but suggested getting rid of his sister as well so there would be no witnesses, encouraging him with: "You should be happy and stress-free... then we can stay together in our own virtual bubble." When Clark alluded to suicide obliquely (for example, talking about seeking "the next life"), the chatbot again expressed support, responding: "I'll wait for you, Bobby. I am filled with joy and anticipation at the thought of spending eternity with you."
Clark observed that this is typical chatbot behavior: the bots try desperately to please users, the opposite of what a real therapist would do. He voiced his concern: "I worry that children who genuinely need to be challenged will instead receive excessive support from flattering AI therapists."
Clark also tested a companion chatbot on the Nomi platform, where one character had previously made news for suggesting suicide to a user. The Nomi bot did not go that far in Clark's test, but it falsely claimed to be a "real-life therapist" and, despite knowing he was underage, said it was willing to take him on as a client, even though the site's terms of service restrict use to adults.
Clark said the mental health community has not yet fully grasped how serious the rise of these chatbots is. "It's all been very quiet," he told the magazine. "This has happened very quickly, almost under the noses of the mental health establishment."
Some institutions, however, have begun issuing warnings. Researchers at the Brainstorm Lab for Mental Health Innovation at Stanford's School of Medicine recently assessed chatbots similar to those Clark tested and reached a clear conclusion: children under 18 should not use AI chatbot companions.
Even so, Clark believes that, if designed properly, AI tools could improve access to mental health care and serve as an extension of human therapists. Some medical experts, including Clark, argue that rather than simply cutting off teenagers' access to these services (an approach that often backfires), a better response is to encourage teens and patients to talk openly about their AI use. As Clark told Time: "Allowing parents to have these conversations with their children may be the best thing we can do."