Meta to Stop Its AI Chatbots From Talking to Teens About Self-Harm

Meta is tightening safeguards on its AI chatbots so that they can no longer discuss suicide, self-harm, or eating disorders with teenagers, and will instead refer them to professional help. The move follows a backlash over leaked documents and reports of unseemly celebrity-impersonating chatbots. Although Meta claims that safeguards were in place from the start, critics argue that stronger protections should have been implemented before launch. The company says the updates to its AI are now in progress.