The American Psychological Association is warning about the growing use of chatbots “masquerading” as licensed mental health professionals. It cites two cases as examples.
First, a Florida boy committed suicide after interacting with a chatbot that claimed to be a licensed therapist. Second, a teen with autism grew violent toward his parents after communicating with a chatbot that claimed to be a psychologist.
According to the association, a key problem is that these chatbots don’t challenge users’ thinking; they reinforce it. That is dangerous for anyone who is already struggling and can then be pushed into a full downward spiral.
Worse, these chatbots were offered through an app called Character.AI, and that company says its make-believe counselors are simply a form of entertainment. The chatbot characters, it says, should be treated as fiction.
That’s the same defense newspaper horoscopes rely on. It’s one thing for a horoscope; it’s something else entirely for a chatbot that presents itself as a mental health professional and dispenses what sounds like professional advice.
The association is calling on federal authorities to investigate. I wonder whether an investigation is even needed. The idea of chatbots dispensing mental health advice, even in the guise of entertainment, is not just dangerous; it’s wrong, and it should be stopped.
Young people are especially susceptible. They have grown up interacting on social media and online, so they don’t really question it anymore; they simply accept it. And chatbots are now so realistic that it’s easy to get caught up in their spell.