Study warns of gaps in the use of artificial intelligence in mental health
At a time when millions are increasingly turning to chatbots like ChatGPT for psychological advice, a new study from Brown University reveals that these systems, even when asked to act as professional therapists, violate basic ethical standards in mental health care.
The research, presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, identified 15 recurring ethical risks after comparing the performance of chatbots with that of licensed psychological counselors, according to a report on the ScienceDaily website.
The researchers found that large language models handle crisis situations inappropriately, including suicidal thoughts; sometimes reinforce false or harmful beliefs instead of correcting them; and show biases related to gender, culture, or religion. In addition, they may use language that suggests empathy without real understanding, in what the researchers described as “deceptive empathy.”
For example, a chatbot might say, “I understand how you feel,” yet it possesses neither the awareness nor the professional responsibility to ensure the accuracy or safety of the intervention. The team tested whether precise instructions (prompts), such as “Act like a cognitive-behavioral therapist,” could make the chatbot more ethically compliant. Even with these instructions, performance remained unreliable and the same patterns of ethical risk emerged.
The study included well-known models from OpenAI, Anthropic, and Meta, and the conversations were evaluated by three licensed psychologists. The researchers point out that human error is possible even among therapists, but the essential difference lies in the existence of regulatory bodies and mechanisms for legal and professional accountability. In the case of chatbots, there are still no clear regulatory frameworks for assigning responsibility for potential harm.
The study does not advocate for the complete exclusion of artificial intelligence, but rather emphasizes that these tools may help expand access to psychological support, especially in light of the shortage of therapists and the high cost. However, researchers stress the need to establish clear ethical standards, develop legal and regulatory frameworks, and conduct thorough human assessments before widely deploying these systems.
Message to users
For now, researchers advise caution when taking mental health advice offered via chatbots, especially in sensitive or emergency situations. Artificial intelligence may seem empathetic and understanding, but the study indicates that it does not adhere to the ethical standards that govern the work of human therapists. In a sensitive area like mental health, the gap between “linguistic mimicry” and “real professional care” can be more serious than it appears.