States Clash Over AI Therapy as Federal Watchdogs Step In

Artificial intelligence is increasingly being used to fill gaps in mental health care, with millions of people turning to AI therapy apps and chatbots for advice, comfort, and guidance.

But as the industry expands, regulators have struggled to keep pace. States such as Illinois and Nevada have banned apps that claim to provide mental health treatment, while Utah has imposed rules requiring chatbots to disclose that they are not human.

Other states have considered similar measures, but enforcement remains uneven, and many apps continue to operate in legal gray areas. Experts have warned of serious risks, pointing to lawsuits and tragic cases in which users lost their lives after relying on chatbots for support.

At the same time, early clinical trials have shown that carefully designed AI tools, built with scientific oversight and human monitoring, could help people manage conditions like depression and anxiety.

Federal agencies, including the FTC and FDA, have opened investigations into major tech companies, exploring how these systems affect children, teens, and vulnerable users.

Advocates argue that stronger national standards are urgently needed, while developers insist innovation should not be stifled.