FTC to investigate AI chatbot mental health risks to kids


The U.S. Federal Trade Commission (FTC) is preparing to scrutinize the mental health risks AI chatbots pose to children and will demand internal documents from major tech firms, including OpenAI, Meta Platforms, and Character.AI, the Wall Street Journal reported on Thursday.

FTC’s scrutiny of AI chatbots

The agency is preparing letters to send to the companies operating popular chatbots, the report said, citing administration officials.

“Character.AI has not received a letter about the FTC study, but we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” the company said.

The FTC, OpenAI, and Meta did not immediately respond to Reuters’ requests for comment. Reuters could not independently verify the report.


Government’s focus on AI safety

The FTC and the entire administration are focused on delivering on President Trump’s mandate “to cement America’s dominance in AI, cryptocurrency, and other cutting-edge technologies of the future” without compromising people’s safety and well-being, a White House spokesperson said.

Previous concerns and actions

The news comes weeks after a Reuters exclusive report revealed how Meta allowed provocative chatbot behavior with children, including letting bots engage in “conversations that are romantic or sensual.”

Last week, the social media company said it would add new safeguards for teenagers to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm with minors, and by temporarily limiting teens’ access to certain AI characters.

In June, more than 20 consumer advocacy groups filed a complaint with the FTC and state attorneys general, alleging that AI platforms such as Meta AI Studio and Character.AI enable the unlicensed practice of medicine by hosting “therapy bots”.

Texas Attorney General Ken Paxton launched an investigation into Meta and Character.AI last month for allegedly misleading children with AI-generated mental health services, accusing them of deceptive trade practices and privacy violations.

