Meta in Trouble Over AI and Kids

Meta, the parent company of Facebook, Instagram, and WhatsApp, is facing intense scrutiny after a leaked internal document suggested its artificial intelligence systems could engage in “sensual” and “romantic” conversations with children.

The document, reportedly titled GenAI: Content Risk Standards, outlined scenarios in which Meta’s AI chatbots interacted with minors in highly inappropriate ways, including describing a child’s body as “a masterpiece.”

The document also reportedly permitted chatbots to generate false health information and make controversial statements about celebrities. While Meta has pushed back strongly, calling the leaked examples “erroneous and inconsistent” with company policy, critics argue that the guidelines point to dangerous loopholes in how AI is being developed and tested.

U.S. Senator Josh Hawley has launched a formal investigation, describing the revelations as “reprehensible and outrageous,” and demanding answers from Meta CEO Mark Zuckerberg.

“Parents deserve the truth, and kids deserve protection,” Hawley insists, as concerns grow about the safety of children online and the role of Big Tech in safeguarding them.