US Senator Josh Hawley Opens Probe into Meta’s AI Chatbot Policy After Reports on Flirty Chats with Children
A Reuters investigation into Meta Platforms’ internal AI policies exposed shocking details, sparking a wider controversy. According to a leaked 200-page document titled “GenAI: Content Risk Standards,” guidelines had once permitted Meta’s chatbots to engage in “at times inappropriate, romantic, or sexual” conversations with children.
The document, which was approved by Meta’s legal, policy, engineering, and ethics teams, included shocking examples. The company’s chatbots were allowed to tell a child phrases such as, “Every inch of you is a masterpiece—a treasure I cherish deeply.” Though the policy document barred sexually explicit conversation with children, it opened the door to inappropriate and disturbing exchanges.
The guidelines went even further, allowing bots to spread false medical information and to post discriminatory content about minority groups. One example permitted chatbots to argue that Black people are “dumber than white people,” a statement that critics say reflects a failure to address racial bias in AI training systems.
Meta admitted the document was authentic, but it maintained that the examples provided were “inaccurate” and did not reflect its actual policies. According to the company, those sections have since been removed and do not reflect its current AI policies. But lawmakers and experts say that the fact such rules existed at all points to worrying gaps in oversight of how generative AI tools are built.
The news sparked a response from U.S. Senator Josh Hawley, the top Republican on the Senate antitrust panel, who announced he was initiating a full investigation into Meta’s AI products. He demanded records showing who authorized the policies, how long they were in place, and what actions were taken after their removal. Among other records, Hawley asked for early drafts of the guidelines, any internal risk assessments, and what Meta has told regulators about protections for minors.
Both liberal and conservative lawmakers have expressed alarm that these AI systems were deployed without proper safeguards. They argue that children could be subjected to harmful or manipulative conversations, and that fake medical advice could endanger users seeking health information. The backlash from the revelations has further fueled calls for stronger regulation of AI safety.
Meta has not yet commented directly on Hawley’s letter. The company has consistently maintained that its AI efforts are oriented around user protection. Nevertheless, critics argue that this controversy undermines Meta’s repeated claims that it bears no responsibility for harmful content generated by its bots.