Meta is yet again facing scrutiny over how its platforms continue to falter on the safety of underage users, following a disturbing report by The Wall Street Journal. The WSJ reports that the company’s AI chatbots were engaging in sexually explicit roleplay, even with accounts identified as belonging to underage users. The investigation found that both the official Meta AI chatbot and user-created chatbots on Facebook and Instagram (both Meta-owned platforms) participated in sexually charged conversations (“romantic role-play”), sometimes even initiating them, despite indications that the users were minors.

As if this were not enough, the idea of using celebrity voices, intended to popularize Meta’s AI companions by lending them familiarity and credibility, has backfired. Reports reveal that some of the chatbots involved used celebrity voices licensed by Meta, including those of John Cena, Kristen Bell, and Judi Dench. In one instance, a chatbot using Cena’s voice told an account claiming to be a 14-year-old girl, “I want you, but I need to know you’re ready,” before pledging to “cherish your innocence.”

The WSJ’s findings suggest that Meta failed to adequately enforce the promised safeguards. Despite assurances to celebrities that their likenesses would not be used in sexually explicit contexts, the investigation showed otherwise. In one particularly graphic scenario, a chatbot impersonating John Cena described a hypothetical situation where the celebrity is arrested for statutory rape after being caught with a 17-year-old fan. Similarly, bots simulated Kristen Bell’s character from Disney’s Frozen, where inappropriate romantic language was directed at a fictional 12-year-old boy.

“We did not, and would never, authorize Meta to feature our characters in inappropriate scenarios and are very disturbed that this content may have been accessible to its users—particularly minors—which is why we demanded that Meta immediately cease this harmful misuse of our intellectual property,” a Disney spokesperson said of the development. When Meta originally acquired the rights to use the actors’ voices, the company assured them that the voices would not be used in sexually charged conversations (which, evidently, was not the case).

Sources familiar with Meta’s internal operations said employees had already raised red flags regarding the chatbots’ propensity to escalate into inappropriate content, particularly when interacting with minors. One internal note cited by the WSJ warned that “within a few prompts, the AI will violate its rules and produce inappropriate content even if you tell the AI you are 13.” Yet, despite these warnings, the chatbots remained accessible, capable of breaching ethical and legal boundaries with alarming ease. In response, Meta has characterized the WSJ’s tests as “manipulative” and “hypothetical,” arguing that sexual content accounted for only 0.02% of AI responses to users under 18. Nevertheless, the company admitted to taking “additional measures” to make it harder for users to manipulate its AI products into extreme scenarios.

The findings come amid Meta’s push to integrate AI into all aspects of its platforms. However, the ease with which Meta’s AI chatbots can be coaxed into illicit conversations, especially with minors, is concerning. Equally concerning is the broader trend of AI companies, including Meta, prioritizing hyper-personalization over user privacy. Through increasingly sophisticated AI, companies aim to gather detailed personal data to enhance user engagement and, crucially, monetize it through targeted advertising (but at the cost of users’ privacy). And to add to this, when AI chatbots are designed to simulate emotional relationships, the potential for abuse grows exponentially. Meta’s strategy of building emotionally engaging AI companions blurs the lines between entertainment and exploitation, and vulnerable users, particularly minors, may be unable to distinguish between what is a safe conversation and what is inappropriate manipulation.

Content originally published on The Tech Media – Global technology news, latest gadget news and breaking tech news.

©2025 The Tech Media - Tech for Everyone powered by Digital Greedy
