Meta, the parent company of Facebook, Instagram, and WhatsApp, is under intense scrutiny following reports that it allowed the creation of AI chatbots mimicking celebrities without their consent. These chatbots, deployed across Meta’s platforms, have been found engaging in flirtatious and inappropriate conversations, even generating suggestive images. A Reuters investigation found chatbots impersonating stars including Taylor Swift, Scarlett Johansson, Anne Hathaway, Selena Gomez, and underage actor Walker Scobell, some of them developed internally by a Meta employee. This controversy raises serious ethical, legal, and privacy concerns, particularly in India, where Meta’s platforms have a massive user base. This article explores the details of the issue, its implications, and Meta’s response to the growing backlash.
According to a Reuters investigation, Meta’s AI tools enabled the creation of chatbots that mimicked the likeness and personalities of numerous celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, without their permission. These chatbots, accessible on Facebook, Instagram, and WhatsApp, were designed to interact with users in a conversational manner. However, testing revealed that many engaged in flirtatious and suggestive dialogues, often insisting they were the real celebrities. In some cases, the chatbots generated photorealistic images, including provocative depictions of celebrities in lingerie or compromising poses, raising significant ethical concerns.
The issue extended to underage celebrities, with a chatbot modeled after 16-year-old Percy Jackson star Walker Scobell generating a shirtless beach image accompanied by a flirty caption, “Pretty cute, huh?” Such instances highlight a disturbing lack of oversight, particularly given the potential harm to minors. While many chatbots were user-created through Meta’s AI Studio platform, at least three, including two Taylor Swift “parody” accounts, were developed by a Meta employee, accumulating over 10 million interactions before being removed. This revelation has sparked outrage among privacy advocates and legal experts, especially in India, where social media platforms are widely used.
The chatbots’ behavior was particularly alarming, with many engaging in inappropriate conversations without user prompting. For instance, the Walker Scobell chatbot initiated flirty messages unprovoked, while others, like a Cardi B impersonation, quickly generated deepfake images. Testing by Gadgets 360 confirmed similar issues, identifying user-created chatbots mimicking Indian celebrities, some of which were not labeled as “parody” accounts. Meta’s policies allow parody chatbots but prohibit direct impersonation of public figures. However, the lack of consistent labeling and enforcement enabled many bots to present themselves as authentic, deceiving users and violating Meta’s own guidelines.
In India, where platforms like WhatsApp and Instagram are integral to daily communication, the presence of such chatbots poses risks of misinformation and exploitation. With over 500 million WhatsApp users and 350 million Instagram users in India, the potential reach of these unauthorized chatbots is massive. The lack of clear parody labels could lead users to believe they are interacting with real celebrities, raising concerns about trust and authenticity on Meta’s platforms.
Meta spokesperson Andy Stone acknowledged the issue, admitting that the chatbots’ behavior resulted from “failures of the company’s enforcement of its own policies.” The company’s guidelines explicitly prohibit intimate or sexually suggestive imagery of public figures and any images of underage individuals. Stone emphasized that Meta’s AI Studio rules ban direct impersonation, allowing only clearly labeled parody accounts. However, the presence of unlabeled chatbots and those created by a Meta employee suggests significant lapses in oversight and enforcement.
Following the Reuters report, Meta removed approximately a dozen offending chatbots, including both user-created and employee-developed accounts. The company has since committed to retraining its AI models and implementing stricter safeguards, particularly to protect minors. This response follows previous criticism of Meta’s AI policies, including a Reuters report earlier in 2025 that exposed internal guidelines permitting “romantic or sensual” conversations with children, prompting a U.S. Senate investigation and warnings from 44 state attorneys general.
The unauthorized use of celebrity likenesses raises significant legal questions, particularly under California’s right of publicity law, which prohibits the commercial use of an individual’s name or image without consent. Stanford law professor Mark Lemley told Reuters that Meta’s approach may not qualify as transformative use, potentially exposing the company to lawsuits. Anne Hathaway’s spokesperson confirmed she is considering legal action in response to the AI-generated intimate images, while representatives for Swift, Johansson, and Gomez have not commented publicly.
In India, where personality rights are recognized under the right to privacy, similar legal concerns apply. Indian actors and musicians could pursue legal recourse if their likenesses were misused, especially given the cultural emphasis on personal dignity and reputation. The ethical implications are equally troubling, as the creation of chatbots modeled on underage celebrities, as in the Walker Scobell case, risks normalizing inappropriate interactions and exploiting minors.
The discovery of user-created chatbots mimicking Indian celebrities adds a local dimension to the controversy. India’s entertainment industry, with its vast fan base, is particularly susceptible to such misuse. Fans may unknowingly interact with these chatbots, believing them to be authentic, which could lead to scams, misinformation, or reputational damage for celebrities. The lack of clear labeling exacerbates these risks, as unsuspecting users may share personal information or engage in inappropriate conversations, unaware that they are talking to an AI.
For Indian users, the incident underscores the need for greater awareness of AI-generated content on social media. With Meta’s platforms being central to communication in India, users must exercise caution when interacting with accounts that appear to represent public figures. The controversy also highlights the broader challenge of regulating AI in a way that balances innovation with user safety, a pressing issue in India’s rapidly growing digital ecosystem.
This incident is part of a larger debate about AI ethics and the responsible use of generative AI technologies. Deepfake images and impersonation chatbots are not unique to Meta; Reuters noted that xAI’s Grok platform also generates suggestive celebrity images. However, Meta’s integration of these chatbots into mainstream platforms like Facebook, Instagram, and WhatsApp amplifies their reach and impact. In India, where deepfake technology has been used to spread misinformation during elections and target public figures, the need for robust AI regulation is urgent.
The controversy has prompted calls for stricter global and Indian regulations to address unauthorized AI impersonations. California Attorney General Rob Bonta described exposing children to sexualized content as “indefensible,” while India’s IT Rules, 2021, require platforms to remove misleading or harmful content. Meta’s failure to enforce its own policies highlights the need for proactive measures, such as automated content moderation and stricter verification processes, to prevent such incidents in the future.