The social media platform X, formerly known as Twitter, has taken steps to limit the further spread of explicit deepfake images of Taylor Swift after they went viral last week.
Amid outcry on social media from Swift’s fans, lawmakers and the actors’ union SAG-AFTRA, X made the Grammy winner’s name unsearchable on its platform over the weekend. As of Monday afternoon, searching Swift’s name without quotes returned an error page reading: “Something went wrong. Try reloading.”
“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” Joe Benarroch, head of business operations at X, said in a statement shared with the Associated Press.
Last week, several explicit AI-generated images of the “Bejeweled” and “Cruel Summer” singer circulated on X. The doctored pictures were pornographic and referenced the 34-year-old’s high-profile romance with Kansas City Chiefs tight end Travis Kelce.
Hours after the images surfaced on Thursday, X’s safety team reminded users of its “zero-tolerance policy” on sharing “Non-Consensual Nudity (NCN) images.” The statement, which did not explicitly mention Swift, also said that users who posted the images would be held accountable.
“We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed,” X’s safety account added. “We’re committed to maintaining a safe and respectful environment for all users.”
X’s new search restriction isn’t airtight. As of Monday afternoon, searching Swift’s full name in quotes, or adding additional words after the phrase “Taylor Swift,” surfaces posts, replies and images as usual — including graphic deepfakes of Swift.
Tesla Chief Executive Elon Musk, who officially took over Twitter in October 2022, made cuts to the platform’s moderation team, which was tasked with enforcing rules against harmful content.
A representative for X did not immediately respond Monday to The Times’ request for comment.
Swift has not yet publicly addressed the explicit images, but the controversy reignited conversations about artificial intelligence and the need for more oversight, especially as the creation of AI images continues to overwhelmingly affect women and children.
“The spread of AI-generated explicit images of Taylor Swift is appalling — and sadly, it’s happening to women everywhere, every day,” New York Rep. Joe Morelle said in a Thursday tweet.
“It’s sexual exploitation,” he added, before touting his proposed Preventing Deepfakes of Intimate Images Act, a bill that would make it illegal to share deepfake pornography without the consent of individuals being portrayed.
News of the Swift AI images set off alarm bells, as Microsoft Chief Executive Satya Nadella and White House Press Secretary Karine Jean-Pierre separately addressed the controversy on Friday.
“This is very alarming. And so, we’re going to do what we can to deal with this issue,” Jean-Pierre said during a press briefing, according to Reuters. “So while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing, enforcing their own rules to prevent the spread of misinformation, and nonconsensual, intimate imagery of real people.”
SAG-AFTRA, which laid out terms concerning artificial intelligence in its 2023 contract, dubbed the AI images of Swift “upsetting, harmful, and deeply concerning.”
“The development and dissemination of fake images — especially those of a lewd nature — without someone’s consent must be made illegal,” the union said in a Friday statement. “As a society, we have it in our power to control these technologies, but we must act now before it is too late.”