Elon Musk’s social media platform X has blocked several recent Taylor Swift searches after pornographic AI-generated images of the singer circulated online.

Attempts to search for the celebrity’s name without quotation marks on the site resulted in an error message and a prompt for users to retry their search, which added, “Don't fret. It's not your fault.” Putting quotation marks around the singer's name, however, allowed posts mentioning her to appear.

Sexually explicit and abusive fake images of Swift began circulating widely on January 15 on X, making her the most famous victim of a scourge that tech platforms and anti-abuse organizations have struggled to stop. Joe Benarroch, head of business operations at X, said the search block is a temporary action taken with an abundance of caution as the platform prioritizes safety on the issue. Unlike more conventional doctored images that have caused chaos for celebrities in the past, the Swift images appear to have been created using an artificial intelligence image generator that can instantly produce new images from a written prompt.

After the images began spreading online, the singer’s devoted fanbase of “Swifties” quickly sprang into action, launching a counteroffensive on X under the hashtag #ProtectTaylorSwift to flood the platform with more positive images of the pop star. Some said they were reporting accounts that were sharing the deepfakes.

The deepfake-detecting group Reality Defender said it tracked a deluge of nonconsensual pornographic material depicting Swift, particularly on X, formerly known as Twitter. A few images also made their way to Meta-owned Facebook and other large social media platforms. The researchers found at least a couple dozen explicit AI-generated images. The most widely shared were football-related, showing a painted or bloodied Swift that objectified her and in some cases inflicted harm on her persona.

The Swift images first emerged from an ongoing campaign, created last year on media platforms, to produce sexually explicit AI-generated images of female celebrities, said Ben Decker, founder of the threat intelligence group Memetica. One of the images that went viral appeared online as early as January 6, he said.

Most widely used AI image generators have safeguards to prevent abuse, but commenters on anonymous message boards discussed tactics for circumventing the moderation, especially on Microsoft Designer's text-to-image tool, Decker said. He also noted that there has been a longstanding relationship between trolls and platforms: as long as platforms exist, trolls will work to destroy them. The question this raises is how many more times this will happen before any serious action is taken. X’s move to restrict searches for Swift is likely a stopgap measure. “When you're not sure where everything is and you can't guarantee that everything has been removed, the simplest thing you can do is limit people's ability to search for it,” Decker said.

Researchers have said that the number of explicit deepfakes has grown in the past few years, as the technology used to produce such images has become more accessible and more user-friendly. In 2019, a report released by the AI firm DeepTrace Labs showed these images were overwhelmingly weaponized against women. Most of the victims were Hollywood actors and South Korean K-pop singers.

In the European Union, separate pieces of new legislation include provisions covering AI-generated images. The Digital Services Act, which went into effect last year, requires online platforms to take measures to curb the risk of spreading content that violates “fundamental rights” such as privacy, including “non-consensual” images or AI-generated pornography. The Artificial Intelligence Act, which still awaits final approval, will require companies that generate images with AI systems to inform users that the content is artificial or manipulated.

AI-generated photos of Taylor Swift lead to blocked searches on Elon Musk’s X

Ellie Palmer