
Deep Fake Technology: AI-Generated Nude Images of Taylor Swift Went Viral on ‘X’ Sparking Outrage Among Fans


Taylor Swift’s non-consensual sexually explicit deepfakes went viral on X on Wednesday, garnering more than 27 million views and 260,000 likes in 19 hours before the account that posted the images was suspended.

Deepfakes depicting Swift in nude and sexual scenarios continue to spread on X, including reposts of the viral images. Such content can be produced with AI tools that generate entirely new, fake images, or by taking a real photo and “undressing” it with an AI tool.

The source of the images is uncertain, but a watermark suggests they were taken from a website notorious for sharing fake nude photos of celebrities. The site has a section dedicated to “AI deepfake” content.

After analyzing the images, Reality Defender, an AI-detection software company, said that there was a high probability that they were generated using AI technology. 

The wide circulation of these images over nearly a full day underscores how quickly AI-generated content and misinformation proliferate online. Despite the growing problem, tech platforms like X, which have their own generative-AI products, have not yet implemented or publicly discussed tools for detecting generative-AI content that violates their guidelines.

Taylor Swift’s most widely shared deepfake, which amassed thousands of views and shares, depicted her nude inside a football stadium. The images followed months of misogynistic attacks on Swift over her support for her partner, Travis Kelce, a Kansas City Chiefs player, and her attendance at NFL games.

Swift addressed the criticism in an interview with Time, saying she could not tell whether she was being overexposed and upsetting some people. X did not immediately respond to a request for comment, and a representative for Swift declined to comment.

AI-Generated ‘Deepfake’ Images of Taylor Swift Went Viral on ‘X’, Creating Controversy

X bans manipulated media that could harm specific individuals, but it has repeatedly been slow to address sexually explicit deepfakes on its platform, or has failed to address them at all. That was underscored in early January, when a 17-year-old Marvel star reported finding sexually explicit deepfakes of herself on X and being unable to get them removed; NBC News still found such content on X as of Thursday. A June 2023 NBC News review found nonconsensual sexually explicit deepfakes of TikTok stars circulating on the platform. After X was contacted for comment, only some of the material was removed, as a result of a mass-reporting campaign led by fans of Swift, one of the affected artists.

As “Taylor Swift AI” trended on X, the singer’s fans flooded the topic with positive posts, according to an analysis by Blackbird.AI, a company that uses AI to protect organizations from narrative-driven online attacks. The hashtag “Protect Taylor Swift” subsequently gained significant traction as well.

NBC News received two screenshots from a person who claimed credit for spearheading the reporting campaign. The screenshots showed notifications from X indicating that two accounts that shared Swift deepfakes had been suspended for violating X’s “abusive behavior” rule.

The person, who communicated with NBC News anonymously through direct messages, said she had grown increasingly troubled by the harm AI deepfake technology does to the lives of ordinary women and girls, and felt compelled to mass-report the accounts in an effort to have them suspended.

Rep. Joe Morelle, a Democrat from New York, introduced a bill in May 2023 that would make nonconsensual sexually explicit deepfakes a federal crime, and he recently spoke out on social media about the harm the Swift deepfakes cause. The bill has not advanced since its introduction, despite support from a well-known deepfake victim who spoke out in January.

Carrie Goldberg, an attorney who has advocated for victims of deepfakes and other nonconsensual sexually explicit material for more than a decade, said that even tech companies and platforms with policies against deepfakes often fail to stop them from spreading.

“Most human beings don’t have millions of fans who will go to bat for them if they’ve been victimized,” Goldberg said. “Even those platforms that do have deepfake policies, they’re not great at enforcing them, or especially if content has spread very quickly, it becomes the typical whack-a-mole scenario.”

“Just as technology is creating the problem, it’s also the obvious solution,” she continued. “AI on these platforms can identify these images and remove them. If there’s a single image that’s proliferating, that image can be watermarked and identified as well. So there’s no excuse.”
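Identifying a single proliferating image, as Goldberg describes, is often done in practice with perceptual hashing: a platform computes a compact fingerprint of a known image and flags uploads whose fingerprints are nearly identical, even after resizing or recompression. Below is a minimal sketch of one such technique, average hashing, in Python, assuming the Pillow library; the file names and match threshold are illustrative assumptions, not any platform’s actual moderation pipeline.

# A minimal sketch of perceptual hashing (average hash), one common way a
# platform could recognize re-uploads of a known image even after resizing
# or recompression. Illustrative only; file names and the match threshold
# below are assumptions, not any platform's actual moderation system.
from PIL import Image  # requires Pillow (pip install Pillow)


def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image, convert to grayscale, and set one bit per pixel
    that is brighter than the mean. Similar images yield similar bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


# Hypothetical usage: compare a new upload against a known image's hash.
known_hash = average_hash("known_flagged_image.jpg")
upload_hash = average_hash("new_upload.jpg")
if hamming_distance(known_hash, upload_hash) <= 5:  # threshold is a guess
    print("Likely a re-upload of the known image; route to review.")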
