When users attempted to search for “Taylor Swift” on X, a message appeared stating, “Something went wrong. Try reloading.”
Reports surfaced earlier in the week about explicit images of the singer circulating on the platform. However, the images were confirmed to be fake, either manually doctored or entirely generated by AI.
Concerned fans launched a movement on X under the hashtag #ProtectTaylorSwift, urging US officials to take action against the explicit content. Fans flagged posts and accounts sharing the fake images and flooded the comment sections of offending posts with real images of the singer.
X responded with a public statement emphasizing its strict prohibition of non-consensual nudity. The statement assured users that the platform’s teams were actively removing identified images and taking appropriate action against the responsible accounts. However, the platform did not clarify whether it had deliberately blocked searches for the singer’s name.
The White House also weighed in on the matter, expressing concern about the alarming spread of AI-generated images. During a press briefing, Press Secretary Karine Jean-Pierre highlighted the need for government regulation to prevent the misuse of AI on social media platforms. She emphasized that social media companies should enforce their own rules to curb such misuse, including banning harmful content.
In response to the incident, the US Congress is considering new laws to criminalize deepfake images. A “deepfake” is an image or video in which a person’s face is superimposed onto someone else’s body using AI.
A survey revealed a 550 percent increase in doctored images since 2019, fueled by the rapid spread of AI tools in 2023. There are currently no US federal laws against the creation or sharing of deepfake images. The UK, however, took a step in this direction in 2023 by outlawing the sharing of deepfake pornography as part of its Online Safety Act.