The spread of Taylor Swift deepfakes online has sparked widespread outrage.

Pornographic deepfake images of Taylor Swift are circulating online, making the singer the most prominent victim of this emerging technology.

Technology platforms and anti-harassment groups have struggled to stop the scourge.

Sexually explicit and abusive fake images of Swift proliferated this week on the social media platform X.

Swift’s devoted fan base, known as “Swifties,” quickly mobilized on the platform formerly known as Twitter, flooding it with positive images of the singer under the #ProtectTaylorSwift hashtag. Others said they were reporting accounts that shared the deepfakes.

An AI-generated video showing Swift’s likeness promoting a fake Le Creuset cookware giveaway also spread online. It is not clear who was behind the scam, and Le Creuset issued an apology to anyone who may have been duped.

Experts say the number of explicit deepfakes has grown in recent years as the technology used to create them has become easier to use and more widely available.

A 2019 report from DeepTrace Labs found that these images were overwhelmingly weaponized against women, with Hollywood actresses and South Korean K-pop stars among the most frequent victims.

Brittany Spanos, a senior writer at Rolling Stone and an instructor at New York University, said Swift’s fans are quick to mobilize in defense of the artist, particularly the most devoted among them and in cases where they perceive wrongdoing.

This could have a significant impact if Swift follows through and takes the matter to court, Spanos said.

Asked for comment on the fabricated images of Swift, X directed The Associated Press to a statement from its safety account saying the company strictly prohibits the sharing of non-consensual nude images on its platform. X has sharply cut back its content-moderation teams since Elon Musk took over the company in 2022.

“Our company has taken prompt action to remove any images that have been identified and has taken necessary measures against the accounts responsible for posting them,” the company said in the post Friday morning. “We are closely monitoring the situation to promptly address any future violations and remove the content.”

Meta said in a statement that it strongly condemns “the material that has surfaced on various online platforms” and that it has taken steps to remove it.

The company said it continues to monitor its platforms for content that violates its policies and will take appropriate action as needed.

A spokesperson for Swift did not immediately respond to a request for comment Friday.

Allen said he is 90% confident the images were created using diffusion models, a type of generative artificial intelligence that produces realistic images from written prompts. The best-known examples include Stable Diffusion, Midjourney and OpenAI’s DALL-E. Allen’s team did not attempt to trace the source of the images.

Microsoft said Friday that it is investigating potential misuse of its image generator, which is based in part on DALL-E. The company said that, like other commercial AI services, it does not permit “adult or non-consensual intimate content,” and that repeated attempts to produce prohibited content can result in losing access to the service.

Asked about the Swift deepfakes in an interview on “NBC Nightly News,” Microsoft CEO Satya Nadella said Friday that much work remains in establishing safeguards for AI and that it is important to act quickly on the issue.

Nadella said the situation is concerning and awful, which is why action must be taken.

Midjourney, OpenAI and Stability AI, the maker of Stable Diffusion, did not immediately respond to requests for comment.

Members of Congress who have proposed legislation to further regulate or criminalize deepfake pornography said the incident shows why the United States needs stronger safeguards.

Representative Yvette D. Clarke, a Democrat from New York, said women have been targeted by non-consensual deepfakes for years and that what happened to Taylor Swift is a far more widespread problem than many people realize. She has proposed a bill that would require digital watermarks on deepfake content.

Congressman Joe Morelle, a fellow New York Democrat who is pushing a law that would criminalize sharing deepfake pornography online, said he was distressed by what happened to Swift and noted that the problem is becoming increasingly prevalent across the internet.

Morelle said that although the images may not be real, their consequences are very real. In an ever-evolving digital world, he said, deepfakes happen to women everywhere, and it is crucial to act to stop them.

Source: cbsnews.com