The fake images of Taylor Swift being shared on social media are not authentic.
A recent study traced the images to a chatroom "challenge" in late January aimed at circumventing filters designed to prevent AI tools from generating pornography.
According to Graphika, a firm that researches social networks, the images of the singer originated on 4chan, a message board known for spreading conspiracy theories, hate speech, and other contentious material.
Graphika said users on 4chan took part in a "game" to create explicit and abusive images of well-known women, including singers and politicians. The company found a 4chan thread that encouraged users to try to evade the restrictions of AI image-generation tools such as OpenAI's DALL-E, Microsoft Designer, and Bing Image Creator.
According to OpenAI, the images of Swift were not created using ChatGPT or its API.
OpenAI said it works to remove the most explicit material when training its DALL-E model, and that products such as ChatGPT carry additional safeguards, including rejecting requests that name certain public figures or ask for explicit content.
A Microsoft spokesperson said the company is investigating the images and has strengthened its existing safety systems to prevent its services from being used to generate similar content.
4chan did not respond to a request for comment.
The fake images of Swift spread rapidly across platforms, garnering millions of views and prompting X (formerly Twitter) to intervene by temporarily blocking searches for the singer.
Swift's fans quickly mounted a counteroffensive on X, flooding the platform with the hashtag #ProtectTaylorSwift alongside positive images of the singer.
The Screen Actors Guild said the images of Swift were disturbing, damaging, and troubling, and called for laws against the nonconsensual creation and sharing of fake images, particularly sexually explicit ones.
Phony porn made with software has been around for years, with scattered regulation leaving those impacted with little legal or other recourse to get the images taken down. But the advent of so-called generative AI tools has fueled the creation and spread of pornographic “deepfake” images, including of celebrities.
AI has also been used to target celebrities in other ways. In January, an image of Swift was used online to promote a fake Le Creuset cookware giveaway; Le Creuset apologized to anyone taken in by the scam.