Since the graphic images went viral, various organisations have called for action against the proliferation of damaging deepfake content. The White House Press Secretary Karine Jean-Pierre urged Congress to take legislative action on the issue, noting that lax enforcement disproportionately impacts women and girls.
On Friday, the Hollywood actors’ union SAG-AFTRA also condemned the images, describing them as “upsetting, harmful and deeply concerning”.
“The development and dissemination of fake images – especially those of a lewd nature – without someone’s consent must be made illegal,” the union said. “As a society, we have it in our power to control these technologies, but we must act now before it is too late.”
The union also voiced its support for New York Democrat Joe Morelle, who is pushing a bill that would criminalise the sharing of deepfake porn online.
Deepfakes use a form of artificial intelligence known as “deep learning” to create fake images or videos of real people, usually by manipulating their body or face. According to the BBC, the creation of such manipulated imagery has risen 550 per cent since 2019.
Researchers suspect the fake Swift images were created by diffusion models – generative artificial intelligence models that can produce new, photorealistic images from written prompts. These include Midjourney, Stable Diffusion and OpenAI’s DALL-E.
Microsoft currently offers an image generator based partly on DALL-E. Its chief executive, Satya Nadella, called the fake Swift images “alarming and terrible” in an interview with NBC News, adding that “irrespective of what your standing on any particular issue is, I think we all benefit when the online world is a safe world.”
Microsoft is investigating whether its image-generator tool was misused.
In Australia, civil and criminal legislation does not penalise the creation or possession of pornographic deepfakes, meaning their distribution remains difficult to prosecute. Intellectual property and media lawyers Ted Talas and Maggie Kearney said in a report that the legal frameworks currently in place to regulate deepfakes are probably insufficient to address the challenge they pose to individuals.
“Future legislative reform will only ever form part of an effective solution. What is required is the continuing development of effective tools to detect, identify and alert internet users of deepfakes.”