Worried about AI creating fake nudes of real people? The government is finally taking action
By Maan
Content warning: This article contains references to image-based abuse.
A disturbing online trend has left women questioning how safe they truly are in the digital world.
It began with seemingly innocent photos—then turned into something far more sinister, altered and weaponised without consent.
Now, the NSW Government says enough is enough.
The state’s parliament introduced new laws targeting the creation and sharing of sexually explicit deepfake images, aiming to close what Attorney General Michael Daley called a dangerous gap in existing legislation.
Deepfakes are hyper-realistic images generated by artificial intelligence, often making it difficult to tell they are fake.
While NSW already outlawed the non-consensual recording and distribution of intimate images, the legislation will now extend to those digitally manipulated by AI to appear genuine.
One high-profile supporter of the reforms is TV presenter Tiffany Salmond, who discovered her social media posts had been altered into explicit content.
‘There was just so many deepfakes created, it just started an absolute onslaught,’ she shared.
Salmond described the deepfakes as an attempt to ‘humiliate me, degrade me, maybe even make me question my own actions of posting a bikini pic online’.
She said she could not imagine the emotional toll on younger girls targeted in this way.
‘I wasn’t going to let it happen. I wanted them to know I see it [and] I actually think you are kind of pathetic,’ she said.
An estimated 98 per cent of deepfakes online are pornographic, with women making up the vast majority of victims.
Under the proposed amendments, producing a sexually explicit deepfake designed to appear as a genuine depiction of a real, identifiable person would carry a maximum penalty of three years in prison.
Sharing or threatening to share such images—regardless of whether the offender created them—would carry the same penalty.
The crackdown will also cover the creation, recording, and distribution of sexually explicit audio, whether genuine recordings or AI-generated audio mimicking a person’s voice.
Daley said the measures were necessary to protect people from new forms of exploitation.
‘This bill closes a gap in NSW legislation that leaves women vulnerable to AI-generated sexual exploitation,’ he said.
‘We are ensuring that anyone who seeks to humiliate, intimidate or degrade someone using AI can be prosecuted.’
Watch the full report below:
Source: YouTube/9 News Australia
Concerns about AI misuse go far beyond deepfake images, reaching into areas that could impact your financial security.
One expert warns that AI-driven scams might soon trigger a large-scale bank fraud crisis.
To understand how this technology could affect your money, it’s worth exploring this urgent issue further.
Read more: Is your money at risk? AI could trigger a massive bank fraud crisis, warns tech expert
Key Takeaways
- NSW moved to outlaw sexually explicit deepfake images and audio.
- Producing or sharing such content will carry a maximum three-year prison term.
- TV presenter Tiffany Salmond publicly backed the reforms after being targeted.
- An estimated 98 per cent of deepfakes are pornographic, with most victims being women.
Will these laws be enough to stop AI from becoming another weapon for online abuse?