
AI being used to create child abuse imagery: watchdog

The UK body responsible for detecting and removing child sexual abuse imagery from the internet says its "worst nightmares" with AI have come true.



Thousands of AI-generated images depicting real victims of child sexual abuse threaten to "overwhelm" the internet, a watchdog has warned.


The Internet Watch Foundation (IWF), the UK organisation responsible for detecting and removing child sexual abuse imagery from the internet, said its "worst nightmares" have come true. 


The IWF said criminals were now using the faces and bodies of real children who have appeared in confirmed abuse imagery to create new images of sexual abuse through artificial intelligence technology. 


Data published by the IWF showed the most convincing imagery would be difficult even for trained analysts to distinguish from actual photographs, and some content was now realistic enough to be treated as real imagery under UK law.


The IWF warned AI was only improving and would pose more obstacles for watchdogs and law enforcement agencies to tackle the problem. 


In its research, the IWF said it had also found evidence of the commercialisation of AI-generated imagery, and warned that the technology was being used to "nudify" images of children whose clothed images had been uploaded online for legitimate reasons. 


In addition, it said AI image tech was being used to create images of celebrities who had been "de-aged" and depicted as children in sexual abuse scenarios.


In a single month, the IWF said it investigated 11,108 AI images which had been shared on a dark web child abuse forum.


Of these, 2,978 were confirmed as images which breached UK law, and 2,562 were so realistic it said they would need to be treated the same as if they were real abuse images.


Susie Hargreaves, IWF chief executive, said: "Our worst nightmares have come true. Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point.


"Chillingly, we are seeing criminals deliberately training their AI on real victims' images who have already suffered abuse.


"Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it. 


"As if it is not enough for victims to know their abuse may be being shared in some dark corner of the internet, now they risk being confronted with new images, of themselves being abused in new and horrendous ways not previously imagined.


"This is not a hypothetical situation. We're seeing this happening now. We're seeing the numbers rise, and we have seen the sophistication and realism of this imagery reach new levels. 


"International collaboration is vital. It is an urgent problem which needs action now. If we don't get a grip on this threat, this material threatens to overwhelm the internet."


The IWF said it feared that a deluge of AI-generated content could divert resources from detecting and removing real abuse.

