Pupils make indecent ‘AI images’ of classmates


The UK Safer Internet Centre (UKSIC) said it had received reports from schools of pupils using AI generators to create explicit images of classmates.

Experts at the organisation said such images – which legally constitute child sexual abuse material – could have harmful effects on young people, adding that the pictures could be used for abuse or even blackmail.

UKSIC urged schools to ensure that their filtering and monitoring systems were able to effectively block illegal material across devices to combat the rise of such activity.

David Wright, UKSIC’s director, noted that AI can be used for good, but added: “The reports we are seeing should not come as a surprise.

“Young people are not always aware of the seriousness of what they are doing, yet these types of harmful behaviours should be anticipated when new technologies become accessible to the public.

“Although the case numbers are currently small, we are in the foothills and need to see steps being taken now, before schools become overwhelmed and the problem grows.”

Mr Wright explained that early intervention is crucial to prevent the criminal content from spreading further.

In October, the Internet Watch Foundation, which forms part of UKSIC, warned that AI-generated images of child sexual abuse had become so realistic that many were indistinguishable from real imagery, and that thousands of such images were already online.

AI has increasingly become a focus of the online safety debate over the past year, particularly since the launch of ChatGPT.

Online safety groups, governments and industry experts are calling for better regulation of the sector amid fears it is developing faster than authorities can respond.
