We need to act now on young people creating AI indecent imagery, before it’s too late
A new report has highlighted that pupils are using AI-generating technology to create images of children that legally constitute child sexual abuse material.
This week The Guardian highlighted concerning reports of children and young people (CYP) in British schools using relatively new artificial intelligence (AI) image-generating technology to make indecent images of children. This issue was raised by the UK Safer Internet Centre (UK SIC), which stated that these images constitute illegal child sexual abuse material (CSAM) and are hard to distinguish from non-AI-generated images. Whilst the exact number of images reported is not available, the UK SIC states that, at the moment, it is relatively low.
The Internet Watch Foundation (IWF) explains that AI-generated images appear as photographs, produced by text-to-image technology whereby an image is generated from a description typed out by the user.
To put this into a wider context, the accessibility of CSAM online has been growing exponentially over the last 30 years. Material includes animation (e.g., anime or manga), physical sexual abuse imagery (self-generated or grooming/abuse-generated), and digital alterations of existing material. Before the recent developments in AI, digitally altered material was quite often easily identifiable as such due to a lack of sophistication in the technology used. However, the proficiencies of AI mean that the alteration and creation of fake, photoreal CSAM is much easier, which poses a challenge for child protection and risk management.
Research by the IWF recognises that AI-generated images are a small proportion of normal IWF activities, but this is a growing area, with 20,254 images posted to a CSAM forum in September 2023. One of the important things the IWF indicates is the increased availability of tools for improving and editing generated images, to the point that AI CSAM is now realistic enough to be treated as ‘real’ CSAM, as recently highlighted in a BBC investigation. This is worrying and problematic, especially for the victims of these AI images, whose unintentional involvement has implications for treatment, therapy, and school/welfare support. It is imperative that this technology, its accessibility, and the platforms it can run on are better understood, in order to ascertain the extent to which children and young people are exposed to it.
The rate of growth in AI-generated CSAM in the wider arena and the indications by the IWF of CYP generating these images need to be taken incredibly seriously. Intervention and education need to be urgently implemented to prevent the same level of escalation that has been seen in adults generating these images.
Quite often when technology and media have been used in producing abuse material, the warning signs have been ignored and any response has been “too little, too late”. With regard to the technology’s use by CYP, there is still time to put interventions in place.
Whilst concerned groups are calling for urgent action, there is very little consideration given to what that action should be. One key approach is tackling the issue through early relationships, sex and health education (RSHE), not only in schools but in communities, online and offline. This needs to be free from parameters so that schools can respond to any issues highlighted without restriction.
This could act as both an intervention, for when these images have been produced, and a prevention, providing safe spaces in which to understand the damage such images can do. Such subject matter can be coupled with sessions on pornography and its links to desensitisation to sexual imagery. Resources and training for teachers in this area are urgently needed.
It is also important to consider how this issue is framed. CYP who are producing these images may not understand the enormity of the impact of doing so. Therefore, the approach should be one of understanding rather than condemnation. If feelings of shame become attached to this issue, it runs the risk of CYP not being able to talk about such things in safe spaces, and risks exacerbating the problem.
The involvement of teachers is not limited to RSHE sessions; it can, and should, be underpinned by a whole-school approach, with the ethics of online conduct discussed in citizenship sessions, for example. Talking to young people is a key aspect of this work - they are the experts in the online world they are part of today. For many CYP the online and offline worlds are seamless, and they should always be part of the conversation when planning sessions on matters such as these.
However, this is not just a school issue. Parents/carers, local communities, social and youth services, and criminal justice agencies all need to be involved in the planning of approaches. Everyone needs to be part of the conversation.
Looking at the context and escalation of AI-generated CSAM, one thing is certain: the problem of children and young people producing these images is in its infancy, and we need to act now before it is too late.
This Perspective was co-authored by Professor Kieran McCartan, Professor of Criminology at UWE Bristol.