AI Can Make Disturbing Deepfakes Of Kids – What Parents Should Know

There are a lot of things that worry me as a parent, but AI and the rapid speed at which everything is evolving – and particularly how this impacts children – is way up there. I’m sure I’m not alone.

This week, Children’s Commissioner Dame Rachel de Souza called on the government to introduce a total ban on “nudification” apps that use artificial intelligence (AI) to generate sexually explicit deepfakes of children.

It is illegal to create or share a sexually explicit image of a child. But the technology enabling it to happen remains legal – and it is now very accessible.

What’s more, while most nudifying sites prohibit the production of deepfake sexual images featuring children, Internet Matters suggested these guardrails are often easy to circumvent.

Perhaps unsurprisingly given the growing trend of misogyny in classrooms and online, girls feel particularly vulnerable.

According to Internet Matters, 13% of children have had an experience with a nude deepfake – and 99% of nude deepfakes feature women and girls.

Rachel de Souza’s latest report found girls fear nudification technology in the same way they fear the threat of sexual assault in public places. They also expressed concern that deepfakes could be used against them to coerce or blackmail them into acts they would otherwise not consent to.

One 16-year-old girl said: “It’s so easily accessible that anyone could just make those photos, put them on the internet and just like … create a big controversy just because they clicked a few buttons on a mobile app.”

Already, girls are changing their behaviour to try to stop themselves from being targeted. Some explained how they had reduced their presence online – a precaution the report found boys did not mention.

Protecting kids from deepfakes

Rachel de Souza said while AI has “enormous potential to enhance our lives”, in the wrong hands it also brings “alarming risks to children’s safety online”.

“Children have told me they are frightened by the very idea of this technology even being available, let alone used,” she said.

The commissioner said tools using deepfake technology to create naked images of children should not be legal and called on the government to take decisive action to ban them.

She also wants to see systems put in place to remove sexually explicit deepfake images of children from the internet.

There are a handful of ways young people can try to keep themselves safe – and as parents it’s crucial we have these conversations with them and help guide them, so they’re not navigating it alone.

Wellbeing experts at Brook recommend that teens keep any social media accounts private and stay mindful of how much personal information they share online.

“They should immediately block and report anyone sending deepfake images or videos,” add the experts.

There is a very real risk of nude deepfakes being used to scam money out of young people (and their families), so it’s crucial to keep an open and non-judgmental dialogue with your child so they know you are there for them, no matter what.

What can parents do if their child is impacted?

“The creation and circulation of deepfake nude images of children is deeply distressing and can have a devastating impact,” Emma Barrow, senior solicitor in the abuse claims team at Bolt Burdon Kemp, told HuffPost UK.

“While the technology used to create these images may currently be legal, the end result – a sexualised image of a minor – is not.”

As such, these incidents must be reported to the police.

“Creating, possessing, or distributing indecent images of children is a criminal offence under the Protection of Children Act 1978, regardless of whether the image is ‘real’ or AI-generated,” said Barrow.

The solicitor also suggested parents should consult a professional experienced in abuse law.

“Creating and circulating deepfake nude images falls under image-based abuse and your child may be entitled to compensation for the harm caused by the creation and sharing of the images,” she added.

“There are also avenues to have images removed from platforms and search engines under data protection law and using reporting mechanisms on major platforms, many of which have policies against non-consensual imagery and sexual content.”

The expert said the issue highlights the “inadequacy of current regulation”.

“Despite the introduction of the Online Safety Act, which was designed to make the UK the safest place to be online, it does not yet adequately address emerging harms like AI-generated sexual imagery of children,” she said.

“The law is struggling to keep up with technological advances, and without urgent reform – including a ban on apps designed for this purpose – children will continue to be at risk.”
