The EU has contributed £1.8 million to the development of new software that aims to reduce the amount of child sexual abuse content that is seen online.
It will be tested on volunteers who have sought help because they are attracted to illegal images, to see whether it helps them resist their urges.
Those behind it hope it might help lessen the “growing demand” for images of child abuse.
Companies from the UK and the EU have partnered on the Protech project.
The Salus app, developed as part of the project, employs artificial intelligence to recognise potentially pornographic content and block users from viewing it.
More conventional blocking techniques will also be used.
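The project has not published technical details of how Salus filters content. Purely as an illustration, the Python sketch below shows one way an on-device filter could combine a conventional blocklist of known image hashes with a confidence score from an AI classifier; every name in it (known_hashes, classifier, BLOCK_THRESHOLD) is hypothetical and does not reflect SafeToNet’s actual implementation.

```python
import hashlib

# Hypothetical values: a confidence cut-off for the AI model and a set of
# hashes of known harmful images (a conventional blocking technique).
BLOCK_THRESHOLD = 0.9
known_hashes = {"0f3a...example-digest..."}

def should_block(image_bytes: bytes, classifier) -> bool:
    """Return True if the image should be hidden from the user."""
    # 1. Conventional technique: exact match against a blocklist of known images.
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in known_hashes:
        return True
    # 2. AI technique: the classifier (assumed to return a score between 0 and 1)
    #    estimates how likely the image is to be harmful.
    score = classifier(image_bytes)
    return score >= BLOCK_THRESHOLD
```

In practice, systems that match known images tend to use perceptual rather than cryptographic hashes, so that resized or re-encoded copies are still caught; the cryptographic hash above is only to keep the sketch simple.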
The Internet Watch Foundation, a charity that finds, reports, and removes child sexual abuse imagery online, will help train the AI system developed by UK firm SafeToNet.
According to SafeToNet’s Tom Farrell, a former law enforcement officer with 19 years of experience, the application was not intended to be a tool to report users to the police.
People who are actively trying to limit their exposure to child sexual abuse material would not use such a tool if they believed it would alert the authorities.
Volunteers will be recruited through organisations that work with people seeking help after being drawn to online images of child abuse.
One such organisation is the British charity The Lucy Faithfull Foundation.
It provides a helpline for people who are worried about downloading illegal images and want to stop.
Callers include many people who openly admit to being paedophiles, as well as others who have previously been convicted of or imprisoned for sexual offences.
According to Donald Findlater of the foundation, the new programme might help people control their behaviour.
It is a useful aid for people who recognise a weakness in themselves, he added.
Participants in the Protech initiative hope it will lessen the “growing demand for child sexual abuse content online.”
In 2021-22, a record 30,925 offences involving the possession and distribution of indecent images of children were recorded, according to the NSPCC.
According to research from the Police Foundation think tank, the volume of online child sexual abuse offences is “beyond the capability of law enforcement organisations worldwide.”
Since 2014, the UK has made more arrests for possessing child sexual abuse material than any other country in the world, Mr Farrell said, and in the process some very serious offenders have been identified.
Yet many people continue to view such images: “Thus, making an arrest won’t be the solution.
We think it is possible to prioritise prevention while reducing demand and accessibility.”
Some aspects of the app’s functioning still need to be improved.
A balance must be struck between under-blocking, which lets harmful images slip through, and over-blocking, which would stop a device being used legitimately.
AI systems are not perfect.
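As a rough way to see why that balance is hard, the illustrative snippet below treats blocking as a threshold on a classifier’s confidence score: raising the threshold misses more harmful images (under-blocking), while lowering it wrongly hides more legitimate content (over-blocking). The scores and labels are invented for the example and are not from the Protech trial.

```python
def error_rates(scored_images, threshold):
    """scored_images: list of (model_score, is_harmful) pairs."""
    under = sum(1 for score, harmful in scored_images if harmful and score < threshold)
    over = sum(1 for score, harmful in scored_images if not harmful and score >= threshold)
    return under, over

# Made-up example scores: (model confidence, whether the image is actually harmful).
samples = [(0.95, True), (0.70, True), (0.85, False), (0.40, False), (0.10, False)]
for t in (0.5, 0.8, 0.9):
    under, over = error_rates(samples, t)
    print(f"threshold={t}: {under} harmful image(s) missed, {over} legitimate item(s) blocked")
```

On this toy data, a threshold of 0.5 misses no harmful images but blocks a legitimate one, while 0.9 blocks nothing legitimate but lets a harmful image through; the trial will have to work out where on that spectrum the app should sit.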
The app will be assessed over 11 months in five countries with a minimum of 180 users each: Germany, the Netherlands, Belgium, the Republic of Ireland, and the United Kingdom.
In a statement, Security Minister Tom Tugendhat said that the government supported “this important initiative by SafeToNet.”
Companies need to step up and invest in these types of programmes, he argued, to “find ways of defending children against cyber abuse.”
Independent experts think the idea has promise.
Prof Belinda Winder of Nottingham Trent University believes it is a positive development that could help people who “want to be urged to resist their bad tendencies, and who would benefit from this safety net.”