A Cambridge academic argues that governments should not resort to “magic software solutions” in the form of artificial intelligence, especially when it comes to child safety.

According to the expert, child protection should instead work through the people closest to the problem: coordination with the authorities and others who know the case.

Government should not resort to ‘magical thinking’

Using AI to scan encrypted messages is the wrong way to protect children, argues Cambridge expert

(Photo: ROBIN WORRALL from Unsplash)
More governments are resorting to “magical thinking” rather than tackling the root of the child-safety problem.

Ross Anderson of the University of Cambridge has published a rebuttal to an earlier discussion paper, “Thoughts on child safety on commodity platforms,” by Ian Levy and Crispin Robinson. According to him, governments should start from the point of view of children and those who protect them rather than relying on the organizations that sell the scanning software.

The two senior GCHQ directors argued in their paper that it is necessary to scan children’s encrypted messaging apps to detect potential threats and unwanted activities that endanger their safety and privacy.

In addition, Levy and Robinson contended that society should not be forced to accept communication platforms that pave the way for “safe spaces for child molesters.”

Anderson, who also teaches at the University of Edinburgh, wrote a 19-page rebuttal to the paper. He said using AI to detect terrorism, child abuse, and other illicit activity would not completely solve the problem.

For Anderson, ‘client-side scanning’ poses privacy risks to everyone in society, and its use by law enforcement could prove problematic if implemented.

“The idea of using artificial intelligence to replace police officers, social workers and teachers is exactly the kind of magical thinking that leads to bad policy,” he wrote in his paper, titled “Chat Control or Client Protection?”

In addition, the Cambridge professor stressed that scanning messages in encrypted apps would most likely create friction between societal groups and industries.


Language modeling is flawed

According to Computer Weekly, Levy and Robinson stated in their paper that the language models must run (entirely) locally on a smartphone or PC to identify cues of grooming and other activities. This concept has found its way into European Union and UK law.

However, Anderson countered that natural language processing models are highly error-prone, citing error rates of 5 to 10% when the models are run.

In short, if governments insist on AI scanning for child safety, the world’s messaging traffic would generate billions of false alarms in a single day.
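The scale of this false-alarm problem follows from simple arithmetic. A minimal sketch, using illustrative numbers that are assumptions rather than figures from Anderson’s paper (global messaging volume is on the order of 100 billion messages per day, and the 5% rate is the low end of the error range cited above):

```python
# Back-of-the-envelope estimate of daily false alarms from client-side scanning.
# Both inputs are illustrative assumptions, not figures from the rebuttal paper.
messages_per_day = 100_000_000_000  # rough global daily message volume
false_positive_rate = 0.05          # low end of the 5-10% error range

false_alarms = int(messages_per_day * false_positive_rate)
print(f"{false_alarms:,} false alarms per day")  # prints "5,000,000,000 false alarms per day"
```

Even at the optimistic end of the error range, the result is billions of flagged messages daily, far more than any moderation workforce could review.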

Anderson also noted in his paper that tech companies often fail to handle abuse complaints because deploying human moderators is expensive.

He wrote in the same paper that companies should not ignore abuse reports from ordinary members of the public. If they can respond quickly to the police, they can do the same for ordinary residents of the community.


This article is owned by Tech Times

Written by Joseph Henry

ⓒ 2022 TECHTIMES.com All rights reserved. Do not reproduce without permission.