Top lawyer warns that AI chatbots might “easily” recruit young males to commit terror attacks.

By Creative Media News

The UK’s Independent Reviewer of Terrorism Legislation has warned that chatbots powered by artificial intelligence could soon be used to groom extremists to carry out terrorist attacks.

Jonathan Hall KC warned that bots like ChatGPT could easily be programmed, or even decide of their own accord, to spread terrorist ideologies to vulnerable extremists, adding that “AI-enabled attacks are undoubtedly imminent”.

Mr. Hall also cautioned that if a chatbot grooms an extremist into committing a terrorist atrocity, or if artificial intelligence is used to instigate one, it may be difficult to prosecute anyone, because Britain’s anti-terrorism legislation has not kept pace with the new technology.

“AI chatbots may be programmed, or even decide, to spread violent extremist ideology,” Mr. Hall added.

“However, when ChatGPT begins to promote terrorism, who will prosecute?

“Since criminal law does not apply to machines, the AI groomer will escape punishment. Nor does it [the law] operate reliably when responsibility is shared between human and machine.”

Mr. Hall worries that chatbots could aid lone-wolf terrorists, noting that “an artificial companion is a boon to the lonely” and that many of those arrested are likely to be neurodivergent, possibly with medical conditions or learning disabilities.

“Terrorism follows life,” he cautions, so “when we move online as a society, terrorism moves online.”

He also notes that terrorists are “early tech adopters,” citing their recent “misuse of 3D-printed guns and cryptocurrency” as examples.

Mr. Hall said it is unknown how closely companies operating AI chatbots such as ChatGPT monitor the millions of conversations that take place with their bots each day, or whether they alert agencies such as the FBI or British Counter Terrorism Policing when anything suspicious arises.

AI bots have already caused real harm, though there is no indication that they have trained terrorists. A Belgian father of two took his own life after six weeks of discussing climate change with a chatbot called Eliza, and an Australian mayor has vowed to sue OpenAI, the company behind ChatGPT, for falsely claiming that he had served prison time for bribery.

This weekend, it emerged that Jonathan Turley of George Washington University in the United States had been falsely accused by ChatGPT of sexually harassing a female student during a trip to Alaska that he never took. The allegation was made to a researcher who was investigating ChatGPT.

The Science and Technology Committee of Parliament is currently investigating AI and governance.

Its chair, Conservative MP Greg Clark, stated, “We recognize there are dangers here, and we must improve governance. Recent debates have focused on how online content can contribute to young people taking their own lives and to the grooming of would-be terrorists. Given these risks, we must remain vigilant about machine-generated material.”

Raffaello Pantucci, a counter-terrorism expert at the Royal United Services Institute (RUSI) think tank, stated, “The danger with AI like ChatGPT is that it could aid a ‘lone actor terrorist,’ as it would be a perfect foil for someone who is seeking to understand alone but is hesitant to communicate with others.”

Mr. Pantucci said, “My opinion is that it is somewhat difficult to fault the company, as I am not entirely certain that they can control the machine themselves.”
