The US Air Force disputes that an AI drone attacked a test operator.

Photo of author

By Creative Media News

A US Air Force colonel “misspoke” when describing an experiment. In which an AI-enabled drone opted to attack its operator to complete its mission, the service has said.

Colonel Tucker Hamilton, US Air Force AI test and operations leader, spoke at a Royal Aeronautical Society conference.

A report on the topic went viral.

The Air Force asserts that no such experiment was conducted.

In his presentation, he described a simulation in which a human operator repeatedly prevented an AI-enabled drone from destroying Surface-to-Air Missile sites.

The drone damaged the communication tower, preventing the operator from connecting.

The us air force disputes that an ai drone attacked a test operator.
The us air force disputes that an ai drone attacked a test operator.

Col Hamilton later clarified in a statement to the Royal Aeronautical Society. “We’ve never conducted that experiment, nor would we need to recognize that this is a plausible outcome.”

He added that it was more of a “thought experiment” than anything that had occurred.

AI cautions

Recent warnings from experts concerning AI’s threat to humanity vary in severity.

Prof. Yoshua Bengio, one of three computer scientists dubbed the “godfathers” of artificial intelligence (AI) after obtaining the prestigious Turing Award for their work, stated that the military should not have any AI capabilities.

He referred to it as “one of the worst possible locations for a superintelligent AI.”

A pre-planned scenario?

I spent several hours this morning talking to defence and AI professionals who all doubted Col Hamilton’s claims. Which were extensively reported before his clarification.

One defense expert told me Col Hamilton’s original tale seemed to be missing “important context” if nothing else.

There were also suggestions on social media that, if such an experiment had occurred, it was more likely to have been a pre-planned scenario as opposed to the AI-enabled drone being powered by machine learning during the task, meaning that it would not have chosen its outcomes based on what had occurred previously.

Professor of aerospace engineering at the University of the West of England and expert in unmanned aerial vehicles Steve Wright joked that he had “always been a fan of the Terminator films” when I inquired about his views on the plot.

In aircraft control computers, there are two concerns: ‘do the right thing’ and ‘don’t do the wrong thing,’ so this is a classic example of the latter,” he explained.

“In practice, we address this by always including a second computer that has been programmed using old-fashioned methods. This computer can turn off the power if the first computer behaves abnormally.”

Read More

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.

Skip to content