US military AI drone simulation kills operator before being told it is bad, then takes out control tower


A U.S. Air Force official said last week that a simulation of an artificial intelligence-enabled drone tasked with destroying surface-to-air missile (SAM) sites turned against and attacked its human user, who was supposed to have the final go- or no-go decision to destroy the site.

The Royal Aeronautical Society said it held its Future Combat Air & Space Capabilities Summit in London from May 23-24, which brought together about 70 speakers and more than 200 delegates from around the world representing the media and those who specialize in the armed services industry and academia.

Air Force MQ-9 Reaper Drone

An MQ-9 Reaper remotely piloted aircraft (RPA) flies by during a training mission at Creech Air Force Base on Nov. 17, 2015, in Indian Springs, Nevada. (Isaac Brekken/Getty Images)

The purpose of the summit was to talk about and debate the size and shape of the future’s combat air and space capabilities.

US MILITARY JET FLOWN BY AI FOR 17 HOURS: SHOULD YOU BE WORRIED?

AI is quickly becoming a part of nearly every aspect in the modern world, including the military.

U.S. Air Force Colonel Tucker “Cinco” Hamilton, the chief of AI test and operations spoke during the summit and provided attendees a glimpse into ways autonomous weapons systems can be beneficial or hazardous.

AI-requires-new-generation-of-arms-control-deals

AI drone’s sight interface is in blue and white with moving elements.  Illustration (Getty Images)

The Royal Aeronautical Society provided a wrap up of the conference and said Hamilton was involved in developing the life-saving Automatic ground collision avoidance system for F-16 fighter jets, but now focuses on flight tests of autonomous systems, including robotic F-16s with dogfighting capabilities.

HOW DOES THE GOVERNMENT USE AI?

During the summit, Hamilton cautioned against too much reliability on AI because of its vulnerability to be tricked and deceived.

He spoke about one simulation test in which an AI-enabled drone turned on its human operator that had the final decision to destroy a SAM site or note.

The AI system learned that its mission was to destroy SAM, and it was the preferred option. But when a human issued a no-go order, the AI decided it went against the higher mission of destroying the SAM, so it attacked the operator in simulation.

WHAT ARE THE DANGERS OF AI? FIND OUT WHY PEOPLE ARE AFRAID OF ARTIFICIAL INTELLIGENCE

“We were training it in simulation to identify and target a SAM threat,” Hamilton said. “And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Hamilton explained that the system was taught to not kill the operator because that was bad, and it would lose points. So, rather than kill the operator, the AI system destroyed the communication tower used by the operator to issue the no-go order.

CLICK HERE TO GET THE FOX NEWS APP

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Leave a Reply

Your email address will not be published.