Is it reliable to use AI in military conflicts?

Earlier this year, several House and Senate committees and subcommittees heard alarming testimony about artificial intelligence and China. Alexandr Wang, the CEO of Scale AI, told lawmakers that the Chinese Communist Party thoroughly understands the impact AI can have on warfare and regards it as China’s version of the Apollo project.

Michèle Flournoy, who served as Under Secretary of Defense for Policy during the Obama administration, said the Chinese government follows a policy of civil-military fusion: it can require any company, academic institution, or scientist to assist its military. The United States takes a different approach, leaving the private sector and individuals free to decide whether to support national security initiatives.

To understand the potential role of artificial intelligence in national security, though, it helps to look back at its record in a few board games.

In 1997, Garry Kasparov, often considered one of the greatest chess players in history, agreed to a match against IBM’s Deep Blue. He won the opening game, but ultimately lost the match.

The ancient game of Go has a massive following in Asia and is far more intricate than chess. Lee Sedol of South Korea was widely regarded as the world’s top Go player. The acclaimed documentary “AlphaGo” captured the hype surrounding his 2016 five-game series against an AI program built specifically for the game; going in, Sedol said he believed AI still fell short of human intuition.

Sedol’s defeat, and human intuition’s, in four games out of five was a remarkable, attention-grabbing event just a few years ago; today it reads as a footnote in the progress of artificial intelligence.

Which left poker: heads-up, no-limit Texas hold ’em. People get to lie in poker. Decisions have to be made on imperfect information, which is precisely what attracted Tuomas Sandholm, a professor of computer science at Carnegie Mellon. “Almost all problems in the real world are imperfect information games,” he said, “in the sense that the other players know things that I don’t know, and I know things that the other players don’t know.”

In 2017, Carnegie Mellon’s team challenged four experienced poker professionals, including Jason Les, who said, “Our goal was to defend humanity and prove that our cherished game of poker was too intricate for AI to dominate.”

Les said the AI program played in a style distinctly different from a human’s. By his account, its moves could be anticipated only about 13% of the time, and its strategy was more intricate than the human mind can manage.

“You were the representative of humanity,” correspondent Ted Koppel said, “and you were defeated!”

Les chuckled: “You’re just adding insult to injury!” He continued, “Our intention was to showcase the immense complexity of this game, to show that AI wasn’t there yet. Losing to it made me realize just how far this technology had come.”

The methods his team created, Sandholm noted, were not built specifically to solve poker; they were designed to solve imperfect-information games in general.
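The article stays at a high level, but the family of algorithms Sandholm describes is well documented in the research literature: strong poker AIs are built on regret minimization. Below is a minimal, illustrative sketch of regret matching, the core update behind those methods, demonstrated on rock-paper-scissors rather than poker; every function name and parameter here is my own for illustration, not something from the broadcast.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def get_strategy(regret_sum):
    # Play each action in proportion to its positive accumulated regret;
    # fall back to uniform play when no action has positive regret yet.
    positives = [max(r, 0.0) for r in regret_sum]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    return [1.0 / ACTIONS] * ACTIONS

def utility(a, b):
    # Payoff to the player choosing a against b: +1 win, 0 tie, -1 loss.
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def train(iterations=20000, seed=0):
    # Two regret-matching players in self-play; returns player 1's
    # average strategy, which approaches the equilibrium mix.
    rng = random.Random(seed)
    regret1, regret2 = [0.0] * ACTIONS, [0.0] * ACTIONS
    strat_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        s1, s2 = get_strategy(regret1), get_strategy(regret2)
        a1 = rng.choices(range(ACTIONS), weights=s1)[0]
        a2 = rng.choices(range(ACTIONS), weights=s2)[0]
        for a in range(ACTIONS):
            # Regret of not having played a instead of the sampled action.
            regret1[a] += utility(a, a2) - utility(a1, a2)
            regret2[a] += utility(a, a1) - utility(a2, a1)
            strat_sum[a] += s1[a]
    total = sum(strat_sum)
    return [s / total for s in strat_sum]
```

In self-play each player’s average strategy converges toward the Nash equilibrium of roughly one-third rock, paper, and scissors; counterfactual regret minimization extends this same update to sequential, hidden-information games like poker.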

Koppel asked whether poker could be seen as a more civilized version of warfare.

Les called that a fitting way to look at it: there are no guns, tanks, or planes, but the players are still waging a battle with chips and cards. Ultimately, it is a game of strategy.

Having honed its abilities on poker, Professor Sandholm’s artificial intelligence company, Strategy Robot, now serves as a Pentagon contractor, working on decision-making under incomplete information. “Our goal is to assist the country and our partners in possessing a superior AI capability for this particular decision-making process,” he explained.

Koppel asked whether that capability was being directed to the Ukrainian military.

Sandholm said he could not comment.

“Whatever you possess, you turn it over to the Pentagon. What the Pentagon chooses to do with it is not your concern?”

“Unfortunately, discussing this matter is not within our discretion.”

“All right. Can we say that the concepts used in AI poker are now being used in warfare?”

Sandholm said he was unable to comment on the current war, but was open to discussing military strategy and tactics in general.

That artificial intelligence will play a role in warfare is already established; current U.S. policy, however, requires human oversight at all times. To uphold that policy, the Pentagon created a new office, the Chief Digital and Artificial Intelligence Office, led by Dr. Craig Martell. Its distinct responsibility, Martell said, is to set the guidelines and protocols that ensure AI is acquired and deployed responsibly.

The underlying issue is one of trust, because the wrong decision can cost lives. Consider this scenario, Martell said: an AI advises a commander to take action A, while the commander’s training dictates action B. Should the commander follow the machine’s suggestion, or rely on training and gut instinct?

The DOD excels at training, Martell said, and places a strong emphasis on it. Through consistent training with a machine, commanders can develop a sense of when to trust it; without that, the trust will never be established.

If this seems like a large, complex issue, it is; it also rings true. Jason Les, the poker pro, speaks from personal experience: “I can take you back to the start of this AI challenge. The AI would advise me to play a hand in a specific way, and initially I would think the advice was bad based on my own experience; I believed my conventional wisdom and understanding of strategy was optimal. But after playing against the AI for thousands of hands, my confidence grew, and I began to trust its advice for higher-stakes decisions.”

Sandholm’s concern, with China in mind, is that the U.S. could fall behind in decision-making AI for military settings.

Is that occurring? “Overall in AI, I believe China has reached the same level as the U.S.; we are currently equal,” Sandholm says. “In terms of military AI, China has a stronger grasp on implementing it in the military.”

How fast China is moving, Michèle Flournoy said, is uncertain; what is certain is that we cannot afford to slow down. Consider a scenario in which China moves against Taiwan: waiting until the actual attack to respond would be too late, because by the time the necessary resources arrived, the moment would have passed. The urgency has to come beforehand, and she argues we have not yet fully grasped that.

The next statement, which accurately reflects U.S. policy, is a difficult needle to thread. AI development must continue, Flournoy said, but within a strong ethical and normative framework that guarantees any AI used for military purposes is safe, secure, responsible, explainable, and trustworthy. Letting AI make major decisions in warfare, however, would run against our democratic values and established norms.

Koppel asked: “But when we’re in competition, and believe our rivals are not adhering to the same ethical standards, what should we do?”

“If an enemy employs a destructive weapon resulting in significant harm to innocent civilians, or commits actions that would be considered war crimes, we do not justify or mimic those actions. Rather, we condemn them and implement sanctions.”

Koppel expressed uncertainty about accepting that statement. He pointed out that there have been numerous instances, dating back to 1945 with the bombings of Hiroshima and Nagasaki, where we were not constrained by such strict rules.

“That’s fair,” Flournoy allowed.

“If we perceived our opponent gaining the upper hand, I’m not entirely certain we would continue to adhere to such restrictions.”

“I hope we would not abandon our principles just because an adversary has,” Flournoy responded. “How we conduct ourselves in battle reflects our character.”

In the summer of last year, the Biden administration faced criticism for sending a shipment of cluster munitions, banned by more than 120 countries, to Ukraine.

Which brings us back to human supervision of all military AI programs. Humans, Sandholm believes, are responsible for most mistakes in life; he supports human oversight of AI, but argues there should also be AI oversight of humans, to maintain a balance. That balance of oversight will likely shift over time.

There is a noticeable pattern to the artificial intelligence programs that have beaten top players at chess, Go, and poker: each achievement was widely doubted, right up until it actually occurred.

According to Sandholm, humans have a tendency to overestimate their ability to make decisions.

The story was produced by Dustin Stephens and edited by Ed Givnish.

