Artificial Intelligence in Weapons

For military power to be lawful and morally just, future autonomous artificial intelligence systems must not commit humanitarian errors. This requires a preventative form of minimally-just autonomy: using AI to avert attacks on protected symbols, protected sites, and signals of surrender.

In his article in the US Air Force's Journal of Indo-Pacific Affairs, Jai Galliott argues that fears of speculative future AI have been a barrier to making current weapons more compliant with international humanitarian law.

