The U.S. State Department recently issued a political declaration encouraging global collaboration on the responsible military use of artificial intelligence and autonomous weapons. The initiative aims to impose order on a technology that could change the way wars are waged. The declaration contains non-legally binding guidelines outlining best practices for the responsible military use of AI. Because AI is a rapidly evolving technology, the State Department seeks to establish norms of responsible behavior now, recognizing that military applications of AI will undoubtedly change in the coming years.
A two-day conference in The Hague, Netherlands, took on added urgency as advances in drone technology during Russia's war in Ukraine accelerated a trend that could soon bring the world's first fully autonomous fighting robots to the battlefield. The U.S. declaration contains twelve points, including that military uses of AI must be consistent with international law and that states should “maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”
Zachary Kallenborn, a George Mason University weapons innovation analyst who attended the conference, said the U.S. move to take its approach to the international stage “recognizes that there are these concerns about autonomous weapons. That is significant in and of itself.” Washington's inclusion of a call for human control over nuclear weapons is notable because nuclear use represents the highest-risk scenario involving autonomous weapons. The conference also produced a call to action from 60 nations, including the U.S. and China, urging broad cooperation in the development and responsible military use of artificial intelligence.
Participants in the Netherlands underscored the importance of appropriate safeguards and human oversight of AI systems, while acknowledging that human operators face limits of time and capacity. The participating nations also invited countries “to develop national frameworks, strategies, and principles on responsible AI in the military domain.”
Military analysts and artificial intelligence researchers say the longer the nearly year-long war in Ukraine lasts, the more likely it becomes that drones will be used to identify, select, and attack targets without human input. Ukraine's digital transformation minister, Mykhailo Fedorov, said fully autonomous killer drones are “a logical and inevitable next step” in weapons development, and that Ukraine has been doing “a lot of R&D in this direction.” Ukraine already fields semi-autonomous attack drones and AI-enabled counter-drone weapons.
Russia also claims to possess AI weaponry, though those claims are unproven. To date, there are no confirmed instances of any nation fielding robots in combat that have killed entirely on their own.