Document Type

Article

Publication Date

2024

Abstract

With recent advances in artificial intelligence (AI) technology, national militaries have become some of the earliest and most enthusiastic adopters. India, for instance, announced in 2022 the formation of an Artificial Intelligence Military Council along with substantial funding for related initiatives.1 Many other nations, including the United States and China, have similarly pursued military applications of AI with gusto. Much ink has been spilt over ethical issues and concerns with AI. We concern ourselves here with certain implications of military AI in the field of international humanitarian law, also known as the law of armed conflict. While the military applications of AI are manifold and continue to increase by the day, we focus specifically on the use of AI to assist in evaluating targets for the use of lethal force. International humanitarian law provides some shield for state actors and individuals who use force, so long as that force is directed against a lawful military objective. The determination of a lawful military objective can often be difficult, requiring factual investigation and the exercise of judgment to make decisions based on competing information. This process of evaluating lawful military objectives is known as the principle of distinction.

Comments

Artificial Intelligence and Constitutionalism Chapter 7
