As Artificial Intelligence and Autonomous Systems proliferate, we must be prepared for what is to come.
“I’m certainly questioning my original premise that the fundamental nature of war will not change. You’ve got to question that now. I just don’t have the answers yet,”
US Secretary of Defense Jim Mattis, on the impact of intelligent machines
The Chief of Army’s Futures Statement, Accelerated Warfare, discusses the link between technology and the rapidly changing character of war. Specifically, Artificial Intelligence (AI) and autonomous systems are two technologies that may provide an asymmetric advantage to the Australian Army, which is relatively small in terms of equipment and personnel. Current discussion of these topics is often clouded by hype, emotion and references to popular culture; moreover, most discourse is framed from isolated perspectives such as technology, strategy or ethics. To prepare intellectually for the debate around the development and employment of these technologies, Army, as well as the broader Australian Defence Force, needs to build an interdisciplinary understanding of at least three fields of study.
Firstly, Army must understand the technology. This involves answering the question of “what” AI and autonomous systems are, and what they can do. A common lexicon is necessary if people are to talk to each other (rather than past or at each other) about this contentious and nascent field of study. Most of what the general population understands about AI and autonomy comes from the media or from popular culture; both may have contributed to the current hype and emotion around the topic. Understanding some of the various definitions of AI, what some machine learning methods are, and what the technology can do now and may be able to do in the future forms the basis for investigating potential uses.
The second question is one of strategy; that is, answering “how” and “why” Army might intend to use AI and autonomous systems. For example, autonomous systems might mitigate the relatively small size of the Australian Army compared to that of an adversary’s land force. Without a long-term strategic end-state, however, procuring these systems may produce tactically excellent and exquisite AI-enabled autonomous weapon systems, such as autonomous tanks, but lead to the same result as Hitler’s reliance on Wunderwaffen in lieu of strategy. Once Army understands why it needs these capabilities and how it might use them, it can then explore whether they should be used at all.
The third question Army should ask about the use of AI and autonomous systems is one of ethics: even if there is a valid strategy that would benefit from the use of these systems, should they be pursued? A precursor to answering this question is a basic understanding of the ethical frameworks that may be used to assess the employment of emergent technologies by the military in war and peacetime. A deontological ethical perspective assesses actions against a set of rules or moral duties in order to judge them as right or wrong. For example, the Campaign to Stop Killer Robots states that “allowing life or death decisions to be made by machines crosses a fundamental moral line”. Utilitarianism aims to achieve the greatest good for the greatest number; under this framework, Army may be ethically obligated to use an AI-enabled autonomous weapon system if it reliably caused less collateral damage than weapon systems involving human decision-makers. Just War Theory, particularly its jus in bello component, outlines a number of principles that should guide ethical conduct in war. These include discrimination, to avoid targeting non-combatants; proportionality, in the assessment of collateral damage; and the prohibition of means considered malum in se (that is, universally deemed evil), such as mass rape, torture and biological agents. Paul Scharre’s book Army of None discusses the possibility that machines could be programmed to adhere to these principles, and that their lack of emotion would remove any motivation to violate the rules.
Army can begin to develop a broad interdisciplinary understanding of AI and autonomy through a number of methods. Unit-level professional development programs and ad hoc professional development events such as the ASPI AI Masterclass provide good introductory discussion opportunities. These, however, must be underpinned by structured education, such as long-term schooling serials and military curricula delivered through corps or non-corps training, to enable Officers and Warrant Officers to intelligently lead or coordinate such activities.
A broad understanding of the technological, strategic and ethical principles relevant to the potential employment of AI and Autonomous Systems will set the foundations for an informed discussion not only within Army, but also within the wider Australian Defence Force, society and Parliament.
Only once Army understands what AI and autonomous systems are, and why and how they may be used, can we begin to discuss whether they should be used in a military context at all.
About the Author: Major Daniel Lee has served in the Australian Army since 1999 initially as a rifleman in the Army Reserve, and then in the ARA in the Royal Australian Signals Corps from 2005. He has deployed to East Timor, Afghanistan and the Middle East Region and has held command roles as a Signals Troop Commander and as Squadron Commander of 105 Signals Squadron.
He holds a Bachelor’s degree in Pharmacy from the University of Queensland and Master’s degrees in Strategy, Security and Defence Studies from UNSW and ANU.
The views expressed in this article and subsequent comments are those of the author(s) and do not necessarily reflect the official policy or position of the Australian Army, the Department of Defence or the Australian Government.