
Joint Studies Paper Series

Fighting Artificial Intelligence Battles: Operational Concepts for Future AI-Enabled Wars

JSPS No. 4

January 2021

Peter Layton

You may attribute this work as follows: Peter Layton, 'Fighting Artificial Intelligence Battles: Operational Concepts for Future AI-Enabled Wars', Joint Studies Paper Series, No. 4 (Canberra: Department of Defence, 2021), https://doi.org/10.51174/JPS.004

In recent years, militaries around the world have recognised the need to rapidly increase investments in artificial intelligence (AI) technologies and to examine the potential ways they may be employed in future warfighting. Over time, AI will likely infuse most military equipment and enable the battlespace. Yet, there is still much to be determined in terms of the application and management of AI as a military capability.

AI machine learning has enormous potential for enhancing efficiency, quickly identifying patterns and detecting items within very large data troves. However, AI also has known weaknesses, including a lack of robustness, and, to be effective, it must still be carefully teamed with humans.

In this paper, Dr Peter Layton considers these issues and proposes operational level defensive and offensive concepts for an AI-enabled battlespace. He then explores how these concepts may be applied to the traditional sea, land and air domains. The intent is to stimulate discussion and new ways of thinking about how AI may be employed in the future and how to begin preparing for that future now.

Extract

Artificial intelligence (AI) technology has suddenly become important to military forces. The United States Department of Defense (US DoD) has increased investments in AI from some $600 million in 2016–17 to $2.5 billion in 2021–22, sprawling across over 600 projects.[1] China has adopted a 'Next Generation Artificial Intelligence Development Plan' that aims to make the country the pre-eminent nation in AI by 2030 and to shift the People's Liberation Army (PLA) from an 'informatized' way of war to 'intelligentized warfare'.[2] Even more dramatically, Russia's President has declared that 'artificial intelligence is the future... whoever becomes the leader in this sphere will become the ruler of the world'.[3] These high-level initiatives and grand statements are starting to produce outcomes.

In the United States (US), the United States Navy's (USN) Sea Hunter uncrewed surface vessel (USV) has sailed without a crew from California to Hawaii and back again, navigating by AI using data from the vessel's onboard sensors, radars and cameras.[4] Meanwhile, under the aegis of the US Defense Advanced Research Projects Agency (DARPA), an AI-powered simulated F-16 fighter aircraft recently and comprehensively defeated a comparable simulation flown by a very experienced human pilot in multiple close-in air combat events.[5] In a similar evaluation examining land warfare, the United States Army (US Army) determined that an AI-enabled force has some 10 times more combat power than a comparable force without AI.[6]

In China, the PLA is now applying AI to improve the speed and accuracy of its battlefield decision-making by automating command and control systems, developing predictive operational planning and addressing intelligence, surveillance and reconnaissance data fusion challenges. The PLA has also begun trialling AI-enabled USVs for potential use in the South China Sea and experimenting with uncrewed tanks, while a private Chinese company has publicly exhibited AI-enabled, armed, swarming drones.[7]

Russia lags the US and China, but is now implementing a national AI strategy to catch up.[8] In the military domain, Russia has several lines of effort underway. A major line focuses on applying AI to information operations, both tactically in waging psychological warfare and strategically in damaging adversary nations' social cohesion. Another line uses AI to improve the effectiveness of land combat operations through developing uncrewed ground vehicles (UGVs), remote sensors, tactical command and control systems, and uncrewed aerial vehicles (UAVs). A further line of effort is the automation of the command and control systems in the national air defence network.[9]

The initial indications are that AI might be a very significant technology in future wars, but uncertainties remain. While widely used in the civil domain, particularly in consumer products, AI is only just nearing operational deployment in the military environment. Moreover, it remains unproven in the hard testing ground of real combat operations. Even so, AI has become a technology that cannot be ignored by military forces considering their future.

Importantly, the AI technology that is available for the foreseeable future is narrow, not general. Narrow AI equals or exceeds human intelligence for specific tasks within a particular domain; its utility is context dependent. In contrast, general AI equals the full range of human performance for any task in any domain. When general AI might be achieved remains debatable, but it appears to be several decades away.[10] The global military interest for the near-to-medium term is in how narrow AI technologies could be employed on the modern battlefield.

Unsurprisingly, AI definitions tend to draw on parallels with human intelligence. For example, the 2018 US DoD AI strategy defines AI as 'the ability of machines to perform tasks that normally require human intelligence...'[11] Such understandings anthropomorphise technology and can unintentionally constrain thinking about AI employment to only those tasks that can be performed by humans.

In some applications, AI may do more – or less – than a human. The Venn diagrams of AI and human capabilities may overlap in some areas, but it is misleading to suggest they coincide. AI may be intelligent in the sense that it provides problem-solving insights, but it is artificial and, consequently, thinks in ways humans do not.

Accordingly, this paper considers AI more by the broad functions such technology can perform than by its relationship to human capabilities. The Defense Innovation Board took this approach in 2019, defining AI as 'a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in pursuit of that task'.[12]

At first glance, the definition appears imprecise in not including the tasks AI might actually perform for military or civilian purposes. This vagueness, though, is a key attribute of contemporary AI applications. AI can be applied in multifarious ways and may be considered a general-purpose technology that is pervasive across society.[13] An earlier example of a general-purpose technology is electricity, now so widely used that its continual presence and use is, to all intents and purposes, simply assumed.[14] Electricity enlivens inert machines and so, in its own way, will AI, by providing them with the ability to achieve tasks through reasoning. AI appears set to infuse many, if not most, military machines; thus, the future battlefield will inevitably be AI-enabled in some way.

To achieve battlefield dominance over their opponents, military forces continually seek ever greater combat effectiveness. Traditionally, technology has been employed on the battlefield in an integrated manner that makes the best use of the strengths of humans and machines, while trying to minimise the effects of the weaknesses of both. AI seems likely to be similar. AI can be expected to be most effective when carefully teamed with humans, rather than in some independent mode.[15]

Such considerations underline that new technology does not in itself suddenly give a battlefield advantage; rather, the advantage lies in how humans employ it. A historical analysis of earlier technological innovations found that sound concepts guiding how to employ new technologies were the key to military forces bringing them into service successfully. Historians Williamson Murray and Allan Millett observed that:

The evidence points, first of all, to the importance of developing visions of the future. Military institutions not only need to make the initial intellectual investments to develop visions of future war, but they must continue agonising over such visions to discern how those wars might differ from previous conflicts... [In this] any vision of future war is almost certain to be vague and incomplete rather than detailed and precise, much less predictive in any scientific sense. Vision, however, is not enough to produce successful innovation. One's view of future conflict must also be balanced and well connected to operational realities.[16]

The linkage to the gritty realities of war is strongest at the tactical level. Strategy sets out the objectives, the general approach and the forces to use, but it is tactics that handles these forces in battle against an intelligent and adaptive adversary. While success in battle may not lead to strategic success, as the US war in Vietnam illustrates, the converse is not true: a good strategy cannot succeed in the face of continuing tactical defeat. Clausewitz writes that 'Everything turn[s] on tactical results...[t]hat is why we think it useful to emphasize that all strategic planning rests on tactical success alone... this is in all cases the actual fundamental basis of the decision'.[17] Tactics are generally considered to involve the distribution and manoeuvre of friendly forces in relation to each other and to the enemy, and the employment of these forces on the battlefield.[18]

This paper draws these threads together, aiming to develop operational concepts for the employment of human-machine teams on the future AI-enabled battlefield. Such a battlefield, especially when expanded beyond land warfare to include air and naval warfare, will have a mix of linear and deep aspects featuring both attrition and manoeuvre concepts.[19] Devising these operational concepts will provide a broad vision of how potential narrow AI systems might be used at the tactical and operational levels of war.

Initially, the paper discusses the various technical elements that combine to create the AI technology package. These include advanced computer processing and big data, together with specific aspects related to cloud computing and the Internet of Things (IoT). The second chapter examines waging war using AI and develops generic operational concepts for defence and offence. These concepts are located at the blurred interface between the operational and tactical levels and concern the distribution and manoeuvre of friendly forces relative to the adversary, and the employment of friendly forces on the battlefield.

Chapters 3, 4 and 5 apply the two generic concepts of AI-enabled defence and offence to the sea, land and air domains, respectively. Combat in each domain is sufficiently different in terms of distributing and manoeuvring friendly forces and engaging the enemy to necessitate individual AI employment concepts. No single employment concept can adequately encompass all three domains except at such a high level of abstraction that understanding the implications becomes difficult. Suggesting such forward-leaning concepts may seem to verge on speculative fiction. To avoid this, each concept is deliberately grounded in contemporary operational thinking, and current and emerging AI-enabled sea, land and air platforms and systems are discussed to illustrate the ideas advanced.

The intent in devising these operational concepts is to stimulate thought and initiate vigorous debate about the future and how to prepare for it. The operational concepts presented in this paper are intended to be a basis for arguing over the practicalities, possibilities and usefulness of alternative, AI-enabled battlefield concepts. It is only through the dialectical process of critically analysing proposals and continually reconstructing them for further analysis and evolution that progress towards an optimum operational concept can be made.

The concepts discussed in this paper are deliberately constrained in nature and scope. In terms of nature, the sea, land and air concepts are just that – to keep each concept focused, they are not joint or combined. Importantly, this narrowness means that some areas, such as Russia's use of AI in influence warfare or China's employment of AI in societal management and internal defence, are not included.[20] For similar reasons, each concept has a narrow scope, focused on warfighting with only limited attention to logistics and avoiding key areas such as education, training, administration and command and control. Notably, the new domains of cyber and space are not discussed except in terms of their relationship to tactical engagements in the traditional land, sea and air domains.

This paper takes AI and looks outward, relating this new technology to both operational ways of war and tactical employment options. With such a focus, the paper differs from the numerous AI strategies and plans that many armed forces have formulated. In general, these look inward, aiming to set out how AI as a technology will be researched, acquired and introduced into their specific service.[21] This paper aims to complement those AI technology strategies and plans, playing a small part in connecting them to the broader business of warfighting.


[1]. Daniel S. Hoadley and Kelley M. Sayler, Artificial Intelligence and National Security: Updated November 10, 2020 (Washington DC: Congressional Research Service, 2020), 2. https://crsreports.congress.gov/product/pdf/R/R45178/10

[2]. Office of the Secretary of Defense, Annual Report to Congress: Military and Security Developments Involving the People's Republic of China (Washington DC: Office of the Secretary of Defense, 2020), 16. https://media.defense.gov/2020/Sep/01/2002488689/-1/-1/1/2020-DOD-CHINA-MILITARY-POWER-REPORT-FINAL.PDF

[3]. President Putin quoted in Alina Polyakova, 'Weapons of the Weak: Russia and AI-driven Asymmetric Warfare', [Report], Artificial Intelligence and Emerging Technology Initiative, Brookings, published online 15 November 2018. https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/

[4]. Jurica Dujmovic, 'Drone Warship Sea Hunter of the U.S. Navy is Powered by Artificial Intelligence', MarketWatch, 3 July 2019. https://www.marketwatch.com/story/drone-warship-sea-hunter-of-the-us-navy-is-powered-by-artificial-intelligence-2019-07-03

[5]. Defense Advanced Research Projects Agency, AlphaDogfight Trials Foreshadow Future of Human-Machine Symbiosis (Washington: Defense Advanced Research Projects Agency, 26 August 2020). https://www.darpa.mil/news-events/2020-08-26

[6]. Sydney J. Freedberg Jr, 'AI & Robots Crush Foes in Army Wargame', Breaking Defense, 19 December 2019. https://breakingdefense.com/2019/12/ai-robots-crush-foes-in-army-wargame/

[7]. Office of the Secretary of Defense, Annual Report to Congress, 161, 142–143.

[8]. Nikolai Markotkin and Elena Chernenko, 'Developing Artificial Intelligence in Russia: Objectives and Reality', Carnegie Moscow Center, 8 May 2020. https://carnegie.ru/commentary/82422

[9]. Margarita Konaev and Samuel Bendett, 'Russian AI-Enabled Combat: Coming to a City Near You?', War on the Rocks, 31 July 2019. https://warontherocks.com/2019/07/russian-ai-enabled-combat-coming-to-a-city-near-you/

[10]. Ross Gruetzemacher, David Paradice and Kang Bok Lee, 'Forecasting Transformative AI: An Expert Survey', Computers and Society: Cornell University, 16 July 2019. https://arxiv.org/abs/1901.08579

[11]. The 2018 Department of Defense Strategy on Artificial Intelligence full definition of AI is 'the ability of machines to perform tasks that normally require human intelligence – for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action – whether digitally or as the smart software behind autonomous physical systems', Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity, published online 12 February 2019, United States of America Department of Defense, 5. https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF

[12]. Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense Supporting Document, November 2019, 10. https://media.defense.gov/2019/Oct/31/2002204459/-1/-1/0/DIB_AI_PRINCIPLES_SUPPORTING_DOCUMENT.PDF.pdf

[13]. Manuel Trajtenberg, AI as the Next GPT: A Political-Economy Perspective, NBER Working Paper No. 24245 (Cambridge: National Bureau of Economic Research, January 2018). https://www.nber.org/papers/w24245

[14]. Clifford Bekar, Kenneth Carlaw and Richard Lipsey, 'General Purpose Technologies in Theory, Application and Controversy: A Review', Journal of Evolutionary Economics 28, no. 5 (December 2017): 1005-1033, 1016-1017.

[15]. Peter Layton, Algorithmic Warfare: Applying Artificial Intelligence to Warfighting (Canberra: Air Power Development Centre, 26 March 2018), 24-30. https://airpower.airforce.gov.au/Publications/Algorithmic-Warfare-Applying-Artificial-Intelligen

[16]. Williamson Murray and Allan R. Millett (eds.), Military Innovation in the Interwar Period (Cambridge: Cambridge University Press, 1998), 406.

[17]. For Clausewitz, tactics involved the use of armed forces in the engagement, while strategy was the purposeful use of a series of engagements to achieve the war's objective. Carl von Clausewitz, On War, ed. and trans. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1984), 128, 386.

[18]. Wayne P. Hughes and Robert Girrier, Fleet Tactics and Naval Operations, 3rd ed. (Annapolis: Naval Institute Press, 2018), 2.

[19]. Linear battlefields are those where opposing forces meet along a line of contact. In contrast, deep battlefields are those where opposing forces attack across the depth of each other's forces. Sporting analogies can be used to illustrate the differences. A linear battlefield is like American football, where attacking and defending sides face one another on a fixed line of scrimmage. A deep battlefield is more akin to soccer, with opposing forces intermixed and moving fluidly across the entire field, where some on each side play offense and some play defence. Sean B. MacFarland, Non-Linear Operations: A New Doctrine for a New Era (Fort Leavenworth: School of Advanced Military Studies, 1994), 12. https://apps.dtic.mil/dtic/tr/fulltext/u2/a284137.pdf

[20]. Concerning Russia, see Layton, Algorithmic Warfare, 56-58. For China, see Peter Layton, 'Artificial intelligence, big data and autonomous systems along the belt and road: towards private security companies with Chinese characteristics?' Small Wars & Insurgencies 31, no. 4 (June 2020): 874-897.

[21]. An example is the Royal Australian Navy's RAS-AI Strategy 2040, released in October 2020. In this document, four lines of effort are set out to address 'many of the common challenges that RAS-AI adoption faces and key enablers that it will require. These include training and workforce transformation; research & development; and building collaborative partnerships with industry and allies to design and demonstrate RAS-AI capabilities'. Royal Australian Navy, RAS-AI Strategy 2040 (Canberra: Royal Australian Navy, October 2020). https://www.navy.gov.au/sites/default/files/documents/RAN_WIN_RASAI_Strategy_2040f2_hi.pdf


