
AI | Are We Set To Upend The Fragile Balance Of Military Power?

  • Writer: Phillip Drane
  • Jun 23, 2024
  • 9 min read

Updated: Apr 14

How The Balance Of Military Power Was Established


The modern military complex emerged from the Industrial Revolution, and with it came the first modern arms race. The world powers of the time, driven by a fear of being left behind, created a landscape of shifting alliances and an insatiable hunger for resources, resulting in some of the bloodiest wars in history and a global tension that didn’t wind down until the end of the Cold War.


The relative global peace thereafter was achieved and maintained by three key factors.


The first was economic globalisation, a term that refers to the increasing interconnectedness of economies via trade, investment, and the flow of goods and services. Its adoption by the global community meant that every country had a stake in maintaining global peace. It also drove economic growth, allowing governments to better address the needs of their people and thereby reduce the societal friction that has historically been a flashpoint, triggering the rise of despotic regimes on either side of the political spectrum. As the threat of war receded, military expenditure decreased globally, and large-scale wars became less likely because countries were no longer equipped to fight them.


The second was the implementation of arms controls and regulations. Arguably the first truly significant step was the Strategic Arms Limitation Talks, which began in 1969 and sought to impose constraints on the nuclear and strategic weapons of the US and the USSR. The signing of the first SALT treaty in 1972 by Nixon and Brezhnev marked, for many historians, a turning point in the Cold War towards de-escalation.


The third, and perhaps most important considering our current geopolitical context, was the theory of mutually assured destruction (MAD). The term was coined by Donald Brennan in the 1960s as a rebuttal to the then US Secretary of Defence Robert McNamara's 'countervalue' doctrine, which expressly targeted Soviet cities and civilians.


Contrary to popular belief, Brennan coined the MAD acronym to express the madness of the two sides holding each other's civilian populations hostage. He believed that alternative ways forward, such as arms control, were needed.


This, however, doesn't discount the validity of MAD theory; the majority of academic strategists still consider it a legitimate position, as do countries such as Iran and North Korea, which hope that possession of such weapons will ensure their protection from potential Western intervention.


AI Upending MAD Theory And The Equilibrium Of Military Power


With the mechanics behind recent global peace laid out, it’s time to look at why it’s all probably going to fall apart. Well, into more parts than it's currently in, depending on when you read this.


The insentient culprit responsible for risking global peace, as you have probably already guessed from the title, is AI.


Not in the 'robotic panini maker rejecting your request and throwing your sandwich back at you' way, but in the Oppenheimer, 'now I am become death, destroyer of worlds' way.



You see, brand new revolutionary technology, however well-intentioned, will inevitably be hijacked by the global military complex, which will then spark panic and a global arms race.


AI, The Military's New Mistress

 

Industry experts, Pentagon officials, and scientists are in consensus that within the next few years, the US will have fully autonomous lethal weapons at its disposal. This could be a good thing or a bad thing, depending on which side of the barrel you are on.


Most of the research and development programmes involving AI remain shrouded in secrecy. An exception is DARPA's AMASS, or Autonomous Multi-Domain Adaptive Swarm-of-Swarms, programme. It is thought to use swarm-of-swarms drone technology to conduct military operations and create area-denial bubbles, drawing on a comprehensive array of drone types that allow it to operate on land, at sea, and in the air. DARPA, in its announcement, hinted at the mission of the programme, stating: "This program will be experimentation and scenario-focused with a specific regional emphasis." Reading between the lines, that is probably a reference to Taiwan and a potential invasion by China, which is likely why some details of the programme have been announced at all.
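
DARPA has not published the technical guts of AMASS, so to make the 'swarm of swarms' idea concrete, here is a deliberately toy sketch: a handful of hypothetical sub-swarms, each tied to a domain, greedily matched to objectives inside a denial zone. The sub-swarm names, the domains, and the greedy assignment are my own illustrative assumptions, not anything from the actual programme.

```python
# Toy illustration of "swarm of swarms" tasking: several sub-swarms, each in its
# own domain, are assigned to objectives inside an area-denial zone.
# This is a hypothetical sketch, NOT the (non-public) AMASS design.
from dataclasses import dataclass
from math import dist

@dataclass
class SubSwarm:
    name: str
    domain: str          # "air", "sea", or "land"
    position: tuple      # (x, y) in arbitrary units
    drones: int

@dataclass
class Objective:
    name: str
    domain: str
    position: tuple

def assign(sub_swarms, objectives):
    """Greedy assignment: each objective gets the nearest unassigned sub-swarm
    operating in the matching domain."""
    assignments, busy = {}, set()
    for obj in objectives:
        candidates = [s for s in sub_swarms
                      if s.domain == obj.domain and s.name not in busy]
        if not candidates:
            assignments[obj.name] = None        # no suitable sub-swarm left
            continue
        chosen = min(candidates, key=lambda s: dist(s.position, obj.position))
        busy.add(chosen.name)
        assignments[obj.name] = chosen.name
    return assignments

swarms = [SubSwarm("alpha", "air", (0, 0), 40),
          SubSwarm("bravo", "sea", (5, 2), 25),
          SubSwarm("charlie", "land", (1, 8), 60)]
targets = [Objective("radar-site", "land", (2, 9)),
           Objective("patrol-boat", "sea", (6, 1)),
           Objective("airstrip", "air", (3, 3))]

print(assign(swarms, targets))
# {'radar-site': 'charlie', 'patrol-boat': 'bravo', 'airstrip': 'alpha'}
```

The real engineering problem, of course, is everything this sketch leaves out: contested communications, attrition, and coordination between sub-swarms rather than a single central planner.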


Outside of lethal weapons, the advanced militaries of the world are implementing AI in command-and-control infrastructure, logistics, targeting systems, surveillance, and, more worryingly, space. In other words, we are now officially in an arms race and likely a new Cold War.



He Is More Than A Man, He’s A Shiny Coding God!


But in terms of access to AI capabilities by the average Joe, how worried should we be? After all, we have had computers for over half a century, so why should it all now be destined to go to pot?


The short answer is that people are, both fortunately and unfortunately, stupid. And that, thankfully, has capped their ability to play out a supervillain arc. Until now.


Several AI models can already generate computer code from natural-language prompts or by learning from existing code. In an analysis conducted by the Alphabet-owned DeepMind lab, which pitted its AlphaCode model against human programmers, the software was found to perform on par with 'a novice programmer with a few months to a year of training'. And that is right now; give it five to ten years and the capabilities of the technology will be mind-blowing, but also unquestionably terrifying, particularly once the average human gains access and becomes a coding god.
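
To make 'code from a natural-language prompt' concrete, here is a minimal sketch using the OpenAI Python client. The model name and the prompt are placeholders, and this is not how DeepMind evaluated AlphaCode; it is just the general shape of the workflow that is now available to anyone with an API key.

```python
# Minimal sketch of natural-language-to-code generation via a hosted LLM.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# the model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

prompt = ("Write a Python function that takes a list of integers and returns "
          "the longest strictly increasing run as a list.")

response = client.chat.completions.create(
    model="gpt-4o-mini",          # placeholder model name
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)  # the generated code, as plain text
```

The point is not that this snippet is dangerous in itself; it's that the same few lines work just as well when the prompt describes something far less innocent.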


I mean, think about it. When the software reaches that level, how long will it be before someone irresponsibly hacks into government systems to find out what's in Area 51? Or tries to get unlimited store credit by hacking into Amazon?


Internet trolls are a growing issue now. Give them the power of a million computers, and you will have the Wild West. If Twitter/X has taught us anything, it’s that people can’t be responsible on the internet.


And that's just considering the human beings at the lower end of average. Cybercriminals responsible for identity theft, hacking, and ransomware, or just straight-up crazy people, will be handed a figurative Excalibur. But unlike King Arthur, their intentions will not be noble.



The Risks Of Overdependency On AI


The last few decades have shown us that technology and people mix together about as well as crack and trauma. Companies are already pushing to further 'augment' the lived human experience with technology, and in cases like the metaverse, to replace it altogether.


Aside from the obvious impacts on mental health and societal cohesion, the issue is far more troubling when viewed through the lens of national security.


Driverless cars are billed to revolutionise the automotive experience, with market research projecting that they will reach the mainstream by around 2040. The technology is expected to reduce the number of cars on the road and, consequently, the amount of greenhouse gases produced. It will also wipe out the jobs of everyone employed as a driver and in other related sectors.

 

The exact communications system driverless cars will use has not yet been settled, but the current consensus is that it will rely on 5G infrastructure, which is expected to have been fully rolled out by then.

 

And therein lies the risk: whichever communication system is implemented will be hackable, all the more so when the hacker is assisted by AI models of the kind adversarial foreign governments are, in all likelihood, already developing.
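
To make that attack surface concrete, here is a hypothetical sketch of a vehicle-side command handler: one version blindly executes whatever arrives over the network, the other only accepts commands carrying a valid signature. The message format, shared key, and commands are illustrative assumptions; real vehicle-to-everything stacks use certificate-based schemes rather than a single shared key.

```python
# Hypothetical sketch of a vehicle-side command handler.
# The message format, shared key, and commands are illustrative assumptions;
# production systems use per-device certificates, not one shared HMAC key.
import hmac, hashlib, json

SHARED_KEY = b"demo-key-provisioned-at-manufacture"   # illustrative only

def handle_naive(raw: bytes):
    """Insecure: executes whatever arrives on the network."""
    cmd = json.loads(raw)
    return f"executing {cmd['action']}"                # attacker-controlled

def handle_authenticated(raw: bytes, tag: bytes):
    """Safer: only executes commands carrying a valid HMAC over the payload."""
    expected = hmac.new(SHARED_KEY, raw, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return "rejected: bad signature"
    cmd = json.loads(raw)
    return f"executing {cmd['action']}"

msg = json.dumps({"action": "reroute", "to": "depot"}).encode()
good_tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

print(handle_naive(b'{"action": "block_highway_1"}'))   # accepted blindly
print(handle_authenticated(msg, good_tag))              # accepted
print(handle_authenticated(msg, b"\x00" * 32))          # rejected
```

Even the 'safer' version only holds up if the keys, the update channel, and the cloud backend behind it are themselves secure, which is precisely the wholesale dependency the rest of this section worries about.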

 

To give you a hypothetical scenario, imagine a future in which a foreign power like China invades Taiwan. Currently, they would have to deploy their ships, gain air superiority, hit key targets, create a beachhead, and land their troops. In other words, a process that gives Taiwan enough time to prepare and contest the invasion.


But let's say China hacked into the driverless car network in Taiwan prior to the attack. They could manipulate these driverless cars to create chaos. China could block key roads to impede the movement of the Taiwanese military, thus hindering their ability to deploy artillery, troops, air defenses, and so on. They could target key installations and civilians to generate panic in the population, thereby enabling the Chinese to overpower the Taiwanese in a blitzkrieg-style attack that would leave little time for a response.

 

And that’s just one technology. If we do get to the stage of flying vehicles controlled by AI systems, or other technologies that could be hacked, you start to see the problem.

 

Now, Western governments - such as the UK - have committed to removing Chinese companies from their 5G networks, albeit for different reasons: security risks surrounding espionage. However, this step doesn’t address the risks involved in moving wholesale into hackable infrastructure. And as of the time of writing, there has been little to no government action across the world to prepare for the changes ahead.


Building Atop A Volcano

 

Now, of course, there remains the possibility that none of the terrible things I have mentioned will come to pass, and AI won't upend the global balance of military power.


Governments may legislate against big technology, hand back the “gifts” accrued from lobbying, and think of the people and the future. They may open dialogue with other nations and try to generate global legislation to develop this technology in a way that is transparent and controlled. They may work together and create arms controls to avert a new Cold War. They could see the risk of non-air-gapped systems being deployed into society and address it.


People could, of course, change their consumer habits to minimise the impact. They could throw out their smartphones, refuse to buy automated cars, and turn away from AI systems in domestic life. But let's face it: if they couldn't change over sweatshops, slave labour, and animal rights, I am fairly confident they aren't going to do it now. You could argue it is in their own interest, but they sold their souls to the likes of Zuckerberg and Bezos a long time ago.


So where does that leave us? Well, I think The Ink Spots said it best: “It’s All Over But The Crying.”


Citations:

  1. Kollias, C., & Paleologou, S.-M. (2017). The globalization and peace nexus: Findings using two composite indices. Social Indicators Research, 131, 871–885. https://doi.org/10.1007/s11205-016-1293-6

    > The source was used to provide empirical backing for the article’s claim that greater globalization is linked to enhanced peace. Specifically, the study’s quantitative analysis—using composite indices to measure both globalization and peace—was cited to support the argument that increased international integration correlates with a reduced likelihood of conflict.

  2. U.S. Department of State, Office of the Historian. (n.d.). Milestones in the history of U.S. foreign relations: Strategic Arms Limitations Talks/Treaty (SALT) I and II. Retrieved April 13, 2025, from https://history.state.gov/milestones/1969-1976/salt

    > The U.S. Department of State page was used to provide authoritative historical context for the Strategic Arms Limitation Talks, detailing key negotiations and treaty outcomes that defined U.S.-Soviet arms control during the Cold War.

  3. Encyclopaedia Britannica. (2025, March 19). Mutual assured destruction (MAD): Definition, history, & Cold War. Retrieved April 13, 2025, from https://www.britannica.com/topic/mutual-assured-destruction#ref345158

    > The Britannica page was used to define mutual assured destruction (MAD) and offer key historical context on its evolution during the Cold War. It supported the article’s explanation of how the doctrine of MAD shaped nuclear deterrence by ensuring that any nuclear attack would trigger a catastrophic retaliatory response, thereby maintaining strategic balance.

  4. Grier, P. (2001, November 1). In the shadow of MAD. Air & Space Forces Magazine. https://www.airandspaceforces.com/article/1101mad/

    > The Air & Space Forces Magazine article "In the Shadow of MAD" was used to detail the origins and critical analysis of Mutual Assured Destruction (MAD), particularly highlighting Donald Brennan’s role in coining the term and framing the strategy. This historical context provides a foundation for understanding MAD's impact on nuclear deterrence and strategic military thinking.

  5. Lebow, R. N., & Stein, J. G. (1995). Deterrence and the Cold War. Political Science Quarterly, 110(2), 157–181. https://doi.org/10.2307/2152358

    > The JSTOR source "Deterrence and the Cold War" was used to provide a scholarly, empirical analysis of nuclear deterrence strategies during the Cold War, substantiating points about the inherent risks and strategic dynamics of mutually assured destruction in U.S.-Soviet relations.

  6. McMillan, T. (2023, February 3). Pentagon secretly working to unleash massive swarms of autonomous multi-domain drones to dominate enemy defenses. The Debrief. Retrieved April 13, 2025, from https://thedebrief.org/pentagon-secretly-working-to-unleash-massive-swarms-of-autonomous-multi-domain-drones-to-dominate-enemy-defenses/

    > The article uses the source to directly detail the Pentagon’s secret DARPA-led program—AMASS—for developing large swarms of autonomous drones designed to overwhelm enemy defenses, thereby substantiating claims about emerging AI-driven military technologies and tactics.

  7. Bajak, F. (2023, November 25). Pentagon’s AI initiatives accelerate decisions on lethal autonomous weapons. AP News. https://apnews.com/article/us-military-ai-projects-0773b4937801e7a0573f44b57a9a5942

    > The AP source was used to verify and substantiate the article’s claims about how the Pentagon is advancing the development of AI-driven, fully autonomous lethal weapons. It provided concrete, up-to-date evidence—such as details on programs like DARPA’s AMASS—to support the discussion on how military applications of AI are rapidly evolving.

  8. Marr, B. (2024, June 7). Generative AI can write computer code. Will we still need software developers? Forbes. https://www.forbes.com/sites/bernardmarr/2024/06/07/generative-ai-can-write-computer-codewill-we-still-need-software-developers/

    > Bernard Marr’s Forbes article is used to illustrate how generative AI converts natural language prompts into executable code, thereby streamlining routine software development tasks. It provides concrete examples and empirical observations that underscore AI’s ability to expedite coding—while still emphasizing that human expertise is essential for architectural design and complex problem-solving.

  9. Business Wire. (2024, January 3). Global Autonomous Car in 5G Era Research Study 2023: 5G Era Propels Driverless Cars into the Mainstream by 2040 [Press release]. Business Wire. https://www.businesswire.com/news/home/20240103788137/en/Global-Autonomous-Car-in-5G-Era-Research-Study-2023-5G-Era-Propels-Driverless-Cars-into-the-Mainstream-by-2040---ResearchAndMarkets.com

    > The Business Wire press release was used to substantiate the article’s claim that 5G technology is a critical enabler for autonomous vehicles, providing market research that projects driverless cars will become mainstream by 2040. It offers empirical evidence on the industry’s standardization efforts, technical advancements, and economic impacts, reinforcing the argument that robust 5G infrastructure underpins the shift toward automated transportation.

  10. Department for Digital, Culture, Media and Sport, National Cyber Security Centre, & The Rt. Hon. Oliver Dowden, CBE MP. (2020, July 14). Huawei to be removed from UK 5G networks by 2027 [Press release]. GOV.UK. https://www.gov.uk/government/news/huawei-to-be-removed-from-uk-5g-networks-by-2027

    > The press release is used in the article as an official source to substantiate the UK government’s decision to remove Huawei equipment from 5G networks by 2027. It provides concrete details about the security risks identified by the National Cyber Security Centre—specifically stemming from US sanctions impacting Huawei’s supply chain—thereby lending authoritative, documented support to the article’s discussion on national security and telecom infrastructure concerns.



