AI is already changing defence planning. Can oversight mechanisms keep pace?

Militaries are adopting AI at speed; the real test will be keeping decisions accountable in an age of autonomous operations.

 

FUTURE PROOF – BLOG BY FUTURES PLATFORM


Artificial intelligence is already helping militaries determine which threats to monitor, where to allocate supplies, and which issues require a commander’s immediate attention. Global military AI spending is climbing at double-digit rates. If this momentum continues, political and military decision-making may soon operate at speeds and levels of autonomy that current oversight frameworks are ill-equipped to handle.


GET THE FULL REPORT

For deeper analysis on the future of security and defence, get the complete Future of Defence and Security market intelligence and foresight report.



 

Market Data: Artificial Intelligence in Military

According to SNS Insider, the global AI in Military market is projected to reach USD 35.2 billion by 2031, growing at a CAGR of 19.25% from 2023 to 2031.

Artificial Intelligence in Military Market: World Forecast 2024-2031
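
As a quick sanity check on the headline figures, the sketch below works backwards from the projected 2031 value and the stated CAGR to the implied 2023 baseline, using the standard compound-growth relation. The baseline value is our own inference, not a figure reported by SNS Insider.

```python
# Back-of-the-envelope check of the SNS Insider projection:
# value_2031 = value_2023 * (1 + CAGR) ** years

target_2031 = 35.2   # USD billion, projected 2031 market size
cagr = 0.1925        # 19.25% compound annual growth rate
years = 2031 - 2023  # 8-year forecast window

# Implied 2023 baseline (our inference, not a reported figure)
implied_2023 = target_2031 / (1 + cagr) ** years
print(f"Implied 2023 market size: ~USD {implied_2023:.1f} billion")
# -> roughly USD 8.6 billion
```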

Funding is concentrated in several key domains:

  • Information processing: Analyses vast streams of battlefield and intelligence data to give commanders a clear, timely operational picture.

  • Cybersecurity: Detects, blocks, and responds to digital intrusions or disinformation that could disrupt military networks.

  • Unmanned aircraft systems: Operates drones for surveillance and precision strikes without placing pilots at risk.

  • Logistics: Plans supply routes, predicts equipment failures, and reallocates resources to keep operations running in contested environments. 

Zooming Out: The Future of AI-Assisted Defence Structures

Some shifts are easy to foresee. Decision cycles will shorten. Logistics networks will adapt in near-real time. Autonomous systems will take on more surveillance and strike roles, freeing humans for higher-level planning.

Other changes will be harder to anticipate and more complex to govern. AI could put advanced capabilities into the hands of smaller states, non-state actors, and even private companies. Chains of command may blend human judgment with algorithmic decision models in ways that obscure responsibility. And even political leadership could change if AI becomes a formal adviser in national security decisions.

Below is a selection of four future phenomena that illustrate different ways AI could reshape defence structures, from trends already emerging in today’s conflicts to low-probability, high-impact wild cards that could disrupt the balance of power.

FUTURE PHENOMENON 1

Uberisation of Warfare & Digitalised Battlefields 

Strengthening Trend

Battle management is shifting toward decentralised, algorithm-driven allocation of assets. We’ve already seen this play out in Ukraine, where GIS Arta software assigns fire missions in seconds based on live data from drones, artillery, and reconnaissance units.
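To make the idea concrete, here is a deliberately simplified sketch of algorithm-driven asset allocation: each incoming target is matched to the nearest available firing unit within range. This is a toy greedy heuristic of our own, not a description of how GIS Arta actually works — its internals are not public — and real battle-management systems weigh far more factors and re-plan continuously.

```python
import math
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    x: float
    y: float
    range_km: float
    available: bool = True

def assign_fire_missions(targets, units):
    """Toy greedy allocator: each target goes to the nearest available
    unit that can reach it. Targets no unit can serve are flagged for
    human handling rather than dropped silently."""
    assignments = []
    for tx, ty in targets:
        best, best_dist = None, float("inf")
        for unit in units:
            if not unit.available:
                continue
            dist = math.hypot(unit.x - tx, unit.y - ty)
            if dist <= unit.range_km and dist < best_dist:
                best, best_dist = unit, dist
        if best:
            best.available = False  # one mission per unit per cycle
            assignments.append((best.name, (tx, ty), round(best_dist, 1)))
        else:
            assignments.append((None, (tx, ty), None))  # escalate to a human
    return assignments

units = [Unit("Battery-A", 0, 0, 30), Unit("Battery-B", 40, 5, 25)]
targets = [(10, 12), (55, 10), (90, 90)]
print(assign_fire_missions(targets, units))
```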

The benefit is speed and adaptability. The risk is opaque reasoning and failure under novel conditions. Highly connected systems can multiply the points of failure. A disrupted network in a digitalised battlefield could paralyse a force faster than any conventional strike. The compression of targeting cycles – from hours to minutes – may also reduce opportunities for diplomatic intervention and de-escalation. As decision-making accelerates, so too must political and ethical responses.

States deploying such systems will need to consider how they will interact with the ambiguous, unpredictable conditions that define many real-world conflicts. That includes identifying failure modes, testing under edge cases, and establishing clear thresholds for human reassertion of control.
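One concrete pattern for “thresholds for human reassertion of control” is confidence gating: the system acts autonomously only when its own uncertainty is low, and otherwise routes the decision to an operator. The sketch below is a minimal illustration of that pattern; the threshold values are arbitrary placeholders, not doctrine.

```python
AUTO_THRESHOLD = 0.95    # act without sign-off above this confidence
REVIEW_THRESHOLD = 0.70  # below this, refuse outright and alert a human
# (placeholder values; in practice they would be set per mission type
# through testing, legal review, and doctrine)

def route_decision(action, confidence):
    """Gate an AI-proposed action on the model's reported confidence."""
    if confidence >= AUTO_THRESHOLD:
        return f"EXECUTE {action} (logged for post-hoc audit)"
    if confidence >= REVIEW_THRESHOLD:
        return f"HOLD {action}: queued for human sign-off"
    return f"REJECT {action}: confidence too low, operator alerted"

print(route_decision("reroute-convoy", 0.97))
print(route_decision("engage-target", 0.82))
print(route_decision("engage-target", 0.40))
```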

FUTURE PHENOMENON 2

Data Poisoning

Strengthening Trend

AI is only as good as the data it learns from. Corrupt that data, whether deliberately or by accident, and the system’s decisions shift in ways that may be hard to spot. A targeting algorithm might misclassify an ally as a threat, or ignore a real threat because it no longer matches the model’s established threat profile.

Future attacks on defence AI may not focus on frontline systems at all. They could target the civilian datasets used to pre-train military models, embedding bias or error years before the system sees combat. The result could be a slow, silent distortion of the decision-making process — arguably a more insidious threat than the GenAI risks currently on most organisations’ radar. Protecting against this means treating data integrity like any other critical national asset, with constant auditing and redundancy built in.
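Treating training data as a critical asset starts with basics such as provenance tracking and tamper detection. The sketch below shows one such basic: hashing each dataset file and comparing it against a previously recorded manifest, so any silent modification is caught before retraining. It is a minimal illustration under assumed file paths, not a full supply-chain defence.

```python
import hashlib
import json
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit_dataset(data_dir: str, manifest_path: str) -> list[str]:
    """Compare every file's hash against a recorded manifest.
    Returns the files whose contents no longer match."""
    manifest = json.loads(Path(manifest_path).read_text())
    tampered = []
    for rel_path, expected in manifest.items():
        if file_sha256(Path(data_dir) / rel_path) != expected:
            tampered.append(rel_path)
    return tampered

# Hypothetical usage: fail the training pipeline on any mismatch.
# suspicious = audit_dataset("datasets/pretraining", "manifest.json")
# if suspicious:
#     raise RuntimeError(f"Possible data poisoning: {suspicious}")
```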

FUTURE PHENOMENON 3

Self-Replicating AI

Wild Card

In theory, self-replicating AI improves resilience. If a network is damaged, the system can copy itself into new nodes, restoring critical capabilities after a cyberattack. But the same capability also opens the door to unpredictable propagation.

A rogue system might multiply into an unregulated population of AI agents, acting collectively to preserve their existence. Such agents could, in theory, manipulate public opinion, disrupt critical infrastructure, or enable autonomous weaponisation without direct human command.

Containing these systems would become increasingly difficult as their capabilities expand. Experts warn that national safeguards will not be enough; the nature of the threat demands international coordination. Building effective safety frameworks should be treated as a global priority, designed to anticipate and collectively mitigate the risks before they manifest in conflict.

FUTURE PHENOMENON 4

AI Presidents and State Leaders

Wild Card

It sounds like science fiction, but proposals for AI leadership roles are already surfacing – not as sole rulers, but as decision-support systems embedded in executive offices or national security councils. In defence, that could mean AI influencing rules of engagement and crisis response.

The geopolitical implications are profound. Allies and rivals will have to decide whether they trust strategic decisions made – or even co-made – by algorithms. International law, diplomatic protocol, and crisis management will all have to adapt. If AI leadership emerges unevenly across states, it could alter alliance structures and create asymmetries in decision speed and style.

The Real Risk Is Systems Outpacing Doctrine

Most defence and security institutions are structured around the assumption that humans are the central decision-makers. As AI-enabled systems take on operational roles, legacy command structures face a mismatch: the technical capability exists to act autonomously, but the institutional mechanisms to monitor, audit, and intervene are still limited.

Without coordinated governance, alliances could fracture not from political disagreements, but from incompatible and unpredictable system behaviour.

The key questions now are:

  • Where should AI never act without human sign-off?

  • How can oversight mechanisms be designed to work in real time?

  • How will allied systems be tested for compatibility and predictability?

  • What transparency obligations apply to contractors building defence AI?

  • How will algorithmic errors be investigated – and by whom?

  • Who is accountable when AI-driven decisions cause unintended outcomes?


GET THE FULL REPORT

For the full analysis of emerging trends and risks in defence and security, access the complete Future of Defence and Security market intelligence and foresight report.


 
