By SUNIL GOYAL
As the global race to modernize military capabilities intensifies, Artificial Intelligence (AI) has become the centerpiece of technological warfare. From autonomous drones and smart missiles to AI-powered surveillance and cybersecurity, the battlefield of the future is increasingly shaped by algorithms and data.
However, behind this rush lies an uncomfortable truth: AI in defence software systems is still unpredictable, unreliable, and alarmingly vulnerable to external control.
Hackable Hardware: A National Security Time Bomb
Most AI-integrated defence equipment, whether fighter jets, radar systems, guided missiles or unmanned vehicles, depends on complex software and unique hardware codes. These devices, often connected to remote-control networks, can be operated or manipulated from far-off locations. This opens up a terrifying possibility: an enemy with advanced hacking tools or access to backdoors could disable, hijack or misdirect critical defence systems mid-operation.
There have already been classified incidents and reports, often undisclosed to the public, suggesting suspected cyber intrusions into active fighter aircraft systems, radar blinding using AI-generated electronic interference, and remotely jammed missile guidance systems. While governments deny such breaches for strategic reasons, experts confirm that AI-powered warfare has created new vulnerabilities.
AI Tools Are Under Foreign Government Control
Adding to the concern is the fact that most leading AI tools are privately developed yet effectively under the control of powerful nations. For instance, ChatGPT and other OpenAI models are governed by U.S. laws and policies; similarly, China’s DeepSeek models are tightly linked to its national defence strategy.
This digital dependence on foreign AI poses a direct threat to sovereign defence operations. If any defence software or communication channel relies on such tools, directly or indirectly, it risks exposure to surveillance, espionage or sabotage.
Black Box Problem and Lack of Explainability
AI models are largely opaque in nature. They often operate as “black boxes,” offering no transparency into how decisions are made. In combat scenarios where lives are at stake, such unpredictability is not just a bug; it is a fatal flaw.
Imagine a drone deployed for reconnaissance misidentifying a civilian gathering as an enemy troop formation due to a flaw in its AI recognition model. Or a missile defence system failing to intercept an incoming threat because its data did not match the expected pattern. These are not science-fiction scenarios; they are real risks in today’s semi-autonomous military strategies.
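One widely discussed mitigation for this kind of misidentification is confidence gating: the system acts autonomously only when the model’s score clears a threshold, and otherwise abstains and escalates to a human analyst. The Python sketch below illustrates the idea; the class names, labels and the 0.95 threshold are hypothetical assumptions, not drawn from any real system.

```python
# Minimal sketch of a confidence-gated recognition pipeline.
# All names, labels and the threshold are hypothetical illustrations,
# not taken from any real defence system.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "troop_formation", "civilian_gathering"
    confidence: float  # model score in [0, 1]

CONFIDENCE_THRESHOLD = 0.95  # assumed cut-off; a real system would tune this

def triage(detection: Detection) -> str:
    """Act autonomously only on high-confidence detections;
    otherwise abstain and escalate to a human analyst."""
    if detection.confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTOMATED_TRACK: {detection.label}"
    # The black-box model may simply be wrong here, so defer.
    return "ESCALATE_TO_HUMAN_REVIEW"

print(triage(Detection("troop_formation", 0.97)))     # automated
print(triage(Detection("civilian_gathering", 0.61)))  # escalated
```

Gating does not make the model less of a black box, but it bounds the damage an opaque decision can do on its own.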
Examples of Potential AI-Driven Defence Disasters
- Remote Takeover of Fighter Planes: AI-linked flight systems in advanced fighter jets could be jammed or hijacked using malicious code, potentially causing crashes or diverting aircraft into enemy zones.
- Radar Spoofing and AI Blinding: Adversarial AI can mimic friendly signals or mask real threats, fooling radar systems into ignoring incoming missiles or aircraft.
- AI-Controlled Missiles Hijacked: Smart missiles guided by AI algorithms are theoretically vulnerable to mid-course reprogramming through cyber-attacks, turning them against unintended targets.
- Drone Armies Gone Rogue: Swarms of AI-powered drones operating in hostile environments could be manipulated to turn on friendly units or civilians if their command links are compromised (a link-authentication sketch follows this list).
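The common thread in the hijacking scenarios above is a command link that accepts instructions it cannot authenticate. A standard countermeasure is cryptographic message authentication with replay protection: every command carries a keyed signature and a counter, and the vehicle rejects anything it cannot verify as both authentic and fresh. Below is a minimal sketch using Python’s standard hmac module; the shared-key provisioning, message layout and counter scheme are illustrative assumptions.

```python
# Sketch of an authenticated command link using HMAC-SHA256 with a
# replay counter. The shared key, message layout and counter scheme
# are illustrative assumptions, not a description of any fielded system.
import hashlib
import hmac

SHARED_KEY = b"pre-provisioned-secret"  # assumed: loaded before the mission

def sign_command(counter: int, command: bytes) -> bytes:
    """Ground-station side: append an HMAC tag over counter||command."""
    payload = counter.to_bytes(8, "big") + command
    return payload + hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify_command(message: bytes, last_counter: int) -> bytes | None:
    """Vehicle side: accept only authentic, strictly newer commands."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted: drop
    if int.from_bytes(payload[:8], "big") <= last_counter:
        return None  # replayed command: drop
    return payload[8:]

msg = sign_command(42, b"RETURN_TO_BASE")
print(verify_command(msg, last_counter=41))       # b'RETURN_TO_BASE'
tampered = msg[:-1] + bytes([msg[-1] ^ 1])        # flip one bit of the tag
print(verify_command(tampered, last_counter=41))  # None
```

The fail-safe default matters as much as the cryptography: an unverifiable or replayed command is dropped, never executed.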
What Needs to Be Done
To counter this growing threat, defence experts and strategists are emphasizing the urgent need for:
- Indigenous AI Development: Nations must invest in sovereign AI capabilities free from foreign control and backdoor vulnerabilities.
- Cyber-Resilient Architecture: Defence hardware and software must be designed with encryption, multi-factor authentication, and physical override systems.
- Human-in-the-Loop Systems: No AI decision, especially in combat, should be executed without human validation and control (see the sketch after this list).
- Transparency and Explainability: AI models used in defence should be interpretable and accountable, ensuring they behave reliably in real-time environments.
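As a concrete illustration of the human-in-the-loop principle listed above, the hypothetical Python sketch below lets the AI only recommend an action, while execution requires explicit operator confirmation; the function names and the console prompt are assumptions for illustration.

```python
# Illustrative human-in-the-loop gate: the AI may only *recommend*;
# a human must confirm before anything irreversible executes.
# Function names and the console prompt are hypothetical.

def ai_recommendation() -> dict:
    # Stand-in for a real targeting model's output.
    return {"action": "engage", "target_id": "T-104", "confidence": 0.92}

def human_confirms(rec: dict) -> bool:
    # Stand-in for a secure operator console; here, a terminal prompt.
    answer = input(f"Confirm '{rec['action']}' on {rec['target_id']}? [yes/no] ")
    return answer.strip().lower() == "yes"

def execute(rec: dict) -> None:
    print(f"Executing {rec['action']} on {rec['target_id']}")

rec = ai_recommendation()
if human_confirms(rec):  # no explicit human "yes", no action: fail safe
    execute(rec)
else:
    print("Recommendation rejected; no action taken.")
```

The design choice is deliberate: the default path is inaction, so a compromised or confused model cannot act without a human in the chain.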
Cautious, Calculated and Sovereign Handling Is Needed
AI undoubtedly represents the future of defence, offering unmatched speed, precision, and automation. But as with all powerful tools, its misuse or even its malfunction can have catastrophic consequences. Without robust regulation, ethical frameworks, and sovereign control, AI could shift from being a force multiplier to a security nightmare.
It’s time for defence establishments across the globe, especially in developing countries, to wake up to the reality that AI is not just a solution; it is also a strategic vulnerability that demands cautious, calculated and sovereign handling.
