AI Robots in Warfare: Dangerous Future Ahead? Militaries Are Rushing to Replace Human Soldiers with AI-Powered Robots.

Hirok
7 min read · Oct 28, 2024



In March 2020, a chilling event unfolded in the skies over Libya. While fierce battles raged on the ground, a swarm of quadcopter drones closed in on a Libyan National Army convoy. These were no ordinary drones. They were loitering munitions, so-called kamikaze drones, built with one grim purpose: to destroy. As they swooped down, the drones, programmed to seek and strike, homed in on their targets, leaving chaos and wreckage in their wake. The twist? They did it entirely on their own. No human pressed a button to attack; no commander issued the order. A later UN panel report described the munitions as programmed to attack targets without requiring an operator's command. This was a rare yet powerful glimpse of autonomous warfare, in which machines operate independently on the battlefield, and it forces the question: Is this just the beginning? Are we on the edge of a new era where robots decide who lives and dies?

The Evolution of War: From Hand-to-Hand Combat to Robotic Soldiers

Imagine a battlefield thousands of years ago. Warriors face each other with clubs, swords, and spears, fighting not just for victory but for survival. It required incredible physical courage: fighters risked their lives at close range, looking their enemy in the eye before striking. As time passed, war evolved. Bows and arrows let soldiers attack from a safer distance, and gunpowder extended that range further still, from early cannons and muskets to, eventually, machine guns, shifting battles from close-range engagements to fights from behind cover.

With every advancement, humans moved further from the physical dangers of combat, especially in the last century. Bombs, missiles, and even drones can be launched from miles away, sparing soldiers from direct confrontation. But in today’s age of artificial intelligence (AI), a new possibility emerges: What if the warriors themselves — the ones wielding these weapons — are no longer human?

In recent years, many militaries have leaned into robotic and AI technology, testing the boundaries of autonomous machines. The potential seems limitless: Drones that gather intelligence, robots that transport supplies, unmanned ground vehicles that detect and clear mines. But one barrier remains — most of these robots, particularly those equipped with weapons, still require human authorization to fire. The last link between man and machine is a human’s choice to take a life. Yet, as AI advances, experts are concerned that we may lose even that thin thread of control.

The Rise of Autonomous Weaponry: A Convenient Solution or a Deadly Gamble?

Today’s drones have dramatically changed the way militaries operate. An airman sitting in a control room thousands of miles away can pilot an MQ-9 Reaper, a heavily armed drone, over distant warzones like the Middle East. This setup means troops can conduct surveillance, assess threats, and carry out airstrikes without leaving the safety of their home base. It’s a convenience that has saved countless lives and altered the face of modern warfare. But as AI continues to advance, militaries now face a critical decision: How much autonomy should these machines have?

AI’s ability to make rapid, informed decisions — like self-driving cars navigating traffic — is tempting to apply in a battlefield context. Imagine a drone that could identify a target, assess the situation, and launch an attack without needing human oversight. It would be fast, precise, and utterly detached from the risks and moral weight that come with combat. But there’s a flipside. In a world where robots make decisions on the battlefield, the potential for mistakes, for catastrophic misjudgments, looms large.

Samuel Bendett, an adjunct senior fellow at the Center for a New American Security, captures this dilemma. He points out that as AI becomes more cost-effective, it will become harder to ignore its potential for large-scale deployment. Bendett warns of a future where robotic systems dominate battlefields across air, land, sea, and even cyber domains. Imagine armies of robots, each operating with a level of independence, their decisions driven by cold logic and algorithms rather than empathy or morality. Defending against such a force could become nearly impossible, pushing humans to employ their own autonomous systems just to keep pace.

The Human Element in Combat: Why Removing It Could Be Disastrous

War has always involved moral stakes — decisions about life and death that weigh heavily on soldiers and commanders alike. In traditional warfare, there’s a certain respect for the gravity of these decisions. It’s not easy to take a life, and for most soldiers, it never becomes easy. Removing that human element raises difficult questions: What happens when machines, devoid of empathy, make decisions that end lives?

Imagine a battlefield where autonomous drones patrol the skies, land-based robots sweep through urban areas, and underwater drones patrol the coasts, all primed to strike based on preset algorithms. In such a scenario, the traditional concept of moral courage — the willingness to make difficult decisions, to carry the burden of combat — fades. The machine simply executes its programming, devoid of any hesitation or moral dilemma.

Zach Kallenborn, an expert on autonomous weapon systems, argues that the “man in the loop” is crucial, now and likely in the future. Machine vision and AI decision-making systems are far from flawless, and one miscalculation could lead to a disaster. A drone that mistakes a friendly soldier for an enemy, or an autonomous weapon that accidentally targets civilians, could spark international crises, escalating conflicts unintentionally.
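The "man in the loop" idea Kallenborn describes can be made concrete with a toy sketch. Nothing below reflects any real weapon system: the labels, the confidence threshold, and the `human_confirms` stub are all invented for illustration. The structural point is that the machine only ever recommends, the default answer is always "hold," and a high-confidence classification still cannot trigger action without a human decision.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # hypothetical classifier output, e.g. "combatant", "civilian"
    confidence: float  # classifier confidence in [0, 1]

def human_confirms(detection: Detection) -> bool:
    # Stand-in for the human operator's judgment. In this sketch the
    # default is refusal: no confirmation means no engagement.
    return False

def engagement_decision(detection: Detection, threshold: float = 0.95) -> str:
    """Return 'engage' or 'hold'; the machine never fires on its own."""
    if detection.label != "combatant" or detection.confidence < threshold:
        return "hold"  # uncertain or non-hostile: never engage
    # Even a high-confidence hostile classification is only a recommendation;
    # a human must still approve before anything happens.
    return "engage" if human_confirms(detection) else "hold"

print(engagement_decision(Detection("combatant", 0.99)))  # -> hold (no human approval)
print(engagement_decision(Detection("civilian", 0.99)))   # -> hold
```

The design choice worth noticing is the asymmetry: every failure mode (low confidence, wrong label, absent operator) collapses to "hold." Removing the human from the loop means deleting the `human_confirms` call, and with it, the last check on a misclassification like the ones Kallenborn warns about.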

The Slippery Slope of AI in Warfare: Will Machines Decide Who Lives and Dies?

The current global consensus is that a human should always be the final decision-maker when a weapon is used. Human control serves as a firewall, a safeguard against unintended violence and the kind of mechanical coldness that machines bring. Yet this firewall is already showing cracks. The “first-shot advantage” is a widely acknowledged rule in combat — the side that shoots first often gains a decisive upper hand. Waiting for human approval before launching an attack could give that advantage to the enemy, a risk that some militaries may find unacceptable.

Imagine an automated weapon identifying an approaching enemy. It registers the threat and prepares to fire but hesitates, awaiting human confirmation. Meanwhile, the enemy, facing no such delay, launches an attack. For soldiers on the front lines, this delay could mean life or death. This built-in hesitation — this very human quality of double-checking, of assessing all options — may be a liability in the eyes of military strategists.

March 2020’s Libyan drone strike is a sobering example of what might lie ahead. In Russia, the military has long championed the mass deployment of autonomous systems, aiming to minimize soldier casualties and overpower enemy defenses. And this future isn’t distant: in March 2024, Russian unmanned ground vehicles advanced on Ukrainian positions, only to be countered by Ukrainian drones. In that instance, humans were still at the controls, but how long until both sides rely entirely on machines?

The Ukrainian Conflict: A Testbed for AI and Autonomous Warfare

The Ukraine conflict has become a proving ground for AI-driven military technology. Faced with manpower shortages, Ukraine has increasingly relied on drones and robots to supplement its defenses. Ukrainian volunteers and tech enthusiasts have developed domestic drones that, in many cases, are sent into danger zones ahead of human soldiers. These autonomous systems conduct reconnaissance, launch attacks, and collect data — all without putting Ukrainian troops directly in harm’s way.

Experts following the conflict say that it has pushed AI and autonomous weapons to their limits. It’s a preview of how future wars could play out, with humans further removed from the battlefield. Bendett notes that Ukrainian drone developers have repeatedly expressed a clear preference: let the robots fight first. This statement underscores the appeal of autonomy but also raises ethical concerns. As AI-driven warfare becomes increasingly common, will soldiers ever return to direct combat? Or will they remain behind, allowing machines to bear the brunt of battle?

The Moral Questions No One Wants to Answer

The push for autonomous weaponry forces us to confront uncomfortable questions. Kallenborn notes the importance of “moral courage” in warfare. There’s a long-standing argument that if someone takes a life, they should have the decency to do it themselves, to carry the burden of that action. Wars have always been fought with the understanding that, at some point, a human must decide to pull the trigger. Handing that responsibility to a machine might seem convenient, but it strips away the last vestiges of human decency in warfare.

Removing human oversight from the decision to kill is more than just a tactical change; it’s a moral shift. When AI-driven machines control life and death, wars become automated exercises of cold logic, devoid of empathy or moral restraint. And if one thing is clear, it’s that AI — no matter how advanced — is not capable of understanding the human costs of its decisions. An AI doesn’t feel regret; it doesn’t mourn; it doesn’t comprehend the value of a life lost.

The Road Ahead: A Future Filled with Autonomous Battlefields?

As militaries around the world rush to incorporate AI, there’s an unsettling sense that the technology is outpacing our ethical understanding. Autonomous weapons are moving from the realm of science fiction to reality, and the moral implications are staggering. Will AI ever be sophisticated enough to distinguish between a civilian and a combatant reliably? And if it can’t, will militaries simply accept civilian casualties as the price of progress?

In a world where robots and AI-controlled systems make the first move, the very fabric of war shifts. Traditional concepts of honor, sacrifice, and courage fade, replaced by algorithms and machines that follow only the cold logic of their programming. Some experts believe that full autonomy is inevitable; others hope that humanity will remain in control, serving as a check on the relentless, unforgiving efficiency of AI.

The ultimate question we face isn’t just about the future of warfare — it’s about the future of humanity. Can we bear the cost of such a shift? Or will the price prove too high?

In the coming years, as we witness the continued evolution of AI and autonomous systems, the answers may arrive whether we’re ready for them or not. This is the new frontier, one filled with promise and peril alike.

