The dynamics of warfare are changing. Over the last five decades, the world's militaries have progressed rapidly: weapons once considered a distant dream have been introduced and used to devastating effect in times of war.
Stealth technology, precision-guided weapons, terrain-hugging cruise missiles and other weapon systems developed during the last few decades have proven their effectiveness in degrading an enemy's combat capability. As we move forward, the world can expect to see weapon systems that require no human input at all.
Autonomous killer robots, programmed with artificial intelligence to identify, engage and destroy targets, are the changing face of warfare. Major nations are moving towards developing the technology. It is not a question of if but when these weapons will be introduced.
No international consensus currently exists on what constitutes a lethal autonomous weapon system; such systems are broadly described as those capable of selecting and engaging targets without meaningful human control, functioning independently through artificial intelligence and machine learning.
In December 2016, 123 member states at the United Nations' Review Conference of the Convention on Certain Conventional Weapons voted to begin formal discussions on autonomous weapons, a category that includes automated tanks, machine guns and drones. An outright ban on such weapon systems was advocated by 19 members.
Debates have raged in the United Nations over the use of killer robots, and some pioneers in the fields of robotics and artificial intelligence have called on the UN to ban their development and use.
Among those opposing such weapons are Tesla’s Elon Musk, Alphabet’s Mustafa Suleyman and Stephen Hawking.
“Autonomous weapons, which threaten to usher in the third revolution in warfare after gunpowder and nuclear arms, will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways,” the pioneers write in an open letter sent to the UN.
While there are strong arguments on either side, experts warn of warfare on a greater scale should such technology become mainstream. Removing human input, emotion and psychology from warfare would mean destruction on a larger scale, as AI-enabled killer robots would pursue the destruction of their assigned targets with little regard for the consequences that might follow.
“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” added the letter sent by Musk and his fellow pioneers in the field.
Experts have also previously warned that such robots could be deployed within years, not decades. A further concern is that, while fielding such weapons on the battlefield would lower casualties among soldiers, it could still result in a greater overall loss of human life.
Another risk is that the technology could be used by non-state actors or rogue states to carry out indiscriminate attacks on civilian populations, with the potential for large numbers of civilian casualties.
Opponents also question the morality of such weapons, arguing that warfare waged on an industrial scale and without human input poses a severe risk to the masses.
Supporters of autonomous weapons, who include the world's major powers, maintain that deploying such systems would improve combat efficiency and reduce casualties among their own soldiers.
Autonomous weapons, especially killer drones, can be used in precision strikes to degrade an enemy's command and control and neutralise high-value targets without exposing military personnel to risks of battle.
Such weapons, in the case of AI-controlled tanks and armoured fighting vehicles, would also be effective in fighting through dense urban terrain, which in recent conflicts has inflicted significant casualties on attacking forces.
AI-controlled weapons would also be a huge boon in fighting irregular or insurgent forces: once fed with target profiles or the geographical parameters of an engagement zone, such systems could autonomously scour the area and eliminate targets matching the stored profile.
As the technology advances, AI-controlled weapons will be among us sooner than we expect. The question that remains is whether such systems will adhere to human norms or develop an independence that, in the worst possible scenario, could constitute a danger to the human race.