Autonomous Wars and Weaponized AI

AI has a dual nature: software designed to make our lives more comfortable can also be employed to harm us. This dual-use dilemma raises serious concerns, especially when AI is put to military applications.

In the past few decades, the field of artificial intelligence has developed rapidly, and it continues to revolutionize the way we live. We love that we can unlock our phones with our faces and that Amazon can predict what we need. From smart vacuums that learn floor plans to "killer robots" that could transform the combat zone, AI has potential applications both ordinary and extraordinary. While AI applications in healthcare, education, logistics, and agriculture promote human development, its military applications can increase the lethality of war.

Dual Nature of AI Technology

AI has a dual nature: software designed to make our lives more comfortable can also be employed to harm us. For instance, the same kind of algorithm that filters junk e-mail into our spam folders can also be put to work in malware. The facial recognition that unlocks our phones is being tested on rifles, where object-recognition software is used to identify targets. Similarly, the precision-guidance technology meant to sharpen targeting and save lives could end up making life-and-death decisions, and killing humans, on its own. This dual-use dilemma raises serious concerns, especially when AI is used in military applications.
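
To see how little separates the benign and harmful uses, consider a minimal sketch of a text classifier (assuming the scikit-learn library, with invented toy data): nothing in the algorithm is specific to spam, so the identical pipeline, retrained on different labels, could score any kind of target.

```python
# A minimal sketch of a dual-use classifier, assuming scikit-learn.
# The training data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "claim your reward",
         "meeting moved to 3pm", "see attached report"]
labels = ["spam", "spam", "ham", "ham"]  # swap in any labels: the algorithm does not care

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(texts, labels)
print(classifier.predict(["free reward inside"]))  # -> ['spam']
```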

Weaponized AI

AI is undoubtedly making our lives easier; however, the same technology is being rapidly weaponized. AI weaponization means using AI to deliberately inflict harm on human beings by integrating it into the systems and tools of national militaries. Like past revolutionary technologies before it, weaponized AI is expected to upend international peace and security.

The US Department of Defense (DoD) calls weaponized AI "algorithmic warfare," and a core objective of establishing JEDI (the DoD's Joint Enterprise Defense Infrastructure) was to weaponize AI. The Pentagon launched Project Maven in April 2017. Maven was considered the military's first major endeavor to employ AI in warfare; it aimed to improve the precision of existing weapons such as drones by incorporating AI. In its initial stage, machine learning was used to scan drone video footage, helping to spot individuals, vehicles, and places that might be worth bombing. Google's involvement in the project, however, did not sit well with its employees: some resigned, and over 3,000 signed a petition protesting their company's contribution, fearing it could open the door to using AI against human survival.
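
What "scanning drone footage with machine learning" amounts to can be sketched with an off-the-shelf detector. The following is not Maven's actual (classified) pipeline, only a generic illustration using a pretrained torchvision model to flag high-confidence objects in a single frame:

```python
# A generic sketch of ML-based frame scanning, NOT Project Maven's actual
# pipeline. Assumes torchvision's pretrained COCO object detector.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def scan_frame(frame: torch.Tensor, threshold: float = 0.8):
    """Return (box, class_id, score) for confident detections in one frame.

    `frame` is a float tensor of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        detections = model([frame])[0]
    return [
        (box.tolist(), int(label), float(score))
        for box, label, score in zip(
            detections["boxes"], detections["labels"], detections["scores"]
        )
        if score >= threshold
    ]

# Usage: scan one dummy frame (real use would iterate over video frames).
print(scan_frame(torch.rand(3, 480, 640)))
```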

In a paper published in 2019, Vadim Kozyulin explains that commercial companies such as IBM, Amazon, and Microsoft have created most AI tools and offered them to the military. He further remarks that the Russian Ministry of Defense is fascinated by combat robots: multi-functional machines with sensors for gathering information, a control system, and actuation devices. They can behave in human-like ways and execute combat missions much as human soldiers would. Weaponized AI is thus leading us toward autonomous wars in which Lethal Autonomous Weapons Systems (LAWS) would be the soldiers.

Lethal Autonomous Weapon Systems (LAWS)

AI is advancing at a rapid pace in the military sector, where it is being used to develop and deploy fully autonomous weapons systems. Such weapons, once activated, can locate, identify, attack, and kill human targets on their own, without any human control. These systems are called "Lethal Autonomous Weapons Systems" (LAWS) or, more broadly, "Autonomous Weapons Systems" (AWS), a category that includes both lethal and less-lethal systems; they could alter the entire nature of war.

Lethal autonomous weapons systems are moving from science-fiction films like Terminator to the combat zone. They have stirred a debate among military planners and ethicists about the production and deployment of weapons that can perform progressively more advanced functions, including targeting and killing, with little or no human supervision.

Autonomous and Semiautonomous Systems at Present

The loitering munition (also called a suicide drone) was designed in the 1980s to fly autonomously for a limited time. While airborne, it first searches for particular signals, then uses them to identify an enemy target; after identification, it crashes into the target, carrying an explosive payload. Israel produces a loitering munition named Harpy, an unmanned aerial vehicle with a 500 km range programmed to locate and destroy enemy radar stations. In the 1990s, Israel sold some 100 Harpys to China for about $55-70 million, which became a turning point in US-Israeli relations. Companies in Slovakia and the US have also produced loitering munitions. South Korea fields a sentry gun, the SGR-A1, that is capable of autonomous firing. Even hobbyist drones can take off, land, chase moving objects, and avoid obstacles by themselves.
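
The loiter-identify-crash behavior described above is, at its core, a small state machine. Here is a schematic sketch, with random draws standing in for real sensor readings (no real munition's software is reproduced here):

```python
# A schematic search-identify-engage loop; the random draws are invented
# stand-ins for real sensor readings. Illustrative only.
import random
from enum import Enum, auto

class Mode(Enum):
    LOITER = auto()   # circle the area, listening for emissions
    TRACK = auto()    # candidate signal found; confirm its signature
    ENGAGE = auto()   # terminal: dive onto the confirmed emitter

def step(mode: Mode) -> Mode:
    if mode is Mode.LOITER:
        return Mode.TRACK if random.random() < 0.2 else Mode.LOITER
    if mode is Mode.TRACK:
        # Only a signal matching the programmed signature triggers engagement.
        return Mode.ENGAGE if random.random() < 0.5 else Mode.LOITER
    return Mode.ENGAGE

mode = Mode.LOITER
while mode is not Mode.ENGAGE:
    mode = step(mode)
print("reached terminal state:", mode.name)
```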

THeMIS (Tracked Hybrid Modular Infantry System) is a robot developed by Milrem Robotics, an Estonia-based company. It consists of a mobile body mounted on tank treads, topped by a remote weapon station equipped with machine guns. The robot also carries cameras and target-tracking software programmed to let the turret follow people or objects. So far, THeMIS is a human-controlled system, and Milrem insists it will remain that way.

DARPA runs a program called CODE (Collaborative Operations in Denied Environment) to design software that allows a group of drones to carry out tasks in close collaboration. According to Paul Scharre, the intention behind CODE is not to design an autonomous weapon, but to adapt to a world where teams of robots operate collaboratively under the supervisory control of a human being. The manager of the CODE program has compared it to wolves hunting in coordinated packs.
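
Scharre's distinction, autonomy in searching but human authority over engagement, can be made concrete in a few lines. In this toy sketch (all names are invented; it is not CODE's software), the lethal decision sits behind a human-approval callback:

```python
# A toy "human-in-the-loop" supervisory control pattern. All names are
# invented for illustration; this is not DARPA's CODE software.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    position: tuple[float, float]
    confidence: float

@dataclass
class Drone:
    name: str
    sightings: list[Candidate] = field(default_factory=list)

    def propose(self) -> list[Candidate]:
        # Autonomy ends here: the drone senses and suggests, never decides.
        return [c for c in self.sightings if c.confidence > 0.9]

def supervisory_loop(drones: list[Drone], human_approves) -> None:
    for drone in drones:
        for target in drone.propose():
            action = "engage" if human_approves(target) else "stand down"
            print(f"{drone.name}: {action} at {target.position}")

# Usage: one simulated drone; the human operator here rejects everything.
supervisory_loop(
    [Drone("uav-1", [Candidate((3.0, 4.0), 0.95)])],
    human_approves=lambda target: False,
)
```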

Countries such as the United States and Russia are building robotic tanks that can be either remote-controlled or operated autonomously. The US launched an autonomous warship in 2016; though still in development, it is expected to have offensive capabilities, including anti-submarine weaponry.

These AI-enabled weapons were originally developed to reduce the risk to human beings in military conflicts; if they become fully autonomous, however, they could cause mass destruction in autonomous wars. The advance from semiautonomous to fully autonomous weapons systems is happening rapidly, yet it remains unclear when researchers will produce a fully lethal autonomous weapon. The UK's Taranis drone, an autonomous combat aerial vehicle expected to be fully operational by 2030, is believed capable of replacing the Tornado GR4 fighter plane.

China Has Already Weaponized AI: A Wake-up Call for the World

The government of China is employing AI to persecute its Muslim minority, the Uighurs, using AI-enabled facial recognition systems to monitor and target members of this ill-treated community. This persecution poses an "unprecedented danger" to its civilians and to all open societies. In 2019, Human Rights Watch issued a report, "China's Algorithms of Repression," presenting further evidence of Beijing's use of new technologies to restrict the rights of the Uighurs. According to the report, since late 2016 the government of China has targeted around 13 million ethnic Uighurs in Xinjiang.

Risks of Weaponizing AI or Fully Autonomous Weapons

As autonomous weapons systems move from imagination to reality, military planners and ethicists are debating the risks and morality of their use in present and future operating environments. Weaponized AI and Lethal Autonomous Weapon Systems (LAWS) are attracting attention precisely because they raise security, legal, and ethical questions. Some of the potential risks of integrating AI into national militaries are as follows:

  • Autonomous weapons will create an accountability gap, because it may be very hard to hold anyone responsible for unexpected damage caused by an autonomous weapon. These weapons would also be vulnerable to cyber attacks such as hacking and spoofing (see the sketch after this list), making them a threat to global security.
  • Assigning life-and-death decisions to autonomous weapons crosses an ethical red line, and such weapons would face a serious challenge in complying with human rights law.
  • AWS threaten the following human rights enshrined in the ICCPR (International Covenant on Civil and Political Rights):
  1. The right to life (Article 6)
  2. The right to privacy (Article 17)
  3. The right to non-discrimination (Article 26)
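
The "spoofing" risk flagged in the list above is concrete: image classifiers can be fooled by tiny, human-imperceptible changes to their input. Below is a minimal sketch of one well-known attack, the fast gradient sign method (FGSM), assuming any differentiable PyTorch classifier:

```python
# A minimal sketch of input spoofing via the fast gradient sign method
# (FGSM), assuming any differentiable PyTorch image classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged to increase the model's error."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that raises the loss; the
    # change is often invisible to a human but can flip the prediction.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Usage with a stand-in linear "classifier" on a dummy 3x32x32 image batch.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(1, 3, 32, 32), torch.tensor([0])
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by epsilon
```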

Under IHRL (international human rights law), the use of potentially lethal force is lawful only if it meets the following criteria:

  1. It must have an adequate legal basis in compliance with international standards.
  2. It must serve to protect human life.
  3. It must be a last resort.
  4. It must be applied in a manner proportionate to the threat.
  5. Where lethal force is used, the law enforcement officers involved must be held accountable for the resulting harm.

Beyond these risks, the most serious concern is that by weaponizing AI we are developing weapons of mass destruction that, like nuclear weapons, can kill large numbers of people, but that are easier to build, far cheaper, and more scalable.

The plethora of problems arising from AI weaponization and autonomous weapons demands immediate action. Many states favor a wait-and-see approach, given the uncertainty about what AI will ultimately be able to do; the high stakes, however, argue for a precautionary approach. Either way, weaponized AI is leading us toward autonomous wars that pose a grave threat to humanity. Whether we like it or not, we must accept that we have stepped into the era of algorithms, and that AI, through its military applications, is changing warfare just as it is changing every other sector.

Autonomous War

With every passing day, the weapons of war become smarter and more data-enabled. From early machine guns (with automated firing and reloading) to drone swarms, autonomous tanks, and sentry guns, the future of warfare is gradually arriving at our doorstep. These low-profile machines, so far preprogrammed with GPS routes, are the beginning of autonomous wars. Recent advances in robotics, machine image recognition, and artificial intelligence point to a future of autonomous wars in which autonomous weapons systems, or killer robots, identify and shoot anyone in the war zone without any human command. Countries such as the United States, Russia, and China are working intensively to couple weapons with sensors and targeting computers to make them autonomous. Israel and Britain are among the countries already employing weapons with autonomous features, such as missiles and drones that can find and attack an enemy ship, radar station, or vehicle on their own, without any human intervention or command.

Fortunately, fully autonomous weapons that fight on the battlefield without any preprogrammed human instructions do not yet exist; such autonomous wars, however, are projected to be the biggest threat to humanity. Paul Scharre, in his book "Army of None: Autonomous Weapons and the Future of War," notes that the autonomous-warfare future has not arrived overnight; rather, we are reaping the result of decades of military development.

Some people assume that the same technology that helps autonomous cars avoid pedestrians could be used to build autonomous weapons, and that such weapons would be able to deliberately target, or deliberately spare, particular civilians. Unfortunately, they are mistaken: that is not how projected autonomous wars would unfold.

To understand LAWS and the concept of autonomous wars, consider the Gatling gun, invented in 1861. Richard Gatling, appalled by the horrors of the American Civil War, wanted a more automated weapon to fight on the battlefield. His aim was to make war more humane by reducing the number of soldiers it required: four men operating a Gatling gun could replace a hundred riflemen. It was a forerunner of the machine gun, and the intention behind it was to save lives. The reality was quite different: the gun had the reverse effect of what was intended, intensifying the devastation and killing on the battlefield. Gatling was wrong; the more automated weapon caused massive destruction and did not save lives. UN Secretary-General António Guterres has called the autonomous weapons that would fight autonomous wars "morally repugnant" and has pushed for them to be banned.

In view of the high risks posed to human rights, along with the moral, ethical, and security threats entailed by AWS, Amnesty International has called for a ban on the development, production, and use of AWS and is working to ensure that meaningful human control is maintained over the use of force. Many states, including Mexico, Brazil, and Austria, have emphasized the importance of maintaining human control over weapons, and most have supported the development of new international law on AWS.

Anticipating the dystopian future that autonomous weapons could bring, the United Nations began considering an international ban on killer robots in 2013. The debate drew in more than 100 leaders from the AI community, including Tesla's Elon Musk and Alphabet's Mustafa Suleyman, who signed an open letter arguing that building lethal autonomous weapons, or killer robots, would open a Pandora's box and could alter the nature of war forever. They also warned that these weapons could bring about the "third revolution in warfare," following gunpowder and nuclear arms.

Jody Williams, who won a Nobel Peace Prize for her work to ban landmines, has been an enthusiastic participant in the campaign to stop killer robots, whose objective is to ban lethal autonomous weapons. Its members include activists, civil society organizations, well-known scientists such as Noam Chomsky, and over 450 AI researchers.

Encouragingly, employees of tech giants like Google, Microsoft, and Amazon have challenged their employers and raised ethical concerns about the use of AI for military purposes. In 2018, more than 170 tech companies and organizations, including Google DeepMind and the XPRIZE Foundation, along with more than 2,400 AI and robotics researchers, academics, and engineers, endorsed a Lethal Autonomous Weapons Pledge, committing not to participate in or support the production or use of autonomous weapons.

Some 29 states, mostly from the global south, have strongly supported banning such autonomous weapons, fearing that these lethal weapons are most likely to be used against them. On 12 September 2018, the European Parliament passed a resolution, with 82% of members in support, backing an international ban on AWS and meaningful human control over the critical functions of weapons.

The overwhelming concern of the states and groups campaigning to ban autonomous weapons, or to stop killer robots, is that if machines become entirely autonomous, humans will have no control over the decision to kill, creating a moral dilemma. And what would happen if an evil regime turned lethal autonomous systems on its own people?

Moreover, fully autonomous weapons are at present among the most distressing, yet fastest-developing, military technologies. They must be closely scrutinized by experts, the general public, and states under the Martens Clause, a distinctive provision of IHL (International Humanitarian Law) that establishes a baseline of protection for combatants and civilians when no specific treaty or law governs a topic: in such cases, people must be protected on the basis of established custom and the principles of humanity. The clause therefore applies to fully autonomous weapons, since no international law yet governs their use, and it offers states moral standards to weigh when evaluating emerging weapons technology, including autonomous weapons.

Summary

AI is undeniably the future of warfare, and the weaponization of AI is the name of the new game, one that includes the development and deployment of Lethal Autonomous Weapon Systems (LAWS). Many weapon systems with varying levels of human intervention are already in use, and advances in AI are swiftly leading toward autonomous wars in which AI-enabled weapons fight one another independently, without any human involvement. The question remains: under what circumstances should militaries delegate the decision to take a human life to a machine? That question has raised serious concerns about the nature of warfare, and human rights organizations, militaries, research analysts, ethicists, and defense officials have yet to reach a consensus, for it is a giant moral leap. Though AI has not yet produced Terminator-style killer robots, it has already begun changing the nature of warfare. It would not be wrong to say that war has already progressed from "informatization" to "intelligentization," and, as Vladimir Putin has remarked, whoever takes the lead in this field will rule the world.

To cut a long story short, "the AI genie is out of the bottle now," as Mr. Wilby remarked. The ways to weaponize AI are less cinematic than the movies, but equally frightening. As weaponized AI begins to reshape our world, we urgently need to find ways to control it so that the anticipated autonomous wars are avoided. And if terrorist organizations start using AI with evil intent, our best defense may well have to be an AI offense.