Artificial intelligence is everywhere in today’s world. Many see it as a solution to countless problems, yet it could also bring forth a new set of issues. This potential has not slipped past the world’s governments and militaries, which are already exploring ways to use artificial intelligence to gain an edge over their adversaries. Advances in the field are made daily, and it is only a matter of time until the technology fuels a race between rival powers as they build it into new weapon and defense systems. However, we have never faced such a situation before. It was always people who started, fought, and ended wars; never have machines made life-or-death decisions in such an environment. Once we reach that point, what could the consequences be? By giving a machine the power to decide whether to pull the trigger and how to proceed, we surrender what little control we have over an environment as chaotic as war. This invites unforeseen problems with potentially catastrophic consequences, and our only solution is to regulate the development of artificial intelligence and forbid such reckless use.
An important distinction must be made between the umbrella terms being thrown around and what is actually meant here by artificial intelligence and AI weaponry. In this context, artificial intelligence refers to highly specialized software that makes decisions according to how it is programmed. It does not refer to general-purpose AI, which is roughly what the personal assistants on many of today’s phones aspire to be. The artificial intelligence discussed in the context of weaponry is limited to making split-second, calculated decisions in combat situations. The complication is that an intelligent, general decision-making machine is precisely the goal many researchers are pursuing across every branch of the field. So far, no technology can do this to such an extent, which is why the consequences of further development deserve questioning.
Current military technology scarcely implements AI, which is apparent from reports that the US Department of Defense has only recently started to take the technology seriously. Yet according to some sources, the Department of Defense spent 7.3 billion dollars last year on big data and artificial intelligence, already a very significant amount. Moreover, interest in the technology is steadily increasing, and the DoD’s National Defense Strategy names artificial intelligence as one of the key ways in which the US military could be modernized. We will therefore be confronted with such technology sooner than most would expect.
To understand the introduction of AI into warfare, however, one must first be familiar with how previous technologies and ideas changed its course. Experts generally define three generations of warfare, along with an emerging fourth that is still up for debate. First-generation warfare appeared when nations could gather large, unified armies, so that wars were fought between nations rather than small factions. Battles were linear and hardly shaped by strategy: two opposing forces would stand across from each other and engage. Second-generation warfare changed this. After the industrial revolution, new weapons emerged, and it no longer made sense to fight facing each other in open fields. Instead, warfare became what we see in World War I textbooks: trench lines and a reliance on artillery rather than sheer numbers. Third-generation warfare was brought about by a shift in military tactics. It first appeared late in World War I and was developed further by the Germans in World War II as the strategy known as blitzkrieg: fast, precise movement of smaller forces striking key targets. Warfare became a matter of strategy and planning rather than sheer force.
The debate around fourth-generation warfare centers on the view that the modern fight against terror constitutes the next generation. Conflict is no longer bound to the military; factions use political pressure, cultural influence, and intimidation to gain advantages. In a way, it is a throwback to the days before first-generation warfare, when wars were waged by small factions pursuing personal agendas.
The issue arises from the removal of agency. Even now, without decision-making machines, autonomous drone technology drastically changes how combat works: when one side no longer has to weigh human casualties of its own, decisions to engage become easier to make. Removing the human factor can easily lead to a rise in aggression, and this is accentuated further when no person stands behind the autonomous weapon system at all. Artificial intelligence of this kind is designed to make the best possible decision to achieve its goal, but that decision may not always be what its human creator intended. Because these are split-second decisions, the reluctance of a person behind the trigger may sometimes save many lives.
There is one notable Cold War incident in which human judgment and reluctance proved their worth. Stanislav Petrov was a Soviet officer who helped avert a nuclear war on the strength of a hunch. He occupied a pivotal place in the decision-making chain of the Soviet missile-warning system. One night in 1983, an alert indicated that five missiles had been launched from a US base, and protocol required him to notify his superiors immediately so a counterattack could be launched. Distrusting the warning system, he decided to wait, and it paid off: there was no attack. This is a situation in which a hypothetical AI officer would have made the wrong decision. The AI would see a threat and react immediately. Just as the missile-warning system failed, placing the decision of whether to pull the trigger on a machine could lead to dangerous consequences.
Additionally, modern artificial intelligence is already remarkably complex, so complex that humans cannot fully trace how it reaches its conclusions. How does this happen? It is due to how we program these systems. Most of them are built with machine learning, an approach with a loose resemblance to the human brain in that it uses “neurons” to improve itself, though that is where the similarity ends. These neurons are mathematical functions that take an input, transform it (randomly at first), and produce an output. The output is then compared to the desired, correct output; the functions that contributed little to the error are adjusted only slightly, while those that contributed most are adjusted more. Consider training a neural network to recognize a certain object. At first its guesses are completely random, but as the “neurons” are repeatedly adjusted, the machine gets better and better at the task. Over millions of iterations, which is nothing for a fast, modern computer, it becomes very efficient, even outperforming humans in some respects. The resulting connections between the neurons, and the functions they perform on the data, are extremely complex and very difficult for a human to interpret. Moreover, the machine does not reach the result the way a human does; it follows an entirely different chain of logic, and that is where the issue occurs. These systems strive for efficiency and the straightest path toward a goal, which is not always the correct thing to do.
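To make the training loop described above concrete, here is a minimal sketch in Python with NumPy: a tiny network learning the classic XOR function by trial and error. The task, network size, and learning rate are illustrative choices, not taken from any real system; actual systems use vastly larger networks and dedicated frameworks, but the principle is the same.

```python
# A minimal sketch of machine learning as described above: start with
# random "neurons", compare outputs to desired outputs, and adjust each
# weight in proportion to its share of the error.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the four XOR cases and their desired outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "neurons" begin as randomly initialized weights, so the network's
# first answers are effectively random guesses.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tens of thousands of iterations are trivial for a modern computer.
for _ in range(20000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare the answers to the desired outputs.
    err = out - y

    # Backward pass: weights that contributed more to the error
    # receive proportionally larger corrections.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(out.round(2))  # typically converges near [[0], [1], [1], [0]]
```

Notably, even in this toy example the learned behavior lives entirely in a few grids of numbers; inspecting them reveals nothing about why the network answers as it does. That is the opacity problem raised above, only at a microscopic scale.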
Given the inherent dangers of artificial intelligence weaponry, then, what can we do? First, it is important to recognize exactly where the threats come from. One source is hard-to-trace aggression carrying no consequences for the aggressor. Because these machines are already too complex for us to decipher, attacks carried out with such weaponry could be extremely difficult to trace back to their source. This invites an incredible amount of chaos and leaves our current intelligence systems powerless to defend against such attacks. Another source of threats is actions that humans would normally refuse to carry out: since machines eliminate the possibility of casualties on one’s own side, every party becomes more willing to act, whatever form that action may take.
To combat this, an effective system of regulations for artificial intelligence must be put in place. Top AI experts from the Universities of Oxford and Cambridge, along with influential figures such as Elon Musk, warn of dangerous scenarios and call for action on these issues as quickly as possible. But what can we do? The arms race is certainly not going to stop; it is impossible to keep rival governments from exploiting emerging technologies to gain a power advantage. Progress must therefore be made in the field of research first.
Currently, there are no regulations governing the development of artificial intelligence. A first step would be to bring the issue to the attention of the public and policymakers and impress its importance upon them. Policymakers should then collaborate with researchers to identify the possible malicious uses of artificial intelligence and mitigate them as much as possible from the very beginning.
It is also worth noting that most AI research and progress currently happens in the private sector, at large corporations. The next, crucial step would be to bring to light the dual-use nature of the work the engineers at these companies are doing. Most people in the field are in it for the technology itself: they want to work on it and advance it, and they mostly hold optimistic outlooks on its future. However, they must also be taught to consider the possible negative implications of their work. This would mitigate damage at the source, enabling engineers to spot harmful applications in their research and head off the consequences early.
Artificial intelligence is already solving many of the problems of modern life. It reduces human error and takes over many tedious tasks that people are simply unwilling to do. But putting the fate of another person in the hands of a machine is not a wise thing to do. We are reluctant to take one another’s lives, and even when we do, an enormous weight of emotion and reasoning lies behind the decision. Leaving that to a machine removes the reluctance, the difficulty behind the act. It acts without considering what is “right”.
Citations
Eckersley, Peter. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” Electronic Frontier Foundation, 21 Feb. 2018, www.eff.org/deeplinks/2018/02/malicious-use-artificial-intelligence-forecasting-prevention-and-mitigation.
Clifford, Catherine. “A.I. Experts Warn of a ‘Black Mirror’-Esque Future with Swarms of Micro-Drones, Autonomous Weapons.” CNBC, 21 Feb. 2018, www.cnbc.com/2018/02/21/openai-oxford-and-cambridge-ai-experts-warn-of-autonomous-weapons.html.
“DoD Official Highlights Value of Artificial Intelligence to Future War.” U.S. Department of Defense, www.defense.gov/News/Article/Article/1488660/dod-official-highlights-value-of-artificial-intelligence-to-future-warfare/.
Lind, William S. “The Four Generations of Modern War.” CounterPunch, 16 Oct. 2015, www.counterpunch.org/2003/04/21/the-four-generations-of-modern-war/.
Stroud, Matt. “The Pentagon Is Getting Serious about AI Weapons.” The Verge, 12 Apr. 2018, www.theverge.com/2018/4/12/17229150/pentagon-project-maven-ai-google-war-military.