The future of war will be fought by machines, but will humans still be in charge?

Swarms of drones. Self-driving tanks. Autonomous sentry guns. Sometimes it seems like the future of war arrived on our doorstep overnight, catching us all unprepared. But as Paul Scharre writes in his new book, Army of None: Autonomous Weapons and the Future of War, it has been a long time coming: the slow culmination of decades of development in military technology. That doesn't make it any less frightening.

Scharre's book provides an excellent survey of the field, tracing the history of autonomous weapons from the first machine guns (which automated the loading and firing of a rifle) to today's world of lethal drones improvised in garages and sheds. As a former Army Ranger and someone who helped write the US government's policy on autonomous weapons, Scharre is knowledgeable and concise. More importantly, he pays as much attention to the political dimensions of autonomous weapons as to the underlying technology, looking at things like historical attempts at arms control (for example, Pope Innocent II's 1139 prohibition on the use of crossbows against Christians, which didn't accomplish much).

The Verge recently spoke with Scharre about Army of None, discussing the US military's current attitude toward autonomous weapons, the viability of attempts to control so-called "killer robots," and whether it's inevitable that new military technology will have unexpected and harmful side effects.

This interview has been condensed and lightly edited for clarity.

This book has arrived at an opportune time, I'd say, just as the discussion about autonomous weapon systems is back in the news. What was your motivation for writing it?

I've been working on these issues for eight or nine years, and I've been involved in discussions about autonomous weapons at the United Nations, NATO, and the Pentagon. I felt I had enough to say that I wanted to write a book about it. Certainly, the issue is heating up, particularly as we see autonomous technologies developing in other areas, like driverless cars.

People see a car with autonomy, and they make the connection between that and weapons. They work out the risks and start asking questions like: "What happens when a military drone has as much autonomy as a driverless car?" It's because we're at this very interesting point in time when the technology is becoming real and these questions are less theoretical.

How did the US military arrive at its current position? Our readers are familiar with private companies developing technology like autonomous vehicles, but how and when did the Army become interested in this?

In the case of the United States, it stumbled into this military robotics revolution through Iraq and Afghanistan. I don't think anyone deliberately planned to buy thousands of aerial and ground robots, but that's what happened. Beforehand, most people would have said it wasn't a good idea, but these robots turned out to be incredibly valuable for very specific tasks in those conflicts. Drones provided aerial surveillance, and [bomb disposal robots] reduced the threat of things like improvised explosive devices on the ground.

During these conflicts, you saw the American military wake up to this technology and begin to think strategically about the direction it wanted to take. A common theme has been the desire to develop more autonomy, because [robotic] systems in the past had such fragile communications links with humans. If those are jammed, then your robots can't do anything. But when the military says it wants "full autonomy," it isn't thinking about the Terminator. It's thinking of a robot that gets from point A to point B by itself. And it hasn't always expressed that clearly.

I quote the US Air Force's 2009 Flight Plan for [uncrewed] aircraft systems, which explicitly raises these issues around [autonomous weapon systems] and was the first official defense document to do so. The document says we can envision a point where the advantages of speed make it better to move to full autonomy, that this raises all these complicated ethical and legal issues, and that we have to start talking about it. And I think that was right.

There are only a few fully autonomous weapon systems deployed around the world, including the Aegis Combat System (pictured) and the Israeli Harpy drone.

The Air Force Flight Plan says that in a situation where computers can make decisions faster than humans, it could be advantageous to hand over control to machines. You point out that this has been the case with the very small number of autonomous weapon systems currently in use, which are designed for situations where humans simply can't keep up.

Like the US Navy's Aegis Combat System, for example, which is used on ships to defend against barrages of precision-guided missiles, which are themselves a kind of semiautonomous system. Given that autonomous weapon systems are being built in response to other autonomous weapon systems, do you think the forward march of this technology is unstoppable?

I think that's one of the central questions of the book. This road we're on: is its destination inevitable? It's clear that technology is leading us down a path where fully autonomous weapon systems are certainly possible, and in some simple environments, they're possible today.

Is that a good thing? There are many reasons to think not. I'm inclined to think it's not a great idea to have less human control over violence, [but] I also don't think it's easy to stop the advance of technology. One of the things I tried to grapple with in the book is the historical record here, because it's extremely mixed. There are examples of both successes and failures in arms control going back to ancient India, as far back as 1500 BC. There's this age-old question of "Do we control our technology, or does our technology control us?" And I don't think there are easy answers. Ultimately, the challenge isn't really autonomy or the technology itself, but ourselves.

One thing I think your book does very well is help define the terms of this debate, distinguishing between different types of autonomy. That seems incredibly important, because how can we discuss these problems without a common language? With that in mind, are there any particular concepts here that you think are regularly misunderstood?

[Laughs] This is always the challenge! I spent 10,000 words in the book talking about this problem, and now I have to summarize it in a paragraph or two.

But yes, one thing is that people tend to talk about "autonomous systems," and I don't think that's a very meaningful concept. You need to talk about autonomy with respect to what: which task are you automating? Autonomy isn't magic. It's simply the freedom, whether for a human or a machine, to perform some action. As children grow up, we give them more autonomy: staying out later, driving a car, going to college. But autonomy and intelligence are not the same thing. As systems become smarter, we can choose to grant them more autonomy, but we don't have to.

In tracing the history of autonomous weapons, you start with the American Civil War and the inventor of the Gatling gun, Richard Gatling. This was a precursor to modern machine guns, and you include a fantastic excerpt from one of Gatling's letters in which he says his motivation was to save lives. He thought a weapon that fired automatically would mean fewer soldiers on the battlefield and, therefore, fewer deaths. Of course, that's not how it turned out. Do you think it's inevitable that new technologies in war have these bloody unintended consequences?

Many technologies look great when you're the only one who has them. You say, "Wow, look at this! We can save the lives of our troops by being more effective on the battlefield!" But when both sides have them, as with machine guns, war suddenly moves to a much more horrible place. I think that's a definite concern with autonomy and robotics. There's a risk of an arms race where, individually, nations pursue various military advances that are each very reasonable, but that collectively make war less controllable and are, on the whole, detrimental to humanity.

The Gatling gun was one of those fascinating things I stumbled upon while researching the history of this field. The automation there reduced the number of people needed to deliver a given amount of firepower: four people with a Gatling gun could deliver as much firepower as 100 people. But the question is, what did militaries do with that? Did they shrink their armies? No, they expanded their firepower, and in doing so, they took violence to a new level. It's an important cautionary tale.

Russia's "Platform-M" combat robot platform.
Image: Russian Ministry of Defense

You note that people mistakenly assume there's a rush toward autonomy in the US military when, in fact, there's a lot of internal resistance. Unlike Russia, for example, the United States isn't building ground robots for the front line, and the autonomous aircraft it's developing are intended for support roles, not combat. How would you summarize current US policy on autonomous weapons?

There's a lot of rhetoric from the US defense establishment about AI and autonomy. But if you look at what they're actually spending money on, the reality doesn't always match. For combat applications in particular, there's this disconnect where you have engineers in places like DARPA running at full speed and making the technology work, but there's a valley of death between R&D and operational use. And some of the obstacles are cultural, because warfighters simply don't want to give up their jobs, particularly people at the tip of the spear.

The result is that leaders at the US Department of Defense have said very firmly that they intend to keep a human in the loop on future weapon systems, authorizing decisions about lethal force. And I don't hear that same language from other nations, like Russia, which talks about building fully robotized combat units capable of autonomous operations.

Russia and China obviously feature heavily in the book, but experts seem more concerned about non-state actors. You point out that much of this technology, such as autonomous navigation and small drones, is freely available. What's the threat there?

Non-state groups like the Islamic State already have armed drones that they've improvised using commercially available equipment. And the technology is so ubiquitous that this is something we're going to have to deal with. We've already seen low-level "mass" drone attacks, like the one on a Russian air base in Syria. I hesitate to call it a drone swarm because there was no sign of cooperation [between the drones], but I think attacks of that kind will grow in sophistication and scale over time because the technology is so widely available. There's no good solution to this.

This fear that AI is a "dual-use" technology, that any commercial research may have malicious applications, seems to motivate many of the people arguing that we need an international treaty to control autonomous weapons. Do you think such a treaty is likely to happen?

There's some energy after the recent meetings at the United Nations because we saw significant moves from two major countries: Austria, which is going to call for a ban, and China, which declared at the end of the week that it would like some kind of ban on autonomous weapons. But I don't think we'll see momentum for a treaty in the vein of the UN's CCW [the 1983 Convention on Certain Conventional Weapons, which limits the use of mines, booby traps, incendiary weapons, blinding lasers, and other weapons]. It's just not in the cards. [The UN] is a consensus-based organization, and every country would have to agree. It's not going to happen.

What has happened in the past is that these movements mature for a while in these large collective bodies at the United Nations and then migrate to independent treaties. That's what led to the treaty on cluster munitions, for example. I don't think we're at that point yet. There's no core group of Western democratic states involved, and that's been critical in the past, with countries like Canada and Norway leading the charge. It's possible the Austrian move will change that dynamic, but it's not clear at this point.

The big difference this time is the lack of a direct humanitarian threat. People were being killed and maimed by land mines and cluster munitions, whereas here the threat is very theoretical. And even if countries like China and the US were to sign some kind of treaty, verification [that they were following the treaty's rules] would be exceptionally difficult. It's very hard to imagine how you'd get them to trust each other. And that's a central problem. If you can't solve it, there's no solution.

Since you believe a UN ban or set of restrictions won't happen, what's the best way we can guide the development of autonomous weapons? Because no one involved in this debate, even those who argue that autonomous weapons will save lives, thinks there are no risks involved.

I think more conversation on the subject, by academics and in the public sphere, is all to the good. This is a problem that brings together a wide variety of disciplines: technology, military operations, law, ethics, and more. And it's an area where a robust discussion is useful and very much needed. I'd like to think this book can help advance that conversation, certainly by expanding the group of people taking part in it.

What I think is important is establishing the underlying principles of what control over autonomous weapons looks like. Things like defining what we mean by "meaningful human control" or "appropriate human judgment," or the concept of focusing on the human role. I like that, and I want to see more of that conversation internationally. It raises the question: if we had all the technology we could imagine, what role would we want humans to play in war? And why? What decisions require uniquely human judgment? I don't know the answers, but those are the right questions to ask.
