This was the central message that President Vladimir Putin conveyed to more than one million Russian school pupils in a video call in September 2017. The announcement was no surprise: AI has become one of the most promising technologies in recent history, and the pace of progress is astounding – this holds true for both the civilian and the military realm.
Most people forget, for example, that as late as 2004, autonomous cars were unable to drive more than a couple of miles on an empty desert track. In 2005, however, five cars were able to finish the second DARPA Grand Challenge, a race for robotic cars funded by the US Defense Advanced Research Projects Agency (DARPA).
‘Stanley’: Winner of the DARPA Grand Challenge 2005 in the Smithsonian National Air and Space Museum, Washington, D.C., 2015.
Source: Niklas Schörnig
While fully autonomous cars are not yet mixing with human-steered vehicles, many AI-based technologies and assistance systems have already found their way into current-generation commercial vehicles. Artificial intelligence has also surpassed human capabilities in contexts where many observers had expected human superiority to last for a long time to come. In 1997, the computer program Deep Blue beat the world chess champion Garry Kasparov with what was essentially a brute-force approach.
In 2016, Google DeepMind’s AlphaGo program beat grandmaster Lee Sedol at Go, a game significantly more complex than chess.
Another milestone was reached in August 2020, again thanks to DARPA, which pitted an AI-controlled fighter jet against a human Air Force pilot in a simulated dogfight. While the conditions were not as symmetric as in Go, the fact that the AI won five to nil against the human was seen by many as the start of a new era. When it comes to the use of AI, the United States and Russia are not the only countries with a strategic interest in what artificial intelligence has to offer: many countries have published AI strategies for the coming years and decades.1
It is no wonder that militaries worldwide are keen to implement AI to enhance their capabilities.2 The range of potential military applications of AI is vast:
Analysis of data collected by all kinds of sensors on the battlefield
Identification and classification of potential targets, even camouflaged ones
Enhanced automation of drones or the control of drone swarms
Support for tactical decisions, or even optimised logistics
What is obvious from this list is that AI is widely perceived not as a particular weapons system but as an enabler – just as the combustion engine was at the beginning of the 20th century. With the US, Russia, China – and to some extent the EU – competing for AI leadership, the fear of an AI arms race does not seem too far-fetched. In any case, the use of AI in the military realm is going to increase significantly in the years to come.
Artificial intelligence: What it is and how arms control can benefit from it
While most people think they know what AI is, it is always important to clarify what the term ‘AI’ means in a specific context. There are two basic forms of AI. On the one hand, there are ‘expert systems’, which can be understood as tremendously complex decision trees in which the system ‘decides’ on the basis of a large number of variables. In principle, these systems are deterministic, as the same starting conditions always lead to the same result. Yet, due to the sheer complexity and number of variables, humans have difficulty following the decision process. While such systems were very common a few decades ago, modern systems use a different approach.
Expert system vs neural net
Source: Grübelfabrik, CC BY-NC-SA
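To make the contrast concrete, the following is a minimal sketch (in Python) of such a rule-based expert system: a hand-written decision tree for classifying a radar contact. All categories and thresholds are invented purely for illustration; the point is that identical inputs always produce the identical output, which is the determinism described above.

```python
# A minimal sketch of a rule-based 'expert system': a hand-written decision
# tree for classifying a radar contact. Thresholds and labels are hypothetical
# illustrations, not real doctrine.

def classify_contact(speed_kmh: float, altitude_m: float,
                     emits_transponder: bool) -> str:
    """Deterministic: identical inputs always yield the identical label."""
    if emits_transponder:
        return "civilian aircraft"
    if altitude_m < 100:
        if speed_kmh < 50:
            return "ground vehicle"
        return "cruise missile"
    if speed_kmh > 2000:
        return "ballistic object"
    return "military aircraft"

print(classify_contact(speed_kmh=850, altitude_m=11000, emits_transponder=True))
# -> civilian aircraft
```

A real expert system chains thousands of such rules, which is precisely why humans struggle to follow its reasoning even though every single step is deterministic.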
More common today are AI systems based on machine learning. Here the system compares large amounts of data for similarities using statistical models. Given enough pictures of, for example, cats, the system can use statistical methods to determine similarities and identify cats in new pictures without being told what to look for. Machine learning has made tremendous progress in the last couple of years, and some experts today use the term AI to refer only to machine learning algorithms. A prominent subfield of machine learning is ‘deep learning’, in which the computer learns using artificial neural networks loosely modelled on the human brain.
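By way of contrast with the hand-written rules above, here is a minimal sketch of statistical machine learning: a simple logistic-regression model that infers a decision boundary from labelled examples instead of following explicit rules. The two numerical ‘image features’ and all data points are invented for illustration.

```python
import numpy as np

# A minimal sketch of statistical machine learning: a logistic-regression
# model learns a decision boundary from labelled examples. The two features
# (say, 'ear pointiness' and 'whisker length') and the data are invented.

rng = np.random.default_rng(0)
# 100 'cat' examples clustered around (2, 2), 100 'non-cat' around (-2, -2)
X = np.vstack([rng.normal(2, 1, (100, 2)), rng.normal(-2, 1, (100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])

w, b = np.zeros(2), 0.0
for _ in range(500):                      # plain gradient descent on the log-loss
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probability of 'cat'
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

new_image_features = np.array([1.5, 2.5])
p_cat = 1 / (1 + np.exp(-(new_image_features @ w + b)))
print(f"P(cat) = {p_cat:.2f}")            # high probability: looks like a cat
```

Note that nobody told the model what a cat is: the decision boundary emerges purely from statistical regularities in the training data, which is both the strength and, as the next paragraph shows, the weakness of the approach.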
But machine learning AI is not without difficulties. The expert Gary Marcus, for example, has identified four problematic characteristics of such AI3: it is ‘greedy’ (that is, hungry for data), ‘brittle’ (it fails spectacularly when confronted with tasks it was not trained on), ‘opaque’ (prone to inexplicable errors and therefore difficult to debug) and ‘shallow’, because despite the use of the term ‘deep learning’ the system has no real understanding of what it has learned.
This raises the basic question: How can one be sure what the algorithms have actually learned? And since correlation is not causation, how far can AI be ‘trusted’?
From a military perspective, the issue of reliability is at least as important as in the civilian sphere. Imagine an AI-powered lethal autonomous weapons system going rogue, maybe even starting a war by mistake. Some experts fear that in a war of necessity, a war where national survival is at stake, states might use untested or unverified military AI to gain superiority.
While this is indeed a potential risk, many states are at least aware of the dangers of the unrestricted use of unreliable AI, and some suggest developing norms to govern its military use. One example is the US-initiated ‘Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy’ of 2023.
The declaration features ten measures, including calls for states to take proactive steps to minimise unintended bias; to train users to sufficiently understand the capabilities and limitations of AI-powered systems; to ensure that AI capabilities have explicit, well-defined uses; and to implement appropriate safeguards, e.g. the ability to deactivate a system when it exhibits unintended behaviour.
But an even more profound revolution in computing is waiting in the wings, one that will potentially cause even greater upheaval than machine learning already has. The technology in question is quantum computing.
Quantum computing
Quantum computers take the miniaturisation that classical computers have pursued down to the single atom one step further: they harness the fundamentally different physical principles that apply at the subatomic level for computing operations.
Experimental quantum computer at the IBM Quantum Lab in Yorktown Heights, New York
Source: IBM Research, CC BY-ND 2.0
Superposition is the first such phenomenon used for this purpose: the overlaying of different quantum states. Quantum computers use superposition in quantum bits (qubits), which, unlike classical bits with their two discrete states (1 or 0), can occupy any combination of 1 and 0 at the same time. Quantum computers thus offer massively parallel computing performance compared with classical computers. They also scale better, at least in theory, because ideally every additional qubit doubles the size of the state space the computer can work with, so its capacity grows exponentially. For certain computing tasks of exponentially increasing complexity, a quantum computer can therefore find solutions quickly (in seconds or minutes) where even the biggest classical supercomputers would need far too much time (tens of thousands of years). This is what is commonly understood by the term ‘quantum supremacy’.
Speed advantage through 'quantum supremacy'
Source: Grübelfabrik, CC BY-NC-SA
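For illustration, superposition and the exponential growth of the state space can be simulated (inefficiently, of course) on a classical machine. The following sketch, using only numpy, prepares three qubits in an equal superposition; note that n qubits already require 2^n amplitudes, which is exactly the doubling per qubit described above.

```python
import numpy as np

# A minimal classical simulation of superposition. n qubits need a state
# vector of 2**n complex amplitudes; this doubling per additional qubit is
# the (idealised) exponential scaling of a quantum computer.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

n = 3
state = np.zeros(2**n)
state[0] = 1.0                                  # start in |000>

# Apply a Hadamard to every qubit: the full operator is H (x) H (x) H
U = H
for _ in range(n - 1):
    U = np.kron(U, H)
state = U @ state

probs = np.abs(state)**2
print(probs)   # eight entries, each 0.125: an equal superposition of all states
```

The simulation also makes the catch visible: a classical computer must store and multiply all 2^n amplitudes explicitly, which becomes hopeless beyond a few dozen qubits, whereas the quantum computer carries them ‘for free’ in its physical state.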
The second phenomenon utilised in quantum computing is entanglement. Entanglement means that two or more particles are linked with each other in such a way that their states remain correlated even over long distances: measuring one particle instantly determines the corresponding state of the other. The numerous possibilities for flexibly manipulating such entangled qubits contribute to the speed with which a quantum computer can handle complex computing problems.
Quantum entanglement
Source: Grübelfabrik, CC BY-NC-SA
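The characteristic correlations of entanglement can likewise be illustrated with a small simulation. The sketch below samples measurements from the two-qubit Bell state (|00⟩ + |11⟩)/√2: each individual outcome is random, yet both qubits always agree.

```python
import numpy as np

# A minimal sketch of entanglement: the two-qubit Bell state
# (|00> + |11>)/sqrt(2). Each measurement outcome alone is random,
# but the two qubits always yield the same value.

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # amplitudes for |00>,|01>,|10>,|11>
probs = np.abs(bell)**2                       # [0.5, 0.0, 0.0, 0.5]

rng = np.random.default_rng(1)
outcomes = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print(outcomes)   # only '00' and '11' ever occur: perfectly correlated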
There are still a number of difficult hurdles to overcome before quantum computing can be used on a widespread basis, such as the miniaturisation of mass-producible architectures. Sceptics note that quantum computers could suffer the same fate as nuclear fusion: decade after decade, the technology’s final breakthrough has been predicted, but it never seems to materialise.
However, if quantum computers were to one day make their way out of the lab and into everyday applications, the implications would be extremely wide-ranging. From a military perspective, the most important elements to focus on would be quantum cryptography and quantum sensing.
Quantum cryptography
When it comes to cryptography, established methods take advantage of the fact that certain mathematical problems, such as factoring very large numbers, cannot be solved by classical computers in a reasonable time frame. As outlined above, quantum computers have the potential to introduce a paradigm shift here: running Shor’s algorithm, a sufficiently powerful quantum computer could quickly break encryption that is secure by current standards. Stored datasets that have so far been impossible to decipher could then suddenly be accessed as well.
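The following toy example (with absurdly small numbers) illustrates the point for RSA-style public-key encryption: its security rests entirely on the difficulty of factoring the public modulus, and Shor’s algorithm on a large quantum computer would remove precisely that difficulty. The primes, exponent and message are illustrative only.

```python
# A toy illustration of why RSA-style encryption is only as strong as the
# difficulty of factoring: whoever factors the public modulus n can
# reconstruct the private key. Real keys use primes of ~1000 bits or more.
# (Requires Python 3.8+ for pow(e, -1, m).)

p, q = 61, 53                      # secret primes
n, phi = p * q, (p - 1) * (q - 1)  # n = 3233 is the public modulus
e = 17                             # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent, requires knowing phi

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
print(pow(ciphertext, d, n))       # decrypt with d -> 42

# An attacker who can factor n recovers the private key. Here brute force
# suffices; at real key sizes only Shor's algorithm would be fast enough.
for cand in range(2, n):
    if n % cand == 0:
        p_found = cand
        break
d_attacker = pow(e, -1, (p_found - 1) * (n // p_found - 1))
print(pow(ciphertext, d_attacker, n))  # also 42: the encryption is broken
```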
At the moment, this is a purely theoretical scenario. Nevertheless, quantum computer-resistant encryption methods for the world of classical computers and the internet are already being discussed. In 2016, for example, the US National Institute of Standards and Technology (NIST) initiated a process to develop and standardise such methods and make them more readily available. The first candidate schemes for quantum computer-resistant encryption are currently under review.
In light of the progress made in just under three decades, the prototypes and special applications that already exist, the investments that have been made and, last but not least, all the talent and time being devoted to the field worldwide, a quantum computing breakthrough seems more a question of ‘when’ than of ‘if’.
Quantum sensing
Sensor technology is the area of quantum technology with the largest number of concrete applications already in use. Unlike quantum computers, quantum sensors do not require large numbers of entangled particle pairs. Considerable advances over the last two decades in producing and manipulating the quantum states of particles also mean that researchers now have a better handle on ‘noise’, which affects the precision of measurements. Much as in the field of computing, a variety of different physical principles and designs are being explored in parallel.
With quantum sensors, mass, time, position, speed, acceleration and electromagnetic field strength can be measured several orders of magnitude more accurately than with classical sensors; spatial resolutions in the nanometre range are possible. Quantum clocks make it possible to synchronise processes extremely precisely. Quantum gyroscopes for inertial navigation systems and quantum sensors for measuring the Earth’s magnetic field could enable autonomous mobility without reliance on GPS or other satellite navigation systems. Compact quantum magnetometers that work at room temperature are currently being developed and could be used in areas ranging from submarine detection to brain–computer interfaces.
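A back-of-the-envelope sketch shows why sensor accuracy is decisive for GPS-free inertial navigation: a constant accelerometer bias, integrated twice, produces a position error that grows quadratically with time. The bias values below are illustrative assumptions, not the specifications of real devices.

```python
# Why orders-of-magnitude better sensors matter for inertial navigation:
# a constant accelerometer bias b, integrated twice over time t, yields a
# position error of 0.5 * b * t**2 (quadratic drift). Bias values are
# illustrative assumptions only.

def position_error(bias_m_s2: float, seconds: float) -> float:
    return 0.5 * bias_m_s2 * seconds**2

for bias in (1e-3, 1e-6):   # a coarse classical sensor vs. a far more accurate one
    err = position_error(bias, seconds=3600)
    print(f"bias {bias:g} m/s^2 -> {err / 1000:.2f} km drift after one hour")
# bias 0.001 m/s^2  -> 6.48 km drift after one hour
# bias 1e-06 m/s^2  -> 0.01 km drift after one hour
```

Reducing the sensor error by three orders of magnitude shrinks an hour’s drift from kilometres to metres, which is what would make satellite-independent navigation practical.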
Footnotes
Galindo, L./Perset, K./Sheeka, F. 2021. ‘An overview of national AI strategies and policies’, in: OECD Going Digital Toolkit Notes, no. 14, OECD Publishing, Paris, available at: https://doi.org/10.1787/c05140d9-en.
Sauer, Frank. 2022. ‘The Military Rationale for AI’, in: Reinhold, T./Schörnig, N. (eds): Armament, Arms Control and Artificial Intelligence. The Janus-faced Nature of Machine Learning in the Military Realm. Springer, 27–38.