Artificial Intelligence and the New Nuclear Age

The global competition for artificial intelligence and new nuclear weapons will soon pose unprecedented policy challenges. Future decisionmakers and thinkers must be equipped with technological expertise as well as a greater capacity to convert innovations into public goods.

Photo by Andrey Suslov/Getty Images


Two major global competitions could reshape the human condition. The first is the race for superiority in artificial intelligence (AI), which may go down as the “single most influential human innovation in history”[i], a technology “more profound than electricity or fire.”[ii] The second is the new nuclear arms race, marked by multipolarity and advances in precision strike capabilities that are “eroding the foundation of nuclear deterrence.”[iii] At this rate, the global security environment may be unrecognizable by 2050. Future policymakers must start thinking now about how to meet these unprecedented challenges.

New Nuclear Arms Race

The second nuclear age has already begun.[iv] In contrast to the Cold War’s mainly bipolar framework, the global nuclear environment in 2018 is characterized by nine powers with differing capabilities, intentions, and relations to one another. Nuclear weapons technology is increasingly accessible and has enabled rogue regimes like North Korea to become major geopolitical players.[v] International nonproliferation mechanisms have struggled to produce lasting results over the past decade.

Advances in precision strike and sensor technologies have led to another paradigm shift in nuclear stability. Traditionally, the difficulty of destroying an enemy’s nuclear arsenal before it could respond allowed threats of retaliation to remain credible on both sides of the strategic competition, a condition that came to be known as “mutually assured destruction.” Heightened vulnerability to detection and precision strikes, however, could breed mutual anxiety and the temptation to strike preemptively, even as the weapons themselves remain as destructive as ever.

Nuclear powers have already begun a sweeping update of their aging arsenals. In the US, the Obama administration began replacing all three legs of the nuclear triad with new land-based missiles (the Ground-Based Strategic Deterrent), strategic bombers (the B-21 Raider), and ballistic missile submarines (the Columbia class), as well as adding the new Long-Range Standoff (LRSO) cruise missile and extending the life of some existing platforms. Nuclear modernization is projected to cost around $1.2 trillion through 2046, accounting for about 6.4 percent of the total defense budget.[vi] The Trump administration is continuing the plan and proposing new weapons capabilities, including low-yield warheads.[vii] This bipartisan commitment reinforces the notion that nuclear deterrence will remain the “foundation” of US defenses.[viii]

Russia has revealed a new autonomous underwater vehicle (AUV), the Ocean Multipurpose System Status-6, which would be launched from submarines, autonomously circumvent antisubmarine defenses, and strike the US coastline with its nuclear warhead.[ix] Russia, China, and the US are testing hypersonic glide vehicles that could travel at five times the speed of sound and render current defense systems useless.[x] Although no state has explicitly acknowledged an intention to mount nuclear warheads on hypersonic weapons, it is a growing possibility. This underreported new era of nuclear arms racing is likely to creep up on policymakers in the near future.

Artificial Intelligence and Nuclear Stability

The new nuclear age is also likely to coincide with the rapid advancement and ubiquity of AI. Artificial intelligence has long been defined as the “science of making machines do things that would require intelligence if done by men.”[xi] AI operates differently from previous human inventions in three ways. First, AI algorithms are designed to make decisions using real-time data, as opposed to mechanical or predetermined responses. Second, machine learning takes in data and finds underlying trends and useful patterns on its own. Third, AI can learn and adapt on its own while making decisions.[xii]
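The second and third properties can be illustrated with a toy example. The sketch below (hypothetical, purely for illustration) implements a simple online perceptron that makes a decision on each incoming data point and adjusts its own parameters whenever that decision turns out to be wrong, so the model refines itself as the data stream continues:

```python
# Toy sketch of online learning: a perceptron that decides on each new
# observation and adapts its weights when it was wrong. Illustrative
# only; no connection to any real military system.

def predict(weights, bias, x):
    """Return the current decision (+1 or -1) for input vector x."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else -1

def update(weights, bias, x, label, lr=0.1):
    """Adapt the model only when its real-time decision was wrong."""
    if predict(weights, bias, x) != label:
        weights = [w + lr * label * xi for w, xi in zip(weights, x)]
        bias = bias + lr * label
    return weights, bias

# A stream of labeled observations arriving over time.
stream = [([2.0, 1.0], 1), ([-1.5, -0.5], -1),
          ([1.0, 2.0], 1), ([-2.0, -1.0], -1)]

weights, bias = [0.0, 0.0], 0.0
for x, label in stream * 20:        # several passes over the stream
    weights, bias = update(weights, bias, x, label)

print(predict(weights, bias, [3.0, 2.0]))   # → 1 (learned decision rule)
```

The key point is that no decision rule was programmed in advance: the rule emerges from the data, and it keeps changing as new data arrives, which is precisely what distinguishes such systems from predetermined, mechanical responses.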

The global competition for AI superiority has begun, and military developments are outpacing civilian applications.[xiii] China has declared a national plan to be the global leader in AI by 2030, with an industry worth almost $150 billion.[xiv] China’s military is preparing for “intelligentized warfare” in the future.[xv] Russian president Vladimir Putin declared that whoever becomes the leader in AI will become the “ruler of the world.”[xvi] In the US, funding for AI research by the private sector and academia “dwarfs” that of the government,[xvii] but the military has created a new Algorithmic Warfare Cross-Functional Team (AWCFT).[xviii] This trend has led to concerns that “competition for AI superiority at national level” will be the “most likely cause of [World War 3].”[xix]

According to a major study by the RAND Corporation, expert opinion on AI’s potential impact on nuclear stability falls into three broad categories. The first is the “Complacents,” who believe AI will never reach the level of sophistication needed to handle the challenges of nuclear war, and that its impact is therefore negligible. The second is the “Alarmists,” who believe AI must never be allowed into nuclear decision-making because of its unpredictable trajectory of self-improvement. The last is the “Subversionists,” who believe the impact will be driven mainly by an adversary’s ability to alter, mislead, divert, or trick AI, whether by replacing training data with erroneous samples or false precedents, or by more subtly manipulating inputs after the AI is fully trained.[xx]

AI could be used to track mobile missile launchers by accumulating data and detecting underlying patterns, or to serve as a “decision support system” assisting humans with complex matters of planning, training, response, and crisis management.[xxi] In many such cases, AI could upset the “foundations of nuclear stability and undermine deterrence by the year 2040.” Breakthroughs could increase the pressure to “use [AI] before it is technologically mature,” to subvert an adversary’s AI, or to launch a first strike based on the “perception” of an adversary’s advanced AI capabilities.[xxii]

The rise of AI highlights another urgent issue of cybersecurity. Nuclear weapons were developed before the digital revolution, but now face new cyber risks in command and control communications, data relay between platforms, the supply chain, and many other supporting structures.[xxiii] Although AI can help defend networks, it can also be hacked or subverted. On the other hand, there is also a possibility that AI’s ability to produce more reliable information and foster transparency could reduce distrust as well as risk of nuclear war.[xxiv]

Policy Recommendations

Former Secretary of State Henry Kissinger noted that the rise of AI marks the end of the Enlightenment: humanity has now “generated a potentially dominating technology in search of a guiding philosophy.” Although those who specialize in politics and philosophy may be at a technical disadvantage, their role in channeling AI toward humanistic traditions must be given “a high national priority.”[xxv] Policymakers and thinkers could harness the technology to solve pressing problems and boost human intelligence and cognition, while imbuing the process with a deep search for meaning and morality.

In terms of security policy, the first recommendation is for the Department of Defense to create an AI strategy and doctrine that clarify how much decision-making authority AI is allowed, and which types of systems it can be integrated into.

Humans must retain most decision-making authority, but under extreme circumstances, such as shooting down multiple incoming missiles or pinpointing a time-sensitive piece of information, AI could save more lives than human operators could.[xxvi] US Strategic Command (STRATCOM) must create a new division dedicated to studying and managing the role of AI in its nuclear command and control mission. This includes applying AI to space operations; global strike; global missile defense; and global command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR).[xxvii] The new DoD strategy must acknowledge the value of AI while simultaneously curbing the military’s over-reliance on the technology.

Another security policy recommendation is for the US government to take the lead in establishing international laws and norms on the military usage of AI.

Governments could augment the current WMD nonproliferation regimes to prevent AI technology from falling into the wrong hands. This would include working closely with US allies to establish a united front on AI, and reaching a new type of arms control agreement with near-peer competitors. Governments must also recognize the growing importance of the private sector and nongovernmental organizations in global affairs. Future policymakers must be equipped with technological knowledge as well as a greater capacity to convert innovations into public goods.

The ongoing transformations are reminiscent of the advent of the nuclear age. Following World War II, the global power balance was restructured, and a new type of military technology with an unprecedented amount of destructive power sparked a period of uncertainty. Just as with nuclear weapons, fears of AI’s potentially devastating impact have long inspired fictional works and social movements. AI has enormous potential to solve problems of objectivity and productivity, but it could also be destabilizing if left to prioritize efficiency and expediency over ethics and wider socio-political implications. AI must be treated like a drug that can prove vital at certain times but still needs to be used in moderation.

Perhaps the most important lesson of the new nuclear age, marked by the rise of AI and multipolarity, is that enhanced decision-making and international cooperation, which have proven more strategically viable than maximum lethality in the past, may prove even more imperative in the future.


[i] West, D. M., & Allen, J. R. (2018). How Artificial Intelligence is Transforming the World. Brookings Institution.

[ii] Goode, L. (2018, January 19). Google CEO Sundar Pichai compares impact of AI to electricity and fire. The Verge.

[iii] Lieber, K. A., & Press, D. G. (2017). The New Era of Counterforce: Technological Change and the Future of Nuclear Deterrence. International Security, 41(4).

[iv] Koblentz, G. D. (2014). Strategic Stability in the Second Nuclear Age. Council on Foreign Relations.

[v] Nolan, J. E., Steinbruner, J. D., Flamm, K., Miller, S. E., Mussington, D., Perry, W. J., et al. (1994). The Imperatives for Cooperation. In J. E. Nolan, Global Engagement: Cooperation and Security in the 21st Century. Washington: Brookings Institution Press.

[vi] Reif, K. (2018, March 9). U.S. Nuclear Modernization Programs. Retrieved from Arms Control Association:

[vii] Office of the Secretary of Defense. (2018). 2018 Nuclear Posture Review. Washington: Department of Defense.

[viii] Hyten, J. (2017, April 4). Statement Before the Senate Armed Services Committee. Washington, DC, USA.

[ix] Insinna, V. (2018, January 12). Russia’s nuclear underwater drone is real and in the Nuclear Posture Review. Defense News.

[x] Starr, B. (2018, March 27). US general warns of hypersonic weapons threat from Russia and China. Retrieved from CNN:

[xi] Minsky, M. (1968). Semantic Information Processing. Cambridge, Massachusetts, USA: MIT Press.

[xii] West, D. M., & Allen, J. R. (2018). How Artificial Intelligence is Transforming the World. Brookings Institution.

[xiii] Gershgorn, D. (2018, May 8). Forget the space race, the AI race is just beginning. World Economic Forum.

[xiv] Metz, C. (2018, February 12). As China Marches Forward on A.I., the White House Is Silent. The New York Times.

[xv] Kania, E. B. (2017). Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power. Washington: Center for New American Security.

[xvi] Bershidsky, L. (2017, September 5). Take Elon Musk Seriously on the Russian AI Threat. Bloomberg News.

[xvii] Allen, G., & Chan, T. (2017). Artificial Intelligence and National Security. Belfer Center for Science and International Affairs.

[xviii] Deputy Defense Secretary (2017). Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven). Department of Defense.

[xix] Bershidsky, L. (2017, September 5). Take Elon Musk Seriously on the Russian AI Threat. Bloomberg News.

[xx] Geist, E., & Lohn, A. J. (2018). How Might Artificial Intelligence Affect the Risk of Nuclear War? Rand Corporation.

[xxi] Ibid.

[xxii] Bershidsky, L. (2017, September 5). Take Elon Musk Seriously on the Russian AI Threat. Bloomberg News.

[xxiii] Unal, B., & Lewis, P. (2018). Cybersecurity of Nuclear Weapons: Threats, Vulnerabilities, and Consequences. Chatham House.

[xxiv] Geist, E., & Lohn, A. J. (2018). How Might Artificial Intelligence Affect the Risk of Nuclear War? Rand Corporation.

[xxv] Kissinger, H. A. (2018, June). How the Enlightenment Ends. The Atlantic .

[xxvi] Work, R. (2016, April 28). Remarks on the Third Offset Strategy. Brussels, Belgium.

[xxvii] U.S. Strategic Command. (2017, January). Retrieved from