By Alice Saltini | December 22, 2025
In November, 115 states voted in favor, eight against, and 44 abstained on a resolution adopted by the General Assembly’s First Committee that examines the possible risks of integrating AI into nuclear weapons systems. Image: depositphotos
The United Nations rarely moves fast on disarmament. This year, though, it did something unusual. On November 6, the General Assembly’s First Committee, where states debate questions of disarmament and international security, adopted a resolution that directly addresses the possible risks of integrating artificial intelligence into nuclear weapons systems, especially in nuclear command, control, and communications, known as NC3. Austria, El Salvador, Kazakhstan, Kiribati, Malta, and Mexico pushed the text, bringing into a formal setting a problem that has mostly lived in expert circles and informal dialogues.
With 115 states voting in favor, eight voting against, and 44 abstaining, support was broad. Nuclear-armed states and many of their allies voted against the resolution or chose to abstain. In contrast, the Global South and most non-nuclear-weapon states expressed strong support. This split reflects how each group views threats and which policies it prioritizes. It also reveals how early and unsettled global thinking on AI in the nuclear field still is. Rather than a clear-cut failure or success, the outcome of the vote may be best read as an initial test case. In other words, the resolution was an early attempt to translate a fast-moving technical debate into diplomatic language.
It also opened a window: The resolution is a major steppingstone because it nudges the debate beyond the baseline notion of “keeping humans in control of nuclear weapons decisions” toward a more fine-grained recognition of how AI could fuel unintended escalation in decision processes. Moreover, the negotiations and the voting patterns show how states currently understand the AI-nuclear nexus, where their concerns lie, and what they are prepared to put on paper. They also offer clues for how to shape future work (from working papers and language for outcome documents, to concrete confidence-building and risk-reduction measures) in nuclear-focused forums, including the next Nonproliferation Treaty (NPT) Review Conference. Scheduled for next spring in New York City, the Review Conference offers a chance to move the discussion onto a relevant and practical, yet somewhat less sensitive, track that can help build momentum on harder issues like AI integration in nuclear command, control, and communications among nuclear-weapon states. Within the NPT, this could mean looking at the practical implications of artificial intelligence across all three pillars: non-proliferation, disarmament, and peaceful uses of nuclear energy.
What the resolution says. Substantively, the text reflects, and attempts to build on, political commitments that several states—such as France, China, Pakistan, the United Kingdom, and the United States—have already endorsed in statements or in outcome documents in other forums. At the same time, it attempts to extend these commitments to nuclear command, control, and communications more broadly, namely the whole architecture that underpins nuclear decision-making.
In November 2024, on the margins of the Asia-Pacific Economic Cooperation summit, the United States and China jointly affirmed “the importance of maintaining human control over decisions to use nuclear weapons,” signaling a baseline understanding that AI should not displace human authority in this area. Likewise, at the 2024 Responsible AI in the Military Domain (REAIM) summit, 61 countries, including France, the United Kingdom, the United States, and Pakistan (all nuclear possessors), reportedly endorsed a blueprint that reiterated the same principle, stressing it is “especially crucial to maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.” France, the United Kingdom, and the United States had already made a similar pledge in an earlier statement.
Although these statements converged on the need to preserve human authority in nuclear weapons employment, the recent First Committee resolution takes the next step by situating that principle squarely within the broader nuclear command, control, and communications architecture. It extends the human control and oversight requirement to the larger set of functions that feed, shape, and transmit nuclear decisions, without explicitly defining what those functions are. In its operative language, the resolution calls on states to “adopt and publish national policies and doctrines explicitly affirming and operationalizing that command, control, and communications systems of nuclear weapons that integrate artificial intelligence will remain subject to human control and oversight and that such systems will not autonomously initiate decisions on the use of nuclear weapons.”
The text also makes a second important move. It recognizes that risks are not limited to whether human authority over nuclear decisions is preserved. It highlights a wider set of concerns associated with AI integration in nuclear command, control, and communications, including potential reductions in human oversight, compressed decision timelines, and distortions that could increase the chances of misperception, misinterpretation, and miscalculation. It underscores inherent limitations in current AI systems (such as vulnerability to malfunction, exploitation, cognitive and automation bias, and hallucinated or misleading outputs) that could shape human decision-makers’ perceptions even when humans technically “retain the final say.” In that sense, the resolution nudges the conversation beyond the minimal baseline of “keep humans in control” toward a more comprehensive understanding of how AI could affect crisis dynamics and escalation even with human decision-makers still firmly in the loop.
What the opposing positions indicate. The divide over the resolution reflects long-standing fault lines in the nuclear landscape (particularly within the NPT) that have deepened in recent years. Divergent views on AI, on the forums in which this discussion should take place, and on long-established practices of automating elements of nuclear command, control, and communications add to the rift.
At the broadest level, the division reflects a clash between states that place nuclear disarmament at the center of their security thinking and those that see today’s security environment as not conducive to further disarmament steps. Many non-nuclear-weapon states sit in the first category. States recognized as nuclear-weapon states under the NPT—China, France, Russia, the United Kingdom, and the United States—and many of their allies, sit largely in the second category.
In a deteriorating security environment, several nuclear-armed states view AI as a key tool for gaining technological dominance and, in the military domain, as a source of operational and strategic advantage. Artificial intelligence applications promise to improve early warning, increase situational awareness, and make command and control more resilient. From that perspective, regulatory efforts can appear premature, potentially constraining, or misaligned with national-security priorities, and framings that focus too heavily on risks can seem at odds with what these governments see as the operational and strategic benefits of AI. Alliance dynamics also clearly shape the positions of some non-nuclear allies.
By contrast, disarmament-oriented states often interpret “opportunities in the military domain” as reinforcing a trajectory toward more sophisticated (and hence more entrenched) militarization. They see AI in nuclear command, control, and communications as a new layer of risk laid over an already fragile system. From this perspective, stronger obligations and clearer guardrails are a logical continuation of efforts to reduce nuclear dangers.
To be sure, the resolution does acknowledge that AI could have positive uses, including for disarmament verification and broader disarmament support. Switzerland, which voted in favor, noted that the text could have gone further in showing how AI might concretely reduce nuclear risks. For states willing to explore such uses, the emphasis on risks likely appeared unbalanced. That said, it is worth noting that another resolution on responsible military use of AI, first introduced in 2024, has already attempted to strike a more explicit balance between benefits and risks (even if it does not address the nuclear domain directly), illustrating that the conversation on “responsible AI use” in the military—with a stronger focus on benefits—is already taking place.
Others objected to definitions and process. Russia, for example, explained its negative vote by pointing to the lack of established terminology, especially around “artificial intelligence” and “meaningful human control,” concepts that have long been contested in debates on lethal autonomous weapons systems. In Russia’s view, “without an agreed-upon foundational vocabulary, any declarations, and even more so any obligations, in this area will inevitably remain largely speculative and allow too much room for divergent interpretations.” Russia also argued that it is “counterproductive to extract the nuclear aspects of AI integration into weapons systems and their management from the broader context of strategic stability,” and that discussions on such sensitive issues should first and foremost take place among nuclear-armed states.
It was harder to track the reasoning behind the votes of other nuclear-armed states, but their positions were likely based on similar grounds, shaped by their doctrines and security priorities. Taken together, these reactions suggest that the resolution ran into two main kinds of resistance: substantive disagreement over how to balance risks and opportunities, and procedural or political concerns about the forums in which the discussion is taking place, how key terms are defined, and what the text might imply for sovereign control over nuclear posture and other sensitive issues.
It is also worth pointing out that because parts of nuclear command, control, and communications have involved automation for decades, endorsing language that implies direct human control over all of these functions may have proved difficult. Public information on nuclear command, control, and communications is sparse, so it is hard to gauge exactly how much automation is already in place (apart from some well-documented cases), which functions require humans in the loop and to what degree, and where states themselves draw the line between acceptable and unacceptable levels of delegation (and how much this line varies across states). It is plausible that this contributed to reluctance among nuclear-armed states to endorse stronger language.
In that sense, the vote exposed how differently states interpret what their commitments should mean in practice when AI enters the nuclear picture. Future multilateral efforts, including those linked to the NPT, will have to confront that gap.
Shaping the next phase of the debate. The negotiations and outcome of this resolution offer some practical lessons for how to carry this debate into future multilateral settings, especially in nuclear-relevant forums such as the 2026 NPT Review Conference. Three lessons stand out.
First, any new initiative will need broader backing across the spectrum of views, including, crucially, from nuclear-weapon states. That means striking a better balance between risks and opportunities. Rather than focusing primarily on “operational efficiencies,” it may be more productive to explore how AI applications can help reduce nuclear risks in ways that speak to all sides.
There are concrete possibilities already on the table. AI tools can help reduce ambiguity by improving the consistency of information used in diplomatic exchanges and by flagging unusual patterns in open-source data on missile tests, exercises, or systems that can carry both conventional and nuclear payloads. AI could also support shared analysis of past crises, helping to clarify escalation pathways and identify recurrent points of misunderstanding. These lower-stakes applications do not touch sensitive nuclear command, control, and communications functions but speak directly to concerns about misperception and miscalculation. As such, they offer a constructive entry point for discussing positive, risk-reducing uses of AI before moving to higher-stakes areas.
Second, venues do matter, but no single forum can do everything. The NPT is a natural place to think about AI’s implications through the Treaty’s three pillars: disarmament, non-proliferation, and peaceful uses. For non-proliferation, for example, AI could assist in monitoring and detecting illicit nuclear-related activities, but it might also enable more sophisticated ways to evade detection. For disarmament, AI could enhance verification regimes as well as transparency and reporting by processing monitoring data, improving nuclear material accounting, and assisting in the preparation and verification of national reports, but it could also help identify and exploit verification gaps. For peaceful uses, AI applications in nuclear safety, security, and safeguards could bolster confidence, but dual-use concerns will remain. Structuring the discussion around these pillars gives all parties a clearer stake in the outcome.
At the same time, the NPT is not the right forum to work through the operational details of AI integration into nuclear command, control, and communications. For that, nuclear-weapon states will need to engage directly among themselves. When conditions allow, the P5 format (the five NPT-recognized nuclear-weapon states) is the logical setting to examine AI-nuclear command, control, and communications implications in greater depth, including how different national approaches could interact in a crisis.
In the meantime, the United States, the United Kingdom, and France can take a pragmatic step to build on their existing statements on human control and launch a structured trilateral dialogue on the AI-nuclear nexus. Even a shared understanding of escalation risks, assumptions, and potential failure modes would be a meaningful starting point. Senior officials have already highlighted possible NC3-relevant applications: Former US Strategic Command commander Gen. Anthony J. Cotton, for example, stated that AI will enable “seamless coordination with allies and partners.” The dialogue could explore what such applications might look like in practice and what failure scenarios should be anticipated. If these nations could bring China into related conversations, even informally or at a technical level, the process would be more valuable, even though China’s full participation remains unlikely in the near term.
Meanwhile, it is important to underline the significance of beginning this discussion within the First Committee. Although not the ideal venue for detailed operational conversations about AI–nuclear command, control, and communications, it provides an inclusive forum where non-nuclear-weapon states can voice concerns about an issue with potentially global implications. Elevating the conversation beyond the baseline commitment to keep humans in control of nuclear weapons employment and toward a broader recognition of AI-related escalation risks is an important achievement. Going forward, nuclear-weapon states should build on this foundation and begin deeper discussions among themselves.
Third, the concept of “human in the loop” needs to be unpacked. Most states now endorse some version of a commitment to human control, but what they mean by that (and whether they mean the same thing) is still unclear. In future discussions, they could focus less on restating the same principle and more on its implications. This might include clarifying how much reliance on AI-generated assessments is considered acceptable, what level of transparency and testing is needed to sustain confidence, and how to avoid situations where humans formally retain the final decision but, in practice, are heavily dependent on opaque AI systems.
The next phase of work will need to move toward concrete and context-specific discussions. That includes how AI relates to each pillar of the NPT, how nuclear-weapon states understand and manage the risks of AI in nuclear command, control, and communications, and what meaningful human control should look like in practice. Framed this way, future efforts stand a better chance of drawing support from both sides of the current divide.