This month marks the 20th anniversary of the last test of U.S. nuclear weaponry. For those opposed to the U.S. maintaining superiority in nuclear arms, it’s cause for celebration. But there will be no high-fiving among realists. The flawed assumptions that led America to adopt self-imposed restraint on nuclear weapons testing remain as flawed as ever. Meanwhile, the global nuclear neighborhood has gotten far more dangerous.
In 1992, President George H.W. Bush accepted, with grave concern, the congressional provision banning nuclear weapons explosions. He feared it could prevent the United States from conducting needed underground nuclear tests. Worse, it could make future U.S. nuclear testing dependent on the actions of other countries, rather than on U.S. national security requirements. What Bush didn’t anticipate was that even highly provocative actions by foreign countries would prove insufficient to get the U.S. nuclear weapons program back on a sound track.
What happened? In 1996, the Clinton administration signed a fatally flawed Comprehensive Test Ban Treaty (CTBT). The pact was advertised as banning all nuclear weapons tests. One problem, though: The treaty does not define what constitutes a nuclear weapons test.
The upshot has been that the United States has adhered to its understanding of what the treaty means: a zero-yield interpretation. But that interpretation is not necessarily shared by Russia or China. The 2009 Strategic Posture Commission recognized that Russia and possibly China are conducting low-yield nuclear weapons tests.
Moreover, U.S. forbearance in nuclear testing has not prevented — or even slowed — North Korea or Pakistan from developing their own nuclear weapon capabilities. After all, you don’t need to test a weapon to develop it. In its latest compliance report, even the State Department acknowledges that “it is difficult to assess the compliance of a given state with its own moratorium, when the scope or meaning of a moratorium is unclear.”
The Senate rejected the treaty after a full floor debate in 1999, yet bad policy has accomplished what the CTBT intended. For 20 years, U.S. nuclear weapons designers and engineers have been barred from what they have always been best at: innovating.
As a result, we are left with Cold War nuclear weapons, based on 1970s designs, intended to deter the Soviet Union. These legacy weapons have high yields and were designed to destroy hardened silos or command centers. Some U.S. policymakers seem to have missed the memo: The Soviet Union is no longer the paramount threat. We now have to worry about new bad actors, such as North Korea and Iran.
High-yield nukes threaten civilian populations. But that may not constitute a credible deterrent in the eyes of North Korean or Iranian leaders. They simply do not value lives the way Western society does.
Yet, credibility is at the heart of deterrence. If nuclear-armed mullahs in Iran don’t believe the United States will come to the rescue of Israel, their calculus as to whether to attack will change dramatically. Additionally, tyrannical regimes value any and all means of blackmail to coerce neighboring nations and faraway foes.
The United States must be able to credibly threaten what its adversaries value. That requires having the flexibility to address developing challenges in innovative ways. If it means developing new nuclear weapons and delivery systems, so be it.
Nuclear weapons modernization served us well for over 40 years during the Cold War. We need the flexibility we enjoyed then to adapt and respond to the rising and ever-evolving threats of the 21st century.
Michaela Bendikova is a Research Associate for Strategic Issues at the Heritage Foundation’s Allison Center for Foreign Policy Studies.
First appeared in Real Clear World.