May 15, 2013 | Issue Brief on Missile Defense
Missile defense is a proven technology; repeated tests have shown that the system is accurate enough to "hit a bullet with a bullet." The United States should continue to fund and encourage a rigorous missile defense testing program, even when individual tests do not result in intercepts.
Even "failed" tests, if properly constructed, contribute to the understanding and advancement of ballistic missile defense technology. The ultimate goal of such tests is to ensure that the system does not fail when it counts the most: when a ballistic missile is en route toward its victims.
Split-Second Precision an Imperative
The highly sophisticated nature of ballistic missile defenses makes testing an imperative.
Interceptors comprise many parts that must work with split-second precision in complex and rapidly evolving environments. Radars provide timely information about that environment, and the command-and-control system enables execution of the ballistic missile defense mission. Intercepts occur at very high speeds so that the sheer force of impact destroys the incoming threat. Tests are conducted under increasingly realistic conditions and often include multiple salvos, coordinated intercepts, and decoys.
As with any complex system, glitches are to be expected—and the testing program should be constructed to find and fix them.
Failures during ballistic missile defense tests help the U.S. avoid failures during an actual operational engagement, when an interceptor is attempting to shoot down an incoming ballistic missile. Even a test in which no intercept occurs can be just as valuable if it validates underlying concepts and technologies that performed as expected during other phases of the test.
Following such tests, the Missile Defense Agency works closely with all participating actors to find and fix the problems so that the missile defense system works when it is needed most.
It would be fairly easy to set up a missile defense test in a way that essentially guarantees success, but the United States would not learn much from such a test. Tests in which missile defenses fail because the test parameters pushed the system's design envelope are more valuable because they are more likely to uncover new flaws in the system.
Michael Gilmore, director of the Operational Test and Evaluation Office of the Secretary of Defense, recently stated that “the value of the tests is most demonstrated by…the failure modes that we’ve found by conducting those tests in Aegis and ground-based missile defense over the last couple of years, because those failures would not have been found if we didn’t do that testing and relied solely on modeling and simulation.”
Getting the System Right
Sometimes it takes many failures to engineer a good, reliable system. The best example is the Polaris AX submarine-launched ballistic missile (SLBM) developed and deployed in the 1950s and 1960s.
The Polaris missile failed five times before its first successful test. Twelve of 17 tests failed, all within a span of about a year. Despite these failures, the Polaris missile was eventually deployed, and many of its technologies were used in the Trident SLBM, which is expected to remain in service until 2040. No one claims that the Trident system is unproven because it is based on a missile that failed 70 percent of the time during its development.
The Ground-Based Midcourse Defense (GMD) system is another example. A December 2004 GMD test, which the Missile Defense Agency classified as a failure, revealed that the GMD software's tolerance checks were too rigid. As a result, engineers were able to fix the problem.
Another GMD test, in January 2010, was also deemed a failure, yet it validated that the new kill vehicle could correctly discriminate the incoming warhead from decoys and debris with similar radar signatures, an extremely difficult task for an interceptor. The "failed" test was a success because it confirmed the validity of the discrimination algorithms.
Robust Testing Program Essential
Opponents of missile defense argue that the missile defense system is unproven and that its test record must be nearly perfect for the system to be considered proven and worthy of support. The Polaris program shows otherwise: failures are a natural part of any technology maturation process.
The United States should not focus solely on whether an intercept occurred. Rather, it should strike a balance between advancing the technology and validating its missile defense system. With tests costing as much as $300 million each, it is well worth focusing on advancing the technology, pushing the design envelope of the tested systems, and ensuring that engineers and scientists learn as much as possible from each test.
—Michaela Dodge is Policy Analyst for Defense Issues in the Douglas and Sarah Allison Center for Foreign Policy Studies, a division of the Kathryn and Shelby Cullom Davis Institute for International Studies, at The Heritage Foundation.
J. Michael Gilmore, “Fiscal Year 2014 National Defense Authorization Budget Request for Missile Defense Programs,” testimony before the Subcommittee on Strategic Forces, Committee on Armed Services, U.S. House of Representatives, May 8, 2013, http://armedservices.house.gov/index.cfm/hearings-display?ContentRecord_id=5826aab7-fe7c-4999-b5cc-913c1afa8a33&ContentType_id=14f995b9-dfa5-407a-9d35-56cc7152a7ed&Group_id=64562e79-731a-4ac6-aab0-7bd8d1b7e890 (accessed May 10, 2013).