7/25/83

BETTER TESTING TO PREVENT SHODDY WEAPONS
A number of the Pentagon's new weapons are being rushed into production without thorough testing under realistic combat conditions. The result: the armed forces are being equipped with a growing number of costly weapons of dubious combat effectiveness. This is rapidly eroding the national consensus for increased defense spending.

Since the famous 1970 Blue-Ribbon Panel review of the weapons acquisition cycle, many defense experts have been urging establishment of an independent operational testing office to ensure more rigorous and comprehensive testing. Recent legislative proposals seek to implement these recommendations. While not solving all the problems of the testing process, the proposals would help assure that American servicemen will have weapons of proven combat effectiveness and reliability.

Weapons are subjected to testing for two different purposes: 1) development testing, by engineers and technicians in laboratory conditions, measures the extent to which a system meets technical specifications; 2) operational testing is conducted by regular servicemen in the field to ascertain how weapons perform in combat, the maintenance they need, and what changes, if any, in strategic doctrine, tactics, and organizational structures are required to integrate the new systems optimally into the existing force posture.

Too often, however, operational testing is not rigorous enough to measure the actual combat value of a weapon system. Weapons are frequently tested against easy targets in non-hostile environments. If tests yield poor results, performance standards often are lowered to make the weapon acceptable. Actual test results, moreover, have not always been presented accurately to Congress and the public. And to accelerate weapons procurement, testing schedules have been compressed and truncated.

Poor testing is partly an organizational problem.
Overall responsibility for operational testing in the Defense Department is vested in the Director of Defense Testing and Evaluation (DDT&E), who has little real authority in the acquisition process. He reports to the Under Secretary of Defense for Research and Engineering (USDR&E), who is responsible for both the development and testing of weapons. Conflicts of interest are bound to arise when an individual is charged with evaluating the effectiveness of weapons systems which were developed under his authority. Observes a senior congressional staffer: with respect to operational testing, the Defense Department is in the "position of students who not only grade their own exams, but make them up as well."
To ensure honest and rigorous weapons testing, an independent office of operational testing is necessary. It should be headed by a civilian, appointed by the President for a fixed term, who reports directly to the Secretary of Defense. The Director of the office should provide guidance to the Services in constructing adequate testing programs, review them, recommend changes and modifications, advise the Secretary of Defense on the adequacy of operational testing programs and schedules, and assess their outcomes in terms of overall systems effectiveness. The Director's status should equal that of the officials responsible for development and for research and engineering, and he should have a seat on the Defense Systems Acquisition Review Council (DSARC), which makes milestone decisions on weapons acquisitions. His independent analyses of testing results should be reported to the Armed Services Committees of both Houses of Congress concurrent with the Secretary's Annual Report. Current legislation embodies these features.

But an independent testing office is not a panacea for all the deficiencies of operational testing. It is a long step in the right direction only if accompanied by more weapons testing against sophisticated targets in conditions closely resembling the modern battlefield. To assess cost-effectiveness, new weapons also should be tested against the older, less sophisticated and less expensive arms which they are designed to replace.

Such rigorous testing is costly. It requires development and procurement of test facilities, target simulators, testing ammunitions, and weapon prototypes. Yet funding has not been adequate, mainly because testing is often shortchanged to cover cost overruns in development and procurement accounts.

Concerns that testing reform will add a bureaucratic layer and lengthen the already excessively long acquisition cycle are unwarranted. While it is imperative to shorten the leadtime in U.S. weapons development to counter the more rapid pace of Soviet force modernization, it is equally important not to waste money on poorly designed weapons. An independent office of operational testing, moreover, will delay production only of those weapons which do not work and, therefore, should not be procured.

To ensure effective operational testing, Congress must earmark in specific budget line-items the funds needed by the Services and the new office of independent testing. It also must be willing to cancel weapons that fail their tests. It is the responsibility of Congress to ensure that the armed forces are adequately equipped with weapons capable of resisting threats to national security. The current operational testing reform efforts are a long overdue attempt to begin meeting this responsibility.

Robert Foelber
Manfred Hamm
Policy Analysts

For Further Information:

"The Need for an Independent Office of Operational Test and Evaluation in the Department of Defense," Statement of Senator William V. Roth, Jr., before the Senate Committee on Governmental Affairs, June 23, 1983.

Statement of Russell Murray, II, before the Senate Committee on Governmental Affairs, June 23, 1983.