September 27, 2016 | Backgrounder on Budget and Spending
America’s debt is out of control, and Congress and recent Presidents have done little to decrease spending. The use of rigorous evidence in evaluating program effectiveness is a crucial area where the next President can help to improve accountability and fiscal discipline in the federal budget process. A genuine evidence-based agenda focused on fiscal discipline will help the next President re-assert control over runaway spending. The next Administration should re-establish a modified and improved Program Assessment Rating Tool (PART) along with a fiscally disciplined evidence-based spring review by the Office of Management and Budget. Programs that fail to produce results should receive less funding or be terminated altogether, while programs that generate results should continue to receive funding. Unless rigorous evaluation results are strongly linked to budget decisions, any proclamations about the benefits of evidence-based policymaking are meaningless.
Given that the federal government’s debt is over $19.4 trillion—$14.0 trillion in debt held by the public and nearly $5.4 trillion in intergovernmental holdings—every American should be concerned about the nation’s extraordinary level of debt. Congress, which in recent years has seemed incapable of curbing spending and allocating resources effectively, needs to relearn how to be a wise steward of the federal purse. Through leadership, the next President can help restore fiscal discipline in the federal government. Such leadership does not mean merely releasing statements about funding the programs that work and cavalierly demanding results. It does not mean calling for the creation of new “evidence-based” programs, while leaving the vast majority of current federal programs untouched. Real leadership requires articulation of a clear and persuasive message that is backed by concrete actions that instill a culture of fiscal discipline in the nation’s capital.
At times, the Office of Management and Budget (OMB) has been labeled the most powerful naysayer in government. The OMB has not always lived up to this stingy reputation, and its influence has fluctuated over the years. In addition to formulating the President’s budget recommendation to Congress, the OMB “operates as a clearinghouse for legislative proposals that departments and agencies wish to see introduced into and passed by the Congress. Such initiatives must receive OMB’s approval as conforming with presidential policy guidelines.” The OMB is expected to provide the President with objective information and analysis, while White House staff may be less willing to deliver bad news. Presidents need to hear the complete case before making a decision. The OMB also has the advantage of longer-term institutional memory than White House staff.
During the George W. Bush Administration, the OMB created the Program Assessment Rating Tool (PART) to help inform budget decisions by holding federal government programs accountable. Debuting in President Bush’s fiscal year (FY) 2004 budget recommendation, PART was an attempt to assess every federal program’s purpose, management, and results to determine its overall effectiveness. The extremely ambitious PART was a first-of-its-kind attempt to link federal budgetary decisions to performance. Such accountability had never been attempted by a President. PART placed “unprecedented focus and sustained pressure on executive agencies to improve performance.” Unfortunately, President Barack Obama terminated the original PART.
Instituting an improved PART (PART 2.0) will help the next President pressure Congress to eliminate wasteful and ineffective programs, no matter how politically popular they may be, and to make remaining federal programs operate as efficiently as possible to save taxpayer money.
America’s debt is out of control, and Congress and recent Presidents have done little to decrease spending and reduce the debt. The current fiscal path will debilitate the economy, substantially weaken prosperity, and lead to massive tax burdens for future generations.
The United States has four basic options to prevent out-of-control debt from devastating the economy. The first is to raise taxes. The second is to cut spending. The third is to print more money to pay down the debt, fueling inflation in the process. The last is to default. While the “correct” option often depends on one’s ideology, a body of empirical research indicates that the best option is to cut spending.
Several studies strongly suggest that cutting spending and reducing debt—instead of increasing taxes and spending—can help to boost the economy. This body of literature suggests a clear path for America: cutting spending to boost the economy and reduce debt. A good place to start is with the elimination of funding for ineffective programs.
Reducing spending and debt is an ambitious agenda. However, ambition must be matched with persistence and momentum. One obvious tool missing from the budget-cutters’ toolbox is strongly linking evidence-based policymaking to budgetary decisions. When practiced correctly, evidence-based policymaking is a tool that allows policymakers, especially at the OMB, to base funding decisions on scientifically rigorous impact evaluations of programs. Given scarce federal resources, federal policymakers should fund only those programs that have been proven to work and defund programs that do not work.
In the free market, businesses that do not produce profits either innovate to become successful, or they go out of business. In the government sector, there is no such profit-loss mechanism. In essence, an evidence-based policymaking agenda that is strongly linked to performance budgeting will bring something similar to the accountability seen in the free market to the federal government. Government programs that fail to produce verifiable results should lose funding, while truly effective programs should retain their budget.
The Appalling Lack of Accountability. The effectiveness of federal programs is often unknown. Many programs operate for decades without undergoing thorough scientific evaluations. The federal government needs to prioritize government functions by intelligently targeting resources. Federal bureaucrats should be expected to make a credible case that the programs they manage deliver evidence-based results. Objective, reliable evidence of program effectiveness or ineffectiveness should encourage Congress to be a wiser steward of the federal purse.
The potential of performance budgeting and management is degraded when agencies turn the system into “make work” and compliance exercises. In order for performance management to lead to a leaner, more effective government, performance needs to be strongly linked to budget decisions. Without a serious commitment from the executive and legislative branches to funding programs that work and defunding programs that do not work, performance management will never live up to its potential. Encouraging policymakers to utilize performance information in budget decision making is the key task at hand.
Learning from Experience. Performance management and budgeting help policymakers learn from experience. By systematically analyzing what works and what does not, and then employing what is learned, government resources can be allocated more effectively. The federal government needs to develop the capacity not only to assess its successes and failures honestly, but also to translate this information directly into budget decisions. Even the most scientifically rigorous evaluation is ultimately meaningless if its results are not incorporated into budgets.
As a management tool, performance-monitoring systems monitor the implementation (not the effectiveness) of programs. Monitoring systems that rely on “outputs” and “outcomes” without a clear counterfactual too often fail to provide reliable evidence of effectiveness or lack thereof. This “attribution problem” can be solved by the use of large-scale experimental (random assignment) evaluations.
The performance reports released by Cabinet-level departments of the federal government illustrate this point. For example, the FY 2015 performance report by the U.S. Department of Labor relies on outputs and outcomes to assess performance while ignoring multi-site experimental evaluations that have found its programs to be ineffective. As performance measures for its job-training programs, the Department of Labor relies on outputs, such as before-and-after changes in participants’ employment and earnings, that have no reliable counterfactual from which to estimate effectiveness accurately.
As the example of federal job-training programs strongly suggests, performance monitoring has serious limitations in assessing effectiveness. While the U.S. Department of Labor’s job-training performance-monitoring system collects some useful information, “it suffers from shortcomings,” according to Diane Blank of the Government Accountability Office (GAO) and her coauthors, “that may limit its usefulness in understanding the full reach of the system and may lead to disincentives to serve those who may most need services.”
First, performance monitoring does not measure program “impact.” Instead, it measures outputs or outcomes. Program impact is assessed by comparing outcomes for program participants with estimates of what those outcomes would have been had they not participated in the program. Without a valid comparison group, performance monitoring based on “output” or “outcome” cannot provide valid estimates of program effectiveness.
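The attribution problem can be illustrated with a toy simulation (all numbers are invented for illustration): a program with zero true impact still looks successful under a before-and-after metric whenever participants’ outcomes would have improved anyway, while a randomized comparison recovers the true null effect.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical setup: a job-training program with ZERO true impact, serving
# workers whose earnings naturally recover over time regardless of training.
TRUE_IMPACT = 0          # the program does nothing, by construction
NATURAL_GROWTH = 3_000   # earnings recovery that happens with or without training

def simulate(n=10_000):
    treated_before, treated_after, control_after = [], [], []
    for _ in range(n):
        baseline = random.gauss(20_000, 4_000)
        treated_before.append(baseline)
        treated_after.append(baseline + NATURAL_GROWTH + TRUE_IMPACT
                             + random.gauss(0, 2_000))
        # Randomly assigned control group: same population, no program.
        control_after.append(random.gauss(20_000, 4_000) + NATURAL_GROWTH
                             + random.gauss(0, 2_000))
    # Monitoring metric: before-and-after change for participants only.
    before_after = mean(treated_after) - mean(treated_before)
    # Experimental estimate: participants vs. randomized control group.
    experimental = mean(treated_after) - mean(control_after)
    return before_after, experimental

monitoring_estimate, experimental_estimate = simulate()
print(f"before-and-after change: ${monitoring_estimate:,.0f}")
print(f"experimental impact:     ${experimental_estimate:,.0f}")
```

The before-and-after figure absorbs the natural earnings recovery and credits it to the program; only the randomized comparison isolates the program’s actual contribution.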
Second, cream skimming can cause the results of a performance-monitoring system to overstate the effectiveness of programs. By gaming the system, administrators can selectively include performance data in ways that misrepresent program effectiveness. Professor Burt Barnow at George Washington University and Professor Jeffrey Smith at the University of Michigan found that local job-training administrators engaged in strategic behavior by manipulating whether participants were formally enrolled and thus recorded in the performance-monitoring system. Under the Department of Labor’s performance-monitoring system, only individuals officially enrolled in job-training programs were counted toward performance standards. For instance, some local administrators increased reported performance by including participants in the monitoring system only if those individuals gained employment, thus counting them as successes. Conversely, job-training participants who never obtained employment were not officially counted in the performance-monitoring system, so these failures were never recorded as part of the program’s performance.
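A toy simulation (with hypothetical numbers) shows how this selective enrollment severs the link between reported and actual performance: when only employed participants are recorded, the monitoring system reports a perfect placement rate no matter how the program actually performed.

```python
import random

random.seed(1)

# Hypothetical illustration of "cream skimming" in its extreme form:
# administrators officially enroll participants only after they find jobs.
# The true placement rate is invented for illustration.
TRUE_PLACEMENT_RATE = 0.40

# True outcomes for everyone the program actually served.
served = [random.random() < TRUE_PLACEMENT_RATE for _ in range(10_000)]
true_rate = sum(served) / len(served)

# Strategic recording: only individuals who gained employment are enrolled
# in the monitoring system, so every recorded participant is a "success".
recorded = [person for person in served if person]
reported_rate = sum(recorded) / len(recorded)

print(f"true placement rate among all served:    {true_rate:.0%}")
print(f"placement rate in the monitoring system: {reported_rate:.0%}")
```

The reported figure is 100 percent by construction, regardless of the true rate, which is why enrollment decisions that follow outcomes (rather than precede them) can render monitoring data meaningless.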
Another case is the Department of Labor’s performance measures for the Reintegration of Ex-Offenders (RExO) program. The Department of Labor uses the percentage of re-entry program participants who are employed a year after program exit and their recidivism rate. As a measure of success, the department notes that six grantees have managed to each place more than 100 enrollees into jobs as an indication of “What Worked.” Further, the department reports that the recidivism rate of participants for FY 2014—12.33 percent—as reported by grantees, is a success because it is lower than the target of 22 percent. While this outcome is counted as a success, the department notes that “[t]here is a problem with the recidivism rates grantees have been reporting as Social Policy Research has found much higher recidivism rates of past enrollees based on state criminal records data.” This clearly indicates that the performance data reported by grantees is unreliable.
The Department of Labor’s report also fails to mention the large-scale experimental evaluation of the RExO program. According to that evaluation, RExO is ineffective. The services provided by the RExO grantees had a small effect on the employment and earnings of participants. One year after random assignment, participants in the re-entry programs were slightly more likely to be employed, but they were no more likely to be employed during the following year, compared to similar former prisoners not receiving services. Over two years, the program had no effect on the average number of days worked. In fact, on average, the program participants earned only $883 more in income than the control group over the two-year period.
Did RExO reduce recidivism? Based on administrative data, the services provided by the RExO grantees failed to improve participants’ recidivism, conviction, and re-incarceration rates. The evaluation’s authors conclude that the criminal justice results based on administrative data provide “no evidence whatsoever of any impacts of RExO.” Yet, this scientifically rigorous evaluation is omitted from the Labor Department’s performance report.
Due to the limitations of performance monitoring, budget decisions, wherever possible, need to be based on large-scale experimental evaluations.
Anemic Performance. While the supply of performance information has increased, there is little evidence that governments actually use that information in decision making. Previous reform efforts, such as Planning-Programming-Budgeting Systems, Management by Objectives, and Zero-Based Budgeting, failed because their performance budgeting systems did not account for the political process of Congress.
Low use of performance information in budget decisions is not unique to America. A review of the relevant research concluded that legislatures in Organization for Economic Co-operation and Development countries frequently fail to use performance information meaningfully in budgetary decisions.
The political process that ultimately decides budgetary decisions is heavily influenced by ideological biases, special interest groups, and protective bureaucracies. These factors are all too often in conflict with performance budgeting and evidence-based policymaking.
Third parties, such as special interest groups, often depend on continued funding, even if the programs are ineffective. Legislators, too, want to dole out taxpayer dollars with little regard for credible evidence that such funding will work. Thus, both have strong incentives to confuse the public about the effectiveness of programs.
For example, Yasmina Vinci, the executive director of the National Head Start Association—an organization that represents Head Start grantees in Washington, DC—spun the dismal effects of the large-scale experimental Head Start Impact Study to appear as if the program had a much more substantial impact than found by the evaluation. According to Vinci, “the study documented children’s significant gains at the end of the Head Start experience and the flattening benefits of Head Start attendance at the end of third grade.” This assessment is entirely wrong. Almost all of the benefits of participating in Head Start disappeared by the time students were re-assessed in kindergarten. A “flattening” of benefits would suggest that beneficial impacts were retained through the third grade.
Overall, the Head Start Impact Study found that the program largely failed to improve the cognitive, socio-emotional, health, and parenting outcomes of children in kindergarten and first grade who participated, compared with the outcomes of similar children who did not participate. According to the report, “[T]he benefits of access to Head Start at age four are largely absent by 1st grade for the program population as a whole.” The few beneficial effects of the program disappeared in kindergarten. The third-grade follow-up to the Head Start Impact Study followed students’ performance through the end of third grade. The results shed further light on the ineffectiveness of Head Start. By third grade, Head Start had little to no effect on cognitive, social-emotional, health, or parenting outcomes of participating children.
While Members of Congress have passed legislation that requires many programs to be evidence-based, Congress continually funds programs, such as Head Start and Early Head Start, which are known to be ineffective based on the results of large-scale experimental impact evaluations. This dilemma arises because intentions and symbolism are sometimes more important to Congress than the performance of the programs it funds.
Claiming to hold government programs accountable for their performance is popular among politicians of all stripes. According to Professor Donald P. Moynihan of the University of Wisconsin-Madison, “Performance management is attractive because it communicates to the public that elected officials share their frustration with inefficient bureaucracies and are holding them accountable, saving taxpayer money and fostering better performance.” Further, “Performance management reforms rest on the assumption that once performance information is made available, it will be widely used and result in better decisions because it will foster consensus and make decision making more objective. But the limited evidence of use does not match that model.” The use of performance information, especially claims of supporting evidence-based policymaking, is often merely symbolic. If politicians do not base their funding decisions on rigorous evidence, they are only making symbolic gestures.
In many cases, adequately defining which outcomes should constitute success is difficult. However, this difficulty cannot be used as an excuse for failing to assess performance adequately. The solution to the problems of performance management is to strongly incorporate evidence-based policymaking into budget decisions.
Evidence-Based Policymaking. Unless rigorous evaluation results are strongly linked to budget decisions, all the proclamations about the benefits of evidence-based policymaking are meaningless. Promising to create new evidence-based paradigms, such as the Social Innovation Fund for funding additional programs, while continuing to fund—and in some cases expand federal programs amply demonstrated to be ineffective—is fiscally irresponsible.
The term “evidence-based” should mean that experimental evaluations of a program model have found consistent statistically significant effects that meaningfully ameliorate a targeted social problem in at least three different settings. Once a program model has been found to produce meaningful results in multiple settings, the likelihood of its successful replication elsewhere should increase greatly.
Can Government Replicate Success? In practice, policymakers frequently assume that when something has been found effective in one setting, the same results will be repeated elsewhere. However, the history of social programs is replete with examples of programs effective in one location that simply failed to work elsewhere.
The federal government has a poor record of replicating effective social programs. An excellent example of a federal attempt to replicate an effective local program is the Center for Employment Training (CET) replication. Of 13 youth job-training programs evaluated, the JOBSTART demonstration found only one program to have a positive impact on earnings: the CET in San Jose, California. Based on the results for the CET, the U.S. Department of Labor replicated and evaluated the impact of CET in 12 other sites using random assignment. The CET model had little to no effect on short-term and long-term employment and earnings outcomes at these other locations. According to the evaluation’s authors, “[E]ven in sites that best implemented the model, CET had no overall employment and earnings effects for youth in the program, even though it increased participants’ hours of training and receipt of credentials.”
A more recent example is the Obama Administration’s funding of Teen Pregnancy Prevention (TPP) grants. The Department of Health and Human Services (HHS) “invests in the implementation of evidence-based TPP programs, and provides funding to develop and evaluate new and innovative approaches to prevent teen pregnancy.” In June 2016, Ron Haskins, a research fellow at the Brookings Institution and co-chair of the Evidence-Based Policymaking Commission, testified before Congress that HHS requires “high-quality evidence showing that the programs produced significant impacts on important measures of teen sexual activity or teen pregnancy for the TPP program.”
According to HHS, Tier 1 grants are awarded to grantees replicating programs that “have been shown, in at least one program evaluation, to have a positive impact on preventing teen pregnancies, sexually transmitted infections, or sexual risk behaviors.” Does this definition include methodologically weak evaluations that are likely to overstate the effectiveness of programs? The belief is that these grants will be effective because they are replicating programs labeled “evidence-based.” Is this assumption correct?
Each Tier 1 grantee is supposed to evaluate the impact of the evidence-based model it is replicating. So far in 2016, HHS has released five final reports based on experimental evaluations of these grant programs. All five evaluations of Tier 1 TPP grant-funded programs found no effect on any of the sexual outcome measures. Clearly, replicating an evidence-based program model does not guarantee similar results.
The other set of TPP grants (called Tier 2) funds demonstration programs that do not meet HHS’s evidence-based definition, but that HHS considers innovative and worthy of funding. To date, HHS has released five final reports based on experimental evaluations of Tier 2 grant programs. All five evaluations find that these programs failed to affect the sexual outcome measures.
Just because an evidence-based program appears to have worked in one location does not mean that it can be effectively implemented on a larger scale or in a different location. Proponents of evidence-based policymaking should not automatically assume that pumping taxpayer dollars into programs attempting to replicate previously successful findings will yield the same results.
The faulty reasoning that drives such failed expansions of social programs is known as the “single-instance fallacy”: the belief that a small-scale social program that works in one instance will yield the same results when replicated elsewhere. Compounding the effects of this fallacy, one often does not truly know why a certain program worked in the first place. In particular, the dedication and entrepreneurial enthusiasm of a program’s founder is difficult to quantify or duplicate. HHS’s practice of labeling a program model “evidence-based” on the strength of a single evaluation is therefore faulty.
Benefits of Fiscally Disciplined Evidence-Based Policymaking. Evidence-based policymaking that is focused on fiscal discipline has several benefits. First, judging the performance of programs based on rigorous evidence leads to improved allocative efficiency. When programs that fail to produce results receive reduced funding or are terminated altogether, and programs that produce credible and meaningful results continue to receive funding, a better allocation of scarce resources is the result.
Second, a fiscally disciplined evidence-based policymaking process helps hold federal programs accountable to the public. For external accountability by the public to work, information on the performance of programs must be released on a timely basis and made widely available to the public. These requirements mean that the federal government will no longer withhold or delay the release of evaluations that find programs to be ineffective.
For example, a cost-benefit analysis of Job Corps—a Great Society–era job-training program for disadvantaged youth—that found that program costs outweighed the benefits was finalized in 2003, but the Department of Labor withheld it from the public until 2006. The GAO has criticized the Department of Labor for its history of delaying the release of its research findings.
Similarly, HHS has noticeably delayed the release of reports based on the Head Start Impact Study that reported underwhelming results. There appears to be a pattern of withholding the results of experimental evaluations at HHS. There is reason to believe that the 2010 study of kindergarten and first-grade students was neither completed nor published in a timely fashion. According to the report, data collection for the kindergarten and first-grade evaluation was completed in 2006—nearly four years before its results were made public. For the national impact evaluation of third-grade students, data collection was conducted during the springs of 2007 and 2008. On December 21, 2012, the Friday before Christmas, HHS released the findings of the Third-Grade Head Start Impact Study without a press release to notify the public. HHS withheld this study for about four and a half years after the final data were collected.
Third, a fiscally disciplined evidence-based policymaking process helps elected officials hold bureaucrats accountable for the performance of programs. Such internal accountability strengthens the President’s ability to oversee administrators and aids Congress in practicing oversight.
While political factors, such as values and judgments on the proper role of the federal government, will always influence budget decisions, programs funded by Congress should produce their intended results. Programs that fail to produce their intended results should not be continually funded by Congress. This is where evidence-based policymaking should matter.
Evidence-based policymaking can play a role in improving the deliberative process in Congress and lead to a better-informed public about the role of public policy. While emotions and beliefs will always strongly influence political decisions, the degree to which these decisions are based on rigorous evidence may be the difference between creating public policies that fail or succeed. The question is whether policymakers in the executive and legislative branches can create an environment where rigorous evidence informs political decisions.
Empty Promises. All too often, promises of making funding contingent upon evidence are merely rhetoric without substance. To date, evidence-based policymaking has yet to be systematically linked to budget decisions. For the FY 2011 budget, the OMB announced that the Obama Administration would invest in program evaluations so that federal agencies would “have the capacity to use evidence to invest more in what works and less in what does not.” The Administration made an important distinction between performance monitoring and program evaluation: “Performance measurement is a critical tool managers use to improve performance, but often cannot conclusively answer questions about how outcomes would differ in the absence of a program or if a program had been administered in a different way. That is where program evaluations play a critical role.”
President Obama’s first Director of the OMB, Peter R. Orszag, argued that empirical evidence is the foundation of policymaking in the Obama Administration. Orszag asserted that the Obama Administration “has been clear that it places a very significant emphasis on making policy conclusions based on what the evidence suggests.”
To demonstrate how the Obama Administration is using empirical evidence to guide decision making, Orszag used the examples of Head Start and Early Head Start:
Head Start and Early Head Start also both have documented very strong suggestive evidence that they pay off over the medium and long term, both in terms of narrow indicators and broader social indicators for society as a whole. These evaluations demonstrated progress against important program goals and provided documentation necessary to justify increases in funding in the president’s budget to…further expand access, in the cases of Head Start and Early Head Start.
Of particular interest are Orszag’s comments on Head Start and Early Head Start. Orszag cites the 2010 Head Start Impact Study as evidence that the number of children participating in Head Start needs to be expanded. Unwittingly, Orszag also justifies the proposed termination of the Even Start Family Literacy Program based on its first-year follow-up’s findings because the program “has been evaluated rigorously three times” and “out of forty-one measurable outcomes, the program demonstrated no measured difference between those enrolled in the program and those not on thirty-eight of the outcomes.” Due to the program being a failure, the Obama Administration decided, as the previous Administration proposed, that Even Start should be terminated.
However, Orszag’s logic does not hold for Head Start and Early Head Start. While the first-year follow-up evaluation found Even Start to have no effect on 38 of 41 outcome measures, Head Start’s performance was even worse. Overall, the 2010 Head Start Impact Study that assessed findings for kindergarten and first grade found that Head Start failed to have an effect on 110 of 112 outcome measures for the four-year-old group, with one harmful and one beneficial impact. For the three-year-old group, Head Start failed to have an impact on 106 of 112 measures, with five beneficial impacts and one harmful impact.
As for Early Head Start, the initial benefits produced by the program are limited to a minority of participants, and these benefits quickly fade. Early Head Start, created during the 1990s, is a federally funded community-based program that serves low-income families with pregnant women, infants, and toddlers up to age three. The results of the multisite experimental evaluation of Early Head Start are particularly important because the program was inspired by the findings of the Abecedarian Project, an early-childhood education program that many assume to be effective. By the time participants reached age three, Early Head Start had beneficial impacts on two of six outcome measures of child cognitive and language development, and beneficial effects on four of nine measures of child social-emotional development. While the short-term (age three) findings indicated modest positive impacts, almost all of the positive findings for the full Early Head Start sample were driven by the results for black children. The program had little to no effect on white and Hispanic participants, who are the majority of program participants. For Hispanic children, the program failed to have a short-term impact on any of the six measures of child cognitive and language development, and had a beneficial effect on only one of nine measures of child social-emotional development. For white children, the program failed to produce any beneficial impacts on these outcome measures.
As for the long-term findings, the overall initial effects of Early Head Start at age three had clearly faded away by the fifth grade. None of the 11 child social-emotional outcomes showed statistically meaningful impacts. Further, Early Head Start failed to have statistically measurable effects on the 10 measures of child academic outcomes, including reading, vocabulary, and math.
What happened when the long-term results were analyzed by race and ethnicity? Black children showed only two beneficial impacts across the 11 child social-emotional outcomes. For Hispanic and white children, there was no beneficial effect on any outcome. The long-term findings for child academic outcomes were uniformly null: Early Head Start failed to affect any of the 10 academic outcomes for any subgroup. Despite the dismal results of the scientifically rigorous evaluations of Head Start and Early Head Start, Orszag called for increased funding for these failed programs.
Orszag concluded that “the highest level of integrity must be maintained in the process of using science to inform public policy. Sound data are not sufficient to guarantee sound policy decisions, but they are necessary.” Indeed, sound data are not a sufficient guarantee for sound policy decisions. Dealing with the data forthrightly is necessary as well.
In no way does the 2010 Head Start Impact Study demonstrate “very strong suggestive evidence” that Head Start “pay[s] off over the medium and long term.” Placing more children into an already failed program does not represent placing “significant emphasis on making policy conclusions based on what the evidence suggests.” Instead of acknowledging failure and eliminating Head Start and Early Head Start, the Obama Administration has sought to expand these programs to serve more children for longer periods of time. It is as if the Administration never read the large-scale experimental evaluations of these programs.
While the Obama Administration is interested in evaluating federal programs, the link between results and budgets is less clear. In 2010, the Administration announced 128 “high priority performance goals” (HPPG) that define its priorities, but “it is unclear how HPPG performance review by OMB will be integrated into budget decision making, or whether it is intended to be integrated at all.” A safe expectation is that high-performing programs will receive requests for more funding. But will poorly performing programs receive declining budget requests? Professors L. R. Jones and Jerry McCaffery of the Graduate School of Business and Public Policy at the Naval Postgraduate School add that “[d]espite this interest in careful assessment of the performance of federal agencies, there is no evidence to suggest that the Obama administration will attempt to implement performance budgeting.” Their assessment is still relevant at the end of Obama’s presidency. The next President should implement a genuine evidence-based policymaking agenda that is truly focused on fiscal discipline.
The formulation of the President’s budget recommendation begins soon after the last recommendation is submitted to Congress. Each spring, the OMB starts the process of sending out planning guidance to agencies in the executive branch. Until the early 1980s, “spring review” entailed a detailed analysis of agencies by the OMB. During this review period, OMB career staff identified policy and budgetary issues that were anticipated to impact the upcoming budget. Afterwards, a series of planning review sessions were held for various departments and agencies. Once the review was complete, the findings were presented to the OMB Director and then to the President. The results of the review process provided “the foundation for a series of relatively in-depth programmatic guidelines and budgetary targets for agencies to use during preparation of their budgetary requests, which would be submitted to OMB in September.”
The spring reviews from 1981 and 1982 were successfully used to build support for President Ronald Reagan’s budget and policy priorities. However, the role of spring review was significantly reduced during the spring of 1983. For 12 years, a formal spring review was absent at the OMB. During the Clinton Administration in 1995, a formal spring review was re-established due to the opportunity presented by the passage of the Government Performance and Results Act (GPRA).
Enacted in 1993, the GPRA was intended to improve the public’s confidence in government, program effectiveness and accountability, administrative management, and congressional decision making. While the GPRA was an important development in gathering information about the performance of federal programs, the original act and its reauthorization in 2010 have serious limitations for assessing performance and holding bureaucracies accountable. First, the information collected through the GPRA cannot tell policymakers whether federal programs are actually effective. The GPRA’s performance information requirements lack the counterfactuals needed to accurately assess effectiveness. Second, the GPRA requires that stakeholders in federal programs have influence over which outcome measures are used to assess performance. Allowing special interests dependent on funding to have a say in defining outcomes undercuts objectively determining effectiveness. Instead, easy-to-achieve outputs are used as proof of effectiveness. Third, and most important, the GPRA is not adequately used by policymakers for budgetary decisions. Without performance information being strongly linked to budgetary decisions, agencies have little incentive to improve performance.
The creation of PART during the George W. Bush Administration was largely a response to the inadequacies of the GPRA’s weak connection to budget decision making. (The re-authorization of the GPRA through the Government Performance and Results Modernization Act of 2010 did not improve upon this situation.) In 2002, the Bush Administration created the PART scoring mechanism. The creation of PART represented a wager by the OMB that improving the information used for budget recommendations would change the decision-making process.
Under PART, based on answers to a series of questions, federal programs were rated in four areas: (1) program purpose and design; (2) strategic planning; (3) program management; and (4) program results and accountability.
With the goal of integrating performance and budget requests, the results-related questions were given the greatest weight in calculating the overall PART score. Recognizing the diverse array of federal programs, the PART questions were tailored to seven program classifications: direct federal; competitive grant; block/formula grant; regulatory; capital assets and service acquisition; credit; and research and development programs.
Overall PART scores were divided into five categories: “effective,” “moderately effective,” “adequate,” “ineffective,” and “results not demonstrated.”
Programs that received a “results not demonstrated” rating had no performance measures or data for OMB to assess.
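As an illustration, PART’s scoring mechanism can be sketched as a simple weighted calculation. The section weights (20 percent for purpose and design, 10 percent for strategic planning, 20 percent for management, and 50 percent for results) and the rating bands below reflect the commonly reported PART values; the function names themselves are illustrative, not part of any official tool:

```python
# Illustrative sketch of PART's weighted scoring. The section weights and
# rating bands are the commonly reported PART values; names are hypothetical.
SECTION_WEIGHTS = {
    "purpose_and_design": 0.20,
    "strategic_planning": 0.10,
    "program_management": 0.20,
    "results_accountability": 0.50,  # results questions carry the greatest weight
}

def overall_part_score(section_scores):
    """Combine 0-100 section scores into a weighted overall score."""
    return sum(SECTION_WEIGHTS[name] * score
               for name, score in section_scores.items())

def part_rating(score, has_performance_data=True):
    """Map an overall score to one of the five PART rating categories."""
    if not has_performance_data:
        return "Results Not Demonstrated"  # no measures or data to assess
    if score >= 85:
        return "Effective"
    if score >= 70:
        return "Moderately Effective"
    if score >= 50:
        return "Adequate"
    return "Ineffective"

# A program that scores well on design and management but poorly on results
# sees its overall rating pulled down, because results carry half the weight.
score = overall_part_score({
    "purpose_and_design": 100,
    "strategic_planning": 100,
    "program_management": 100,
    "results_accountability": 40,
})
print(score, part_rating(score))  # 70.0 Moderately Effective
```

The half-weight on results is the design choice that distinguished PART from the GPRA: a program could not score well merely by being well managed.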
A Paradigm Shift. PART represented an important shift in thinking about accountability and performance. According to David Frederickson and H. George Frederickson, the authors of Measuring the Performance of the Hollow State, “PART holds programs to high standards. Simple adequacy or compliance with the letter of the law is not enough; a program must show it is achieving its purpose and that it is well managed. PART requires a high level of evidence to justify a ‘yes’ response. Answers must be based on the most recent credible evidence.” The burden of proof was placed firmly on the agencies.
According to Professor Paul Posner of George Mason University and the late Denise Fantone of the GAO, PART had the potential “to link performance more directly with consequences for funding and program design.” One agency was informed by the OMB that if it did not reduce the number of its programs rated as “results not demonstrated,” the OMB would consider reducing its administrative budget. U.S. Department of Justice Weed and Seed grants were rated “results not demonstrated” after the GAO criticized the Department for failing to adequately assess the performance of the grant program.
An underlying rationale for PART was that through the use of an index assessing the performance of each federal program, all stakeholders in the budget process would be encouraged to respond to questions asking whether they agree or disagree with the assessments. If Congress, in particular, disagrees with an assessment, it should respond with evidence that is more credible than the evidence presented by the OMB. PART could have a profound impact on congressional appropriations if Congress were forced to examine the evidence more objectively, rather than relying on political rhetoric and anecdotal evidence.
According to Professor Moynihan, the OMB viewed PART as creating an evidence-based dialogue within the federal agencies in the following ways. First, PART was based on review by a third party—the OMB—which is sheltered from the unreliability of agency self-reporting on performance. Second, PART’s focus on performance during the budget preparation process would lead program managers within the executive branch to pay greater attention to performance measures than would otherwise be the case. Third, the standard of proof for judging effectiveness required evidence of positive outcomes, rather than an absence of clear failure. Fourth, the burden of proof for demonstrating effectiveness was placed on the agencies. Fifth, PART required all programs to be reviewed at five-year intervals, thereby placing pressure on agencies to continually collect performance information throughout their programs’ existence. Sixth, the routine process of PART was intended to create an incentive for change in agencies.
Accountability. Did PART increase accountability? PART was an attempt to hold federal programs accountable through the executive branch’s role in the budget process. Budget cuts are rare; program terminations are even rarer. Of the 65 programs recommended for elimination in FY 2005 by President Bush, only one, the Small Business Administration Business Information Centers, was eliminated that year. During President Bush’s first term, several education programs were rated as “results not demonstrated” or “ineffective,” but the Administration did not propose any cuts until after the President’s re-election. While the Bush Administration originally recommended eliminating the failed Even Start program, Congress did not eliminate it until FY 2011.
Research on PART. Several studies have assessed the association of PART ratings with the President’s budget recommendations and with congressional actions. PART scores have been found to have slight or modest positive associations with budget recommendations made by President Bush. While PART did have some influence over the President’s budget recommendations, once the recommendations moved across Pennsylvania Avenue, PART had little or no effect on congressional appropriations. In sum, PART was more useful for decision making in the executive branch than in the legislative branch. Legislators with a business background were likely to support PART, while those with longer tenures in Congress or higher levels of campaign contributions from special interest groups were less likely to support PART.
Severing Performance from Budget Decisions. In any case, one of President Obama’s first budgetary actions was to terminate PART and replace it with a loosely structured “performance improvement and analysis framework” that is separated from annual budgetary decisions. According to President Obama, the “ideological performance goals” of PART were replaced “with goals Americans care about and that are based on congressional intent and feedback from the people served by government programs.” The new framework was introduced during President Obama’s first budget request, for FY 2010. Instead of being a formal tool used by the OMB, this new, vaguely defined framework switched “the focus from grading programs as successful or unsuccessful to requiring agency leaders to set priority goals, demonstrate progress in achieving goals, and explain performance trends.” However, this new framework lacked budget accountability.
According to Robert D. Lee Jr., professor emeritus at Pennsylvania State University, and his coauthors, “Although a new framework for performance improvement and analysis was forecast, what actually developed were specific initiatives to conduct more rigorous program evaluations and publish the results.” While increasing the number of evaluations that assess the effectiveness of federal programs is an important success of the Obama Administration, the elimination of PART delinked accountability from budget decisions. Continuing to fund programs, regardless of evaluation results, does not serve the interests of federal taxpayers.
“While the expressed interest of the Obama administration in assessing the performance of federal government agencies is evident,” according to Professors Jones and McCaffery, “PART as employed by the Bush administration has been abandoned and there is no evidence to suggest that the [Obama] administration will attempt to implement performance budgeting per se as a means to accomplish this end. Rather, performance is now reviewed by OMB less formally than under PART.” Thus, the Obama Administration diminished the OMB’s role in assessing the performance of federal programs by effectively separating budget recommendations from performance measurement and evaluation. If the information obtained from evaluations has no influence over budgetary decisions, the knowledge gained from the evaluations is of little use.
In addition, the Obama Administration created a website—performance.gov—to highlight its plans to improve the performance of the federal government. Commenting on the site, Sean Reilly of Federal Times reported that the “site also offers no comprehensive assessment of federal programs’ performance.” “Instead,” he continued,
the site, which is run by the Office of Management and Budget and the General Services Administration, offers anecdotal summaries of what various agencies have done to improve financial management, human resources, sustainability and other areas. It also includes performance reports and other information previously buried on individual agency websites.
To date, the information provided at performance.gov has not substantially changed since Reilly’s original assessment.
The use of rigorous evidence in policymaking is a crucial area where the next President can help to improve accountability and fiscal discipline in the federal budget process. The President’s budget recommendation is the opening maneuver in the budget process, and the President can use it to encourage Congress to be more fiscally disciplined by incorporating rigorous evidence into budget recommendations. The next President must sell his or her vision to the American people to build support for presidential priorities, but the President must also devise tactics to sell budget recommendations to Congress. This is exactly where a revitalized and improved PART (PART 2.0) will play a vital role.
While the annual budget recommendation obviously serves the President’s political interests, it also provides useful information to Congress. Through PART 2.0, the OMB’s analytical skills will contribute to the budget information provided to Congress on how well federal agencies are performing.
Returning a reinvigorated PART to the President’s budget recommendations is crucial to setting an evidence-based agenda focused on fiscal discipline—but Congress will need to place a greater emphasis in its funding decisions on what rigorous evidence indicates about the performance of federal programs. If history is any guide, PART 2.0 will face congressional resistance.
Legislation that would have created a statutory obligation to implement some aspects of PART has been introduced in Congress, but not passed. However, the House of Representatives Appropriations Subcommittee on Financial Services and General Government, which has jurisdiction over the OMB, “put a limitation on OMB’s authority and approach to PART.” Further, the subcommittee “stipulated that if the committee did not agree with OMB’s plans for PART, it would prohibit OMB from using PART in its budget requests.”
Under the assumption that government agencies seek to maximize their budgets, an instrument like PART is essential for applying an offsetting force against the ever-increasing demand for more spending. Because congressional appropriations largely occur on an incremental basis, targeting annual budgets based on performance is a reasonable method for counteracting bureaucratic behavior. While the effect of PART on congressional appropriations was largely nonexistent, the next Administration should bring the performance budgeting tool back. An Administration that clearly believes that rigorous evidence should be used in budgeting will encourage Congress to become more fiscally disciplined.
Revising and improving the original PART, essentially creating PART 2.0, will help the next President restore fiscal discipline and spend taxpayers’ hard-earned dollars wisely. In addition, PART 2.0 should help structure communication between the OMB and agencies by focusing administrators and managers on key presidential priorities. Creating PART 2.0 will send a clear message that fiscal responsibility dictates that funding allocations must be linked to performance.
How Would the Process Work? The emphasis on the use of rigorous evidence in the formulation of the President’s annual budget recommendation would logically begin with a reinvigorated spring planning review by the OMB.
A revitalized spring review should be focused on evidence-based policymaking. Federal agencies would be required to present the OMB with evidence on their performance for the OMB to review in the spring. Budget requests from agencies should be based on their performance, not just desired levels of funding.
PART 2.0 assessments should be performed during the spring and summer, before agency budget submissions occur in the fall. By strongly embedding PART during this time period, the OMB will increase its influence by defining what performance information is credible and relevant to more adequately classify programs as succeeding or failing. The reinvigoration of spring planning review is not a new idea. The next Administration should use PART and the OMB’s analytic capability to advance an evidence-based policy agenda that has real budgetary consequences. PART 2.0 should be used to tightly focus the OMB’s analytic resources on funding what works, and defunding what does not work.
Early Collaboration Is Essential. For PART 2.0 to be successful, the next Administration should facilitate a process for the OMB to collaborate with departments to ensure that federal programs are rigorously assessed for effectiveness. Collaboration will be especially important when the next Administration attempts to enact its policy agenda. Early collaboration between the OMB and agencies to embed rigorous evaluations during the development of the President’s agenda is often a missing ingredient. While the OMB is often perceived as blocking bad ideas from being developed by agencies, OMB staff also need to develop cooperative relationships with agencies to identify and encourage the evaluation of programs.
Not only do large-scale experimental evaluations need to be embedded into newly created programs and initiatives, but existing programs need to undergo rigorous evaluations as well. Professor Dale Farran of Vanderbilt University has correctly criticized the Obama Administration for not requiring rigorous evaluation of the $226 million awarded to 18 states to fund new “high quality” state-level preschool programs. This omission is a mistake. The OMB needs to be involved early in the program-formulation stage. This means that OMB program examiners should be trained in the benefits of random assignment and more involved in the development of new initiatives. Not only will the OMB need enough expertise to recognize when an experimental evaluation can be applied, but the agency must also be assertive in ensuring that these evaluations occur and that the results are released without unnecessary delay.
Leadership Is Vital. Leadership is crucial to setting an evidence-based agenda. Effective leadership is more than offering mere political rhetoric. It has to have actual budget ramifications.
First, the next President needs to send a clear message to the OMB and the entire federal bureaucracy that the West Wing believes evidence-based policymaking should influence budget decisions and policy formulation. Evidence needs to be tied to budget decisions. Second, focusing the OMB on evidence-based policymaking will require the OMB Director and senior staff to develop clear expectations that program associate directors (PADs) and program examiners are to concentrate on rigorous evidence for justifying agency budgets.
Setting a clear evidence-based agenda that is tied to the performance evaluations of OMB personnel is a crucial element in shifting the federal government away from funding programs based on intentions, and toward results. A strong message from the White House and the OMB Director is needed to set expectations of OMB staff.
The OMB has five Resource Management Offices (RMOs)—(1) Natural Resources Programs; (2) Education, Income Maintenance and Labor Programs; (3) Health Programs; (4) General Government Programs; and (5) National Security Programs—that oversee the budgets and management of federal agencies. Each RMO is overseen by a political appointee known as a PAD. The PADs oversee career civil servants. Under the PADs are deputy associate directors (DADs, or “division chiefs”) who supervise branch chiefs.
Program examiners serve under branch chiefs and have been called “critical foot soldiers” responsible for reviewing all budgetary, legislative, and program issues in their review areas. Program examiners perform the bulk of the OMB’s information gathering and analysis. Program examiners “must be proficient as translators of broad presidential policies into specific programming applications in order to be able to explain presidential policies to agencies, and to be able to make analysis and recommendations they offer the President useful in light of his agenda.” These examiners assist in clearing legislative proposals before they are sent to Congress, help clear congressional testimony of executive branch appointees and career civil service staff, and occasionally participate in interagency task forces and commissions. OMB program examiners played crucial roles in implementing PART.
While there must be buy-in by career OMB staff in order for PART 2.0 to be successful, federal agencies may be resistant to any proposals to hold government accountable. What can the OMB do to encourage federal agencies to perform large-scale experimental evaluations of their programs?
First, the OMB’s apportionment powers need to be strategically exercised. Throughout the fiscal year, the OMB makes quarterly apportionments of funding appropriated by Congress. The OMB should encourage agencies that are reluctant to rigorously evaluate their programs by making apportionment contingent on performing such evaluations and releasing the results to the public on a timely basis.
To force agencies to overcome their reluctance to perform rigorous experimental evaluations, the OMB should withhold funds when agencies are determined not to be complying with directives to rigorously evaluate their programs. For example, the OMB could temporarily withhold the apportionment of funding directed toward the salaries of a foot-dragging agency’s leadership.
Second, the OMB has the authority to approve the “reprogramming” of funds—the “shifting of monies from one project to another within the same appropriations account.” If an agency is not devoting enough resources toward rigorous evaluation, the OMB should reprogram funds within an agency towards rigorous evaluations.
Assistance to Congress. What can the executive branch do to make the congressional process more inclined to adopt evidence-based policymaking? The executive branch should offer clear language on outcome expectations for authorization legislation. Further, it can propose performance measures that will be used to gauge progress toward the goals of the legislation.
To inform Congress and the policy community on the benefits of PART 2.0, senior OMB staff need to reach out to Congress, just like OMB officials did during the Bush Administration’s advocacy of PART. Bush Administration OMB Associate Director for Administration and Government Performance Robert Shea
made an admirable effort to win the attention and approval of key committee and agency staff, appearing and engaging all comers at a seemingly endless series of appointments on the Hill and around Washington for meetings sponsored by management advocacy organizations, think tanks, and consulting firms.
Not only should PART 2.0 make fiscal discipline a central aspect of the President’s budget requests, but it can also improve fiscal discipline within Congress.
The current link between performance and congressional appropriations is, at best, tenuous. Barriers to performance budgeting created by legislatures include uncertain or vague policy goals that impede deliberate goal setting, reliance on anecdotal information rather than rigorous evidence in making budget allocations, and dedicating inadequate attention to oversight of program performance. The following sections offer recommendations for how Congress, with the assistance of the executive branch, can become a wiser steward of the federal purse.
Appropriations. According to Philip Joyce, professor of public policy at the University of Maryland, “there is little evidence that appropriations committees consider performance information in any systematic way.” The appropriations committees tend to focus on marginal decisions, rather than on the effectiveness of spending. The process is too often a vehicle for doling out money to special interests and bureaucracies.
The best way to get Congress to adopt evidence-based policymaking is to cement reforms in the appropriations process. For example, passing legislation that would anchor PART 2.0 in the appropriations process will go a long way in making evidence-based policymaking influential in the federal government.
In 2009, Representative Henry Cuellar (D–TX) introduced the Government Efficiency, Effectiveness, and Performance Improvement Act of 2009 (H.R. 2142). The act would have codified the original PART into federal law. However, the legislation morphed into the GPRA Modernization Act of 2010 that completely dropped PART from the legislation that became law.
Oversight. During confirmation hearings of presidential nominees, committee members should ask detailed questions on how the next Administration can improve the application of evidence-based policymaking. For example, how can they improve upon what previous Administrations have done? By engaging political appointees on this topic, Congress will spur their interest in evidence-based policymaking.
Authorizations. Congress can take several steps to ensure that federal programs are properly assessed for effectiveness. First, when Congress authorizes existing or new programs, the legislation should set clear expectations for performance that are confirmed by experimental evaluation wherever possible. The expectations and evaluation of the program need to take hold during the authorization process. Second, the experimental evaluations should be large-scale, nationally representative, multisite studies. Third, Congress should specify the types of impact measures to be assessed. Fourth, Congress should institute procedures that encourage government agencies to carry out congressionally mandated evaluations, despite any entrenched biases against experimental evaluations. Fifth, Congress should require that congressionally mandated evaluations be submitted to the relevant congressional committees and released to the public in a timely manner after completion.
Commissions. Congress should consider creating commissions to help provide recommendations for creating a leaner, more effective federal government. The Bush Administration proposed the Government Reorganization and Program Performance Improvement Act of 2005, which would have empowered the President to create a commission to review the performance of federal agencies and recommend programs for termination. The termination recommendations would have had to be approved through an expedited process by Congress. The legislation was introduced as S. 1399 by Senator Craig Thomas (R–WY) during the 109th Congress, but was not passed.
The Bush Administration proposed two types of commissions to regularly assess the performance of federal programs. First, the Government Reorganization and Improvement of Performance Act would have created a bipartisan “sunset” commission to review the performance of federal programs over a 10-year period. It would have recommended ways to improve the performance of worthy programs and the abolishing of ineffective programs. Second, the Sunset Act would have created a “results” commission to evaluate the degree to which specific programs are producing their intended outcomes.
A similar commission proposal, the Commission on the Accountability and Review of Federal Agencies Act (H.R. 522), introduced during the 114th Congress by Representative Doug Collins (R–GA), would create a federal commission to evaluate federal agencies and their programs over a six-year period to identify duplicative programs for consolidation and wasteful programs for termination. For the identified programs and agencies, the commission would recommend realignment or termination. Similar legislative proposals have been viewed as a potentially effective means to consolidate duplicative programs and eliminate wasteful spending.
America’s debt is out of control, and Congress and recent Presidents have done little to decrease spending. The federal government needs to prioritize government spending by targeting resources intelligently. The use of rigorous evidence is a crucial area where the next President can help improve accountability and fiscal discipline in the federal budget process. A genuine evidence-based agenda focused on fiscal discipline will help the next President re-assert control over runaway spending.
The next Administration should re-establish a modified and improved PART along with a fiscally disciplined evidence-based spring review by the OMB. When programs that fail to produce results receive reduced funding or are terminated altogether, and programs that generate results continue to receive funding, a better allocation of scarce resources is the result. Unless rigorous evaluation results are strongly linked to budget decisions, any proclamations about the benefits of evidence-based policymaking are meaningless.

—David B. Muhlhausen, PhD, is a Research Fellow for Empirical Policy Analysis in the Center for Data Analysis, of the Institute for Economic Freedom and Opportunity, at The Heritage Foundation.
 Shelley Lynne Tomkin, Inside OMB: Politics and Process in the President’s Budget Office (Armonk, NY: M. E. Sharpe, 1998), pp. 3–4.
 Ibid., p. 5.
 Ibid., p. 7.
 Ibid., pp. 6–7.
 Office of Management and Budget, “Performance and Management Assessments,” Budget of the United States Government: Fiscal Year 2004 (Washington, DC: U.S. Government Printing Office, 2003), p. 10, https://www.gpo.gov/fdsys/pkg/BUDGET-2004-PMA/pdf/BUDGET-2004-PMA.pdf (accessed July 25, 2016).
 F. Stevens Redburn and Philip G. Joyce, “Strengthening the President’s Management Hand: Budgeting and Financial Management,” in F. Stevens Redburn, Robert J. Shea, and Terry F. Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience (Armonk, NY: M. E. Sharpe, 2008), pp. 337–338.
 Alberto Alesina, Carlo Favero, and Francesco Giavazzi, “The Output Effect of Fiscal Consolidation Plans,” Journal of International Economics, Vol. 96, Supplement 1 (July 2015), pp. S19–S42; Alberto Alesina and Silvia Ardagna, “Large Changes in Fiscal Policy: Taxes versus Spending,” Tax Policy and the Economy, Vol. 24, No. 1 (2010), pp. 35–68; Alberto Alesina, Silvia Ardagna, Roberto Perotti, and Fabio Schiantarelli, “Fiscal Policy, Profits, and Investment,” American Economic Review, Vol. 92, No. 3 (June 2002), pp. 571–589; and Silvia Ardagna, “Fiscal Policy Composition, Public Debt, and Economic Activity,” Public Choice, Vol. 109 (2001), pp. 301–325. For a summary of some of the research on this topic, see David B. Muhlhausen, Do Federal Social Programs Work? (Santa Barbara, CA: Praeger, 2013), pp. 38–40.
 David B. Muhlhausen, “Evidence-Based Policymaking: A Primer,” Heritage Foundation Backgrounder No. 3063, October 15, 2015, http://www.heritage.org/research/reports/2015/10/evidence-based-policymaking-a-primer.
 F. Stevens Redburn, Robert J. Shea, Terry F. Buss, and Ednilson Quintanilla, “Performance-Based Management: How Governments Can Learn from Experience,” in Redburn, Shea, and Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience, pp. 3–19.
 Burt S. Barnow, “Lessons from the WIA Performance Measures,” in Douglas J. Besharov and Phoebe H. Cottingham, eds., The Workforce Investment Act: Implementation Experiences and Evaluation Findings (Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 2011), pp. 209–210.
 Muhlhausen, Do Federal Social Programs Work?
 U.S. Department of Labor, U.S. Department of Labor FY 2015 Annual Performance Report, https://www.dol.gov/sites/default/files/documents/general/budget/CBJ-2017-V1-01.pdf (accessed July 21, 2016).
 Ibid., p. 26.
 Dianne Blank, Laura Heald, and Cynthia Fagoni, “An Overview of WIA,” in Besharov and Cottingham, eds., The Workforce Investment Act: Implementation Experiences and Evaluation Findings, p. 64.
 Christopher T. King and Burt S. Barnow, “The Use of Market Mechanisms,” in Besharov and Cottingham, eds., The Workforce Investment Act: Implementation Experiences and Evaluation Findings, pp. 81–111.
 Burt S. Barnow and Jeffrey A. Smith, “Performance Management of U.S. Job Training Programs,” in Christopher J. O’Leary, Robert A. Straits, and Stephen A. Wandner, eds., Job Training in the United States (Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 2004), pp. 21–55.
 U.S. Department of Labor, U.S. Department of Labor FY 2015 Annual Performance Report, pp. 26–27.
 Ibid., p. 34.
 Ibid., p. 35.
 Andrew Wiegand, Jesse Sussell, Erin Valentine, and Brittany Henderson, “Evaluation of the Re-Integration of Ex-Offenders (RExO) Program: Two-Year Impact Report,” Social Policy Research Associates, May 2015, http://wdr.doleta.gov/research/FullText_Documents/ETAOP_2015-04.pdf (accessed July 21, 2016). For a review of the effectiveness of prisoner re-entry programs, see David B. Muhlhausen, “Studies Cast Doubt on Effectiveness of Prisoner Re-entry Programs,” Heritage Foundation Backgrounder No. 3010, December 10, 2015, http://www.heritage.org/research/reports/2015/12/studies-cast-doubt-on-effectiveness-of-prisoner-reentry-programs.
 Wiegand, Sussell, Valentine, and Henderson, “Evaluation of the Re-Integration of Ex-Offenders (RExO) Program,” p. III-3, Table III-1.
 Ibid., p. IV-3, Table IV-1.
 Ibid., p. IV-15.
 Philip G. Joyce, “Linking Performance and Budgeting: Opportunities for Federal Executives,” in Redburn, Shea, and Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience, p. 49.
 U.S. General Accounting Office, “Performance Budgeting: Past Initiatives Offer Insights for GPRA Implementation,” March 1997, http://gao.gov/products/AIMD-97-46 (accessed June 7, 2016), and Beryl A. Radin, “The Legacy of Federal Management Change: PART Repeats Familiar Problems,” in Redburn, Shea, and Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience, pp. 114–132.
 Teresa Curristine, “OECD Countries’ Experiences of Performance Budgeting and Management: Lessons Learned,” in Redburn, Shea, and Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience, pp. 209–230.
 Yasmina Vinci, “Does Head Start work?” Reuters, December 27, 2012, http://blogs.reuters.com/great-debate/2012/12/27/does-head-start-work/ (accessed July 14, 2016).
 Jason Richwine and Lindsey Burke, “In Preschool Debate, Politics Trumps Evidence,” RealClearPolitics.com, April 22, 2013, http://www.realclearpolitics.com/articles/2013/04/22/in_preschool_debate_politics_trumps_evidence_118064.html#ixzz4FdeGyGW9 (accessed July 27, 2016).
 U.S. Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research, and Evaluation, “Head Start Impact Study: Final Report,” January 2010, p. xxxviii.
 Mike Puma et al., “Third Grade Follow-Up to the Head Start Impact Study: Final Report,” U.S. Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research, and Evaluation, OPRE Report 2012-45, October 2012.
 Muhlhausen, Do Federal Social Programs Work? pp. 80–125, and David B. Muhlhausen, “Do Federal Social Programs for Children Work?” testimony before the Committee on the Budget, United States Senate, June 26, 2013, http://www.heritage.org/research/testimony/2013/07/evaluating-federal-social-programs-finding-out-what-works-and-what-does-not.
 Donald P. Moynihan, The Dynamics of Performance Management: Constructing Information and Reform (Washington, DC: Georgetown University Press, 2008), p. 14.
 Ibid., p. 15.
 David B. Muhlhausen, “Evidence-Based Policymaking: A Primer.”
 Muhlhausen, Do Federal Social Programs Work?; Stuart M. Butler and David B. Muhlhausen, “Can Government Replicate Success?” National Affairs (Spring 2014), pp. 25–39, http://www.nationalaffairs.com/publications/detail/can-government-replicate-success (accessed July 15, 2016); and Muhlhausen, “Evidence-Based Policymaking: A Primer.”
 Muhlhausen, Do Federal Social Programs Work?
 Cynthia Miller et al., “The Challenge of Replicating Success in a Changing World: Final Report on the Center for Employment Training Replication Sites,” Manpower Demonstration Research Corporation, September 2005, http://www.mdrc.org/publication/challenge-repeating-success-changing-world (accessed September 2, 2015).
 George Cave et al., “JOBSTART: Final Report on a Program for School Dropouts,” Manpower Demonstration Research Corporation, October 1993, http://www.mdrc.org/project/jobstart (accessed September 2, 2015).
 Miller et al., “The Challenge of Replicating Success in a Changing World.”
 Ibid., p. xi.
 U.S. Department of Health and Human Services, Office of Adolescent Health, “Teen Pregnancy Prevention,” http://www.hhs.gov/ash/oah/oah-initiatives/tpp_program/about/ (accessed July 22, 2016).
 Ron Haskins, “Renewing Communities and Providing Opportunities Through Innovative Solutions to Poverty,” testimony before the Committee on Homeland Security and Governmental Affairs, U.S. Senate, June 22, 2016, http://www.brookings.edu/research/testimony/2016/06/22-renewing-communities-and-providing-opportunities-through-innovative-solutions-to-poverty-haskins (accessed July 22, 2016).
 U.S. Department of Health and Human Services, Office of Adolescent Health, “Evidence-Based TPP Programs,” http://www.hhs.gov/ash/oah/oah-initiatives/tpp_program/db/ (accessed July 22, 2016).
 U.S. Department of Health and Human Services, Office of Adolescent Health, “Grantees FY 2010–2014,” http://www.hhs.gov/ash/oah/oah-initiatives/evaluation/grantee-led-evaluation/grantees-2010-2014.html (accessed September 26, 2016).
 Joan Eichner et al., “Evaluation of Seventeen Days in Ohio, Pennsylvania, and West Virginia,” University of Pittsburgh, Office of Child Development, August 31, 2015; Eric Jenner et al., “Evaluation of Safer Sex Intervention in New Orleans, LA: Findings from the Replication of an Evidence-Based Teen Pregnancy Prevention Program. New Orleans, LA,” The Policy & Research Group, January 22, 2016; Karin Coyle et al., “Evaluation of It’s Your Game…Keep It Real in Houston, TX: Final Report,” Scotts Valley, CA, ETR Associates, February 10, 2016; Elaine M. Walker, Rafael Inoa, and Nanci Coppola, “Evaluation of Promoting Health Among Teens Abstinence-Only Intervention in Yonkers, NY,” Sametric Research, Princeton, NJ, March 9, 2016; and Scott Herrling, “Evaluation of the Children’s Aid Society (CAS)-Carrera Adolescent Pregnancy Prevention Program in Chicago, IL: Findings from the Replication of an Evidence-Based Teen Pregnancy Prevention Program,” Accord, NY, Philliber Research & Evaluation, February 29, 2016.
 U.S. Department of Health and Human Services, Office of Adolescent Health, “Grantees FY 2010–2014.”
 Stephanie Martin et al., “Evaluation of Alaska Promoting Health Among Teens, Comprehensive Abstinence and Safer Sex (AKPHAT) in Alaska,” Institute of Social and Economic Research, University of Alaska-Anchorage, October 6, 2015; Holli Slater and Diane Mitschke, “Evaluation of the Crossroads Program in Arlington, TX: Findings from an Innovative Teen Pregnancy Prevention Program,” Arlington, TX, University of Texas at Arlington, December 20, 2015; Traci Schwinn et al., “Evaluation of mCircle of Life in Tribes of the Northern Plains: Findings from an Innovative Teen Pregnancy Prevention Program,” final behavioral impact report submitted to the Office of Adolescent Health, August 18, 2015; Amita N. Vyas et al., “The Evaluation of Be Yourself/Sé Tú Mismo in Montgomery & Prince Georges Counties, Maryland,” Washington, DC, The George Washington University Milken Institute School of Public Health, October 20, 2015; and Patricia Kissinger, Norine Schmidt, and Jakevia Green, “Evaluation of BUtiful: An Internet Pregnancy Prevention for Older Teenage Girls in New Orleans, Louisiana,” Tulane University School of Public Health and Tropical Medicine, November 11, 2015.
 Butler and Muhlhausen, “Can Government Replicate Success?”
 Erik Eckholm, “Job Corps Plans Makeover for a Changed Economy,” The New York Times, February 20, 2007, http://www.nytimes.com/2007/02/20/washington/20jobcorps.html (accessed July 14, 2016), and Peter Z. Schochet, Sheena McConnell, and John Burghardt, National Job Corps Study: Findings Using Administrative Earnings Records Data: Final Report (Princeton, NJ: Mathematica Policy Research, Inc., October 2003).
 U.S. Government Accountability Office, “Employment and Training Administration: More Actions Needed to Improve Transparency and Accountability of Its Research Programs,” GAO–11–285, March 2011, http://www.gao.gov/new.items/d11285.pdf (accessed July 14, 2016).
 Dan Lips, “Politicizing Preschool,” Fox News, December 28, 2009, http://www.foxnews.com/opinion/2009/12/29/dan-lips-heritage-preschool-head-start-politics.html (accessed July 14, 2016).
 Mike Puma et al., “Third Grade Follow-up to the Head Start Impact Study Final Report,” OPRE Report No. 2012-45, Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, October 2012, http://www.acf.hhs.gov/opre/resource/third-grade-follow-up-to-the-head-start-impact-study-final-report (accessed July 25, 2016).
 U.S. Office of Management and Budget, “Program Evaluation,” in Budget of the United States Government, Fiscal Year 2011: Analytical Perspectives (Washington, DC: U.S. Government Printing Office, 2010), p. 91, https://www.gpo.gov/fdsys/pkg/BUDGET-2011-PER/pdf/BUDGET-2011-PER.pdf (accessed June 30, 2016).
 Peter R. Orszag, “Federal Statistics in the Policy Making Process,” Annals of the American Academy of Political and Social Science, Vol. 631, No. 1 (September 2010), pp. 34–42.
 Ibid., pp. 34–35.
 Ibid., pp. 35–36.
 Ibid., p. 36.
 For a review of the effectiveness of Even Start and Head Start, see Muhlhausen, Do Federal Social Programs Work? pp. 125–138 and pp. 104–125, respectively.
 Geoffrey D. Borman, “National Efforts to Bring Reform to Scale in High-Poverty Schools: Outcomes and Implications,” in Barbara Schneider and Sarah-Kathryn McDonald, eds., Scaled-Up in Education: Issues in Practice, Vol. II (Lanham, MD: Rowman & Littlefield, Inc., 2007), pp. 41–67.
 John M. Love et al., Making a Difference in the Lives of Infants and Toddlers and Their Families: The Impacts of Early Head Start, Volume 1: Final Technical Report (Princeton, NJ: Mathematica Policy Research, June 2002).
 Cheri A. Vogel et al., Early Head Start Children in Grade 5: Long-Term Follow-Up of the Early Head Start Research Evaluation Project Study Sample: Final Report, OPRE Report No. 2011–8 (Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, December 2010).
 Ibid., Table III.2, pp. 24–25.
 Orszag, “Federal Statistics in the Policy Making Process,” p. 41.
 Ibid., p. 35.
 Ibid., pp. 34–35.
 L. R. Jones and Jerry L. McCaffery, “Performance Budgeting in the U.S. Federal Government: History, Status, and Future Implications,” Public Finance and Management, Vol. 10, No. 3 (2010), p. 507.
 Ibid., p. 508.
 U.S. Office of Management and Budget, “Preparation, Submission, and Execution of the Budget,” Circular No. A–11, June 2015, https://www.whitehouse.gov/sites/default/files/omb/assets/a11_current_year/a11_2015.pdf (accessed June 2, 2016).
 Tomkin, Inside OMB, p. 119, and Matthew Dull, “Why PART? The Institutional Politics of Presidential Budget Reform,” Journal of Public Administration Research and Theory, Vol. 16, No. 2 (2006), p. 198.
 Tomkin, Inside OMB, p. 119.
 Angela M. Antonelli and Peter B. Sperry, “Achieving Fiscal Discipline in an Era of Surplus,” in A Budget for America (Washington, DC: The Heritage Foundation, 2001), pp. 3–18.
 Tomkin, Inside OMB, p. 119.
 Dull, “Why PART?” p. 199.
 David G. Frederickson and H. George Frederickson, Measuring the Performance of the Hollow State (Washington, DC: Georgetown University Press, 2007), p. 2.
 Ibid., p. 37.
 Paul L. Posner and Denise M. Fantone, “Assessing Federal Program Performance: Observations on the U.S. Office of Management and Budget’s Program Assessment Rating Tool and Its Use in the Budget Process,” Public Performance & Management Review, Vol. 30, No. 3 (March 2007), pp. 351–368.
 Moynihan, The Dynamics of Performance Management, pp. 131–132.
 Office of Management and Budget, “Performance and Management Assessments,” in Budget of the United States Government: Fiscal Year 2004 (Washington, DC: U.S. Government Printing Office, 2003), p. 10, https://www.gpo.gov/fdsys/pkg/BUDGET-2004-PMA/pdf/BUDGET-2004-PMA.pdf (accessed July 25, 2016).
 Ibid., pp. 12–13.
 Ibid., p. 14.
 Frederickson and Frederickson, Measuring the Performance of the Hollow State, p. 41.
 Dull, “Why PART?” pp. 202–203.
 Posner and Fantone, “Assessing Federal Program Performance: Observations on the U.S. Office of Management and Budget’s Program Assessment Rating Tool and Its Use in the Budget Process,” p. 364.
 John B. Gilmour, “Implementing OMB’s Program Assessment Rating Tool: Meeting the Challenges of Performance-Based Budgeting,” in Redburn, Shea, and Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience, p. 25.
 Moynihan, The Dynamics of Performance Management, p. 139.
 Ibid., pp. 139–140.
 Herbert Kaufman, Are Government Organizations Immortal? (Washington, DC: The Brookings Institution, 1976), and Mark R. Daniels, Terminating Public Programs: An American Political Paradox (Armonk, NY: M. E. Sharpe, 1997).
 Dull, “Why PART?” p. 188, and Amelia Gruber, “Administration Faces Uphill Task in Eliminating Programs,” Government Executive, February 7, 2005, http://www.govexec.com/management/2005/02/administration-faces-uphill-task-in-eliminating-programs/18510/print/ (accessed June 16, 2016).
 Moynihan, The Dynamics of Performance Management, p. 128.
 John B. Gilmour and David E. Lewis, “Assessing Performance Budgeting at OMB: The Influence of Politics, Performance, and Program Size,” Journal of Public Administration Research and Theory: J-PART, Vol. 16, No. 2 (April 2006), pp. 169–186; Thomas J. Greitens and M. Ernita Joaquin, “Policy Typology and Performance Measurement,” Public Performance & Management Review, Vol. 33, No. 4 (2010), pp. 555–570; and Dull, “Why PART?”
 Donald P. Moynihan, “Advancing the Empirical Study of Performance Management: What We Learned from the Program Assessment Rating Tool,” The American Review of Public Administration, Vol. 43, No. 5 (2013), pp. 499–517; Velda Frisco and Odd J. Stalebrink, “Congressional Use of the Program Assessment Rating Tool,” Public Budgeting & Finance (Summer 2008), pp. 1–19; John B. Gilmour and David E. Lewis, “Assessing Performance Budgeting at OMB: The Influence of Politics, Performance, and Program,” Journal of Public Administration Research and Theory: J-PART, Vol. 16, No. 2 (April 2006), Table 1, p. 179; Carolyn J. Heinrich, “How Credible Is the Evidence, and Does It Matter? An Analysis of the Program Assessment Rating Tool,” Public Administration Review, Vol. 72, No. 1 (2011), pp. 123–134; and Dong-Young Rhee, “The Impact of Performance Information on Congressional Appropriations,” Public Performance & Management Review, Vol. 38, No. 1 (2014), pp. 100–124.
 Odd J. Stalebrink and Velda Frisco, “PART in Retrospect: An Examination of Legislators’ Attitudes Toward PART,” Public Budgeting & Finance (Summer 2011), pp. 1–21.
 Robert D. Lee, Ronald W. Johnson, and Philip G. Joyce, Public Budgeting Systems, 9th ed. (Burlington, MA: Jones & Bartlett Learning, 2013), p. 216, and Office of Management and Budget, Budget of the U.S. Government, Fiscal Year 2010: Analytical Perspectives, https://www.gpo.gov/fdsys/pkg/BUDGET-2010-PER/pdf/BUDGET-2010-PER.pdf (accessed June 15, 2016).
 Elizabeth Newell, “Budget Summary Is Littered with Management Proposals,” Government Executive, February 26, 2009, http://www.govexec.com/oversight/2009/02/budget-summary-is-littered-with-management-proposals/28645/ (accessed July 1, 2016).
 Office of Management and Budget, Budget of the U.S. Government, Fiscal Year 2010: Analytical Perspectives.
 Ibid., p. 9.
 Lee, Johnson, and Joyce, Public Budgeting Systems, p. 216.
 Jones and McCaffery, “Performance Budgeting in the U.S. Federal Government,” pp. 503–504.
 Sean Reilly, “Performance Website Underperforms,” Federal Times, September 22, 2011, http://www.federaltimes.com/story/defense/archives/2011/09/22/performance-website-underperforms/78532216/ (accessed June 15, 2016).
 Representative Todd Russell Platts (R–PA) introduced the Program Assessment and Results Act (H.R. 3826 and H.R. 185) during the 108th and 109th Congresses, respectively, while Senator Peter Fitzgerald (R–IL) introduced similar legislation (S. 2898) during the 108th Congress.
 Beryl A. Radin, “The Legacy of Federal Management Change: PART Repeats Familiar Problems,” in Redburn, Shea, and Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience, pp. 123–124.
 Ibid., p. 124.
 William A. Niskanen Jr., Bureaucracy and Public Economics (Brookfield, VT: Edward Elgar Publishing Company, 2000), and Lloyd A. Blanchard, “PART and Performance Budgeting Effectiveness,” in Redburn, Shea, and Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience, pp. 67–91.
 Blanchard, “PART and Performance Budgeting Effectiveness,” p. 72.
 The idea for incorporating evidence-based policymaking into the spring planning review was proposed to the author in a personal conversation by Robert J. Shea, former Associate Director for Administration and Government Performance at the U.S. Office of Management and Budget during the Bush Administration and a current member of the Evidence-Based Policymaking Commission.
 Antonelli and Sperry, “Achieving Fiscal Discipline in an Era of Surplus,” pp. 3–18.
 The origin of this idea came from Andrew R. Feldman, a visiting fellow at the Brookings Institution.
 Dale C. Farran, “Federal Preschool Development Grants: Evaluation Needed,” Brookings Institution Evidence Speaks Reports, Vol. 1, No. 22 (July 14, 2016), http://www.brookings.edu/~/media/research/files/reports/2016/07/14-federal-preschool-development-grnats-evaluation-needed-farran/prek-development-grants-2.pdf (accessed July 22, 2016).
 The origin of this idea for setting clear expectations for PADs and program examiners came from a conversation with Andrew R. Feldman, a visiting fellow at the Brookings Institution.
 Tomkin, Inside OMB, p. 13.
 Ibid., p. 14.
 Radin, “The Legacy of Federal Management Change.”
 Tomkin, Inside OMB, p. 187.
 Joyce, “Linking Performance and Budgeting,” p. 58.
 Dull, “Why PART?” p. 203.
 Paul L. Posner and Denise M. Fantone, “Performance Budgeting: Prospects for Sustainability Effectiveness,” in Redburn, Shea, and Buss, eds., Performance Management and Budgeting: How Governments Can Learn from Experience, p. 108, and Joyce, “Linking Performance and Budgeting,” pp. 442–461.
 Joyce, “Linking Performance and Budgeting,” p. 57.
 The idea for using nomination hearings to spur the interest of presidential nominees in evidence-based policymaking was proposed by Jon Baron, vice president of Evidence-Based Policy at the Laura and John Arnold Foundation in a conversation with the author.
 See Muhlhausen, Do Federal Social Programs Work? pp. 313–319 for a more detailed presentation of the necessary steps Congress needs to accomplish to ensure programs are evaluated. In addition, see the model legislation for evaluating programs in the book’s appendix, pp. 321–323.
 For more information on the benefits of large-scale multisite experimental evaluations, see Muhlhausen, Do Federal Social Programs Work? pp. 76–78.
 The Government Reorganization and Program Performance Improvement Act of 2005, not enacted, https://www.whitehouse.gov/sites/default/files/omb/assets/omb/legislative/grppi_act_2005.pdf (accessed June 7, 2016).
 Jonathan D. Breul, “Three Bush Administration Management Reform Initiatives: The President’s Management Agenda, Freedom to Manage Legislative Proposals, and the Program Assessment Rating Tool,” Public Administration Review, Vol. 67, No. 1 (January–February 2007), pp. 21–26.
 Romina Boccia, “How Congress Can Improve Government Programs and Save Taxpayer Dollars,” Heritage Foundation Backgrounder No. 2915, June 10, 2014, http://www.heritage.org/research/reports/2014/06/how-congress-can-improve-government-programs-and-save-taxpayer-dollars, and Jerry Brito, “Running for Cover: The BRAC Commission as a Model for Federal Spending Reform,” The Georgetown Journal of Law & Public Policy, Vol. 9, No. 1 (Winter 2011), http://mercatus.org/sites/default/files/BRAC-commission-model-for-federal-spending-reform.jpg (accessed July 25, 2016).