October 22, 2010 | WebMemo on Education
In a recent edition of the Annals of the American Academy of Political and Social Science, former Director of the Office of Management and Budget (OMB) Peter R. Orszag argues that empirical evidence is the foundation of policymaking in the Obama Administration. Orszag asserts that the Administration “has been clear that it places a very significant emphasis on making policy conclusions based on what the evidence suggests.”
The current acting Director of OMB, Jeffrey Zients, also supports the notion that empirical evidence should drive policymaking. He recently stated that “too many important programs have never been formally evaluated. And when they have, the results of those evaluations have not been fully taken into the decision-making process, at the level of either budgetary decisions or management practices.”
Head Start: The Evidence Is Clear
To demonstrate how the Obama Administration is using empirical evidence to guide decision-making, Orszag uses the examples of Head Start and Early Head Start:
Head Start and Early Head Start also both have documented very strong suggestive evidence that they pay off over the medium and long term, both in terms of narrow indicators and broader social indicators for society as a whole. These evaluations demonstrated progress against important program goals and provided documentation necessary to justify increases in funding in the president’s budget to either bring the programs to scale … or to further expand access, in the cases of Head Start and Early Head Start.
Of particular interest are Orszag’s comments on Head Start, a “Great Society” pre-school program intended to provide a boost to disadvantaged children before they enter elementary school. From fiscal year (FY) 1965 to FY 2009, Congress “invested” $167.5 billion (in 2009 dollars) in Head Start.
Orszag cites the 2010 Head Start Impact Study as evidence that the number of children participating in Head Start needs to be expanded. While the study experienced unusual delays in being released by the Department of Health and Human Services, one would still naturally presume that the study found the program to be highly effective and, thus, deserving of expansion. Using random assignment, the study placed almost 5,000 children eligible for Head Start into treatment and control groups based on a lottery. The children who won the lottery were awarded “free” (taxpayer-paid) access to pre-kindergarten Head Start services, while the others either did not attend preschool or sought out alternatives to Head Start.
The study tracked the progress of three- and four-year-olds entering Head Start through kindergarten and the first grade. Overall, the program had little to no positive effects for children granted access to Head Start. For the four-year-old group, compared to similarly situated children not allowed access to Head Start, access to the program failed to raise the cognitive abilities of Head Start participants on 41 measures. Specifically, the language skills, literacy, math skills, and school performance of the participating children failed to improve.
Alarmingly, access to Head Start for the three-year-old group actually had a harmful effect on the teacher-assessed math ability of these children once they entered kindergarten. Teachers reported that non-participating children were more prepared in math skills than those children who participated in Head Start.
Head Start also had little to no effect on the socio-emotional, health, and parenting outcomes of children participating in the program. For the four-year-old group, access to Head Start failed to have an effect on 70 out of 71 socio-emotional, health, and parenting outcomes. The three-year-old group did slightly better: Access to Head Start failed to have an effect on 66 of the 71 socio-emotional, health, and parenting outcomes.
In the same Annals article, Orszag justifies the proposed termination of Even Start, an early childhood education program, because the program “has been evaluated rigorously three times” and “out of forty-one measurable outcomes, the program demonstrated no measured difference between those enrolled in the program and those not on thirty-eight of the outcomes.” Because the program failed, the Obama Administration decided that Even Start should be terminated.
However, Orszag’s logic does not hold for Head Start. While Even Start was found to have no effect on 38 out of 41 outcome measures, Head Start’s performance is even worse. Overall, Head Start failed to have an effect on 110 out of 112 outcome measures for the four-year-old group. For the three-year-old group, Head Start failed to have an impact on 106 out of 112 measures, with five beneficial impacts and one harmful impact.
Sound Data Does Not Guarantee Sound Policy
Orszag concludes that “the highest level of integrity must be maintained in the process of using science to inform public policy. Sound data are not sufficient to guarantee sound policy decisions, but they are necessary.” Indeed, sound data are not a sufficient guarantee for sound policy decisions. Dealing with the data forthrightly is necessary as well.
In no way does the 2010 Head Start Impact Study demonstrate “very strong suggestive evidence” that Head Start “pay[s] off over the medium and long term.” Placing more children into an already failed program does not represent placing “significant emphasis on making policy conclusions based on what the evidence suggests.” In addition to Head Start being a highly ineffective program, the U.S. Government Accountability Office found that Head Start centers across the nation committed fraud by actively enrolling children from families not qualified to participate in the early education program.
Let’s hope that Zients is more serious than Orszag about using empirical evidence to inform policymaking. As Zients wrote, “Finding out if a program works is common sense, and the basis upon which we can decide which programs should continue and which need to be fixed or terminated.”
David B. Muhlhausen, Ph.D., is Research Fellow in Empirical Policy Analysis in the Center for Data Analysis at The Heritage Foundation.
Peter R. Orszag, “Federal Statistics in the Policy Making Process,” Annals of the American Academy of Political and Social Science, Vol. 631 (September 2010), pp. 34–42.
Ibid., pp. 34–35.
Jeffrey Zients, “Discovering What Works,” OMB Blog, August 2, 2010, at http://www.whitehouse.gov/blog/2010/08/02/discovering-what-works (October 19, 2010).
Orszag, “Federal Statistics,” pp. 35–36.
David B. Muhlhausen and Dan Lips, “Head Start Earns an F: No Lasting Impact for Children by First Grade,” Heritage Foundation Backgrounder No. 2363, January 21, 2010, at http://www.heritage.org/Research/Reports/2010/01/Head-Start-Earns-an-F-No-Lasting-Impact-for-Children-by-First-Grade.
U.S. Department of Health and Human Services, Administration for Children and Families, “Head Start Impact Study: Final Report,” January 2010, at http://www.acf.hhs.gov/programs/opre/hs/impact_study/reports/impact_study/hs_impact_study_final.pdf (October 19, 2010).
Orszag, “Federal Statistics,” p. 36.
Jennifer Marshall et al., “Is Head Start Helping Children Succeed and Does Anyone Care?” The Heritage Foundation, March 22, 2010, at http://www.heritage.org/Events/2010/03/Head-Start (July 19, 2010).
Muhlhausen and Lips, “Head Start Earns an F.”
Orszag, “Federal Statistics,” p. 36.
Ibid., p. 35.
Ibid., pp. 34–35.
Gregory D. Kutz, “Head Start: Undercover Testing Finds Fraud and Abuse at Selected Head Start Centers,” testimony before the Committee on Education and Labor, U.S. House of Representatives, May 18, 2010, at http://www.gao.gov/new.items/d10733t.pdf (October 21, 2010). For a discussion of Head Start fraud, see David B. Muhlhausen, “Head Start Program: Fraudulent and Ineffective,” Heritage Foundation WebMemo No. 2919, May 28, 2010, at http://www.heritage.org/Research/Reports/2010/05/Head-Start-Program-Fraudulent-and-Ineffective#_ednref2.