Head Start Earns an F: No Lasting Impact for Children by First Grade


January 21, 2010

Authors: Dan Lips and David Muhlhausen

Abstract: Recently released results from the Head Start Impact Study indicate that the benefits of participating in Head Start almost completely disappear by first grade. While other studies have previously assessed Head Start's effectiveness, this is the only study that used a rigorous experimental design. Given this strongly negative evaluation, Congress should reconsider spending more than $9 billion per year on a program that produces few positive lasting effects. Furthermore, instead of creating yet another new federal preschool program at a cost of $8 billion, Congress and the Obama Administration should focus on terminating, consolidating, and reforming existing preschool and child care programs to better serve children's needs and to improve efficiency for taxpayers.

 

The federal government spent at least $25 billion on federal preschool and child care programs in 2009,[1] but President Obama has pressed for significant increases in preschool spending. The Administration approved $5 billion in new early education and child care spending in the American Recovery and Reinvestment Act. Congress may soon approve $8 billion in new spending on the Early Learning Challenge Fund in the Student Aid and Fiscal Responsibility Act (H.R. 3221), which has already passed the House of Representatives.

Before Congress creates a new preschool program and increases spending on preschool and child care, it should evaluate whether the current programs are working. Topping the list of programs to review should be Head Start, which serves approximately 900,000 low-income children at a cost of $9 billion per year. A recently released experimental evaluation by the U.S. Department of Health and Human Services found that Head Start has had little to no effect on cognitive, socio-emotional, health, and parenting outcomes of participating children. For the four-year-old cohort, access to Head Start had a beneficial effect on only two outcomes (1.8 percent) out of 112 measures. For the three-year-old cohort, access to Head Start had one harmful impact (0.9 percent) and five beneficial impacts (4.5 percent) out of 112 measures. Specifically:

  • For the 41 measures of cognitive outcomes for the four-year-old cohort, access to Head Start had no impact on any of the measures.
  • For the 41 measures of cognitive outcomes for the three-year-old cohort, access to Head Start had a harmful effect on teacher-assessed math ability in kindergarten and no impact on the other 40 measures.
  • For the 40 measures of socio-emotional outcomes for the four-year-old cohort, access to Head Start had only one beneficial effect and no impact on the other 39 measures.
  • For the 40 measures of socio-emotional outcomes for the three-year-old cohort, access to Head Start had only two beneficial effects and no impact on the other 38 measures.
  • For the 10 measures of parent-reported health outcomes for the four-year-old cohort, access to Head Start had only one beneficial effect and no impact on the other nine measures.
  • For the 10 measures of parent-reported health outcomes for the three-year-old cohort, access to Head Start had only one beneficial effect and no impact on the other nine measures.
  • For the 21 measures of parenting outcomes for the four-year-old cohort, access to Head Start had no effect on any of the measures.
  • For the 21 measures of parenting outcomes for the three-year-old cohort, access to Head Start had only one beneficial effect and no impact on the other 20 measures.

Rather than create a new federal preschool program, Congress should focus on terminating, consolidating, and reforming existing programs to serve children's needs better and to improve efficiency for taxpayers.

Head Start, 1965-Present

Created as part of the War on Poverty in 1965, Head Start is a federally funded, community-based preschool program. By providing education, nutrition, and health services, Head Start is intended to give disadvantaged children a boost before they enter elementary school. Its goal is to help disadvantaged children catch up to children living in more fortunate circumstances. From fiscal year (FY) 1965 to FY 2009, Congress spent $167.5 billion in 2009 dollars on Head Start.[2] (See Chart 1.) From FY 2000 to FY 2009, the average annual appropriation for Head Start was $7.6 billion.

Chart 1: Total Head Start Spending

 

Despite Head Start's long life, the program had never undergone a thorough, scientifically rigorous evaluation of its effectiveness until Congress mandated an evaluation in 1998. The Head Start Impact Study began in 2002, and the results released in 2010 are disappointing. Overall, the evaluation found that the program largely failed to improve the cognitive, socio-emotional, health, and parenting outcomes of children who participated compared to the outcomes of similar children who did not participate. According to the report, "the benefits of access to Head Start at age four are largely absent by 1st grade for the program population as a whole."[3]

Background on the National Evaluation

The Head Start Impact Study began in 2002 as an ongoing randomized experiment based on a nationally representative sample of Head Start programs and approximately 5,000 children who applied to participate in Head Start.[4] The sample of children applying for Head Start was randomly assigned to intervention and control groups. The intervention group participated in Head Start services, while the control group was excluded from Head Start participation. The parents of control group children were free to enroll their children in other early education programs.

Determining the impact of social programs, such as Head Start, requires comparing the conditions of those who received assistance with the conditions of an equivalent group that did not experience the intervention. Experimental evaluations in which eligible participants are randomly assigned to either intervention or control groups represent the "gold standard" of evaluation designs. Experimental evaluations are widely acknowledged to have the highest degree of internal validity. The higher internal validity means that researchers can be more certain in answering the question: Did the program have an impact on the participants? Random assignment allows the evaluator to test for differences between the experimental and control groups that are due to the intervention, not to pre-intervention discrepancies between the groups.
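To make the logic of random assignment concrete, the following minimal Python sketch works through a purely simulated example; the sample size, baseline scores, and program effect are assumptions chosen for illustration, not values from the impact study. Because assignment to the program is random, the simple difference in mean outcomes between the two groups estimates the program's effect without being contaminated by pre-existing differences.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Simulated applicant pool: baseline ability varies across children.
    # All numbers here are hypothetical and chosen only for illustration.
    n_applicants = 5000
    baseline = rng.normal(loc=100, scale=15, size=n_applicants)

    # Random assignment: roughly half the applicants to the intervention
    # group, the rest to the control group.
    in_program = rng.random(n_applicants) < 0.5

    # Assumed true program effect on a later outcome score.
    true_effect = 2.0
    outcome = baseline + rng.normal(0, 10, n_applicants)
    outcome[in_program] += true_effect

    # Because assignment was random, pre-existing differences cancel out in
    # expectation, so the simple difference in means estimates the impact.
    impact = outcome[in_program].mean() - outcome[~in_program].mean()
    print(f"Estimated impact: {impact:.2f} (assumed true effect: {true_effect})")

Re-running this sketch with different random seeds produces estimates that cluster around the assumed true effect, which is the practical meaning of the high internal validity that an experimental design provides.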

The 2010 Head Start Impact Study

Is Head Start worth more than $7 billion per year? The 2010 Head Start Impact Study found that Head Start largely failed to improve the cognitive, socio-emotional, health, and parenting outcomes of participating children compared to the outcomes of similar children who did not participate. The authors reached a disappointing conclusion:

In sum, this report finds that providing access to Head Start has benefits for both 3-year-olds and 4-year-olds in the cognitive, health, and parenting domains, and for 3-year-olds in the social-emotional domain. However, the benefits of access to Head Start at age four are largely absent by 1st grade for the program population as a whole.[5]

While the results of the 2010 study have been known to officials within the Department of Health and Human Services since the end of the Bush Administration, Congress added $1 billion to the original $7.5 billion in FY 2009 funding for Head Start with the passage of the American Recovery and Reinvestment Act of 2009.[6]

Understanding Statistical Significance

A "statistically significant" finding indicates that the effect of a particular intervention is statistically distinguishable from no effect. For example, if analysis finds that Head Start has had a statistically significant effect on a particular outcome, then social scientists can conclude with a high degree of confidence that the result was caused by the program, not by chance.

A "statistically insignificant" finding indicates that the effect of a particular intervention is, for statistical purposes, no different from zero. For example, if Head Start is found to have a statistically insignificant effect on a particular outcome, the probability that the effect was caused by chance are too great for social scientists to conclude with confidence that the program produced the effect. In other words, access to Head Start had no statistically measurable effect on the particular outcome.

The common standard among social scientists for declaring a finding statistically significant is the 5 percent significance level (p ≤ 0.05). This means that, if the program actually had no effect, a result at least this large would occur by chance no more than 5 percent of the time. Most social scientists use this rigorous standard of statistical significance because they want a high degree of confidence in their findings. Policymakers who make decisions based on social science research should also want a high degree of confidence. The 1 percent significance level (p ≤ 0.01) is an even more rigorous standard, meaning that such a result would arise by chance no more than 1 percent of the time if the program had no effect.
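As a rough illustration of how these thresholds are applied, the short sketch below compares a hypothetical intervention group with a control group using a two-sample t-test and checks the resulting p-value against the 1 percent and 5 percent significance levels. The data are simulated, and the assumed group difference is chosen purely for demonstration, not taken from the study.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=1)

    # Simulated outcome scores for a control group and an intervention group.
    # The 0.8-point mean difference is an assumption made only for illustration.
    control = rng.normal(loc=50.0, scale=10.0, size=2300)
    treated = rng.normal(loc=50.8, scale=10.0, size=2300)

    # Two-sample t-test: is the group difference distinguishable from zero?
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"p-value = {p_value:.4f}")

    for alpha in (0.01, 0.05):
        verdict = "statistically significant" if p_value <= alpha else "not statistically significant"
        print(f"At the {alpha:.0%} significance level: {verdict}")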

Sometimes, social scientists will use the less rigorous standard of 10 percent (p ≤ 0.10). Under this looser standard, social scientists are willing to risk a 10 percent chance of mistakenly concluding that the program had an effect when it really had no effect at all. The 10 percent significance standard can be justified when social scientists are analyzing small samples, such as 100 cases. Studies using small sample sizes are less likely to be sensitive enough to detect statistically significant effects at the 5 percent significance level than studies using much larger sample sizes (see Mark W. Lipsey, Design Sensitivity: Statistical Power for Experimental Research, Newbury Park, Calif.: SAGE Publications, 1990). Thus, social scientists sometimes use the less rigorous 10 percent significance level for small sample sizes. In contrast, the larger the sample size used in a study, the more sensitive the study will be in detecting statistically significant effects. For this reason, most social scientists use the 5 percent significance level when working with large sample sizes.
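The link between sample size and the ability to detect an effect can be shown with a quick simulation, sketched below under assumed effect and noise sizes rather than any reanalysis of Head Start data: the same modest true effect is flagged as significant at the 5 percent level only rarely with 50 cases per group, but almost always with samples on the scale of the impact study.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=2)

    def estimated_power(n_per_group, effect=0.2, sd=1.0, alpha=0.05, reps=2000):
        """Share of simulated trials in which an assumed true effect of the
        given size is declared statistically significant at the chosen level."""
        hits = 0
        for _ in range(reps):
            control = rng.normal(0.0, sd, n_per_group)
            treated = rng.normal(effect, sd, n_per_group)
            _, p = stats.ttest_ind(treated, control)
            if p <= alpha:
                hits += 1
        return hits / reps

    # A modest effect (0.2 standard deviations, an assumed value) is detected
    # only rarely with 50 cases per group but routinely with larger samples.
    for n in (50, 500, 2300):
        print(f"n per group = {n:>4}: estimated power at the 5 percent level = {estimated_power(n):.2f}")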

In some cases, the authors of the 2010 Head Start evaluation reported statistically significant impacts based on the 10 percent significance level (p ≤ 0.10). However, this level of statistical significance is hard to justify with a sample of 4,667 children participating in the 2010 study. Under this looser standard, the authors reported that Head Start had a few more positive impacts than they could have reported using the more commonly accepted 5 percent significance level. Yet, despite using this looser standard of statistical significance, the evaluation found few instances of positive impact.



The following is an overview of the findings of the 2010 study. For both the four-year-old and three-year-old cohorts, the 2010 study measured outcomes during kindergarten and the first grade. For the four-year-old cohort, access to Head Start had a beneficial effect on only two outcomes (1.8 percent) out of 112 measures. For the three-year-old cohort, access to Head Start had one harmful impact (0.9 percent) and five beneficial impacts (4.5 percent) out of 112 measures.

Table 1: Head Start Impacts on Cognitive Outcomes

 

Cognitive Development: Four-Year-Old Cohort. For cognitive development, the 2010 study assessed 19 kindergarten outcomes and 22 first-grade outcomes for the four-year-old cohort. (See Table 1.) For kindergarten, access to Head Start had no statistically measurable effect on nine measures of language and literacy, two measures of Spanish language and literacy, three measures of math skills, and five measures of school performance assessment at the 5 percent significance level.[7]

For the first grade, access to Head Start for the four-year-old cohort had similarly dismal results. None of the 22 first-grade cognitive outcomes showed a statistically measurable impact at the 5 percent significance level.[8] However, the authors reported a small positive and statistically significant outcome for the Peabody Picture Vocabulary Test (PPVT Adapted) outcome measure at the less rigorous 10 percent significance level. Under traditional scientific standards, this finding is considered to be statistically indistinguishable from no impact. Thus, for all 41 outcome measures for kindergarten and the first grade, Head Start failed to produce measurable impacts at the standard level of statistical significance.

Cognitive Development: Three-Year-Old Cohort. For cognitive development, the 2010 study assessed 19 kindergarten outcomes and 22 first-grade outcomes for the three-year-old cohort. (See Table 1.) For kindergarten, access to Head Start had no statistically measurable effects at the 5 percent significance level on nine measures of language and literacy, two measures of Spanish language and literacy, and three measures of math skills.[9] The negative effect of Head Start was statistically significant at the 1 percent significance level for one of the five measures of school performance assessment outcomes: "Kindergarten teachers reported poorer math skills for children in the Head Start group than for those in the control group."[10] Head Start had no statistically measurable impacts on the remaining four school assessment outcomes.[11]

For the first grade, access to Head Start for the three-year-old cohort had similarly bleak results. None of the 22 first-grade cognitive outcomes showed a statistically measurable impact at the 5 percent significance level.[12] The authors reported a small positive and statistically significant outcome at the 10 percent significance level for the Woodcock-Johnson (WJ) III Oral Comprehension outcome measure, but under traditional scientific standards this finding is considered statistically indistinguishable from no impact. Thus, for all 41 outcome measures for kindergarten and the first grade, Head Start failed to have a measurable impact at the standard level of statistical significance.

Table 2: Head Start Impacts on Socio-Emotional Outcomes

 

Socio-Emotional Development: Four-Year-Old Cohort. For socio-emotional development, the 2010 study assessed 20 kindergarten outcomes and 20 first-grade outcomes for the four-year-old cohort. (See Table 2.) For kindergarten, access to Head Start had no statistically measurable effect on nine parent-reported measures and 11 teacher-reported measures.[13]

For the first grade, access to Head Start for the four-year-old cohort had similarly underwhelming results, having no statistically measurable impact on the nine first-grade parent-reported outcomes.[14] However, at the less rigorous 10 percent significance level, the authors reported that the parents of children in the Head Start group perceived their children to be less likely to display withdrawn behavior. One of the 11 teacher-reported measures showed a statistically significant outcome at the 5 percent significance level. According to the authors, "Teachers reported that Head Start group children were more shy or socially reticent than the control group children."[15] At the 10 percent significance level, the authors reported that teachers had more problems interacting with Head Start students than with students in the control group.

Socio-Emotional Development: Three-Year-Old Cohort. For socio-emotional development, the 2010 study assessed 20 kindergarten outcomes and 20 first-grade outcomes for the three-year-old cohort. (See Table 2.) For kindergarten, access to Head Start had no statistically measurable effect on the 11 teacher-reported measures and eight of nine parent-reported measures.[16] However, parents of children with access to Head Start reported less hyperactive behavior than parents of children in the control group. This finding was significant at the 5 percent level. The authors also reported that Head Start had a positive impact on improving social skills and approaches to learning at the 10 percent significance level.

For the first grade, access to Head Start for the three-year-old cohort had similarly ineffective results. Head Start had no statistically measurable impact on the 11 teacher-reported measures and eight of the nine first-grade parent-reported outcomes. Head Start appears to have had a positive impact on parent reports of closeness with their child at the 5 percent significance level. In addition, the authors reported that Head Start improved parents' positive relationships with their children, but this finding is statistically significant only at the less rigorous 10 percent significance level.[17]

Table 3: Head Start Impacts on Parent-Reported Child Health Outcomes

 

Child Health Outcomes: Four-Year-Old Cohort. For parent-reported child health, the 2010 study assessed five kindergarten outcomes and five first-grade outcomes for the four-year-old cohort. (See Table 3.) For kindergarten, access to Head Start had no statistically measurable effect on five measures: dental care, health insurance coverage, overall health status, ongoing care needs, and receipt of care for an injury within the past month.[18] The authors reported that Head Start had small positive impacts on insurance coverage and on parents' perception of the overall health status of their child, but these findings were not statistically significant at the 5 percent significance level.

For the first grade, access to Head Start failed to affect four of the five parent-reported health outcomes.[19] While access to Head Start had no effect on dental care, overall health status, ongoing care needs, or receipt of care for an injury within the past month, Head Start had a small positive effect on health insurance coverage at the 5 percent significance level.

Child Health Outcomes: Three-Year-Old Cohort. For parent-reported child health, the 2010 study assessed five kindergarten outcomes and five first-grade outcomes for the three-year-old cohort. (See Table 3.) For kindergarten, access to Head Start had no statistically measurable effect on four of the five health measures. Access to Head Start showed a small positive effect on health insurance coverage at the 5 percent significance level. For the first grade, access to Head Start failed to affect any of the five parent-reported health outcomes.[20]

Table 4: Head Start Impacts on Parenting Outcomes

 

Parenting Outcomes: Four-Year-Old Cohort. For parenting outcomes, the 2010 study assessed 11 kindergarten measures and 10 first-grade measures for the four-year-old cohort. (See Table 4.) For kindergarten, access to Head Start had no statistically measurable effect on the nine measures reported by parents and the two measures reported by teachers. The trend continued in the first grade, with access to Head Start failing to have a statistically measurable impact on any of the 10 measures.[21]

Parenting Outcomes: Three-Year-Old Cohort. For parenting outcomes, the 2010 study assessed 11 kindergarten measures and 10 first-grade measures for the three-year-old cohort. (See Table 4.) For kindergarten, access to Head Start had no statistically measurable effect on eight of the nine measures reported by parents and the two measures reported by teachers.[22] However, parents of children with access to Head Start were less likely to use a "time out" in the past week. The negative effect of this outcome was small, but statistically significant at the 5 percent significance level. The authors reported that access to Head Start had a small negative impact on parents spanking their children at the 10 percent significance level.

For the first grade, access to Head Start failed to have an impact on seven of the eight parent-reported measures of parenting.[23] Parents of children with access to Head Start were less likely to report using an authoritarian parenting style. The negative effect of this outcome was small, but statistically significant at the 5 percent significance level. The authors reported that parents of children with access to Head Start were less likely to use a "time out" within the past week. However, this finding is statistically significant at only the 10 percent significance level. On the two measures of teacher-reported perceptions of parenting, access to Head Start failed to have statistically measurable impacts.

Attempts to Undercut the Study Findings

Some may argue that other research that directly assessed Head Start's performance shows that the program is effective. Research based on the Head Start Family and Child Experiences Survey (FACES) found that Head Start children made gains in vocabulary, math, and writing skills during the Head Start program year.[24] However, the research design of FACES is inadequate for determining the program's effectiveness.

Without a control group, FACES assesses the academic skills of Head Start children at the start and end of the program year. In the scientific literature, this evaluation design is called the one-group pretest-posttest design. This design has poor internal validity because of its inability to rule out rival hypotheses that may have caused the gains.[25]

First, the changes in the outcome measures may be the result of factors acting independently between the pretest and posttest. The gains could be a result of some parents more actively teaching their children at home. In the scientific literature, this threat to internal validity is called history.

Second, the FACES design cannot account for the fact that the cognitive abilities of children naturally develop with age. This internal validity threat, called maturation, means that the observed gains found in the FACES research are also likely to be strongly influenced by the natural biological and psychological development of children. Without a control group, the FACES design cannot separate the effect of maturation from the effect of the program in the measured outcomes.

Third, the FACES design is susceptible to the internal validity threat of testing. The testing threat occurs when the effect of initially taking a pretest influences the results of the posttest. After the initial student assessment at the start of the Head Start year, children may adapt and learn how to perform better on the year-end test. In essence, the lack of a control group means that FACES research cannot determine whether the children became better test takers on their own or whether the program actually helped them improve their academic skills.

On the other hand, the experimental design of the 2010 Head Start Impact Study rules out the influences of history, maturation, and testing. The use of random assignment and a control group equally distributes the potential influences of these threats between the intervention group and control group. Therefore, these potential threats to internal validity should not affect the results of the Head Start Impact Study.
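A short simulation, sketched below with assumed numbers for maturation and the program effect (not estimates from FACES or the impact study), illustrates why this matters: a one-group pretest-posttest comparison folds natural maturation into its measured "gain," while the experimental contrast between randomly assigned groups isolates the program effect.

    import numpy as np

    rng = np.random.default_rng(seed=3)

    # All quantities below are assumptions chosen for illustration only.
    n = 2000
    true_program_effect = 1.0   # assumed effect of attending the program
    maturation_gain = 8.0       # assumed natural gain over the year for all children

    pretest = rng.normal(40.0, 10.0, n)

    # Randomly split the sample into a program group and a control group.
    in_program = rng.random(n) < 0.5

    # Every child matures; only the program group also receives the program effect.
    posttest = pretest + maturation_gain + rng.normal(0.0, 5.0, n)
    posttest[in_program] += true_program_effect

    # One-group pretest-posttest design (FACES-style): program children only.
    naive_gain = (posttest[in_program] - pretest[in_program]).mean()

    # Experimental design: compare the program group with the randomized control group.
    experimental_estimate = posttest[in_program].mean() - posttest[~in_program].mean()

    print(f"Pretest-posttest 'gain' (program effect plus maturation): {naive_gain:.1f}")
    print(f"Experimental estimate (program effect only):              {experimental_estimate:.1f}")
    print(f"Assumed true program effect:                              {true_program_effect:.1f}")

In this sketch the pretest-posttest "gain" is dominated by the assumed maturation, while the difference between the randomized groups recovers the assumed program effect, which is the essential point of the comparison above.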

Another argument offered to undercut the 2010 study's kindergarten and first-grade findings is that the program produces gains, but those gains fade out due to Head Start students attending poorly performing elementary and middle schools. This assumption is based on research by Professors Valerie E. Lee of the University of Michigan and Susanna Loeb of Stanford University. They used the National Education Longitudinal Study (NELS) of 1988 to assess the quality of middle schools attended by eighth graders who attended Head Start, attended other preschool programs, or did not attend preschool.[26] Using a nationally representative sample of all eighth graders, Professors Lee and Loeb found that former Head Start participants attended lower-quality schools compared to the schools attended by students who had attended other preschool programs or did not attend preschool programs. However, the finding that Head Start students go on to attend worse schools than other students is not surprising. Children living in impoverished, socially disorganized neighborhoods are more likely than children in wealthier neighborhoods to attend lower-performing schools.

The suggestion that this finding explains why the 2010 Head Start Impact Study found no effect on kindergarten and first-grade academic achievement is dubious. The fact that former Head Start students attend poorly performing schools should not affect the results of the experimental evaluation because the evaluation assembled similarly situated children and randomly assigned them to intervention and control groups. Random assignment establishes equivalency on pre-existing differences between the intervention and control groups (the groups have similar socioeconomic backgrounds). Because the intervention and control groups are equivalent at the outset, it is highly unlikely that the schools attended by the intervention group after participation in Head Start were systematically worse than the schools attended by the control group. For this argument to hold any credence, one must assume that children in the intervention group were systematically sorted into worse schools than members of the similarly situated control group. If such sorting did in fact occur, that negative result for the intervention group would itself be attributable to attending Head Start.

The Forthcoming Third-Grade Impact Study

Following this new impact evaluation of Head Start's effect on kindergarten and first-grade students, the national evaluation is designed to continue following students' performance through the end of third grade. The results of the forthcoming third-grade impact evaluation will shed further light on the question of whether Head Start is effective and provides lasting benefits to participating students.

Members of Congress should request that the Department of Health and Human Services complete this third-grade evaluation in a timely fashion and present the findings to Congress and the public immediately upon completion. There is reason to believe that the 2010 study of first-grade students was not completed or published in a timely fashion.[27] According to the report, data collection for the kindergarten and first-grade evaluation was completed in 2006, nearly four years before its results were made public. For the national impact evaluation of third-grade students, data collection was conducted during the springs of 2007 and 2008.[28] Results from this important third-grade follow-up evaluation should be published as soon as possible.

Taxpayers are spending considerable sums on Head Start and other early childhood education programs. Policymakers should be basing their decisions about Head Start and other preschool programs on the most useful and up-to-date empirical evidence possible.

What Members of Congress and the Administration Should Do

President Barack Obama has declared that he is willing to eliminate "government programs shown to be wasteful or ineffective." Further, he has asserted that "there will be no sacred cows, and no pet projects. All across America, families are making hard choices, and it's time their government did the same."[29] President Obama was correct to call for placing wasteful and ineffective programs on the chopping block. Given that scientifically rigorous research demonstrates that Head Start is ineffective, Head Start is an ideal candidate for the budget chopping block.

If Head Start is not terminated, Congress and the Obama Administration should reform the program (and other federal early childhood education programs) to improve their impact for targeted students and to increase efficiency for federal and state taxpayers. In 2005, the Government Accountability Office (GAO) identified 69 federal programs that provide support for pre-kindergarten and child care. According to a conservative estimate, the federal government spent more than $25 billion on these programs in FY 2009.[30]

Despite these existing programs and the new empirical evidence confirming Head Start's ineffectiveness, Congress and the Obama Administration may soon authorize $8 billion in new funding for the Early Learning Challenge Fund, part of the Student Aid and Fiscal Responsibility Act that passed the U.S. House of Representatives in September. The Early Learning Challenge Fund would award competitive grants to states that expand early childhood education programs.[31]

Rather than create a new federal preschool program, Congress should focus on reforming and improving the existing federal programs for early childhood education. Congress should:

  • End ineffective programs and consolidate duplicative programs.
  • Reform the remaining federal early childhood education and child care programs to serve children better. Congress could accomplish this in a number of ways. For example, the Head Start program could be reformed to grant families greater ability to use their children's $7,300 share of Head Start funding to enroll in a preschool program of choice. In addition, states should be granted more autonomy in how they use funding for Head Start and other federal early childhood education and child care programs to benefit students. Across the country, many states are enacting early childhood education programs. States should be granted the flexibility and autonomy to consolidate and coordinate federal and state programs to best meet students' needs.
 
 
 

Conclusion

Since 1965, the federal government has sought to improve early educational opportunities for disadvantaged children through the Head Start program, spending more than $167 billion of taxpayers' money on Head Start. Head Start currently serves approximately 900,000 children at an annual cost of at least $7,300 per child.

In the 1990s, Congress mandated an evaluation of Head Start's effectiveness. In 2010, the Department of Health and Human Services finally released the results of the impact evaluation of first-grade students. Overall, the evaluation found that the program largely failed to improve the cognitive, socio-emotional, health, and parenting outcomes of children who participated in Head Start compared to the outcomes of similar children. According to the report, "the benefits of access to Head Start at age four are largely absent by 1st grade for the program population as a whole." Head Start's disappointing results cast doubt over the effectiveness of federal preschool interventions and highlight the need to review the effectiveness of the federal government's current 69 preschool and child care programs.

These results should be of importance to Members of Congress and the Administration. However, the Administration has called for significant increases in federal spending on preschool, and the House of Representatives has already passed legislation to create an $8 billion preschool program.

Rather than create a new federal preschool program, Congress should focus on terminating, consolidating, and reforming existing programs to serve children's needs better and to improve efficiency for taxpayers.

David B. Muhlhausen, Ph.D., is Senior Policy Analyst in the Center for Data Analysis and Dan Lips is Senior Policy Analyst in Education in the Domestic Policy Studies Department at The Heritage Foundation.

[1]Dan Lips, "Reforming and Improving Federal Preschool and Child Care Programs Without Increasing the Deficit," Heritage Foundation Backgrounder No. 2297, July 13, 2009, at http://www.heritage.org/Research/Education/bg2297.cfm.

 

[2]U.S. Department of Health and Human Services, "Head Start Program Fact Sheet," at http://www.acf.hhs.gov/programs/ohs/about/fy2008.html (January 14, 2010).

[3]U.S. Department of Health and Human Services, Administration for Children and Families, Head Start Impact Study: Final Report, p. xxxviii, at http://www.acf.hhs.gov/programs/opre/hs/impact_study/reports/impact_study/hs_impact_study_final.pdf (January 15, 2010).

[4]U.S. Department of Health and Human Services, Administration for Children and Families, Head Start Impact Study: First Year Findings, June 2005, at http://www.acf.hhs.gov/programs/opre/hs/impact_study/reports/first_yr_finds/first_yr_finds.pdf (January 15, 2010).

[5]U.S. Department of Health and Human Services, Head Start Impact Study: Final Report, p. xxxviii.

[6]Public Law 111-5.

[7]U.S. Department of Health and Human Services, Head Start Impact Study: Final Report, pp. 4-10 to 4-13, Exhibit 4.2.

[8]Ibid.

[9]Ibid., pp. 4-21 to 4-25, Exhibit 4.5.

[10]Ibid., p. 4-26.

[11]Ibid., pp. 4-21 to 4-25, Exhibit 4.5.

[12]Ibid.

[13]Ibid., pp. 5-4 to 5-6, Exhibit 5.1.

[14]Ibid.

[15]Ibid., p. 5-3.

[16]Ibid., pp. 5-8 to 5-10, Exhibit 5.2.

[17]Ibid.

[18]Ibid., pp. 6-3 to 6-4, Exhibit 6.1.

[19]Ibid.

[20]Ibid., pp. 6-6 to 6-7, Exhibit 6.2.

[21]Ibid., pp. 7-4 to 7-5, Exhibit 7.1.

[22]Ibid., pp. 7-8 to 7-10, Exhibit 7.2.

[23]Ibid.

[24]Nicholas Zill, Alberto Sorongon, Kwang Kim, Cheryl Clark, and Maria Woolverton, "Children's Outcomes and Program Quality in Head Start," U.S. Department of Health and Human Services, Administration for Children and Families, FACES 2003 Research Brief, December 2006, at http://www.acf.hhs.gov/programs/opre/hs/faces/reports/research_2003/research_2003.pdf (January 12, 2010).

[25]Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Boston: Houghton Mifflin Company, 1963).

[26]Valerie E. Lee and Susanna Loeb, "Where Do Head Start Attendees End Up? One Reason Why Preschool Effects Fade Out," Educational Evaluation and Policy Analysis, Vol. 17, No. 1 (Spring 1995), pp. 62-82.

[27]Dan Lips, "Politicizing Preschool," Fox News, December 28, 2009, at http://www.foxnews.com/opinion/2009/12/29/dan-lips-heritage-preschool-head-start-politics (January 19, 2010).

[28]U.S. Department of Health and Human Services, Administration for Children and Families, "Head Start Impact Study and Follow-Up: Overview," at http://www.acf.hhs.gov/programs/opre/hs/impact_study/imptstudy_overview.html (January 14, 2010).

[29]Barack Obama, "President Obama Discusses Efforts to Reform Spending, Government Waste; Names Chief Performance Officer and Chief Technology Officer," The White House, April 18, 2009, at http://www.whitehouse.gov/the_press_office/Weekly-Address-President-Obama-Discusses-Efforts-to-Reform-Spending (January 15, 2010).

[30]Marnie Shaul, "GAO Update on Prekindergarten Care and Education Programs," letter to Senators Michael B. Enzi, Lamar Alexander, and George V. Voinovich, June 2, 2005, at http://www.gao.gov/new.items/d05678r.pdf (July 6, 2009).

 

[31]Lindsey Burke, "The Early Learning Challenge Fund: Increased Federal Role in Early Education," Heritage Foundation WebMemo No. 2643, October 6, 2009, at http://www.heritage.org/Research/Education/wm2643.cfm.

Authors

Dan Lips
Former Senior Policy Analyst

David Muhlhausen
Research Fellow in Empirical Policy Analysis