August 10, 2011
By David B. Muhlhausen, Ph.D.
Health, education, welfare ... the federal government spends more than $630 billion annually on hundreds of social programs. How many of them work? No one knows. And that's a problem.
Most federal programs have never been evaluated for true effectiveness. And most evaluations that are conducted – and there are many – aren't worth the paper they're written on. They may examine a national program only locally, or lack a "control group" to compare against.
The best way to determine whether such programs work is to conduct large-scale, multisite, experimental evaluations. These studies should use random assignment to compare results of people assigned to programs with those in similar circumstances but not assigned to the programs.
The good news is that federal programs are ideally situated to accommodate such evaluations. The bad news: The federal government has conducted only 13 such evaluations since it began to study itself in the 1960s.
Maybe the Feds just don't want to be purveyors of bad news. That's certainly what emerged from the 2010 Head Start Impact Study. A rigorous experimental evaluation, the study randomly assigned almost 5,000 children eligible for Head Start to treatment and control groups by lottery. Children who won the lottery got access to prekindergarten Head Start services; the others either didn't attend preschool or found alternatives to Head Start.
The study tracked the children's progress through kindergarten and the first grade. Overall, the program yielded little to no positive effects. On all 41 measures of cognitive ability, Head Start failed to raise abilities of those who entered the program as 4-year-olds. Specifically, their language skills, literacy, math skills, and school performance were no better than those of the children denied access to the program.
Those who entered as 3-year-olds had similar results. They scored no better than nonparticipants on 40 of the cognitive measures and significantly worse on one: Head Start grads, according to their kindergarten teachers, were significantly less well prepared in math skills.
The quintessential "Great Society" program, Head Start was intended to give disadvantaged children an educational boost before starting elementary school. When enacted in 1965, its $96 million budget was intended to help kids in the summer. Early, small-bore evaluations were positive, and the program grew.
Today, Head Start has a $7 billion budget and legions of invested stakeholders. But it's not working for the kids and it's awfully expensive. Even liberal Time magazine columnist Joe Klein, commenting on Head Start, recently wrote, "[W]e need world-class education programs, from infancy on up. But we can no longer afford to be sloppy about dispensing cash...."
It's past time for lawmakers to figure out just how well the programs Congress funds are working. As a first step, every time it authorizes or reauthorizes a social program, Congress should specifically mandate that the program undergo a rigorous experimental evaluation.
This is eminently doable. When Congress creates social programs, the funded activities spread out across the nation. The stage is set for a large-scale, multisite evaluation.
Unfortunately, mandating evaluations isn't the same as getting them done. Federal agencies fearful of losing funding for pet programs are expert dawdlers when it comes to performing hard-nosed evaluations.
In 1998, Congress passed the Workforce Investment Act, which authorized the Labor Department's major job-training programs. Given the past failure of these programs, Congress stipulated that the department had to complete a large-scale, multisite evaluation of its job-training efforts by September 2005.
Labor promptly procrastinated. It didn't even award a contract for the evaluation until June 2008. According to the US Government Accountability Office, the evaluation will not be completed until June 2015 – nearly a decade past the original due date.
The second step, then, is meaningful congressional oversight – and consequences. Lawmakers must be diligent in ensuring that reluctant agencies carry out any and all congressionally required program evaluations. Funding should hinge on their compliance.
Congress is morally obligated to spend taxpayer dollars effectively. Experimental evaluations are the only way to determine to a high degree of certainty the effectiveness of the government's social programs.
David Muhlhausen is a research fellow in empirical policy analysis at the Heritage Foundation.
First appeared in The Christian Science Monitor