A Guide to the NAEP Academic Achievement Test

March 15, 2001

While presenting his budget before a joint session of Congress on February 27, President George W. Bush declared that "[m]easuring is the only way to know whether all our children are learning." Echoing the principles articulated in his education blueprint, No Child Left Behind, the President went on to outline his plan to infuse accountability into federal education spending. For the first time, states would be required to demonstrate annual increases in academic achievement, especially among disadvantaged students and English-as-a-second-language learners.

President Bush sees testing children as the way to verify that states are truly improving achievement. Under his plan, state-developed tests would be used to measure the success of states and their schools.

In order to confirm state progress on state assessments, the plan calls for an annual sampling of 4th and 8th grade students on the National Assessment of Educational Progress (NAEP) in math and reading. But what exactly is the NAEP test? How is it administered? Who runs it? And how would it have to be retooled to fit the President's plan?

WHAT IS THE NAEP?

Commonly known as "The Nation's Report Card," the NAEP was first administered in 1969. The examination measures the academic achievement of 4th, 8th, and 12th grade students. Assessments developed by the 26-member National Assessment Governing Board (NAGB) are used to test reading, writing, mathematics, science, geography, civics, the arts, and other fields. Tests in math and reading are given more often than those for other subjects.

Unlike other tests, the NAEP does not provide information about a particular school's or student's performance. Rather, it is designed to provide a general picture of the levels of skill and knowledge among students nationwide or in a particular state. Only a small sample of students is tested, and no student takes the entire test. The scores of individual students and schools are not released.

In other words, the NAEP can reveal whether students in a particular state are reading at a proficient level as determined by a NAEP standard. It can show how states rank when these statistics are compared. But it cannot show whether a particular student is reading proficiently or how his or her school compares to other schools.

The NAEP provides a rich database on educational performance and student background. From the test, data on such factors as teacher qualifications, socioeconomic status, computer usage, hours spent watching television, reading habits, and other demographic and school information can be gleaned. Such information is valuable to education reform groups because researchers are able to isolate factors, such as the number of reading materials in the home, that correlate to higher achievement.

The NAEP is a criterion-referenced test. This means it is designed to show how well students have mastered a defined body of knowledge and skills according to specified criteria. Scores are reported in three achievement levels: basic, proficient, and advanced.

Other well-known tests, such as the Iowa Test of Basic Skills (ITBS), Stanford 9 (SAT-9), and Scholastic Aptitude Test (SAT), are norm-referenced, meaning they measure how well a student knows the content compared with a representative sample of students. The results of these tests are therefore reported as a percentile rank: A student in the 90th percentile, for example, has scored higher than 90 percent of his peers.
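
To make the percentile idea concrete, the short sketch below computes a percentile rank from a small set of invented scores. It is purely illustrative: the scores, the norm group, and the function name are assumptions for this example and are not drawn from the NAEP or any actual test.

    # Illustrative sketch only: percentile rank on a norm-referenced test,
    # computed from a small, made-up norm group (not actual test data).
    def percentile_rank(score, norm_group):
        """Return the percent of the norm group scoring below the given score."""
        below = sum(1 for s in norm_group if s < score)
        return 100 * below / len(norm_group)

    norm_group = [52, 61, 64, 70, 73, 75, 78, 81, 85, 93]  # hypothetical scores
    print(percentile_rank(85, norm_group))  # 80.0 -> higher than 80 percent of peers

A criterion-referenced test like the NAEP, by contrast, compares each score against fixed cut points (basic, proficient, advanced) rather than against other students' scores.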

HOW IS THE TEST ADMINISTERED?

Three types of NAEP tests are used to measure academic achievement at the national and state levels. They are administered using separate examinations, samples of students, and data collection procedures.

  • The "Main" NAEP and "Long-term" NAEP are administered to national samples of students. Generally, subjects are not tested more often than every four years. The Main national test measures academic achievement nationwide based on current trends in curricula and education practices according to the National Assessment Governing Board. The Long-term NAEP does not change from year to year because its purpose is to show trends spanning the 30 years of the program; thus, questions on this test are the same every year it is administered.

  • State NAEP tests are given to samples of students within participating states. The sample size is 2,500 students per subject per grade. Although the test is paid for with federal funds, states pick up the extra cost to train teachers and bring in additional personnel. State NAEP tests allow states to compare themselves with other states using a uniform test developed by the NAGB. In 2000, 41 states participated in the state NAEP testing program.

NAEP state testing was introduced in part because the states' own assessments differ from one another, making cross-state comparisons difficult. Over the years, the NAEP has influenced state tests, which have moved toward greater alignment with the NAEP in content, rigor, and testing methodology. In addition, poor NAEP results have triggered teaching changes in some states. In 1994, for instance, California ranked last in 4th grade reading proficiency. The legislature responded by enacting the California Reading Initiative, which replaced discredited whole language reading instruction with research-based teaching methods.

WHO RUNS THE NAEP?
The NAEP is overseen by the National Assessment Governing Board, which sets policy for the assessment. Created by Congress in 1988, the board is composed of a bipartisan group of governors, state legislators, local and state school officials, educators, business representatives, and members of the public. Its 26 members are appointed by the U.S. Secretary of Education after nomination by the board. The term of service is three years, and members may not serve more than two terms.

According to the authorizing statute, the Improving America's Schools Act of 1994 (Public Law 103-382), the duty of the board is "to develop assessment objectives and test specifications through a national consensus approach which includes the active participation of teachers, curriculum specialists, local school administrators, parents, and concerned members of the public." Groups that represent the categories specified in the law (for example, the Republican Governors Association and the National Education Association) make recommendations to the board, as do Members of Congress, education policy organizations, and others.

In addition, to ensure diversity of membership, the statute states that "the Secretary and the Board shall ensure at all times that the membership of the Board reflects regional, racial, gender, and cultural balance and diversity and that the Board exercises its independent judgment, free from inappropriate influences and special interests."

The authorizing statute includes several provisions designed to ensure that the NAEP is not used as a federal test or to undermine student privacy. The public has access to data, questions, and test instruments. It is illegal to disclose assessment data that would identify individuals or individual schools.

The standards that describe what students should know and be able to do are designed to constitute a nationally accepted base of necessary knowledge and skill. The framework and the assessments are developed through a consensus approach involving teachers, curriculum experts, policymakers, business representatives, and members of the public.

HOW WOULD THE PRESIDENT'S PLAN CHANGE THE NAEP?

President Bush would use the NAEP to confirm the progress shown in state tests. In order to perform the task envisioned in his education plan, there would have to be changes in the way the NAEP is administered. For example:

  • The Bush plan calls for the NAEP to be administered every year and for the results to be reported annually. Currently, it takes around 18 months for results to be released. Expansion could raise the NAEP's overall cost from $40 million to $110 million per year.

  • Because the NAEP uses sampling techniques, much as an opinion poll does, there is a margin of error of two or three scale points. On an annual test, small achievement gains could be within the margin of error and therefore not a reliable short-term indicator of progress by a state, as the sketch following this list illustrates. This problem would have to be addressed.

  • Concerns about the NAEP's indirect influence on the content of state tests also would have to be addressed. Critics of the NAEP and other national tests worry that linking test results to federal funds would push states toward a national curriculum.

  • The NAEP is not completely independent of the Department of Education, and this raises the risk of politicization. While the National Assessment Governing Board sets policy, NAEP operations remain within the department. For instance, the selection of contractors to write and administer the test is the responsibility of the Commissioner of Education Statistics. To preserve the NAEP as an independent test, the NAGB would need to be given authority over operations.
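
The rough sketch below illustrates why small annual gains can be lost in sampling error. It assumes, for illustration only, that each year's state average carries a margin of error of about three scale points (per the bullet above) and that the two years' samples are independent; the function and figures are hypothetical, not actual NAEP calculations.

    # Rough sketch: does a year-to-year gain exceed the combined margin of
    # error of the two sampled averages? All numbers are illustrative.
    import math

    def gain_is_detectable(gain, margin_year1, margin_year2):
        """True if the gain exceeds the combined margin of error of two
        independent estimates (margins combined in quadrature)."""
        combined = math.sqrt(margin_year1**2 + margin_year2**2)
        return gain > combined

    # Assume each year's state average has a margin of error of ~3 points.
    print(gain_is_detectable(2.0, 3.0, 3.0))  # False: a 2-point gain is within the noise
    print(gain_is_detectable(5.0, 3.0, 3.0))  # True: a 5-point gain stands out

On these assumptions, a two-point gain from one year to the next could not be distinguished from sampling noise, which is why small short-term changes are an unreliable indicator of state progress.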

CONCLUSION
Improving achievement has been the unrealized goal of federal education programs for decades. Under President Bush's education plan, No Child Left Behind, federal funding would be linked to whether states actually succeed in this endeavor, as measured by the NAEP test. Ronald Reagan once said, "Trust but verify," and the Bush Administration is seeking to use the test to confirm state achievement trends.

The NAEP currently provides a dynamic source of state and national achievement data. Consistency in sampling, privacy protections, security against fraud, and comparability make it a useful testing instrument. However, the test has its limitations, and there are concerns about its possible impact on teaching and local control. To restructure the NAEP to fit its proposed new role, lawmakers must address the limitations and concerns.

Krista Kafer is an Education Policy Analyst at The Heritage Foundation.