Comparing Math Scores of Black Students in D.C.'s Public and Catholic Schools


October 7, 1999
Kirk Johnson
Visiting Fellow
Do children attending private and parochial schools score higher than their counterparts of similar background on tests that measure cognitive skills? Numerous studies show that minority children in public schools are not making substantial gains in achievement levels on standardized tests.2 Clearly, the widely accepted belief that nonpublic schools produce students with superior cognitive skills3--and thus higher scores on achievement tests--is partly fueling the movements in Milwaukee, Cleveland, and Florida to offer school vouchers and greater educational choices to parents in poor school districts.4 Should a critical study comparing students in public and parochial schools find no difference in achievement levels on standardized tests, then the argument for vouchers and educational choice would be weakened.5

To test whether differences in academic performance exist between public and private schools, this report analyzes the math scores from the 1996 National Assessment of Educational Progress (NAEP) test taken by African-American fourth- and eighth-grade students in the District of Columbia's public and Catholic schools. The District of Columbia's NAEP database provides researchers with the only statistically representative, city-level data that include socioeconomic characteristics along with the test scores. At the same time, African-American children comprise the only ethnic sample in D.C. large enough to sustain a statistical analysis. Catholic schools were chosen because their students make up the largest single group of private school students in D.C.

This is the first study of its kind on students in a major U.S. city.6 It extends the work of other researchers in the field of academic achievement by analyzing a sizable minority population residing in the District of Columbia.

Major Findings

  • The typical, or average,7 African-American eighth-grader in a D.C. Catholic school performs better in math than 72 percent of his or her public school peers.

  • Both fourth- and eighth-grade Catholic school students outperform their public school peers in math achievement, and the gap widens between the fourth and eighth grades. Fourth-grade Catholic school students scored 6.5 percent higher than their public school counterparts, compared with an 8.2 percent advantage for eighth graders. (See Charts 1 and 2.)

Background

Since the release of the well-known study on high school achievement by James Coleman, Thomas Hoffer, and Sally Kilgore (the Coleman report) in the early 1980s,8 both academics and policy analysts have vigorously debated the differences in academic achievement between public and private school students across America. While advocates of private schooling argue that superior teaching yields superior students, opponents of choice contend that the observed differences arise from self-selection. That is, parents of higher socioeconomic status (SES) may be better able to afford private schooling and are likely to be better educated themselves.9 One critic, former Wisconsin state school superintendent Herbert Grover, describes this sentiment succinctly: "Do private school children outperform children in public schools? It's hard to imagine that they wouldn't, given the initial advantages they enjoy from their parents."10

A growing number of researchers have addressed this criticism by integrating more SES and family background characteristics into their models.11 The Heritage model developed for this study is no exception. In the model described below, we analyze factors that might explain a student's academic achievement. These include stability in the child's home life, measured by whether the child changed schools and whether the child lives in an intact family; the amount of reading material in the home; and whether the child's mother attended college.12 Finally, the economic status of the child's surrounding neighborhood is included, since achievement, on average, may be influenced by locational effects.

The following analysis is divided into three parts: 1) the rationale for choosing the specific geographic pool and database, 2) the methodology, and 3) the results of the statistical model.

Why Washington, D.C.?

Three factors make D.C. an important locale for this study. The first is the paradox of high inputs and poor outcomes: the District's public school system boasts one of the highest per-pupil expenditures and lowest student-teacher ratios in the nation,13 yet its graduation rate for public school students is near the bottom of the list compared with other states,14 and the overall academic achievement of its public school students has been substantially lower than in other states.15 With all the educational resources flowing into the District, its students might be expected to do better than their counterparts in other parts of the nation, but typically they do not.

A second major reason that D.C. is an appropriate location for this research is its demographic composition. The nation's capital has a high percentage of non-white residents, offering unique insights into a school system that primarily serves minorities.

Finally, and most important, Washington, D.C., is the only U.S. city that may be analyzed in this fashion using the available data. The NAEP database is rich with information on both test scores and accompanying demographic and family-background characteristics. These socioeconomic variables are critical in exploring the issue of academic achievement. The Department of Education's National Center for Education Statistics sharply restricts access to the NAEP data: Only licensed users who agree not to publish any results that might identify individual school districts are allowed access.16 Thus, researchers are not permitted to subdivide state data by cities, even though it would be relatively easy to do so. Because D.C. is treated on equal footing with the states in the NAEP data collection procedures, however, results are reported for Washington, D.C., and therefore can be analyzed by socioeconomic characteristics.

Why Catholic Schools?

Several factors make Catholic schools appropriate for comparison with public schools. First, they represent the single largest group of private schools in the nation, with about 7,000 elementary and middle schools educating some two million children.17 Second, Catholic schools educate a sizable number of non-Catholics. In Washington, D.C., for example, 52.1 percent of Catholic elementary school children are not Catholic.18 In inner cities, Catholic schools historically have helped underprivileged families through tuition assistance programs and similar initiatives. As a consequence, the proportion of minority students in these schools, especially Hispanics and African-Americans, has increased over time.19

Currently, black enrollments in Washington represent 79.5 percent of the total Catholic elementary school population.20 This, coupled with the demographic composition of Washington, signifies that Catholic school students are increasingly indistinguishable from their public school peers.21 This is an important factor to underscore in the analysis below. The similar demographic makeup of Catholic and public schools greatly facilitates the use of parallel comparisons. Because this paper is interested only in the differences between student achievement in Catholic and public schools, data for other nonpublic, non-Catholic schools were eliminated from the database.22

Data Selection Criteria

There was a conscious decision to limit the model to African-American children for two reasons. The first is statistical accuracy. The proportion of non-African-American children living in Washington, D.C., is small. The total number of children in the fourth (age 9) and eighth (age 13) grades in Washington, D.C. (the grade levels analyzed below) is 5,860 and 4,995, respectively. Of those children, fewer than one in four are not African-American.23 Further, because this analysis compares Catholic with non-Catholic school children, using other racial and ethnic groups might also prove statistically problematic: Only about one out of every five Catholic students is not African-American.

To put this in perspective, non-African-American Catholic students represent barely 6 percent of the NAEP sample.24 By contrast, there are some 2,300 African-American students in the fourth-grade NAEP District of Columbia sample, and well over a hundred of these children are in Catholic schools. Only 25 white Catholic school children in the fourth grade were sampled, and similarly schooled Hispanic students account for only 20 children in the fourth-grade sample. Although these 25 white and 20 Hispanic observations might yield results, their statistical reliability would immediately be questioned if those results were released.

The second reason for limiting the sample is a data reporting constraint. The license authorizing The Heritage Foundation to use the NAEP data requires that we not release results from any research that identifies any individual student or school. Because the District is overwhelmingly African-American, the vast majority of observations in the database are from these students (over 75 percent for the eighth-grade database). This is not a problem for the African-American sample, but it does raise serious statistical issues for all the other racial and ethnic groups, particularly for non-African-American Catholic school students (as noted in the discussion above). Reporting model output for whites, Hispanics, and Asians puts Heritage at risk of violating its license. We have therefore chosen to analyze only African-American students.

Methodology

As noted above, the model described in this study relies on data in the National Assessment of Educational Progress District of Columbia database. The NAEP tests are given to students in the fourth, eighth, and twelfth grades biennially and alternately for math and reading. That is, math and reading skills are assessed on an alternate cycle: The math assessment was administered in 1992 and 1996; the reading assessment was administered in 1990, 1994, and 1998 (at the completion of this study, the 1998 restricted-use database had not been released). Test scores for D.C. public and nonpublic school students in the twelfth grade were not used in this study because data on twelfth graders are included in the national survey, but are not available for individual states, including the District of Columbia.

Since the early 1990s, NAEP's oversight committee, the National Assessment Governing Board, has gathered data on a state-by-state level to supplement the national assessment that has been administered since the 1960s. This information allows for improved analysis of differences in achievement levels across participating jurisdictions, so researchers can now make statistically valid inferences at the state as well as the national level.

The data used for this paper are from the 1996 Washington, D.C., NAEP math survey.25 The database not only contains information on test scores, but also includes questionnaire responses on family status and other characteristics, such as reading materials at home, time spent on homework, and whether or not students have changed schools recently.

This paper analyzes the differences in math test scores between Catholic and public schools by examining the composite NAEP math score for each sampled child. The following attributes of the child are held constant: attendance at a Catholic school, education of child's mother,26 family status (one or two parents in the home), number of reading materials in the home (such as newspapers, magazines, books, and encyclopedias), median income within the school,27 and whether or not the child has changed schools within the previous two years.28 Employing these factors in the model addresses the concerns raised earlier in this paper and allows for a fair and balanced comparison of children across the District.
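To make the structure of this comparison concrete, the sketch below shows a simplified version of the kind of regression just described. It is illustrative only: the variable names (math_score, catholic, mother_some_college, and so on) are hypothetical placeholders rather than actual NAEP field names, and the sketch omits the plausible-value and jackknife-weighting machinery described in Appendix A.

```python
# Illustrative sketch of the specification described above, not the actual
# Heritage model. Column names are hypothetical placeholders; "weight" stands
# in for the NAEP sampling weight.
import pandas as pd
import statsmodels.formula.api as smf

def fit_math_model(df: pd.DataFrame):
    # Composite math score regressed on Catholic school attendance and the
    # family-background controls listed in the text.
    formula = (
        "math_score ~ catholic + mother_some_college + two_parent"
        " + reading_materials + school_median_income + changed_schools"
    )
    return smf.wls(formula, data=df, weights=df["weight"]).fit()
```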

Results of the Study29

Chart 1 shows that, with other relevant factors held constant, fourth-grade African-American Catholic school students in D.C. score 6.5 percent higher on the NAEP math test than their public school counterparts. This gap expands to 8.2 percent for eighth graders, as shown in Chart 2. What is observed, then, is a widening difference in outcomes. This finding is consistent with the educational choice literature, which suggests that over time students in private and parochial schools may continue to outperform their public school counterparts. For example, researchers Paul Peterson of Harvard University and Jay Greene of the University of Texas at Austin noted that in Milwaukee, "attendance at a choice school for three or more years enhances academic performance, as measured by standardized math and reading test scores."30

Even more striking is the relative impact of Catholic schools compared with the other variables in the model. Taken separately, the statistical effect of Catholic schooling is more important than a family's income or belonging to an intact family.

Charts 3 and 4 demonstrate the difference in achievement across several factors that affect math scores for the median child.31 For fourth-grade students in the nation's capital, attending a Catholic school has almost four times the effect on standardized math scores of living in an intact, two-parent home, and ten times the effect of attending a slightly more affluent school.32 For eighth-grade students, being in a Catholic school has nearly twice the effect of having a mother who attended at least some college.

The effect of staying at the same school is also interesting. Because moving from school to school tends to disrupt a child and his or her learning routine, it is not surprising that there is a drop in the NAEP math score for children who move. On average, changing schools within the two years before the exam leads to a 4.2 percent and 3.5 percent drop in math scores for fourth and eighth graders, respectively. The positive Catholic school achievement gap documented above also has serious implications for how Catholic schooling compares with other school reform efforts, including such well-known approaches as reducing class size.

Eighth-Grade Catholic School Students Outscore 72 Percent of Public School Students

Academics often use statistical techniques to estimate the relative effects of independent factors in their quantitative models; expressing those effects in standard units allows the effect of a factor in one study to be compared with a different factor in another study. For example, Frederick Mosteller released a groundbreaking study in 1995 on the effect of reducing first-grade class sizes from a student-teacher ratio of 25:1 to 15:1.33 He determined that, in mathematics achievement, the average first grader in a 15:1 classroom will outscore nearly 63 percent of his or her peers in 25:1 classrooms.

Our study shows that the effect of Catholic schooling is more dramatic. The average Washington, D.C., eighth-grade Catholic school student outscored nearly 72 percent of his or her public school peers in mathematics achievement in 1996. Performing a similar analysis on the fourth grade results demonstrates that the average Catholic student outscores some 65 percent of his or her public school peers.34
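The translation from an effect measured in standard deviations to the share of peers outscored assumes a roughly normal score distribution (see footnote 35, which reports a 0.58 standard deviation Catholic school effect for the eighth grade). A minimal sketch of that conversion:

```python
# Convert a standard-deviation effect into the "outscores X percent of peers"
# figures cited above, assuming approximately normally distributed scores.
from scipy.stats import norm

catholic_effect_sd = 0.58    # eighth-grade Catholic school effect (footnote 35)
mosteller_effect_sd = 0.32   # Tennessee class-size effect, first grade (footnote 35)

print(norm.cdf(catholic_effect_sd))   # ~0.72: outscores about 72 percent of peers
print(norm.cdf(mosteller_effect_sd))  # ~0.63: outscores about 63 percent of peers
```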

In short, the effect of Catholic schooling on achievement cannot be overstated. Catholic schooling in Washington, D.C., has a greater effect on students than decreasing class size. Researchers can make such comparisons by examining test score distributions and expressing effects in terms of standard deviations.35

CONCLUSION

This analysis uses the database of the National Assessment of Educational Progress to demonstrate the differences in mathematics achievement between African-American Catholic and public school students in Washington, D.C., in 1996. The major finding is that Catholic school students outperform their public school peers, and that the statistically significant difference in fourth-grade math scores expands by the eighth grade. Such results are consistent with other Catholic school research over the past few decades.36 Possibilities for future research include developing a comparable model for the NAEP reading data; as of the printing of this paper, the restricted-use state NAEP reading files had not been released.

Kirk A. Johnson, Ph.D., is a Policy Analyst in the Center for Data Analysis at The Heritage Foundation.


APPENDIX A
Results of the Statistical Models

Table 1 reports the results of the fourth- and eighth-grade models. As shown in the table, the model variables are statistically significant,37 with the exception of the mother's education variable in the fourth-grade model and the two-parent family variable in the eighth-grade model.38 The results are consistent with theory and with prior empirical work showing effects on test scores that increase over time. Fourth-grade Catholic school students score, on average, 6.5 percent higher than their public school peers, a difference that increases to 8.2 percent in the eighth grade.

In this analysis, there are two statistical issues to confront. First, the NAEP exam is a long test, and it is therefore not administered in its entirety to all children. Rather, different parts are given to different children, and certain students will do better on certain portions of the test than others. Consequently, a "true" score must be estimated, or imputed, from the incomplete information. NAEP estimates five plausible composite math scores and recommends that researchers use all five in any analysis. The Heritage model follows the guidelines specified by the Educational Testing Service (which works closely with the National Center for Education Statistics in developing the file) for incorporating all five math scores into the analysis.39
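A simplified sketch of that plausible-value step follows. It assumes hypothetical column names (pv_math_1 through pv_math_5) and omits the jackknife variance step discussed next.

```python
# Fit the same model once per plausible math score and average the coefficients,
# as the guidelines described above require. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def average_over_plausible_values(df: pd.DataFrame, rhs: str, n_pv: int = 5) -> pd.Series:
    coefs = []
    for i in range(1, n_pv + 1):
        fit = smf.wls(f"pv_math_{i} ~ {rhs}", data=df, weights=df["weight"]).fit()
        coefs.append(fit.params)
    return pd.concat(coefs, axis=1).mean(axis=1)  # averaged coefficient vector
```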

Furthermore, NAEP utilizes a complex sample design, which oversamples children with certain characteristics.40 Each child, then, has a unique weight assigned to him or her, which is calculated from the probability of being selected from the population at large (in this case, public and Catholic students in Washington, D.C., who are in the fourth or eighth grade). NAEP's sample design requires a complex modeling technique, which the Heritage model employs.41
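Footnote 41 describes the jackknife procedure in detail. The sketch below shows one generic way such replicate-weight variance estimation can be set up; the replicate-weight column names (repw_1 through repw_62) are placeholders, and NAEP's documentation specifies the exact formula to use rather than this simplified version.

```python
# Generic jackknife sketch: re-fit the model under each replicate weight and
# combine the squared deviations from the full-sample estimates. This is a
# simplified illustration, not NAEP's exact procedure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def jackknife_standard_errors(df: pd.DataFrame, formula: str, n_reps: int = 62) -> pd.Series:
    full = smf.wls(formula, data=df, weights=df["weight"]).fit().params
    reps = [
        smf.wls(formula, data=df, weights=df[f"repw_{r}"]).fit().params
        for r in range(1, n_reps + 1)
    ]
    deviations = pd.concat(reps, axis=1).sub(full, axis=0)
    return np.sqrt((deviations ** 2).sum(axis=1))  # one standard error per coefficient
```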

APPENDIX B
Prediction Model for Mother's Education

As noted above, the model might suffer a specification error if some measure of parental education were missing. An operational problem for the researcher is that up to half of the observations for the mother's education variable are missing for one reason or another. Instead of dropping the variable from the analysis, the missing values were imputed and the completed variable was then entered into the model.42

The NAEP database was obviously not designed for this purpose, since its point is to provide information about children in the fourth and eighth grades. However, its background questionnaire provides indirect insight into the characteristics of parents, and those background items serve as the predictors in the imputation model. The dependent variable classifies the mother as:

  1. A high school dropout;

  2. A high school graduate;

  3. Having some college, but no four-year degree;

  4. A bachelor's or higher degree.

The model is then used to determine the relative likelihood that the mother had at least some college education (the variable in the final math achievement models). While useful for imputation purposes, this model should not be used for explanatory purposes, in part because of the indirect nature of its generation and because of its relatively low R2. The results of the imputation model are reported in Table 2.
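As a rough illustration of the imputation step, the sketch below predicts a binary "at least some college" indicator from background proxies and fills in the missing cases. It collapses the four ordered categories above into the binary variable used in the final math models; the predictor names are hypothetical placeholders, and the actual model results appear in Table 2.

```python
# Simplified imputation sketch: fit a logistic model on the records where
# mother's education is known, then fill in missing values with the model's
# predictions. Variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def impute_mother_some_college(df: pd.DataFrame) -> pd.Series:
    formula = ("mother_some_college ~ reading_materials + two_parent"
               " + school_median_income")
    known = df.dropna(subset=["mother_some_college"])
    model = smf.logit(formula, data=known).fit()
    predictions = (model.predict(df) >= 0.5).astype(int)
    return df["mother_some_college"].fillna(predictions)
```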




1. The author would like to thank David Armor of George Mason University, Paul E. Peterson of Harvard University, and the Archdiocese of Washington Catholic Schools Office for their helpful comments on this paper.

2. U.S. Department of Education, Office of Educational Research and Improvement, NAEP 1998 Reading Report Card for the Nation and the States, NCES 1999-500, March 1999, at /static/reportimages/6B93A7914368FC4B190AA8AF7B7B90B6.pdf.  Achievement is generally defined as a student's score on a standardized test of an academic subject. See U.S. Department of Education, Office of Educational Research and Improvement, NAEP 1996 Mathematics Report Card for the Nation and the States: Findings from the National Assessment of Educational Progress, February 1997, at /static/reportimages/DD1DDDAA6135A3608BF713328A2341A0.pdf

3. See, for example, James S. Coleman, Thomas Hoffer, and Sally Kilgore, "Cognitive Outcomes in Public and Private Schools," Sociology of Education, Vol. 55 (1982), pp. 65-76; and K. Alexander and A. Pallas, "Private Schools and Public Policy: New Evidence on Cognitive Achievement in Public and Private Schools," Sociology of Education, Vol. 56 (1983), pp. 170-182. Cognitive ability and achievement, as understood in the social sciences, are related in that individuals with higher cognitive abilities tend to achieve better scores on standardized tests.

4. See Nina Shokraii Rees and Sarah E. Youssef, School Choice: What's Happening in the States 1999, a Heritage Foundation online publication at http://www.heritage.org/Research/Education/BG1246.cfm

5. A good deal of variety exists in these reform proposals. For an overview of the major ones, see Linda Morrison, "The Tax Credits Program for School Choice," NCPA Policy Report No. 213, March 1998, at http://www.ncpa.org/studies/s213.html

6. For a national comparison of public and Catholic school students, see Thomas Hoffer, Andrew Greeley, and James Coleman, "Achievement Growth in Public and Catholic Schools," Sociology of Education, Vol. 58 (1985), pp. 74-97.

7. The typical Catholic eighth-grade African-American student achieved an average (mean) score on the 1996 NAEP math test, holding family background characteristics constant. Those factors include a two-parent family, neighborhood income effects, reading materials in the home, and whether or not the mother has at least some college (see the Methodology).

8. James S. Coleman, Thomas Hoffer, and Sally Kilgore, High School Achievement (New York: Basic Books, 1982). The Coleman study demonstrated that private school children, after controlling for social, economic, and demographic factors, had higher levels of academic achievement than their public school peers.

9. For further discussion, see Richard Murnane, Stuart Newstead, and Randal Olsen, "Comparing Public and Private Schools: The Puzzling Role of Selectivity Bias," Journal of Business and Economic Statistics, Vol. 3 (1985), pp. 23-35.

10. Paul E. Peterson, "Vouchers and Test Scores," Policy Review, January-February 1999, at http://www.hoover.org/publications/policyreview/3908666.html

11. A new study on the School Choice Scholarship Foundation deals with this issue by including only low-income families. See Paul E. Peterson et al., "The Effects of School Choice in New York City," in Susan Mayer and Paul E. Peterson, eds., Earning and Learning: How Schools Matter (Washington, D.C.: The Brookings Institution, 1999).

12. See the methodology section for an explanation.

13. The District had per-pupil expenditures in the 1996-1997 academic year of $9,123; the U.S. average for the same period is $6,057. In 1996, D.C. schools had a student-teacher ratio of 13.7:1; the national average was 17.1:1. See American Legislative Exchange Council, Report Card on American Education, 1998, at /static/reportimages/37256063F5B4295492ACCCBD6C02F1D4.pdf

14. Ibid.

15. Thomas N. Edmonds and Raymond Keating, D.C. By the Numbers: A State of Failure (Lanham, Md.: University Press of America, 1995).

16. National Center for Education Statistics, Restricted-Use Data Procedures Manual, Appendix F, Section III(B), 1996.

17. National Catholic Educational Association, "Fact Sheet: U.S. Catholic Elementary Schools," available at http://www.ncea.org/PubRel/Factsheets/elemfct.htm

18. Archdiocese of Washington, Catholic Schools Office. Data are for the 1998-1999 academic year.

19. John J. Convey, Catholic Schools Make a Difference (Washington, D.C.: National Catholic Educational Association, 1992).

20. Archdiocese of Washington Catholic Schools Office, op. cit.

21. This is, of course, absent the conscious decision of individual parents to send their children to Catholic school.

22. Data analysis using other private school data would be tenuous at best, since these other private schools did not participate in the test at the same level as public and Catholic schools. Participation rates for Catholic and public schools in Washington, D.C., in 1996 were very close to 100 percent.

23. U.S. Census Bureau, "Estimates of the Population of States by Age, Sex, Race, and Hispanic Origin: 1990-1997," September 4, 1998. Available at http://www.census.gov/population/estimates/state/sasrh/sasrh96.txt

24. To analyze the different non-African-American races (white, Asian, American Indian, etc.), we would need to develop estimates on a further subdivision of the 6 percent Catholic sample. Doing so would produce results that would be suspect at best and completely meaningless at worst.

25. The survey is a statistically valid one, meaning that it administers the exam to a sample of fourth- and eighth-grade students from different schools across the District. From this sample, we can reasonably infer academic achievement for the population as a whole, but not necessarily for all subdivisions of the population for which we might have an interest.

26. Many researchers have noted a link between a child's educational achievement and the education level of his or her parents. Other things being equal, parents who are, say, college educated might be better equipped to help children with their homework and understanding of concepts than those with less than a high school education. At the same time, since a mother's education level is typically very similar to the father's, only one should go into the model. The problem is that within the Washington, D.C., NAEP database for fourth and eighth graders, a number of students did not answer the question of their mother's educational attainment, either because they did not know or simply declined to state. In the case of the fourth grade sample, nearly half of the records are missing mother's education level. Because researchers view the mother's education as such an important variable in the model, the missing values were imputed using family background characteristics as indirect proxies for mother's education. Appendix B contains the results of the prediction model for mother's education. The addition or deletion of this particular variable has little impact on the others, demonstrating the relative insensitivity of the other variables to small changes in the model. The only major difference in the model is that the explanatory power of the model improves slightly with the addition of the variable (as enumerated in the R2).

27. The income characteristic within the model represents a median schoolwide income. The median school income across the schools in Washington, D.C., is $24,615. Within the database, there are about two dozen different values possible for this variable, based on the number of different schools NAEP sampled.

28. Whether or not the child changes schools is important because changing schools may have an adverse impact on achievement, since the child must become acclimated to the new environment (this also may be considered the disruption effect). According to the D.C. Archdiocese Catholic Schools Office, it is not uncommon to see students enter their Catholic schools after kindergarten or first grade, and these children are enrolled if space is available (which is typically available in most of the schools in the District).

29. Information presented in this section is based on the statistical regression model described in Appendix A.

30. Jay Greene, Paul Peterson, et al., "The Effectiveness of School Choice in Milwaukee: A Secondary Analysis of Data From the Program's Evaluation," Paper presented before the Panel on the Political Analysis of Urban School Systems at the August-September 1996 meetings of the American Political Science Association, San Francisco, California.

31. NAEP data indicate that the median fourth-grade student in the District attends public school, has attended the same school for (at least) the last two years, has three of the possible four reading materials at home (defined as a newspaper, magazine, encyclopedia, or 25 or more books), attends a school where the median income is $24,615, and is in a one-parent home.

32. That is, attending a school where the median family income is $1,000 more than the average school in Washington.

33. Frederick Mosteller, "The Tennessee Study of Class Size in the Early School Grades," The Future of Children, Vol. 5 (1995), pp. 113-127.

34. Results calculated from the normal distribution probability table in Appendix D in Damodar N. Gujarati, Basic Econometrics (New York: McGraw-Hill, 1995), Table D.1, p. 808.

35. The average math test score for African-American eighth-grade students in Washington, D.C., is about 235 points, with a standard deviation of 34.1 points. Discussing test scores in terms of standard deviations allows researchers to compare different groups (in this case, Catholic school students and public school students) in an easily understood fashion. Holding other factors constant, 68 percent of students will, on average, score within one standard deviation above or below the mean, and 95 percent will score plus or minus two standard deviations. Comparing effects of certain factors (defined as independent variables such as attendance at a Catholic school or living in a two-parent home) in terms of a percent of standard deviation allows researchers to compare results from different achievement studies. For example, Frederick Mosteller in his landmark 1995 study noted that the policy Tennessee enacted to reduce student-teacher ratios from 25:1 to 15:1 had the effect of improving math scores 0.32 standard deviation in the first grade. Here, the Catholic school variable in the eighth grade model has a much higher 0.58 standard deviation impact.

36. Convey, Catholic Schools Make a Difference.

37. Usually pegged at a 5 or 10 percent level. See Michael Lewis-Beck, Applied Regression: An Introduction (Beverly Hills, Cal.: Sage Publications, 1980), from Sage Publications' Quantitative Applications in the Social Sciences series, No. 07-022.

38. This means that there is no statistically discernible difference between these variables' coefficient values and zero, so no effect can be inferred.

39. From a multivariate regression perspective, the model below must be replicated five times using each of the plausible values individually, and then averaging the resulting coefficients to yield the final model results. In technical terms, this process corrects for measurement error in the math score variable, since the test administrators do not actually observe the test score from taking the exam in its entirety.

40. For example, NAEP typically oversamples for race and type of school attended.

41. A procedure called a jackknife must be employed to correctly assess the variance of each variable's coefficient, and the NAEP Washington, D.C., database has a series of 62 "replicate weights" to aid in this task. These 62 jackknifes must be applied and the variances of each coefficient averaged for each of the five plausible-test-score models above (yielding a total of 315 models compiled for the purpose of this research). Much of this replication work was done by the WesVar Complex Samples software (produced by SPSS, Inc.), without which this project might have been untenable. Using the jackknife results with the five plausible-values models allows for a variance correction mechanism. The purpose of the jackknife is to estimate a true sampling error. Correcting for the two types of error (measurement and sampling) allows for the most accurate estimates possible. See Bradley Efron, The Jackknife, the Bootstrap, and Other Resampling Plans (Philadelphia: Society for Industrial and Applied Mathematics, 1982); and Jun Shao and Dongsheng Tu, The Jackknife and Bootstrap (New York: Springer Verlag, 1995) for a more complete discussion of how this jackknife technique works.

42. Some basic work on the various methods for imputing missing data, including this method, may be found in W. Frane, "Some Simple Procedures for Handling Omitted Data in Multivariate Analysis," Psychometrika, Vol. 55 (1976), pp. 409-415; see also R. Little and D. Rubin, "The Analysis of Social Science Data with Missing Values," Sociological Methods & Research, Vol. 19 (1990), pp. 292-326.
