Opinion

Assessing Student Performance in Charter Schools

By Paul T. Hill — January 11, 2005

In recent months, researchers have rushed into print studies claiming to assess charter school effectiveness. The net effect of this barrage of findings is that we know little more than we did before. Results depend heavily on the schools studied and the methods used.

Everyone wants to know whether students in charter schools are learning more or less than they would have learned in conventional public schools. This is a reasonable question, but it is easier to ask than to answer. The answer is complicated for two reasons.

First, it is impossible to observe the same students simultaneously in both charter schools and the schools they would have attended if charter schools had not been available. Thus, it is necessary to create a “counterfactual” by comparing students in charter schools with other students who are similar in some ways but do not attend charter schools. Making appropriate comparisons is not so easy, as we shall see.

Second, there are many kinds of charter schools—some serving the poor and disadvantaged and others serving the advantaged; some receiving the same amount of money as nearby public schools and others much less; and some in supportive local environments and others constantly fighting off attacks from their local school districts and teachers’ unions. The results of studies focusing on one kind of charter school can’t be generalized to all charter schools.

No researcher starts a study with enough time or money to compare charter school students with every group of public school students similar in some way or another, or to study every possible kind of charter school. Researchers must make choices depending on the time, money, and data available. In doing so, they inevitably limit what their studies can conclude.

Depending on the data they have available, researchers typically make one of five kinds of comparisons meant to estimate the difference between charter school students’ measured achievement and the achievement levels they would have attained, had they not attended a charter school. Charter school students are compared with:

• Students in the public schools that charter school students had previously attended;

• Students in public schools that are like, but not necessarily identical to, the public schools that the charter students would otherwise have attended;

• Students similar in age, race, and income level to charter school students, but not necessarily from the same or similar schools that the charter school students would have attended;

• Students who applied to the charter schools but were not admitted because all the seats had been taken; or

• The students’ own rates of annual growth before and after entering charter schools.

Every one of these comparisons has its advantages and disadvantages. For example, students who left particular public schools might not be at all like the students who stayed behind in those schools. Students change schools for a reason—whether because their prior school was too easy for them, or because they were doing badly in it—so a comparison with former schoolmates could be misleading. It makes sense to compare public school and charter school students from similar racial and income backgrounds, but there is no guarantee that one group’s attendance at charter schools is the only difference between them. There is nothing wrong with making such comparisons—sometimes they are the only ones feasible—but they have their limits.

The same is true of comparisons between charter school students and children who applied to the same schools but lost out in a lottery or were placed on a waiting list. This approach neutralizes self-selection bias by holding it constant across both groups: parents of all the children in the study will have sought admission to the same charter schools, so there should be no systematic differences in motivation or other hard-to-measure attributes between students attending the charter schools and those who did not get in. Even these comparisons, however, can be tainted. Children not admitted to one charter school can end up in other charter schools, or in public school classrooms different from those they would have attended had their parents not sought admission to a charter school.
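
For readers who want the arithmetic spelled out, a minimal sketch in Python (with entirely hypothetical schools, students, and score gains, not data from any study discussed here) shows what a lottery-based comparison actually computes:

```python
# A minimal sketch of the lottery-based comparison; every name and number here is
# hypothetical. Among applicants to the same oversubscribed charter school,
# admission is decided by chance, so comparing winners with losers holds parental
# motivation and other hard-to-measure traits roughly constant.
from statistics import mean

# Each record: (school applied to, won the admissions lottery?, test-score gain)
applicants = [
    ("charter_a", True, 8.0), ("charter_a", True, 5.5),
    ("charter_a", False, 4.0), ("charter_a", False, 6.0),
    ("charter_b", True, 3.0), ("charter_b", False, 2.5),
]

def lottery_estimate(records, school):
    """Average gain of lottery winners minus average gain of losers at one school."""
    winners = [gain for s, won, gain in records if s == school and won]
    losers = [gain for s, won, gain in records if s == school and not won]
    return mean(winners) - mean(losers)

# Caveat noted above: losers who enroll in a different charter school can blur the
# contrast, so the "loser" group is not a pure public school comparison.
print(lottery_estimate(applicants, "charter_a"))  # 1.75 points in this toy example
```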

Comparing students’ current rates of learning growth with their own past growth rates eliminates the inevitable differences between students who do and do not attend charter schools. However, this method is seldom feasible because of the absence of complete student records containing comparable test results for different grade levels.
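
Where multi-year records do exist, the comparison is simple arithmetic. The sketch below, again with made-up scores on an arbitrary scale, contrasts a student's average annual growth before and after entering a charter school:

```python
# A sketch of the own-prior-growth comparison; the scores are invented. It requires
# comparable tests for the same student across several grades, which is exactly the
# kind of complete record that is usually missing.
def annual_growth(scores):
    """Average year-to-year change across a list of annual test scores."""
    return sum(later - earlier for earlier, later in zip(scores, scores[1:])) / (len(scores) - 1)

pre_charter = [210, 218, 225]    # annual scores before entering the charter school
post_charter = [225, 236, 248]   # annual scores after entering, from the same baseline year

print(annual_growth(post_charter) - annual_growth(pre_charter))  # +4.0 points per year here
```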

The point here is not that such comparisons should be avoided, but that each type has its flaws. In an ideal world, all of these comparisons would be made, and if the results were similar on all of them, we could have great confidence in the findings. In the real world, however, particular studies can make only one or two of the comparisons, and the results often differ. We are then forced to find out why the results differ—tedious work, but the only way to answer a hard question.

Even if good comparisons can be made, so that we can say with confidence whether or not students in a particular school learned more than reasonably comparable students did elsewhere, it is often wrong to generalize those findings to all charter schools. As noted previously, charter schools serve very different student populations and operate under very different circumstances. Positive student-achievement results for charter schools serving low-income students don’t necessarily apply to schools serving less disadvantaged groups, and vice versa.

Similarly, results for schools that are well financed and strongly supported by their authorizers—for example, Chicago’s or Massachusetts’ charter schools—don’t necessarily apply to schools that receive less funding or must cope with a hostile local environment. Likewise, findings about former public schools that have been converted to charter status probably don’t generalize to newly formed charter schools.

Even the method used to create comparable groups of students can limit the generalizability of study results. For example, some prominent new studies are comparing the test scores of children in charter schools with those of children who applied to the same schools but were not admitted because of a lack of space. This approach focuses on schools with good enough reputations to generate student waiting lists, and the results probably will not apply to the large number of charter schools that are not overenrolled. This is a best-case approach to charter school research: It studies the schools most likely to have positive results. It’s useful, but its implications are limited.

The new National Charter School Research Center, directed by Robin Lake, is analyzing all studies on charter schools to see what kinds of comparisons have been made and what kinds of schools have been studied. We are using a big matrix, with separate rows for student populations served; new vs. converted public schools; length of time the school has been in existence; generosity of funding received; and supportive vs. hostile local regulatory environment. The matrix also has separate columns for six different comparisons that can be made to judge whether students benefit from attending charter schools. The matrix contains 48 cells, and could have a lot more if we distinguished grade levels served.

Most charter school studies fit into only one or two of these cells. Even if they are well designed, studies that fit into a particular cell don’t necessarily represent schools in other cells. This matrix of possible comparisons, contributing factors, and combinations thereof offers a clear, graphic representation of the fact that no one study focusing on a particular type of charter school or using one method of comparison can be a definitive test of charter school student performance. A balanced assessment of performance would look across as many of these cells as possible.
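
To make the bookkeeping concrete, the tally can be imagined as a simple coverage count. The sketch below is purely illustrative: the row categories, comparison labels, and the two "studies" are hypothetical placeholders, far coarser than the center's actual 48-cell matrix.

```python
# An illustrative coverage tally for a rows-by-columns matrix of charter school
# studies. The categories and studies are hypothetical, not the center's coding scheme.
from itertools import product

school_types = ["low-income, new", "low-income, converted",
                "advantaged, new", "advantaged, converted"]
comparisons = ["former school", "similar schools", "demographic match",
               "lottery losers", "waiting list", "own prior growth"]

cells = {cell: [] for cell in product(school_types, comparisons)}

# Hypothetical studies: (study, type of school examined, comparison method used)
studies = [
    ("Study A", "low-income, new", "lottery losers"),
    ("Study B", "advantaged, converted", "demographic match"),
]
for name, school_type, comparison in studies:
    cells[(school_type, comparison)].append(name)

filled = sum(1 for entries in cells.values() if entries)
print(f"{filled} of {len(cells)} cells contain at least one study")  # 2 of 24 here
```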

Existing research on charter school performance is also limited in the range and quality of outcome measures available. Test scores are one sort of outcome, of course, but there are others.

It matters, for example, whether students attend school and persist until they complete a course of study. So, in judging a school’s performance, it makes sense to ask what proportion of its students persist to graduation. Other performance measures could include: the rate at which students pass key “gatekeeper” courses; whether or not they are able to pass core courses at the next level of education (if graduates of an elementary school, for example, take and pass algebra by the end of the 9th grade); and rates of completion of the next higher level of education.
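
As a purely hypothetical illustration, such measures reduce to simple rates once the underlying student records exist:

```python
# Two non-test outcome measures computed from hypothetical student records.
# Each record: (student id, persisted to graduation?, passed a gatekeeper course?)
records = [
    ("s1", True, True), ("s2", True, False),
    ("s3", False, False), ("s4", True, True),
]

graduation_rate = sum(1 for _, graduated, _ in records if graduated) / len(records)
gatekeeper_pass_rate = sum(1 for _, _, passed in records if passed) / len(records)

print(f"Persistence to graduation: {graduation_rate:.0%}")        # 75% in this toy example
print(f"Gatekeeper-course pass rate: {gatekeeper_pass_rate:.0%}")  # 50% in this toy example
```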

Multiple outcome measures would surely strengthen judgments about charter school performance. But these measures are often hard to obtain. For the present, most studies must rely on test scores.

Everyone interested in the charter school debate is desperate to know whether children benefit from attending charter schools. But good answers take some care. The studies now available—even the ones that are well done—fill only a few cells in a much bigger matrix. Only one thing is clear: Any claim that one study proves or refutes the claims of charter school proponents is surely wrong.

A version of this article appeared in the January 12, 2005 edition of Education Week as Assessing Student Performance in Charter Schools
