Saturday, December 10, 2011

Stanford Study Is a Great Improvement in Analyzing Charter School Performance, But Inconclusive

Defenders of the education status quo point to the conclusions of a Stanford study, “Multiple Choice: Charter School Performance in 16 States” (Stanford University, June 2009), to assert that charter schools are bad for students.

The study is probably the best out there, in that it takes into account the confounding factor of student demographics and focuses on growth in student learning – probably the most important factor. By using the “virtual twin” concept, it comes as close to a “control” group as one can hope for in education research (as long as the twins have been selected at random and not selected to achieve the desired results). This analysis is far better than the biased and incomplete “Why uncap the lower quartile?” statements I have received from traditional public school superintendents and ISDs, which consider none of the above.
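For readers unfamiliar with the technique, here is a minimal sketch of how a “virtual twin” comparison might be constructed. The data, field names, and matching criteria below are my own illustration, not the study's actual procedure, which matches on additional factors such as prior test scores across many records.

```python
# Illustrative "virtual twin" matching: pair each charter student with a
# traditional public school (TPS) student who shares the same observable
# characteristics, then compare learning growth across the matched pairs.
# All records and fields below are hypothetical.

charter_students = [
    {"id": "C1", "grade": 5, "poverty": True,  "ell": False, "prior_score": 0.40, "growth": 0.02},
    {"id": "C2", "grade": 5, "poverty": False, "ell": False, "prior_score": 0.55, "growth": -0.04},
]

tps_students = [
    {"id": "T1", "grade": 5, "poverty": True,  "ell": False, "prior_score": 0.41, "growth": 0.01},
    {"id": "T2", "grade": 5, "poverty": False, "ell": False, "prior_score": 0.54, "growth": 0.00},
    {"id": "T3", "grade": 6, "poverty": True,  "ell": True,  "prior_score": 0.30, "growth": 0.03},
]

def find_twin(charter, pool, score_tolerance=0.05):
    """Return a TPS student matching on demographics and (approximately) prior score."""
    for tps in pool:
        if (tps["grade"] == charter["grade"]
                and tps["poverty"] == charter["poverty"]
                and tps["ell"] == charter["ell"]
                and abs(tps["prior_score"] - charter["prior_score"]) <= score_tolerance):
            return tps
    return None

# Average growth difference (charter minus twin) across all matched pairs.
diffs = []
for c in charter_students:
    twin = find_twin(c, tps_students)
    if twin is not None:
        diffs.append(c["growth"] - twin["growth"])

if diffs:
    print("Mean charter-minus-twin growth:", sum(diffs) / len(diffs))
```

The point of the comparison is that each charter student is measured against a demographically similar peer who stayed in a traditional public school, rather than against a district-wide average.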

With that being said, the results are not determinative of what makes sense as the best public policy in Michigan. Too much emphasis is placed on the finding that “37 percent of charter schools deliver learning results that are significantly worse than students would have realized had they remained in traditional public schools.” That may seem to settle the question, but a thorough and critical analysis of the study raises more questions. For example, one could just as easily quote other conclusions from the study:

“We see positive results for charter school students in poverty – these students realized statistically superior learning gains in reading compared to their TPS peers, as shown in Figure 6. The magnitude of the difference was about the same as was seen for the overall reading effect, .01 standard deviations, though here the sign is positive. While significant, the effect is small. The same relative outcome was realized in math learning gains; students in poverty who attended charter schools see superior results over their TPS counterparts.” At page 27.

Or

“[W]hen a state elects to eliminate its cap, it can expect a gain in academic achievement growth of about .04 standard deviations.” At page 40.

Removing the cap is just what SB 618 would do, so if the study is correct, we should see a gain in student achievement.

Further, the study's conclusions as a whole might, in fact, be discounted when you consider this admission:

“[S]tudents generally experience a significant negative impact on learning in reading in their first year of charter enrollment, in the range of ‐.06 standard deviations. By the second year of charter school enrollment, students get a positive and significant impact on learning, but the magnitude is quite small at .01 standard deviations. Greater gains in reading are realized after three years; the average student with three years of charter schooling has a .02 standard deviation gain in learning.

These results help us to further understand the overall pooled effects for charter schools. Because the number of students attending charter schools grows each year, the experience of charter school students reflected in each state’s data is skewed toward first‐year charter students. More than half of the records in this analysis capture the first year of charter school experience. Given the improvement trends shown in Figure 10, the overall charter school effects would be expected to improve if the same cohort were followed for additional years.” At page 32.

If this first-year factor had been controlled for in the study, the negative conclusions might have disappeared. The “37 percent” finding was based upon the following:

“The national pooled analysis of charter school impacts showed the following results:

• Charter school students on average see a decrease in their academic growth in reading of .01 standard deviations compared to their traditional school peers. In math, their learning lags by .03 standard deviations on average. While the magnitude of these effects is small, they are both statistically significant.” At page 6.

By the time you offset the improvement charter schools show over the years against the lag in the pooled average, you end up with little of statistical significance.
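To see how the first-year skew could drive the headline numbers, here is a back-of-the-envelope calculation. The per-year reading effects come from the passage quoted above; the record shares are my own assumptions for illustration (the study says only that more than half of the records are first-year).

```python
# Illustrative pooled-effect arithmetic. Per the study, students in their first
# charter year show about -0.06 SD in reading, second-year students about +0.01 SD,
# and students with three years about +0.02 SD. The record shares below are
# guesses, except that the study says more than half of records are first-year.

effect_by_years_enrolled = {1: -0.06, 2: 0.01, 3: 0.02}   # from the study (reading)
assumed_record_shares    = {1: 0.55, 2: 0.25, 3: 0.20}    # hypothetical mix

pooled = sum(effect_by_years_enrolled[y] * assumed_record_shares[y]
             for y in effect_by_years_enrolled)
print(f"Pooled average under this mix: {pooled:+.3f} SD")   # comes out negative

# A student who persists for three years shows the +0.02 SD figure, so a negative
# pooled average can largely reflect the heavy weight given to first-year records.
print(f"Effect for a student with three years enrolled: "
      f"{effect_by_years_enrolled[3]:+.2f} SD")
```

Under these assumed shares the pooled number is negative even though students who stay three years show a gain, which is the skew the study itself acknowledges.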

So, the bottom line is that quoting the Stanford study in part while overlooking other parts could be viewed as just another example of defenders of the education status quo cherry-picking data to make their case. That is not surprising, as research shows that we all tend to find data and studies that confirm our prior inclinations and screen out disconfirming data. Nonetheless, as policy makers, we should be willing to look at all data as objectively as we can.

I look forward to the results of Stanford's future studies on the characteristics of charter schools that affect student performance. Perhaps we can glean some useful information about which charter authorizers are doing the best job and/or which for-profit education service providers perform best. Hopefully we can use that to develop criteria for who should be allowed to provide charter school opportunities to the thousands of students on waiting lists for charter school spots, as well as to know which traditional public schools to close.
