Wednesday, November 11, 2015

Value-Added Models Aren’t Settled Science No Matter What Ed Leaders Say

The American Education Research Association (AERA) has released a new statement about the use of value-added models (VAMs) in evaluating both educators and educator preparation programs. It is no secret that as an experienced educator of twenty-six years, I do not find VAMs to be very useful or fair indicators of teacher effectiveness. The results are often not available until three or four months into the school year, so even if they came in a form that could inform specific classroom instruction, they arrive much too late to be of use, at least for first-semester students.

But the AERA clearly points out that VAMs are still too deeply flawed to be used as indicators of teacher effectiveness. (See the AERA statement for yourself here.) Many states, including North Carolina, have charged full speed ahead after being blackmailed by the Obama administration into adopting VAMs. This has occurred in spite of concerns over the limitations and flaws in their use.

There is certainly no disagreement from me that there is always room for improvement in teacher effectiveness, but I also think our false faith in the objectivity of value-added models treats these statistical models as some kind of “savior of public education,” which they are not, nor ever will be. Their limitations are too great for them to be useful as anything except a small piece of data schools can consider about how their students are doing.

Here are the limitations outlined in the AERA statement:

  • Current state tests are too limited to measure teacher effectiveness, and most were not designed for that purpose anyway. They cover only a limited amount of the content teachers teach, and they are too imprecise to be used in determining teacher quality. They also measure only grade-level standards, so they fail to capture the growth of students performing above or below those standards.
  • VAM estimates have not been shown to effectively isolate a teacher’s contribution from other school factors or out-of-school factors. To expect VAMs to do this entirely is unrealistic and foolhardy.

As usual, the adoption of VAMs illustrates one very bad habit of education leaders and education policy makers: they adopt what they see as “common sense” measures without conducting critical and empirical examinations of whether those policies will work as intended. The history of public education is littered with such actions, and you would think a wise education leader would learn that what merely seems like “common sense” or “conventional wisdom” is perhaps nothing of the sort.