Jesse Rothstein
2011-01-13
http://susanohanian.org/show_
Here is a wowser of a review of the Gates Foundation project that scores videotapes of teachers at work in an attempt to estimate teachers' causal effects on student standardized test scores, i.e., teachers' value-added scores.
Gates Report Touting "Value-Added" Reached Wrong Conclusion: http://susanohanian.org/show_research.php?id=393
Susan Notes:
I posted Jesse Rothstein's wowser of a review, which finds a "troubling indication" that the Gates Foundation report's conclusions were "predetermined," here, with my summary. But the review reads better in the original at the National Education Policy Center site.
Press Release
"Gates Report Touting "Value-Added" Reached Wrong Conclusion... Re-examination of results finds that the data undermine calls for the use of value-added models for teacher evaluations
http://nepc.colorado.edu
BOULDER, CO (January 13, 2011) -- A study released last month by the Gates Foundation has been touted as "some of the strongest evidence to date of the validity of 'value-added' analysis," showing that "[t]eachers' effectiveness can be reliably estimated by gauging their students' progress on standardized tests."
However, according to professor Jesse Rothstein, an economist at the University of California at Berkeley, the analyses in the report do not support its conclusions. "Interpreted correctly," he explains, they actually "undermine rather than validate value-added-based approaches to teacher evaluation."
Rothstein reviewed Learning About Teaching, produced as part of the Bill & Melinda Gates Foundation's Measures of Effective Teaching (MET) Project, for the Think Twice think tank review project. The review is published by the National Education Policy Center, housed at the University of Colorado at Boulder School of Education.
Rothstein, who in 2009-10 served as Senior Economist for the Council of Economic Advisers and as Chief Economist at the U.S. Department of Labor, has conducted research on the appropriate uses of student test score data, including the use of student achievement records to assess teacher quality.
The MET report uses data from six major urban school districts to, among other things, compare two different value-added scores for teachers: one computed from official state tests, and another from a test designed to measure higher-order, conceptual understanding. Because neither test maps perfectly to the curriculum, substantially divergent results from the two would suggest that neither is likely capturing a teacher's true effectiveness across the whole intended curriculum. By contrast, if value-added scores from the two tests line up closely with each other, that would increase our confidence that a third test, aligned with the full curriculum teachers are meant to cover, would also yield similar results.
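To make the comparison concrete: in its simplest form, a teacher's value-added score is the average amount by which her students beat (or fall short of) the scores predicted from their prior achievement. The Python sketch below illustrates that basic two-step recipe on simulated data; the MET project's actual models are more elaborate, so this is an illustration of the general idea, not the report's method.

    # A minimal sketch of a simple value-added model (VAM) on simulated
    # data: an illustration of the general technique, not the MET
    # project's actual specification.
    import numpy as np

    rng = np.random.default_rng(0)
    n_teachers, n_students = 200, 25

    # Simulated "true" teacher effects and student records.
    true_effect = rng.normal(0.0, 0.3, n_teachers)
    teacher = np.repeat(np.arange(n_teachers), n_students)
    prior = rng.normal(size=n_teachers * n_students)
    score = 0.7 * prior + true_effect[teacher] + rng.normal(0.0, 0.8, teacher.size)

    # Step 1: predict current scores from prior scores (least squares).
    X = np.column_stack([np.ones_like(prior), prior])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    residual = score - X @ beta

    # Step 2: a teacher's value-added score is her students' mean residual.
    value_added = np.array([residual[teacher == t].mean()
                            for t in range(n_teachers)])
    print(np.corrcoef(true_effect, value_added)[0, 1])

Running the same recipe twice, once with the state test scores and once with the alternative assessment, yields the two value-added scores per teacher that the MET report compares.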
The MET report considered this exact issue and concluded that "Teachers with high value-added on state tests tend to promote deeper conceptual understanding as well." But what does "tend to" really mean? Professor Rothstein's reanalysis of the MET report's results found that over forty percent of the teachers whose state exam scores place them in the bottom quarter of effectiveness are in the top half on the alternative assessment. "In other words," he explains, "teacher evaluations based on observed state test outcomes are only slightly better than coin tosses at identifying teachers whose students perform unusually well or badly on assessments of conceptual understanding. This result, underplayed in the MET report, reinforces a number of serious concerns that have been raised about the use of VAMs for teacher evaluations."
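The coin-toss point can be checked with a rough simulation. Suppose, purely for illustration, that the two value-added measures are bivariate normal with a correlation of 0.2 (a value assumed here, not taken from the MET data). Then the fraction of bottom-quarter teachers on the state-test measure who land in the top half on the alternative measure is:

    # Rough simulation of the misclassification rate, assuming the two
    # value-added measures are bivariate normal with correlation r.
    # The value of r below is assumed for illustration, not taken from
    # the MET data.
    import numpy as np

    rng = np.random.default_rng(0)
    r = 0.2
    n = 1_000_000

    state_va = rng.normal(size=n)
    alt_va = r * state_va + np.sqrt(1 - r**2) * rng.normal(size=n)

    bottom_quarter = state_va < np.quantile(state_va, 0.25)
    top_half = alt_va > np.median(alt_va)
    print((bottom_quarter & top_half).mean() / bottom_quarter.mean())
    # Prints roughly 0.40 with r = 0.2; a completely uninformative
    # measure (r = 0) would give 0.50, a literal coin toss.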
Put another way, "many teachers whose value-added for one test is low are in fact quite effective when judged by the other," indicating "that a teacher's value-added for state tests does a poor job of identifying teachers who are effective in a broader sense," Rothstein writes. "A teacher who focuses on important, demanding skills and knowledge that are not tested may be misidentified as ineffective, while a fairly weak teacher who narrows her focus to the state test may be erroneously praised as effective." If those value-added results were used for teacher retention decisions, students would be deprived of some of their most effective teachers.
The report's misinterpretation of the study's data is unfortunate. As Rothstein notes, the MET project is "assembling an unprecedented database of teacher practice measures that promises to greatly improve our understanding of teacher performance," and it may yet offer valuable information on teacher evaluation. However, the new report's "analyses do not support the report's conclusions," Rothstein writes. The true guidance the study provides, in fact, "points in the opposite direction from that indicated by its poorly-supported conclusions" and suggests that value-added scores are unlikely to be useful measures of teacher effectiveness.