Document Type

Article

Publication Date

3-2020

Abstract

Research indicates that instructional aspects of teacher performance are the most difficult to reach consensus on, significantly limiting teacher observation as a way to systematically improve instructional practice. Understanding the rationales that raters provide as they evaluate teacher performance with an observation protocol offers one way to better understand the training efforts required to improve rater accuracy. The purpose of this study was to examine the accuracy of raters evaluating special education teachers’ implementation of evidence-based math instruction. A mixed-methods approach was used to investigate: 1) the consistency of the raters’ application of the scoring criteria to evaluate teachers’ lessons, 2) the agreement of raters’ scores on two lessons with those given by expert raters, and 3) the raters’ understanding and application of the scoring criteria through a think-aloud process. The results show that raters had difficulty understanding some of the high-inference items in the rubric and applying them accurately and consistently across the lessons. Implications for rater training are discussed.

Copyright Statement

This is an author-produced, peer-reviewed version of this article. © 2020, Elsevier. Licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 license. The final, definitive version of this document can be found online at Studies in Educational Evaluation, https://doi.org/10.1016/j.stueduc.2019.100827. The content of this document may vary from the final published version.
