This study describes the development and initial psychometric evaluation of the Recognizing Effective Special Education Teachers (RESET) observation instrument. The study uses generalizability theory to compare two versions of a rubric for evaluating special education teachers' implementation of explicit instruction: one with general descriptors of performance levels and one with item-specific descriptors. Eight raters (four for each version of the rubric) viewed and scored videos of explicit instruction in intervention settings. The data from each rubric were analyzed with a four-facet, crossed, mixed-model design to estimate variance components and reliability indices. Results show lower levels of unwanted variance and higher reliability indices for the rubric with item-specific descriptors of performance levels. Contributions to the fields of intervention and teacher evaluation are discussed.
This article is protected by copyright, and reuse is restricted to non-commercial and no-derivative uses. Users may also download and save a local copy for personal reference.
Crawford, A.R.; Johnson, E.S.; Moylan, L.A.; and Zheng, Y. "Variance and Reliability in Special Educator Observation Rubrics", Assessment for Effective Intervention, 45(1), pp. 27-37. Copyright © 2019, Hammill Institute on Disabilities 2018. Reprinted by permission of SAGE Publications. https://dx.doi.org/10.1177/1534508418781010