Publication Date

12-2013

Date of Final Oral Examination (Defense)

8-20-2013

Type of Culminating Activity

Dissertation

Degree Title

Doctor of Education in Curriculum and Instruction

Department

Curriculum, Instruction, and Foundational Studies

Major Advisor

Keith W. Thiede, Ph.D.

Major Advisor

Michele Carney, Ph.D.

Advisor

Richard Osguthorpe, Ph.D.

Advisor

Jonathan Brendefur, Ph.D.

Abstract

Is formative assessment observable in practice? Substantial claims have been made regarding the influence of formative assessment on student learning. However, if researchers cannot determine whether, and to what degree, formative assessment is present in instruction, then any claims regarding its efficacy are difficult to support. This study aims to provide a vehicle through which researchers can make stronger, more substantiated reports about the presence and impact of formative assessment in classroom instruction. Because the ability to visually distinguish formative assessment during instruction would enable such reports, this dissertation identifies an observational instrument as an appropriate method for detecting the presence of formative assessment.

In this study, a Formative Assessment Observational Instrument was developed for identifying formative assessment use in classroom instruction. The instrument was constructed around five components of formative assessment: understood learning targets, monitoring student learning, feedback, self-assessment, and peer assessment. Each component contained three to five observable items, each rated on a 1-5 Likert-type scale, for a total of 20 items. Pairs of trained raters used the instrument to observe and rate 47 elementary mathematics instructional sessions, each up to 30 minutes in length, distributed among 16 teachers. Using the results of these observations, the instrument was evaluated on the basis of reliability across time, reliability across raters, and reliability of scale. On these criteria, the instrument was found to be reliable for the purpose of identifying formative assessment in practice, and it identified varying degrees of formative assessment use by item, scale, and teacher.

Based on an examination of the literature on formative assessment and the use of this instrument in practice, it was proposed that for formative assessment to become a more quantifiable factor in research on influences on student learning, its definition needed to be narrowed and focused. Consequently, a more focused definition was suggested: formative assessment is a dynamic interchange between teacher and student in which instruction is adapted continuously based on student learning status. This definition restricts formative assessment to what happens within instruction, calling for uses of assessment outside the classroom to be treated as separate factors in instruction. The definition also affirms the first three components, understood learning targets, monitoring student learning, and feedback, as comprising the essential nature of formative assessment, and distinguishes self-assessment and peer assessment as methods for accomplishing those components rather than as components themselves.
