In the shadow of high-stakes state testing and accountability, “assessment” can sometimes feel like a four-letter word (test). There is a lot of weight attached to an accountability rating, and the direct relationship between that rating and student performance on a state test can turn that test into the boogeyman.
Outside of this context, however, I have never met a teacher who would question the importance of measuring student learning as part of their regular instructional practice.
I think this is an important distinction we should all recognize as educators. When we feel some kind of way about assessment, what we are really feeling is something about a specific use of assessment results. I do not intend to get into the nuances of “why do we accountability” in this post, but I do want to call out an accountability rating for what it is: a label with a descriptor, designed for a summative purpose.
Summative and Formative. You won’t hear one without the other. In my experience, once words make it into regular professional rotation, their meanings can get muddled. (See also the use of “PLC” as a verb.) A test cannot, in itself, be summative or formative. Instead, these words describe how the assessment results are used.
Formative uses of assessment results jibe with the regular, everyday social science of teaching. It’s what we do with our district benchmarks, unit tests, vocabulary quizzes, projects, performance tasks, tickets in and out, casual checks for understanding, independent/group work assignments, etc. We figure out what we want students to know, figure out what they do know, and then plan how to intervene for or enrich that knowledge and learning. Formative practices are all about applying the scientific method to the art of teaching so that we can refine and focus our practices to facilitate student learning.
The summative reporting of various numbers, or A-F letter grades, might make us have some feels. Our art is a continuous cycle of planning, instructing, learning, evaluating, intervening, and enriching. Having a “final” score doesn’t always feel right for a process we consider constant and ongoing. Reporting has its purposes, but rarely do those purposes coincide with the work we are always in the middle of doing. Reporting is for transparency to others not involved in our process. It has its place, but we are not its audience.
That said, an assessment used for a summative report still yields data that can inform our instructional practices.
As we design our own local assessment practices, it is important to know our answer to “why do we assess.” I suspect that the answer is far less often “to practice getting summative reports” and far more often “to inform pedagogy.” This will lead to conversations about assessments designed with the conclusions we need to make about learning in mind, which will, in turn, embed practitioner validity into those tools.
Back when I was a teacher in the nineties, I attended staff development on a regular basis. When I became an instructional coach, we called it professional development.
I want to talk about the hot topic of Student Growth, but I’m going to take the long way around. So let's begin with a little pop quiz:
The easiest method to determine growth is to take a measurement, take a second measurement at a later time, and subtract the results.
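To make that “measure twice and subtract” idea concrete, here is a minimal sketch of a simple gain score. The function and variable names (`simple_gain`, the fall and spring scores) are my own illustrations, not anything from a specific assessment system:

```python
def simple_gain(first_score: float, second_score: float) -> float:
    """Gain score: the later measurement minus the earlier one."""
    return second_score - first_score

# Illustrative example: a student scores 62 on a fall benchmark
# and 71 on the spring benchmark, for a gain of 9 points.
growth = simple_gain(62, 71)
print(growth)  # 9
```

Simple subtraction like this is easy, which is exactly why it is tempting; whether it is a *fair* measure of growth is the harder question the rest of this discussion takes up.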