What does student growth mean in practice?

July 14, 2023 | By Ed Cunningham

I want to talk about the hot topic of Student Growth, but I’m going to take the long way around. So let’s begin with a little pop quiz:

Which of these statements is more meaningful?

  • Kamala earned a scale score of 1621 on Grade 4 STAAR Math this year.
  • Kamala scored within the Meets Grade Level performance level on Grade 4 STAAR Math this year.

How about these statements?

  • Julio earned Approaches Grade Level on Grade 3 STAAR Math last year, and the difference between his Grade 4 STAAR Math and Grade 3 STAAR Math scale scores is 105.
  • Julio’s Progress from Grade 3 to Grade 4 on STAAR Math was Expected.

As educators, we can’t do much with a 1621 or a 105. We need the words and descriptors that go with these numbers to help us figure out what each of our students might need from us.

Often in local assessments, we assign percent scores to student results based on the number of items they get correct. Then we attempt to attach meaning to these results. Since 70% of the questions were correct, Maria gets a passing grade in the grade book (and can take part in extracurricular activities). But this is a summative use of a score. If we are trying to inform potential intervention or enrichment needs, what does this 70% mean? If the assessment were a vocabulary quiz, could we consider 70% to be “passing”? What if it were an end-of-unit exam? How many items on the assessment were DOK 1, DOK 2, or DOK 3?

So now we are on a slippery slope. We hear someone ask, “What percent scores on STAAR equal the different performance levels?” We do a few conversions, end up with some cut scores, and try to call them “approaches, meets, and masters” for our local tests. We have ended up in a space where we are treating our local fractions unit tests as though they sit on the state’s STAAR scale. We might even find ourselves subtracting percent scores to try to identify something that might look like a “growth” measure.

A Celsius temperature reading cannot be subtracted from a Fahrenheit one because the two do not use the same scale. To compare them, you have two options. If you need to know the actual temperature difference, convert them to the same scale first. If you just want to know whether to bring a sweater or a jacket, labels like cold, cool, warm, and hot will suffice. It was cold this morning, it is warm right now, but it will cool down some at sunset, so I still need at least a sweater for later. I am using measurements to draw conclusions and then make an action plan. It is the action plan that ultimately matters to me.
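If it helps to see the arithmetic, here is a minimal sketch in Python. The temperature readings and the label cutoffs are invented for the example; only the Fahrenheit-to-Celsius formula is standard:

```python
def f_to_c(f):
    """Convert a Fahrenheit reading to Celsius."""
    return (f - 32) * 5 / 9

def comfort_label(c):
    """Map a Celsius reading to a coarse comfort label.
    These cutoffs are invented for this illustration."""
    if c < 10:
        return "cold"
    if c < 18:
        return "cool"
    if c < 27:
        return "warm"
    return "hot"

morning_f = 48  # a Fahrenheit reading (hypothetical)
noon_c = 24     # a Celsius reading (hypothetical)

# Wrong: 24 - 48 mixes two scales and means nothing.
# Right: convert to one scale, then subtract.
difference_c = noon_c - f_to_c(morning_f)
print(f"Actual warm-up: {difference_c:.1f} degrees C")

# Or skip the arithmetic and compare labels instead.
print(comfort_label(f_to_c(morning_f)), "->", comfort_label(noon_c))  # cold -> warm
```

Either route works; what you cannot do is subtract the raw numbers across scales and pretend the result means something.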

The same is true with measurements of student learning. We as educators assess student knowledge and skills through a variety of formal and informal methods. We do this to draw conclusions about learning to make action plans for first-time instruction, intervention, and enrichment. It is those plans that matter for our pedagogical practices.

Our challenge: how can we give our varied assessment data meaning for drawing conclusions? A 70% on a vocabulary quiz does not exist on the same scale as a 70% on an end-of-unit exam or a 70% on STAAR. So why would we treat these numbers as though they do? Instead, why not treat student performance on our local assessments as something that needs performance levels with descriptors? Maybe a vocabulary quiz, with its focus on DOK 1 items, has two performance levels: Satisfactory and Needs Review. A unit exam might need more nuance and could have three levels like STAAR: Approaches Unit, Meets Unit, and Masters Unit. A student who consistently earns “Satisfactory” on quizzes and “Meets Unit” on bigger exams might benefit from practice with DOK 3-level assessment tasks. One who often earns “Approaches Unit” might have deficits in applying their knowledge.
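To make that concrete, here is a hypothetical sketch in Python. The level names come from the paragraph above, but the cut scores and the fallback level are invented placeholders that a campus or district team would have to set based on each assessment’s blueprint:

```python
# Each assessment type carries its own performance levels with descriptors.
# Cut scores below are placeholders, not recommendations.
PERFORMANCE_LEVELS = {
    "vocabulary_quiz": [          # mostly DOK 1 items: two levels suffice
        (80, "Satisfactory"),
        (0,  "Needs Review"),
    ],
    "unit_exam": [                # more nuance: three levels, like STAAR
        (85, "Masters Unit"),
        (70, "Meets Unit"),
        (55, "Approaches Unit"),
        (0,  "Does Not Meet Unit"),  # invented fallback for scores below the lowest cut
    ],
}

def performance_level(assessment, percent):
    """Return the descriptor for a percent score on a given assessment.
    Cut lists are ordered highest first, so the first match wins."""
    for cut, descriptor in PERFORMANCE_LEVELS[assessment]:
        if percent >= cut:
            return descriptor

print(performance_level("vocabulary_quiz", 70))  # Needs Review
print(performance_level("unit_exam", 70))        # Meets Unit
```

Notice that the same 70% lands on different descriptors depending on the assessment, which is exactly the point: the number only gains meaning through the level it maps to.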

Now we can discuss student growth.
