Data… oh data…
Schools, teachers, administrators, and school boards are being asked left and right to “use data” to drive the school improvement process. I put “use data” in quotes because that term… “use data”… is becoming about as commonplace and vague as the term “school improvement” itself.
Perhaps it would help if we had a better understanding of what it means to “use data”. Personally, I’d like to add a few words in there. How about instead of “use data”, we focus on “using GOOD data WELL”?
Here’s what I’m saying: teachers all over Michigan are being told that they are being evaluated, in part, on how well they “use student achievement data to make instructional decisions” (or something like that…). So, that means… what, exactly?
Are we talking about class averages on summative assessments driving comparisons between two instructors teaching the same course?
Are we talking about teachers deciding to slow down and reteach because of low formative assessment data?
Are we talking about teachers making decisions about what to do with the last two weeks of the semester because their students’ grades are lower than they’d like and they need to do something to inflate them?
All of those examples are decisions made using student achievement data. Are they all effective uses? Are they all using good data well?
I’ll tell you when this hit me. I was preparing a written report summarizing the summative assessments at the end of a geometry unit (a requirement in our district) and I was describing what I thought was contributing to my students’ low unit test scores. In general, this particular test is usually a tough one for the students. It is the first time they’ve seen a math test completely devoid of number-crunching (gotta love the proving part of geometry!) and that leads to some fairly predictable avoidance behaviors. That is, students avoid practice AND ignore the feedback on their formative assessments, two things that are going to compound the frustration on what is already a frustrating unit. But, alas… I have yet to provide any data to support this conclusion. (Remember, in Michigan the evaluations are becoming more and more focused on how teachers use data to make decisions.)
So, I thought of something. I embrace the practice of allowing students to retake assessments. Now, the nice thing about this potential data set is the retake process is VOLUNTARY! So, the frequency of retaken assessments can give us some indication as to how engaged the students are in one non-mandatory achievement support mechanism.
So, how many formative assessments were retaken during the unit leading into the test that demonstrated the low results? 1.1% (4 retakes out of 360 student-assessments).
Is that a meaningful data set? Well, the students performed a ton better (on average) on the Unit 1 test, and the retake rate was up over 12%. Not as well on Unit 2, retake rate 6-ish%, and now quite poorly on the Unit 3 test with a retake rate of 1.1%.
There appears to be a correlation (although with only three data pairs to consider, it isn’t really something worth talking about), but which came first? Is the material more difficult, so it drove down engagement in the retakes? Or did the reduced engagement in the retakes drive down achievement?
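For what it’s worth, the arithmetic behind those claims is simple enough to sketch. The retake rates below come straight from the numbers above; the unit test averages are HYPOTHETICAL placeholders (the post only describes performance qualitatively as “a ton better,” “not as well,” “quite poorly”), included just to show how thin a three-point correlation really is:

```python
# Sketch of the retake-rate comparison described above.
# NOTE: the unit averages are made-up placeholders, not real data.

def retake_rate(retakes, student_assessments):
    """Fraction of student-assessments voluntarily retaken."""
    return retakes / student_assessments

def pearson_r(xs, ys):
    """Pearson correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Unit 3: 4 retakes out of 360 student-assessments.
print(round(retake_rate(4, 360) * 100, 1))  # 1.1

# Three (retake rate %, HYPOTHETICAL unit average %) pairs.
retake_rates = [12.0, 6.0, 1.1]
unit_averages = [82.0, 74.0, 63.0]  # placeholders only
print(round(pearson_r(retake_rates, unit_averages), 2))
```

With only three pairs, the correlation coefficient comes out large almost no matter what plausible numbers you plug in, which is exactly why it isn’t worth much on its own, and why it says nothing about which direction the arrow of causation points.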
Quite frankly, I don’t know. But, I have data.
And I know this: My conclusion and subsequent instructional changes are going to depend quite heavily on how I answer the questions I just asked. If I feel like low engagement in the retakes is a cause of the low achievement, then my changes are going to be motivational and structural, with the goal of getting more students to use the feedback on the first-try formative assessments to prepare for a second-try.
If I feel like low engagement in the retakes is a symptom of my instruction being poor during the unit, then I will need to create/steal new activities to drive my instruction.
Oh, and none of that answers (perhaps) the first important question: Is formative assessment retake rate even a useful data set? (I don’t know the answer to this question either, by the way.)
There, I’ve used data to drive my decision-making… sort of.
Question: is this really the work we want our teachers doing? I can see the possibility for a variety of valid arguments stemming from that question. If so, what guidance can we give them in deciding which data sets are effective? And how to use them?