In the last couple of decades, baseball has gone through a statistical revolution because of a fairly simple question: “How good is that baseball player?” For the previous eight or ten decades, a limited set of metrics was used to evaluate ballplayers. Hitters, for example, were evaluated by how often they got a hit (batting average), how often their at bats produced runs (RBI), and how many home runs they hit. Pitchers were evaluated by the number of batters they struck out, how many runs the other team scored that the pitcher “earned,” and how many games the pitcher started in which the pitcher’s team won (or thereabouts… the pitcher “win” is actually a pretty odd, and seemingly useless, stat).
What made those statistics appealing is that they were fairly easy to compile and communicate.
But there was a problem: traditional metrics didn’t tell enough of the story. Perhaps a pitcher earning a win had more to do with them pitching for a team that scored a lot of runs. Perhaps a hitter with a lot of home runs played in a home ballpark with shorter fences. Perhaps a hitter with a higher batting average rarely walked and hit into a lot of double plays. The traditional metrics became difficult to trust (especially to an owner deciding to commit tens of millions of dollars to a player). So, new statistical measures were developed that attempted to factor in all of the nuanced information that baseball can provide. (Read all about it…)
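To make the contrast concrete, here’s a minimal sketch (in Python) of one of the simpler composite stats that came out of that revolution: OPS, or on-base plus slugging. The formulas are the standard ones; the stat line in the example is invented for illustration. The point is just that one composite number folds in walks, extra-base power, and outs in a way batting average alone can’t.

```python
# OPS (on-base plus slugging): a simple composite metric that combines
# how often a hitter reaches base with how many bases his hits are worth.

def batting_average(hits, at_bats):
    # The traditional stat: hits per at bat.
    return hits / at_bats

def on_base_pct(hits, walks, hbp, at_bats, sac_flies):
    # Times on base divided by the plate appearances that count toward OBP.
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slugging_pct(singles, doubles, triples, homers, at_bats):
    # Total bases per at bat: a double counts twice, a homer four times.
    total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
    return total_bases / at_bats

# A hypothetical season: 500 AB, 150 hits (100 1B, 30 2B, 5 3B, 15 HR),
# 60 walks, 5 hit-by-pitch, 5 sacrifice flies.
ba = batting_average(150, 500)
obp = on_base_pct(150, 60, 5, 500, 5)
slg = slugging_pct(100, 30, 5, 15, 500)
ops = obp + slg

print(f"BA {ba:.3f}, OBP {obp:.3f}, SLG {slg:.3f}, OPS {ops:.3f}")
```

Two hitters with identical .300 batting averages can have very different OPS values once walks and extra-base hits are counted — which is exactly the kind of nuance the traditional stats missed.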
Education is dealing with a similar issue. We are trying to become data-driven. We want to use data to tell us what the education world is like. So, we started measuring things. Important things… like literacy.
Well, this floated across my Facebook wall…
One can fairly easily derive the intended meaning of this meme. It would seem like “ParentsForLiberty.org” would like us to think that in 1850, the world was a much more educated place because 98% (of… something…) were literate.
But let’s examine the statement “literacy was at 98 percent.” Talk about a loaded statement! 98 percent of students could decode a text? 98 percent of eligible voters could read the ballot? 98% of families owned a book? What does “literacy was at 98 percent” mean?
Maybe there was a test, and 98% of the kids passed it, which is good, except perhaps before Massachusetts made education compulsory, only kids who could read went to school.
Literacy is complicated. There are some parts of education that are easy to measure (for example, attendance, homework completion, correct multiple-choice answers, grade-point average). We’d like literacy to be easier, so we invest in tests like DIBELS that attempt to take a student’s literacy and boil it down to a set of ratings that are easy to communicate. The ACT does the same thing with college readiness. College readiness is complicated, too, but reading an ACT score isn’t complicated.
We’ve tried to quantify as much as we can. We’ve tried to quantify student performance, teacher performance, curriculum performance. We want to know how well they are working. We want to know where we are being successful and where we are letting our students down. That’s a good thing.
The problem is that there are some incredibly important parts of education that are very difficult to measure: the impact of an individual classroom-management strategy on student achievement, student engagement, scheduling classes to optimize student achievement, or the role of extracurriculars. These are HUGE questions with answers that are not easily quantified. And most school districts are without the means (time, money, qualified personnel) to do the in-depth analysis necessary to achieve a well-rounded look at a complicated issue like overall student achievement for each student each year. So we substitute some easier-to-obtain metrics like DIBELS scores, ACT scores, and grade point averages.
And those have become our Pitcher Wins, RBI, and Home Runs. They don’t tell us nearly enough of the story.
Where’s our sabermetrics? Where can education go to see the stats that can combine to provide a more three-dimensional look at our system, our teachers, and our students? I understand why baseball got the first turn with the statisticians. There’s way more money in it. Maybe some of you stats folks who have decided that your financial future is secure wouldn’t mind e-mailing me. We’ll sit down. I’ll share with you the data I have (a ton) and we can develop some formulas that produce some metrics. Maybe you can tell me how well that curriculum program is working? How about what kind of environment a particular student performs best in? Which types of literacy patterns are strong predictors for future struggles in mathematics or science?
I look forward to hearing from you.