By Peter Greene
If I told you that my student had achieved great things in school this year, what would you imagine I meant?
Maybe she started reading longer books with heavier vocabulary and deeper themes. Maybe she not only read them but spent time thinking about the ideas they contained. Maybe she improved her technical facility and musicality when playing her flute. Maybe she conducted an impressively complex and ambitious physics experiment. Maybe she created a beautiful and useful website. Maybe she progressed to more complex problems in algebra. Maybe she completed some impressive in-depth research on a particular historical period. Maybe she passed welding certification tests. Or maybe she packed away some chunks of learning that won’t really come to life for her until years from now.
But we have a problem in current education policy discussions; when we say “student achievement,” we usually don’t mean any of those things.
One of the great central challenges of education in general, and of teaching in particular, is that we cannot read minds. We cannot see inside a student’s head to learn what has taken root and what has taken flight.
So part of the gentle art of teaching involves the creation and deployment of performance tasks designed to get us at least a peek inside the student’s brain to see if they have in fact mastered what we tried to get them to master. It is an ever-evolving challenge, made complex by the many types of students and the many levels of learning, further complicated by the fact that the best assessment is never as accurate as it was the first time you used it (unless you believe that students never talk to each other).
Some pieces of learning are easy to measure (does the student know her times tables?) and some are much more challenging (does the student have nuanced insights into the psychological aspects of Hamlet?).
So to measure student achievement, we depend on various proxies. Once we start doing that, we are in danger of mistaking the proxy, the symbol, for the actual thing. If we’re using high-quality assessments for low-complexity learning, there’s not much danger in confusing the two; if Pat scored 100% on the times table quiz, it’s probably safe to say that Pat really knows the times tables.
But if the assessment is not high-quality, and the learning is high-complexity, we can jump to unsupported conclusions. If Chris scored 80% on a five-question multiple-choice quiz about Hamlet, we cannot safely say that Chris has a solid grip on the deeper nuances of the play.
And that, unfortunately, is where we are at the moment. Since the launch of No Child Left Behind, we have gotten in the habit of using a single multiple-choice test of reading and math as a proxy for student achievement.
The tests, like the PARCC, SBA and other newer assessments, have a host of problems of their own. For instance, studies keep finding issues with inappropriate reading levels on passages. There have been incidents like the infamous talking pineapple questions, and the poet who discovered she could not correctly answer test questions about her own poems.
But there’s an even bigger issue, and that’s the continued unquestioning use of these test scores as a proxy for the larger picture of student achievement and teacher effectiveness. It’s a mistake repeated by countless education journalists, researchers and policy wonks. It’s a quick and easy shorthand, but it’s inaccurate and misleading.
We should just stop. Instead of saying, “Strategy X was found to have a positive effect on student achievement,” we should say “Strategy X helped raise test scores.” Instead of saying, “Technique Z led to improved reading by third graders,” we should say, “Technique Z led to improved reading test scores for third graders.”
It’s not that we shouldn’t discuss standardized test results, but we should stop pretending that they represent some larger truth. We should call them by their name — not “student achievement” or “effective instruction” or “high-quality school” but simply “scores on the standardized test.” By using lazy substitution, we end up like a tourist sitting beside the Grand Canyon looking at a handful of pebbles and imagining that those pebbles tell us everything we need to know about the vast beautiful vista that we are not bothering to see.
After all, if I told you that my child achieved great things in school this year, your first thought would not be, “Oh, good test scores!” Let’s use words to mean what they actually mean.