In student assessment, who gets to be “right”?
If you’ve been in education leadership for more than 15 minutes, you’ve found yourself in the middle of a go-nowhere argument: educators, district leaders, and state accountability folks all insisting that They Know how well your students are really doing.
So what do you do with these Three Blind Monks of education? Make them work for you.
Below we’re going to briefly explain the problem (in fun parable form!), meet the characters, and then help you use their insights to serve your students.
Ready to solve one of education’s most aggravating problems?
The Three Blind Monks
As the story goes, three blind monks were brought an elephant. Each was asked to describe to the others what he thought an elephant was.
- The first reached out and touched a leg, saying “This is round, tall and stout. It must be a tree!”
- The second grabbed the trunk and replied “No, no! It is long and slithery. It is definitely a snake.”
- The third felt the elephant’s side and insisted “You’re both wrong. It is broad and sturdy. It is clearly a wall.”
And on they argued, each believing only what they’d felt for themselves.
The Blind Monks of School Accountability
In the eternal argument of accountability, we’ve got three main positions. Each has validity, but as with the elephant, none gives us the complete picture of student and school performance.
Monk 1: State Exam Advocates
These folks argue that the state test is the only one given to every student across the region, so it’s the only fair way to compare schools and classrooms.
District-based and classroom assessments have two flaws as school-level measures:
- using them would be like making the prosecutor into the judge;
- their sample sizes are way too small to give real insight anyway.
District & classroom tests have their place, but the “real” accountability meat is in the state exams.
Monk 2: District Exam Advocates
A smaller voice in the debate, but powerful within their own districts, these advocates see their exams as the best of both worlds. They argue that the state exam has many flaws and is out of context, while teachers are too close to the students to be objective. They like district exams because they’re both tailored to local kids & curriculum and given to enough kids to make valid comparisons.
Monk 3: Classroom Assessment Advocates
There’s a very strong voice in the debate arguing that we should leave it up to the teachers, who are with the kids every day. These folks say it’s pretty simple: if you want to know what kids have really learned, skip the standardized tests and go talk to the teacher.
All three are, in some ways, right. There’s power in each type of assessment. The trick is in two things:
1. Matching metrics to goals
I hate to be a broken record, but the whole process starts with a question. What do you need to know?
Once you have a clear, well-defined question, you can build the metric to answer your question. Some metrics will require classroom data, others will require district data, and some are best done at the state exam level.
District and school personnel have a wonderful point: state assessments paint, at best, an incomplete picture and, at worst, an inaccurate one. District and school data can really help to round out the views of performance.
For example, if students in a school are chronically underperforming, is it because of the school’s lack of quality, or did the students begin several years behind?
District and teacher assessments can pick up on some of these fine points when the state exam fails to do so.
But if you’re looking to decide how well your students are doing as a whole, state data can be the best way to benchmark against the big picture.
2. Triangulating to answer hard questions
I knew a teacher with a wonderful reputation. She taught middle school math and was revered throughout the district. As the district pedagogy coach, I often heard stories about her exceptional skills.
One year, as the state exam cycle began, she let loose her frustrations to me. She thought the exams were pointless and completely invalid. “After all,” she said, “these exam results show that my students don’t know how to do order of operations problems. I taught them order of operations for weeks. Every single one of these kids passed my exam. They know order of operations. I’m sure of it.”
I looked at her exam results, and, sure enough, all of her students had gotten an “A” or a “B” on that unit exam. I also looked at the state exam results, and the average student had gotten only one or two of the order of operations problems right. Clearly these results were in conflict. But I knew it was going to be a tough conversation.
Instead of taking sides, I focused the conversation on the strange results. Finally, we agreed that we should give the kids a third assessment to figure out what had happened. I would write the test, she would administer it, and the results of this exam would help to settle the question.
This sort of thing happens all the time. And when it does, we tend to believe our own exams. Instead of this, we need to develop a culture of healthy questioning. We need to learn to dig deeper and find out more.
As a result of our digging, the teacher and I discovered a blind spot in her teaching. As a dedicated professional, she embraced the opportunity to improve her practice and her students’ learning deepened significantly.
Like the blind monks, there’s no single “right” answer to the complex questions of school and student performance. The benefit we get from different perspectives is the invaluable opportunity to discover our own blind spots and improve our schools for all kids.