Friday, February 1, 2013

My colleagues and I gave a common assessment!

Leafstrewn is renowned for its independent and distinctive teachers.  Or, some might say, we're notorious for our lack of consistency.  In any case, every teacher has her own way of doing things.  We can’t agree on common policies and practices, let alone enforce them. If ten people discuss an issue, there will be passionate defenses of twenty different opinions.  Our English department chair uses the phrase “herding cats” more frequently than she probably realizes.  So when the ninth grade teachers—all ten of us—decided to make half of our midyear exam a common assessment, I wasn’t sure if it would actually happen.

Amazingly enough, we did it.  Every freshman at Leafstrewn was given the same two-page passage from a Sherman Alexie story and the same prompt for an analytical paragraph, and every paragraph was graded with the same 5-category rubric.  To make the grading more objective, we graded not our own students’ exams but our colleagues’.

We will eventually sit down and consider the numbers, but I have a few reactions now:

(1) reading a passage and writing an analysis of it is an extraordinarily complex task, which is great, but it’s pretty hard to assess in an objective way;
(2) the strength of rubrics is that they are specific and explicit, but this is also their weakness;
(3) everyone got a B;
(4) that’s okay!
(5) I could have done a much better job of preparing my students, and that preparation would have been better not just for this assessment, but in general.

(1) Reading a passage and writing an analysis of it is an extraordinarily complex task, which is great, but it’s pretty hard to assess in an objective way

There is an incredible amount to keep track of when you’re reading anything closely—emotions, connotations of particular words, figures of speech, intertextuality, patterns (like repetition and contrast) within the text, etc.  Writing, too, is really, really complicated—you have to master grammar, ideas, structure, logical arguments, relevant evidence, and so on.  Skilled readers and writers do all this unconsciously, and we sometimes forget how amazingly complicated it is; our brains, even the most “limited” of them, are quite incredible.

This complexity becomes much more visible when you start talking about how to judge the quality of student reading and writing.  Different teachers have different ideas about which pieces of it to focus on.  It’s like the story of the blind men and the elephant: I’m looking at the elephant’s legs, one colleague is looking at the trunk, another is bumping into the tusks, and so on.  Designing a rubric is tough, because there are always things that you’re leaving out, or looking at from only one side.

(2) The strength of rubrics is that they are specific and explicit, but this is also their weakness

Much of the trouble in grading the assessment came in using the rubric.  A rubric is intended to make the grading more transparent and clear, and, most crucially, more specific.  If a student is told, “Your essay is bad,” the student will want to know what in particular was bad about it.  A rubric is supposed to offer the kid that specificity.  What was interesting about the grading process was that the specificity of the rubric was usually exactly what caused trouble in the grading.

For example, if we judged that a conclusion was not good enough for the “Good” category, we had to circle the box on the rubric for a conclusion that “Needs Work.”  That box reads: “Brings paragraph to a finish that repeats previous ideas.”  The problem is that there are many ways for a conclusion to be bad, and repeating previous ideas is only one of them.  I ran into this problem of overly specific descriptors in every single category on the rubric.

(3) Everyone got a B
Whether because our rubric was too easy at the low end and too tough at the high end, because our students are all pretty good, or because we did a good job of preparing them, most of the grades fell in the B range.

(4) That’s okay!
I think one of the lessons here is that, for all our hand-wringing, our kids are really quite competent.  They can read a passage and write a reasonable paragraph about it.  They are not illiterate.  Almost all of them managed to come up with identifiable topic sentences, evidence that more or less supported their main ideas, and a conclusion that in some way related to what they were saying.  This is no mean feat for a fourteen-year-old, and I wonder, I admit, if it has something to do with MCAS.  Maybe, as our department chair says, MCAS has really improved kids’ ability to write these kinds of paragraphs.

(5) I could have done a much better job of preparing my students, and that preparation would have been better not just for this assessment, but in general!

I think this common assessment was a great thing to do.  Having students read something and analyze it in a disciplined way is worthwhile, and doing it as a group certainly made my own teaching better.  I was more focused, my students were more motivated, and it took some of the dissonance out of the grading.  (Normally, when we grade our own students’ work, there is an uncomfortable dissonance.  It is as if Bela Karolyi were to judge his own gymnasts’ routines, or as if a soccer coach were training her team for a game against herself.)  In a fairly short and stress-free preparation, I think I did a reasonably good job.

Nevertheless, although my students' performance was fine, there was a lot of room for improvement.  How could I have helped them more?  What could I have done better?  A bunch of things, but here's one: I didn’t train my students well enough in coming up with a good main idea.  They tended to say something like, “The impression the author creates in this passage is of a family that is struggling.”  That is pretty obvious, and I need to help my students learn how to go deeper.  To take an obvious thought and push it deeper, one may:

  * Explore the why of the obvious thought (e.g. the family struggles because they’re in denial).

  * Consider ways in which the opposite is true as well, and craft a semi-dialectical topic sentence of the "Although A, nevertheless B" type--and then by the end of their paper they may arrive at the synthesis of C.  Later in life they can worry, Mr. Ramsay-like, about getting to Q or Z.

  * Explore the how of the obvious thought (e.g. the family struggles ineffectually, trying the same things over and over again even though they produce no results: the father looking in his wallet over and over, the son dreaming over and over, etc.).

  * Are there other ways?  Applying a schema?  Making a connection?  What else?

Teaching my students to push their thinking further would be useful not merely for the exam, but in general.  This is something that would be useful to focus on explicitly, and that I somehow overlooked.  That is one of the virtues of this common assessment--it makes the whole process more conscious and transparent, and so lets us see things that we should have seen before.

In the end, though, we and our students did a fine job.  Now if we could just get them to like to read...
