Our district purchased a new writing program last summer. I was on the adoption committee, and I’m also on the committee charged with figuring out how best to implement the program in the 20 or so elementary schools in our district. So it made sense for me to choose writing scores for my TPEP Student Growth Goals. Right?
But there’s an inherent problem with using teacher-generated writing scores for evaluative purposes. No matter how well-trained we are in the use of a scoring guide, writing is a notoriously hard subject in which to remain objective. Consider this element, which comes from the fourth grade scoring guide for informational writing:
“Includes different kinds of facts and details such as numbers, names, and examples.”
There are dozens of ways to interpret this, and just as many clarifying questions to ask, beginning with “what exactly is a fact?” And remember, I’m sitting on the committee that studied this program intensely before we bought it, and continues to study it for two hours once a month!
So when I sit down to score student writing, it’s hard to stay objective. A lot harder than math, where the answers, for the most part, are cut and dried. And it’s even harder when I know the data will come up again in my TPEP conference when we talk about Student Growth Goals. After all, it’s in everyone’s best interest to give the benefit of every doubt to the student: I look good, the student looks good, the parents are happy, the students are happy, my evaluator is happy, and ultimately, I’m happy. It’s a win-win!
Except it isn’t. First of all, successful elementary writing programs depend on consistent implementation. My twos, threes, and fours need to look like everyone else’s twos, threes, and fours. Next year my fourth graders will be in fifth grade, with a teacher using the same program. The last thing that teacher needs is a bunch of kids writing at a level two – but used to getting threes – suddenly getting the twos they deserve.
But more importantly, the whole point of teaching writing is to develop competent writers: writers who understand where they are and where they need to go. A scoring guide is designed to facilitate that. Used properly, a scoring guide tells the student exactly which writing elements have been mastered and which are still in development. Used improperly, a scoring guide provides false information; the student thinks a skill is mastered when it isn’t. We owe it to our students to use scoring guides properly.
I’ve talked to a lot of college teachers, and many of them use inter-rater scoring to keep their grades reliable. High school and elementary teachers do this as well, and it’s probably the best way to make sure scores are consistent. The problem, of course, is time, and the lack of it. Not only that, but most grade-level teams have a hard time staying in sync. You’d think it would be easy, but all three fourth grade teachers in my school are in three different places in pretty much every subject.
That said, inter-rater scoring seems like the best way to make sure we’re giving our students the accurate scores they need. And that’s where I think I’ll try to steer my team, although I’m not sure how to go about it.
But I am sure about one thing: I’m not using writing scores for my Student Growth Goals again. The temptation is just too great. (Rest assured: I was as objective as possible while scoring my students’ writing. They all showed growth, which would be expected in any situation, but especially in a year when we’re transitioning to a new program.)
How about you? How do you ensure accurate writing scores? And have you had any success using writing for Student Growth Goals?