Nevertheless, the subject matter interested me, so I read the essays at the link above. There were some reasonable points made, but I thought the treatment inadequate on the whole. So I will cover here the issues as I see them, both for the assessment of writing on a standardized exam like the SAT and for the assessment of writing during regular instruction, as well as the relationship between the two.
Let me begin with what we mean when we use the word "assess." The following is taken directly from Dictionary.com.
[uh-ses]
verb (used with object)
1. to estimate officially the value of (property, income, etc.) as a basis for taxation.
2. to fix or determine the amount of (damages, a tax, a fine, etc.): The hurricane damage was assessed at six million dollars.
3. to impose a tax or other charge on.
4. to estimate or judge the value, character, etc., of; evaluate: to assess one's efforts.
Definition #4 seems most apropos. I like the use of the word "estimate" in the definition. It suggests some lack of precision, such that different people assessing the same work might come to somewhat different conclusions. It also suggests that some variation across works may nonetheless produce the same assessment. I will concentrate on this second issue here and ask whether variation in the works is a good or bad thing.
Let me start with a very mundane example to get the ball rolling, one taken outside the school setting so as not to make it too difficult to consider. When somebody has a life event, Facebook friends will often post something to that person's Wall. My experience with reading these is that there is very little meat to the postings and most say essentially the same thing. I'm going to assume most of my readers will have observed something quite similar. I want to note that as a writer of such Wall posts, I treat a birthday quite differently than the death of a loved one, though most other people don't make this sort of distinction. My general view is that the writer's job is to make a novel contribution rather than echo what everyone else has said. If I know the person well enough that I'm confident they won't think ill of me for trying, I will deliberately attempt to differentiate my message from the rest of the crowd, with a pun, a stab at something clever, or just plain silliness. On more solemn occasions this sort of playfulness doesn't seem appropriate, and there the message is simply showing support, so it is much more like the other posts.
This, then, is the first issue with assessing writing: is variation across pieces produced by different students desired or not? I have never queried my Facebook friends on this, so I don't know the answer, but let me suggest there are two possible ways to consider the birthday messages. One is that most other people think a birthday, too, is a time to show support, and that is what they are doing with their posts. I disagree with that, but surely it is one possible explanation for what is observed. This brings to mind a recent piece in Slate, "Inside the Box: People don't actually like creativity." A different possible explanation is that most people would prefer to generate a witty, original post, especially if they could do so in the time it takes to write the post they end up with, but they don't practice doing such writing, so they view it as out of reach.
It would be absurd, of course, to assign letter grades to posts on somebody else's Facebook Wall (or on one's own Wall, for that matter). But note that if we took this counterfactual as the norm, two different grading schemes would emerge, and which is the right one depends on whether variation is perceived as a good or bad thing. If it is bad, then on those birthday posts most everyone else gets an A, while I get a B or C. That I do generate such variation signals that I believe the other grading scheme to be the better one, in this case.
Now let me elevate from the mundane to newspaper writing. With straight news, a reader would expect little variation from one reporter to the next writing the same piece, with what variation there is explained perhaps by differences in the legwork done to generate the story or by modest stylistic differences that the editor tolerates because they add flavor to the piece. In contrast, you'd expect much more variation in Op-Ed pieces, such as the essays in this Room for Debate discussion on assessing student writing. Each essay reflects the author's worldview, which in turn depends on the author's prior thinking and relevant experience. This makes the writing much more idiosyncratic, a good thing if you like to read Op-Ed, as I do.
Presumably we'd like students to be able to write in both ways, as a neutral observer and as one who has a considered opinion on the matter. I have not reviewed sample writing questions from the SAT in preparing to write this piece, so I could be wrong here, but I suspect there is bias in the testing in favor of the neutral-observer type of writing. This is particularly problematic for the teaching of writing. The book report is an early form that students produce in elementary school, and in the conclusion the student is to say whether she liked the book or not, presumably with a sentence or two of justification as to why. To be a neutral observer and yet care about what is being observed takes sophistication and a lot of practice. What I suspect happens all too often is that the student is tasked with being a neutral observer but has only extrinsic motivation, provided by the assessment itself. That is not a good way to teach.
Let me turn to the next issue, suggested in the previous paragraph, which relates the assessment of writing to the assessment of reading. The SAT and other standardized tests treat these as separate assessments. This means it is possible for a student to do well on one and poorly on the other. Abstracting from English-as-a-second-language issues, and other possible explanations that might rationalize such a negative correlation, how is one to make sense of somebody who seems to be a good writer but a poor reader, or vice versa? To me, there should be a strong positive correlation between the two. And even if on occasion one finds a student who writes well but reads poorly, shouldn't reading and writing be assessed holistically rather than separately?
This amounts to asking the following: is it intellectually the same thing to write up one's own interpretation and conclusion of what a reading passage says as it is to select one answer in a multiple-choice question, where each proffered answer provides an interpretation and conclusion? I don't know, but here is what I surmise. When the student writes up the answer, there is far less doubt about what the student is thinking, as long as the student is comfortable expressing herself in writing. With the multiple-choice question, the student's thinking is masked. For right answers we can't tell whether the student knew it or got lucky, while for wrong answers we can't tell whether the student thought it was the right answer or merely guessed. (I should add here that asking the student to explain aloud the interpretation and conclusion from the reading passage is similar if not identical to getting such an explanation in writing, and may be preferred to writing for on-the-spot diagnosis of any fundamental misunderstandings.) I am quite fearful that there are many good test takers who nevertheless can't provide good written explanations of what they've read.
This gets me to the final point. One larger goal for K-12 education is that students learn to read and write for themselves, meaning the students find these activities engaging and so participate in them willingly. This means the student will do these activities both in and outside the school setting. It also means the student will engage in these activities to promote her own personal growth. The standardized testing might be neutral on this front if it weren't so heavily emphasized. But with its emphasis, we likely are getting perverse negative reactions: these activities become for school only, so students plateau far too early in their skills as both readers and writers.
One wonders whether school might provide a different sort of assessment that is not test prep, with weekly reading and writing projects for which the primary goal is that students do them. Feedback would be important to encourage student involvement and to suggest ways to improve. But letter (or numerical) grades could quite possibly be counterproductive, producing the same sort of reaction that the standardized tests produce. The focus would be on student motivation for the doing. The current approach of keeping students in lockstep in their reading would have to be discarded in favor of an individualized approach, which would be much better at sustaining student interest.
It seems to me we could do much better this way. But then, what I find obvious doesn't seem to hold as much appeal for the crowd.