Our instructional leadership team recently analyzed the last two years of writing assessment data. We use a commercially developed rubric to score student writing in the fall and in the spring. As I presented the team with the results, visualized in graphs and tables, we tried to make sense of the information.
It didn’t go as well as planned.
To start, we weren’t evaluating every student’s writing; for the sake of time and efficiency, we scored only half of the high, medium, and low pieces. This was a schoolwide evaluation, so it did not give teachers specific information about their own students. Also, the rubric changes as students get older: expectations increase even though the main tenets of writing quality stay the same, which makes it hard to compare apples to apples from one year to the next. In addition, the subjective nature of scoring writing, especially a reader’s response, can cause frustration.
In the end, we decided to select one area of growth we would focus on as a faculty this year, while maintaining the gains already made.
Anytime I wade into the weeds of literacy assessment, I feel like I come out messier than when I entered. I often have more questions than answers. Problems go unresolved. Yet there have to be ways to evaluate our instructional impact on student literacy learning. It’s important that we validate our work and, more importantly, ensure students are growing as readers and writers.
One tried-and-true assessment is the running record for reading comprehension, now standardized through products such as Fountas & Pinnell. It is time-intensive, however, and even the best teachers struggle to give up instructional time and manage the rest of the class while administering these assessments. Running records are the mainstay assessment tool for Reading Recovery teachers, who work one-on-one with 1st grade students.
Another method for evaluating student literacy skills at the classroom level is observation. It is less formal than a running record: teachers can watch a student’s interactions with a text. Does the student frustrate easily? How well do they apply their knowledge of text features to a new book? The information is almost exclusively qualitative, though, which makes the results harder to analyze.
One tool for evaluating students as readers and writers that doesn’t get enough attention (in my opinion, anyway) is the student survey. How students feel about literacy and how they see themselves as readers and writers is very telling. The challenge is that plenty of survey tools exist, but few have much validity or reliability behind them. One exception, Me and My Reading Profile, is an example of an evidence-based survey.
To summarize, I don’t have an answer here so much as a challenge I think many schools face: how do we really measure literacy success in an educational world that needs to quantify everything? Please share your ideas and experiences in the comments.