Rubrics – Not the Cube

November 8, 2018 | By JR

There have been a lot of great posts over the last few weeks in the 9x9x25 challenge about feedback. I had a couple of other ideas brewing for my post this week, but it turns out they need more time to ferment. Instead, taking inspiration from what I’ve read in this RSS group’s posts, and work I’ve done over the past few years, I thought I’d take this opportunity to throw my 2 cents in regarding feedback. I guess nowadays my 2 cents has to be digital as we’re in Canada…

When I was progressing through the B.Ed program here, rubrics were a big focus for us. Make your matrix. Choose categories. Write performance criteria. Rinse and repeat. It makes sense, right? This is an efficient way (at least after you've spent loads of time creating the thing) to grade students' work and provide feedback while you're at it. I used these in my own teaching practice, carefully crafting rubrics for each assignment (as a PAA teacher this meant projects).

After I left teaching and began my ID journey I continued my work with rubrics. Helping instructors create rubrics for their class assignments. Providing PD sessions all around rubrics. Working with some instructors on co-constructing rubrics with their students for more wicked assignments. But you know what I really noticed in all of that time? The feedback, the most important part for learning, steadily fell by the wayside as more and more instructors brought rubrics into their practice. What I started to see all around me (didn't matter if it was instructors or other ed specialists) was that rubrics, operationally, were just rating scales that took forever to create. The pessimist in me started wondering what the point even was.

That is, until I read an article called Your Rubric is a Hot Mess. Let's give props to the author for that title. In the article, the author describes a single-point rubric. This style of rubric still details criteria for performance (as rubrics are meant to), but feedback is better integrated into its design. It calls on the instructor (we've actually used it in peer review settings as well) to write more feedback.

For the Communication OER project I developed with a team at Olds College, we used both analytical rubrics and single-point rubrics. The single-point rubrics were used for formative assessment on class activities such as the Using Visuals in a Document and Performing an Audience Analysis activities. Summative assessments (what we anticipated would be graded) used more detailed analytical rubrics (what you likely imagine when you hear the word rubric).

One instructor in particular comes to mind. He was fairly new to teaching and asked me how we could best assess some of the assignments in his course. We talked about the single-point rubric and he took off with the idea. After the first term using them, he came back and raved about the feedback he got from students. They commented on how much they liked this format. How many times can you think of that students made positive comments about a rubric? He now uses these for many assignments across the courses he teaches, providing students with the feedback they need to progress in the courses.

Do you have any activities where this might be useful for you?


Feedback flickr photo by Skley shared under a Creative Commons (BY-ND) license

This attribution brought to you by the Flickr CC Attribution Helper, by CogDog