We Underestimate the Testing Effect
The opening plenary session of EDEN was mostly about assessment. Although it goes by the name of the testing effect, a number of the tools the presenter showed included assessment techniques beyond the straight-up MCQs in perpetuity we might normally assume.
(e-)assessment – exam – evaluation – certification
The presenter began with the Ebbinghaus forgetting curve, which “…hypothesizes the decline of memory retention in time. This curve shows how information is lost over time when there is no attempt to retain it.” (Ebbinghaus lived 1850–1909; the curve itself dates from his 1885 work. I’d heard of it loads before, but didn’t realize it was that old. I wondered if it still held up given its age, as those in edtech tend to either blatantly ignore history or just get it plain wrong. Turns out a replication study done in 2015 turned up similar results.) If you’ve read Make It Stick, then the presenter’s mention of distributed practice, or the spacing effect, should sound familiar. As should interleaving.
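The forgetting curve is commonly approximated as exponential decay of retention, R = e^(-t/S), where S is a memory-strength parameter. A minimal sketch of that approximation (the function name and the stability value are illustrative assumptions, not from the talk or from Ebbinghaus’s original data):

```python
import math

def retention(t_hours, stability):
    """Exponential approximation of the Ebbinghaus forgetting curve:
    R = e^(-t / S), where S ('stability') controls how fast memory fades.
    Returns the fraction of material retained after t_hours with no review.
    """
    return math.exp(-t_hours / stability)

# With no attempt to retain, retention drops off steeply at first:
for t in [0, 1, 24, 48]:
    print(f"after {t:3d}h: {retention(t, stability=24):.2f}")
# At t = S (here 24h), retention is e^-1, roughly 0.37.
```

Spaced practice, in this simple model, amounts to resetting (and, ideally, raising) the stability parameter with each successful retrieval, which is exactly the lever the testing effect pulls.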
The presenter defined the “testing effect”: testing or evaluation itself has a major and very positive effect on learning or memorizing information (emphasis mine). Memorizing is fine, I suppose, if that’s the only goal. I get that it can be a building block towards application, but to focus solely on it is a mistake in my opinion. I’ve been studying for martial arts exams lately, and some of what I need to know is translations, but also matching body motions to the commands given to me (the commands are in Japanese, and I only know the words, a basic translation, and the kinesthetic relationship, so in this case memorization is a building block).
osmosis.org has videos explaining the testing effect
One thing that often gets overlooked in formative assessments that really lean into the testing effect is feedback. By contrast, this presenter strongly advocated that testing needs to include feedback IN the test. Immediately.
“Examinations are the most effective way to learn” Bert Wylin
— Dr. Lisa Marie Blaschke (@LisaMBlaschke) June 17, 2019
This tweet relates to the kitchen metaphor the presenter used (although he talked about a marketing kitchen): assessment is about options, and choosing the right ingredients at the right time.
A new addition to the toolset the presenter’s company was working on is peer feedback. Once a reviewer identifies an error, a pop-up menu appears for choosing additional info, i.e. metadata on the type of error. The revision memory of the system reports back on the frequency of error types. The technology also searches across reviewers for the same or similar error and applies grading penalties uniformly. It also enables export to Excel, showing the reviewing teacher’s scoring behaviour (which turned out to be inconsistent in how error penalties were scored).
his three key takeaways (side eye)
- As a company we are your friend
- Exams are good for you
- Technology can do that
So at the end of this session I was left with little inspiration or new information, and I have a sense that many in the audience felt the same. I’ve complained before about going to edtech conferences and sitting through paper, demo, or workshop sessions that are really just disguised product placements. The fact that this plenary session was given to a completely captive audience by one of the conference sponsors really left a foul feeling in me. It is one thing to thank a sponsor in the opening statements, but entirely in poor taste to give them such a prominent session in front of the entire conference audience.