Designing useful feedback in H5P
This is the third in a short series of posts I’m making as we lead up to a kitchen party with the folks running the H5P PB Kitchen. The other two posts in this short series are H5P and Pressbooks, a Brief Highlight Reel and Which H5P Type is Right for You?
This post is not going to focus so much on feedback in discrete or objective assessments/content types in H5P, such as the multiple choice content type. If you would like a great breakdown of how to approach that, as well as instructions on how to do it in H5P, check out Going Beyond Correct/Incorrect with Feedback.
What is Feedback?
Feedback is a pretty broad term in and of itself. A microphone placed poorly near a speaker is one example of feedback: the sound enters the mic, is amplified, sent through the speaker, back through the mic, and hearing pain ensues. Another example is biking: you get immediate feedback through sensory input based on how you are pedalling, moving your arms, distributing your weight, etc. Broadly, feedback can be described as a process involving information received as a consequence of monitoring output, to be used to cause or inform changes (Costa & Garmston, 2017; Dann, 2018).
Wiggins (1997) asserts that feedback is not about assessing correctness, or providing praise, blame, approval or disapproval (that’s more in line with evaluation). Feedback describes what a learner did or did not do in terms of their goal. This is based more on a constructivist viewpoint than an instructivist one, and it’s the latter that we often design for. Unless the student is actively constructing their own goals, we are required to consider what the ‘success criteria’ are, as a means of providing relevant information in relation to the learning objectives. By considering the learning objectives, and how they link and stack together, we can also set out feedback to help students understand where they are in relation to the objectives set out in the course (Nicol & Macfarlane-Dick, 2006).
Effective feedback is critical for helping students learn, and can be provided by all kinds of instructional strategies and materials: teacher-based feedback, peer feedback, self-reflection, books, computers, apps, etc. This feedback could be provided in any number of patterns: corrective information, an alternative strategy, clarification of ideas, encouragement, or evaluation of the correctness of a response (Hattie & Timperley, 2007).
Feedback in Long Form Assessment Items
Martin Weller posted a video presentation by Denise Whitelock that I haven’t been able to get out of my head since: Automating assessment to understand assessment. In it, Denise describes a few projects she has worked on over the years, all underpinned by three principles for feedback:
- Maintain empathy with the learner
- Socio-emotive content
- Advice for action
She goes on to describe how she employed ideas from Dweck, Bales, and Pask. From Dweck, the projects imagined ways not only to recognize effort to increase motivation and retention, but also to “induct students to the rules of the game”. From Bales, the adoption of four main categories of feedback: positive reactions (e.g. agreement), attempted answers (e.g. gives suggestion), questions (e.g. asks for suggestion), and negative reactions (e.g. antagonism). Finally, from Pask, the effect of summarization (e.g. repeating back a summary to check for correctness). Denise’s work spans a number of different tools, but it is the ideas behind those tools that I found interesting.
One of the recommendations Denise makes is acknowledging effort and providing suggestions. This helps to establish the rules of the game as well:
1. Detect Errors (incorrect inferences and causality; recognize effort and encourage) – “You have done well to start answering this question but perhaps you misunderstood it. Instead of thinking about X which did not… Consider Y”
2. Reveal Omissions (praise what is correct and point out what is missing) – “Consider the role of Z in your answer, now consider what role X plays…”
3. Request Clarification of Key Points
4. Request Further Analysis of Key Points (repeat 3 & 4 for each key point)
5. Request the Inference from the Analysis of Key Points if Missing
6. Request the Inference from the Analysis of Key Points if Incomplete
7. Check the Causality
8. Request That All the Causal Factors Are Weighted
So, coming back to H5P and feedback. There are courses I’ve worked on where the instructor uses quite a few short answer responses in their midterms and final exams. The rest of the course is based on a project, but they still have some of these other assessment tools in place (mostly MCQ and short answer items). In order to acclimatize students to the format of the short answer questions we added accordion content types to each module’s review quiz in addition to the MCQs. For example,
(In case the embed is not showing, here is the content.)
From reading a bunch of websites, it seems really important that I carefully decide which cultivar of apple I want, and not to just buy one that happens to be on sale. Why is this so important?
It is important and there are several things you need to consider. It is a perennial crop, so take into account the hardiness zone rating for the cultivar. Because it is a tree, you will be harvesting the fruits for the next twenty or more years, so make sure the cultivar is something you will enjoy and actually use. Lastly, check to see if the cultivar is on a rootstock or its own roots, as this will impact the potential hardiness, size and disease resistance of the cultivar. If you are growing annual vegetables, you will be replanting next year, so no big deal if you get something wrong this year. Such is not the case with apple trees.
So for this particular course, some of the short answer questions were presented with a sample answer hidden. The idea was to have a slight barrier to seeing the answer immediately, in the hope that the reader would pause and consider the question, perhaps even writing down a sample response, and then compare their own answer to the sample. Course communications are such that students are encouraged to contact the instructor about any of the practice exercises.
Thinking back to Andy Gibbons’ idea about Design by Default the accordion content type has gotten a lot of use in my projects. In mid-late 2018, however, H5P introduced the essay content type which I think gives us further opportunity to provide feedback to students.
The example from H5P’s examples and downloads page actually starts to demonstrate some of these approaches using the essay content type. Give it a few tries and see what comes up.
If we look under the hood, we see a lot of fields we can use to set up this type of question.
The way the accordion had generally been used previously now goes into the Sample Solution Text box. The note provided in the introduction is a nice reminder that other answers may vary, but it stops short of explaining the rules of the game. So, an enhancement I might suggest here is to elaborate that a satisfactory summary of the book would include information about the main character, naming secondary characters, and detailing the journey/plot, with maybe an example of a key event. Then, “see an example below”.
Next in the essay content type, you get to set parameters for keywords. The list provided in the sample covers a number of characters, but you can see where it could get to be quite a long list depending on the ask. For example, all the dwarves’ names would include: Dwalin, Balin, Kili, Fili, Dori, Nori, Ori, Oin, Gloin, Bifur, Bofur, Bombur, and Thorin. Interestingly, the content type allows you to create wildcards or variations; see “*ri”, which I believe would cover three of the dwarves’ names.
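To see why a wildcard like “*ri” covers several names at once, here is a minimal Python sketch of that kind of matching. This is only an illustration of the idea, not H5P’s actual (JavaScript) implementation, and the exact rule — “*” standing in for any run of letters — is an assumption for demonstration purposes:

```python
import re

def keyword_matches(answer, keyword):
    """Find occurrences of a keyword in an answer.
    Here '*' is assumed to stand in for any run of letters."""
    # Escape the keyword, then turn the escaped '*' into a letter wildcard.
    pattern = re.escape(keyword).replace(r"\*", "[a-zA-Z]*")
    return re.findall(rf"\b{pattern}\b", answer, flags=re.IGNORECASE)

answer = "Dori, Nori and Ori travelled with Thorin."
print(keyword_matches(answer, "*ri"))  # matches Dori, Nori, and Ori
```

With this rule, one wildcard keyword credits three of the dwarves in a single entry, rather than listing each name separately.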
Now towards the bottom are two features that are really important and provide an opportunity to put some of the lessons from Denise’s talk to work: Feedback if keyword included, and Feedback if keyword missing. Just below this section is what to display if the keyword is entered and what to display if it is not entered. That shows the learner the keyword or its alternative.
Steps 1 & 2, I think, are pretty well positioned in this area. With regard to detecting errors, you can set up a keyword to be worth 0 points. You can then frame the feedback if the keyword appears in the format “You have done well to start answering this question but perhaps you misunderstood it. Instead of thinking about X which did not… Consider Y”. For example, if memory serves me correctly, Frodo does not appear in The Hobbit, but instead in Lord of the Rings. The feedback could steer the learner back to The Hobbit while not falling into Bales’ negative comment format.
Step 2, Reveal Omissions, is already set up in our example. If you mention Bilbo but not Gandalf, the feedback asks about a “certain wizard”. So we have the format of “consider the role of Z in your answer, now consider what role X plays…” but also Bales’ third category of asking a question to the learner, prompting further examination.
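The included/missing feedback pair can be sketched as a simple lookup. Again, this is a minimal Python illustration of the logic, not H5P’s code, and the keyword entries and messages are made up for the Bilbo/Gandalf example:

```python
def keyword_feedback(answer, keywords):
    """For each keyword, return its 'included' or 'missing' message,
    mirroring the essay type's two feedback fields per keyword."""
    messages = []
    for kw in keywords:
        if kw["word"].lower() in answer.lower():
            messages.append(kw["if_included"])
        else:
            messages.append(kw["if_missing"])
    return messages

# Hypothetical keyword entries for the Hobbit example.
keywords = [
    {"word": "Bilbo",
     "if_included": "Good, you mentioned the main character.",
     "if_missing": "Who goes on the journey?"},
    {"word": "Gandalf",
     "if_included": "Good, you covered the wizard.",
     "if_missing": "What about a certain wizard?"},
]
print(keyword_feedback("Bilbo left the Shire.", keywords))
```

Mentioning Bilbo but not Gandalf surfaces the question-style prompt for the omission, which is exactly the “reveal omissions” move described above.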
One thing I wish this section had is a way to customize feedback based on the score on a specific word. The way it is currently set up is all or nothing: if someone mentions Bilbo twice, they still get the “keyword included” feedback without the chance to also show some corrective feedback. Now, onto the settings for the whole activity below.
This is where I think some fancy substitutions and spices are needed (it took me this long to get the kitchen metaphor in here). As you can see, the feedback does not allow you to couple the overall feedback to keywords by name or by frequency; that would also be a great feature. Instead, we have point ranges. The general feedback provided works for demonstrating the mechanics of the activity, but does not employ the ideas from Dweck, Bales, or Pask that we’ve briefly looked at so far. The way it might currently display is as follows:
While we cannot address which key terms were or were not included in the Overall Feedback, we can still use some of the feedback guidance, particularly from Dweck and Bales. For example, we could use some generalized language to recognize the work so far by pointing out “you have mentioned some of the key characters or events in the book, as noted above…” then move on to revealing omissions through asking questions or giving suggestions: “…take a look at some of the keywords that have been missed. What does the feedback read for those items? Now, overall, consider not only the characters or events that you mentioned but those that were not mentioned. How might those two interact with one another? What influence did one have on the other?…” Something like that.
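The point-range mechanic itself can be sketched in a few lines. This Python snippet is an illustration of how score ranges might select an overall message (the percentage boundaries and messages here are invented for the example, not taken from H5P):

```python
def overall_feedback(score, max_score, ranges):
    """Pick the overall feedback message for a score, mirroring
    the point-range table in the essay type's activity settings."""
    percent = 100 * score / max_score
    for low, high, message in ranges:
        if low <= percent <= high:
            return message
    return ""

# Hypothetical ranges, written in the Dweck/Bales spirit discussed above:
# acknowledge effort, then point toward what is missing.
ranges = [
    (0, 40, "Take another look at the sample solution and try again."),
    (41, 80, "You mentioned some key characters; what might be missing?"),
    (81, 100, "Well done, a thorough summary!"),
]
print(overall_feedback(3, 5, ranges))  # 60% falls in the middle range
```

Note the limitation described above is visible here too: the function only ever sees a numeric score, so it cannot tailor its message to which keywords were actually matched.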
This is a bit of an off the cuff walkthrough of the essay content type and how feedback might be constructed, but hopefully it illustrates enough to get us thinking about the fundamental principles of feedback.
Note: one thing that bothers me a bit about the essay type is that I can’t download the answer I, as a student, submitted. Another content type that doesn’t have all of these feedback features, but that would address the download problem as well as the “students setting their own goals” problem identified way at the start with the Wiggins (1997) reference, is the documentation tool.
Costa, A. L., & Garmston, R. J. (2017). A feedback perspective. In I. Wallace & L. Kirkman (Eds.), Best of the best practical classroom guides: Feedback (pp. 19-27). Carmarthen: Crown House Publishing.
Dann, R. (2018). Developing feedback for pupil learning: Teaching, learning and assessment in school. Abingdon: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Wiggins, G. (1997). Feedback: How learning occurs. AAHE Bulletin, 50(3), 40-41.