Week in Review | 30-23
Welcome to my weekly review, where I round up the articles, books, podcasts, and music I’ve been enjoying lately. It’s a chance to connect and maybe discover something new together. So let’s jump right in and see what caught my attention this week!
What I’m Reading
Generative AI and the Near Future of Work: An EdTech Example – M. Feldstein
One of the key examples Michael gives for AI in edtech is fixing QTI. If you’ve ever moved between LMSs, you’ve probably run into this: the import process rarely goes smoothly. I encountered a similar request in the past year or so while testing an AI course generator that builds H5P activities out of content. One of my tests was to load an OER test bank (a Word doc, not QTI-formatted) into the platform to see what would happen. Much to my disappointment, it didn’t just generate a quiz. But honestly, that type of tool would push OER along in a way it desperately needs.
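To make the gap concrete, here's a minimal sketch of the kind of parsing step such a tool would need before it could turn a plain-text test bank into QTI or H5P. The "Q: / * / -" markup here is a hypothetical format I made up for illustration, not anything the platform I tested actually supports.

```python
# Hypothetical sketch: parse a plain-text test bank into a structured
# form that a quiz generator (QTI, H5P, etc.) could consume.
# The "Q: / * / -" markup is an assumption, not a real standard.

def parse_test_bank(text: str) -> list[dict]:
    """Parse blocks like:

    Q: What is 2 + 2?
    * 4
    - 3
    - 5

    where '*' marks the correct option and '-' a distractor.
    """
    questions = []
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            current = {"stem": line[2:].strip(), "options": [], "answer": None}
            questions.append(current)
        elif current and line.startswith("*"):
            current["answer"] = len(current["options"])  # index of correct option
            current["options"].append(line[1:].strip())
        elif current and line.startswith("-"):
            current["options"].append(line[1:].strip())
    return questions
```

The hard part, of course, is that real OER test banks come in messy Word-doc formats rather than anything this tidy, which is exactly where an LLM-assisted converter could earn its keep.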
Given that detectors claim the Declaration of Independence was written by AI, I’d take any claim from this company with a salt lick. The part that caught my attention was, “Almost 98% of education institutions using Turnitin have enabled the AI writing detection feature within their workflows, said Annie Chechitelli, chief product officer at Turnitin.” This is truly disappointing. Related: OpenAI Quietly Shuts Down Its AI Detection Tool, “Half a year later, that tool is dead, killed because it couldn’t do what it was designed to do.”
The truth about ChatGPT’s degrading capabilities
The study suggests that as we continue to build software systems that rely on LLMs, we need to develop new practices and workflows to ensure reliability and accountability. The researchers recommend that users and companies building on LLM services implement monitoring and analysis for their applications. Additionally, the study highlights the need for transparency in the data and methods used to train and fine-tune LLMs, so that stable applications can be built on top of them.
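As a sketch of what that monitoring recommendation could look like in practice: log every model call alongside a simple behavioral check, so a silent model update that changes behavior shows up as a rising failure rate. The function names and the check itself are my own illustration, not anything from the study.

```python
# Sketch of lightweight LLM monitoring: record each call plus a
# pass/fail behavioral check, then track the failure rate over time.
# A jump in the rate can flag a silent change in model behavior.

import time

def log_llm_call(log: list, model: str, prompt: str, response: str,
                 check) -> None:
    """Append the call and whether the response passed the check."""
    log.append({
        "ts": time.time(),
        "model": model,
        "prompt": prompt,
        "response": response,
        "check_passed": bool(check(response)),
    })

def failure_rate(log: list) -> float:
    """Fraction of logged calls whose behavioral check failed."""
    if not log:
        return 0.0
    return sum(not entry["check_passed"] for entry in log) / len(log)
```

In a real deployment the "check" might be anything cheap and automatic: does the answer parse as JSON, does it contain a required field, does code it produced actually run.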
Sajja, R., Sermet, Y., Cwiertny, D., & Demir, I. (2023). Platform-Independent and Curriculum-Oriented Intelligent Assistant for Higher Education. arXiv preprint arXiv:2302.09294. The research team developed an automated system for answering logistical questions in online course discussion boards, third-party applications, or educational platforms that can aid in the development of virtual teaching assistants. VirtualTA can be integrated with third-party applications, enabling access from a variety of intermediaries, and it allows any number of users enrolled in the course to access the system. While chatbots can be helpful in reducing the workload of instructors and TAs, it is important to acknowledge that they cannot completely replace human interaction and support. These kinds of systems have been in development for some 30 years, but what makes this study interesting to me is that the ITS can be integrated into a variety of third-party communication platforms rather than being held captive within the LMS.
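The "platform-independent" part is really just a separation of concerns: the assistant core answers questions, and thin adapters handle message I/O for each surface (LMS, Slack, Discord, whatever). A toy sketch of that shape, with all class and method names my own invention rather than the paper's:

```python
# Hypothetical sketch of the platform-independence idea: one assistant
# core, many thin adapters. Only the adapter knows about the platform.

class AssistantCore:
    def __init__(self, faq: dict[str, str]):
        self.faq = faq  # question -> canned logistical answer

    def answer(self, question: str) -> str:
        return self.faq.get(question.strip().lower(),
                            "I'll flag this for the instructor.")

class ChatAdapter:
    """Base adapter: each platform subclass only implements message I/O."""
    def __init__(self, core: AssistantCore):
        self.core = core

    def on_message(self, text: str) -> str:
        return self.core.answer(text)
```

Swapping the LMS for another channel then means writing a new adapter, not rebuilding the assistant, which is exactly the escape hatch from LMS captivity the study points toward.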
I’ve been paying a bit more attention to the microcredential space over the past year, and if you’re new to it this report and summary might be a good starting place. The main finding seems to be a gap between the rhetoric around what employees and employers want and what’s happening on the ground (employees not as engaged in upskilling as predicted, and employers having trouble even treating MCs as evidence of capability). It will be an interesting space to watch as it develops.
This methodology is interesting, if not ready for prime time yet. The CLASS framework equips Intelligent Tutoring Systems (ITS) with two critical capabilities: step-by-step guidance and natural language conversations. It employs two synthetic datasets, one for problem-solving strategies and the other for simulated student-tutor conversations. The approach facilitates seamless integration of user feedback and allows for continuous refinement and improvement of the system. A proof-of-concept ITS called SPOCK was trained using the CLASS framework on introductory college-level biology content and received favorable remarks from experts in the field. The approach of having the system include its “reasoning” may be helpful for human validation when setting up few-shot prompts.
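That last point can be sketched concretely: keep each example's "reasoning" alongside its answer so a human can validate it before it lands in the prompt. The template below is my own illustration, not the paper's actual prompt format.

```python
# Hypothetical sketch: assemble a few-shot prompt where each validated
# example keeps its reasoning trace, so a human can check the traces
# before they are baked into the prompt. Format is illustrative only.

def build_few_shot_prompt(examples: list[dict], question: str) -> str:
    parts = []
    for ex in examples:
        parts.append(
            f"Student: {ex['question']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"Tutor: {ex['answer']}\n"
        )
    # End with the new question, prompting the model to reason first.
    parts.append(f"Student: {question}\nReasoning:")
    return "\n".join(parts)
```

The human-in-the-loop step would simply be reviewing each `reasoning` field before adding the example to the list.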
Zimmer, Bob (2008). Using the Interpersonal Action-Learning Cycle to Invite Thinking, Attentive Comprehension. In: Luppicini, Rocci ed. Handbook of Conversation Design for Instructional Applications. Hershey, Pennsylvania, USA: Information Science Reference (an imprint of IGI Global), pp. 264–288.
In this chapter, the author explores the idea of using the interpersonal action-learning cycle (IALC) to encourage meaningful discussions and improve learners’ understanding. The IALC is explained in detail, including its mechanics and the advantages of using it purposefully. The chapter also covers potential linguistic barriers that may hinder successful implementation of the IALC, with helpful strategies for overcoming them and establishing a reliable IALC. Consistently using the IALC can lead to positive outcomes like enhanced teaching, learning, assessment, course evaluation, and professional development. The chapter presents this framework for human-mediated conversations, but I am curious about what it might look like in an ITS.
What I’m Listening to
xAPI (Ft. Steve Rick) – The Adapt Tips Podcast
Solve Knowledge and Learning Issues with Organization Enablement with Mike Simmons – #IDIODC. Big announcement in this one, maybe the end of #IDIODC and the beginning of something new.
A Fascination with Failure / Death on the Dance Floor (classic) – Cautionary Tales