I’m sure you’ve been there. Legal takes your well-crafted multiple-choice questions and adds just enough clarifying words to the correct answer to make it four lines longer than any of the distractors. Knowledge checks with four answers generally test little more than the participant’s ability to beat a 25% chance anyway. Ideally, you would want them to do more: think about the answer and formulate it in their own words. The more brainwork they do, the better the chance of retrieval. However, that’s a nightmare to automate online.
Chatbots and AI may help, but until they arrive for L&D, let’s look at an alternative. The prototype below won’t solve all your problems, but it does force humans to use their brains a little more, and that may lead to better retention.
So, let’s say you’re checking your ability to match banking terms with their proper definitions (this would obviously be followed up with some other activity using those terms in context). For now, the task is simple: customers may ask about these terms, and you need to be able to tell them the definition.
In the traditional multiple-choice way, you would include the correct answer and three distractors. Lots of work for 178 terms, little brainwork ROI.
With a little bit of automation, you may show a term and select random distractors from the other 177 definitions. Still a choice of four, but less work once automated.
The method below adds a twist. Let’s call it the HHNYA (the Host Has Not Yet Arrived) approach. What’s HHNYA? It’s the lonely feeling of being in a webinar before the host arrives. You know it’s coming; you’re just not there yet.
For testing the prototype below, you may use the following site with banking terms and definitions.
First, HHNYA asks you for the definition of Authorization. Now, you either know the definition or not. If you don’t, you may be able to guess, based on prior knowledge. Or, you can skip because you’re not there yet (hhnya).
Let’s say this is how much you remember of the definition: “The issuance of approval.” The process of digging into your memory and trying to retrieve this information already increases your chance of retention in the future. In a traditional multiple-choice approach, you’d already be figuring out which option looks best.
This is where the magic happens. The moment you submit your guess, HHNYA takes your entry and runs an algorithm to compute the Sørensen–Dice coefficient between your text entry and each of the 178 definitions. This index shows how “similar” two texts are. In other words, HHNYA finds the four definitions most similar to your input and presents them as the multiple-choice options. Remember, the similarity is based on your input; therefore, it may be that none of the answers is correct.
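To make the mechanics concrete, here is an illustrative sketch of that similarity step. This is not the prototype’s actual code; I’m assuming character bigrams as the unit of comparison (word bigrams would work similarly):

```python
from collections import Counter

def bigrams(text):
    """Lowercase the text, drop whitespace, and return its character bigrams."""
    t = "".join(text.lower().split())
    return [t[i:i + 2] for i in range(len(t) - 1)]

def dice_coefficient(a, b):
    """Sørensen–Dice coefficient: 2 * |shared bigrams| / (|bigrams of a| + |bigrams of b|)."""
    ca, cb = Counter(bigrams(a)), Counter(bigrams(b))
    shared = sum((ca & cb).values())
    total = sum(ca.values()) + sum(cb.values())
    return 2 * shared / total if total else 0.0

def most_similar(entry, definitions, n=4):
    """Return the n definitions most similar to the learner's entry."""
    return sorted(definitions, key=lambda d: dice_coefficient(entry, d), reverse=True)[:n]
```

The classic example: “night” and “nacht” share one bigram (“ht”) out of eight, giving a coefficient of 0.25. Sorting all 178 definitions by this score and keeping the top four yields the choices the learner sees.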
Clicking any of the choices shows you its full definition. You also see a number associated with each choice. Think of it as points you can gain or lose. If the answer is correct, you gain its points: 510 for the first one. If it’s incorrect, you lose the same 510.
Where do the points come from? From the Sørensen–Dice coefficient. The more similarity HHNYA finds between your text and the actual definition, the higher the number. Here’s what this means for learning: the more accurate your answer, the more points you get for the same choice. However, if your guess is incorrect, you may lose a lot of points. In the traditional multiple-choice approach, you get the same points for every correct answer.
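The point values can be derived directly from the coefficient. The prototype’s exact scaling isn’t shown, but multiplying the coefficient by 1,000 would produce numbers like the 510 above. Here’s a hypothetical sketch:

```python
def choice_points(similarity):
    """Map a Dice coefficient (0 to 1) to a point value.
    The x1000 scaling is an assumption on my part; it matches the
    510/410/530 figures in this walkthrough (e.g. 0.51 -> 510)."""
    return round(similarity * 1000)

def apply_score(total, points, correct):
    """Gain the choice's points when correct, lose them when incorrect."""
    return total + points if correct else total - points
```

So a confident guess sitting next to a high-similarity but wrong definition is expensive, which is exactly the incentive structure you want.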
Let’s say you like this answer below best, based on the four choices presented.
And you’re correct. You gained 410 points. As you noticed, this was not the top choice! That’s because, based on your entry, another definition ranked higher (was more similar) even though it wasn’t correct. You also see a performance score on the feedback slide. When you answer a term incorrectly, or skip it, the term comes back again. The 33% means that this term has been shown three times and you have had one correct match so far (maybe you picked the wrong answer twice before).
Now, you may decide that the term is so unfamiliar that there’s no point in guessing and risking points. You can skip it. This is the HHNYA feeling: the host has not yet arrived. That’s fine. The host (knowledge) is on the way; we’re just not there yet. Therefore, you lose only 5 points and gain a chance to look at the correct definition.
Lastly, here’s an example of a false memory. Let’s say you THINK you know what Escheat is (but you’re wrong). You confidently type in your answer. You get four choices with solid numbers. You’re totally convinced this is about mortgage payments.
And there it is. Very close to what you typed in. Mortgage payment. You select the answer.
And you lose 530 points. HHNYA records your incorrect choice in your overall performance for the term. It also makes sure the term comes back again, so you can improve.
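The bookkeeping behind that feedback slide (times shown, correct matches, which terms need to come back) could be tracked with something as simple as this hypothetical sketch; `TermTracker` and its method names are my own, not the prototype’s:

```python
class TermTracker:
    """Track attempts per term and recycle terms not yet answered correctly."""

    SKIP_PENALTY = 5  # skipping costs only 5 points, per the walkthrough

    def __init__(self):
        self.shown = {}    # term -> times the term has been shown
        self.correct = {}  # term -> times it was answered correctly

    def record(self, term, outcome):
        """Record one attempt; outcome is 'correct', 'incorrect', or 'skip'."""
        self.shown[term] = self.shown.get(term, 0) + 1
        if outcome == "correct":
            self.correct[term] = self.correct.get(term, 0) + 1

    def performance(self, term):
        """Share of attempts answered correctly, e.g. 1 of 3 -> 33."""
        shown = self.shown.get(term, 0)
        return round(100 * self.correct.get(term, 0) / shown) if shown else 0

    def needs_repeat(self, term):
        """A term keeps coming back until it has been matched correctly."""
        return self.correct.get(term, 0) == 0
```

Two wrong picks followed by one correct match gives the 33% figure from the Authorization example above.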
This is just a prototype, part of an actual solution that is more elegant and sophisticated, but the core idea remains: the more you make humans think, the better the chance they will retain the information. What else could you use this approach for? Maybe matching needs and products? You tell me.