I had a chat with Miles Corbett (managing director) the other day about scoring in elearning. He seemed very agitated about it, so I sat him down with a cup of tea and got him to talk.
So, MC, what's this all about?
SCORM has a tendency to make people think that passing an assessment means you've 'completed' the course. But if all the assessment questions do is check whether you've recalled a piece of data, then recall is all the score is going to tell you about.
You don't actually know what's going on with your learners. You don't know what they don't know, you don't know how to remediate that or even what to remediate, you don't know what the shortcomings of your practice will be. This is valuable knowledge that facilitates improvement, and you're not getting it. All you know is who passed and who failed.
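To make that contrast concrete, here's a rough sketch of the kind of data a typical SCORM 1.2 course hands back to the LMS at the end of an assessment. The figures are illustrative, but a raw score and a pass/fail status is broadly the extent of what gets recorded.

```typescript
// A minimal sketch of what a SCORM 1.2 course typically reports to the LMS.
// "API" is the runtime object the LMS exposes to the course window; the
// values below are illustrative only.

declare const API: {
  LMSSetValue(element: string, value: string): string;
  LMSCommit(param: string): string;
};

// After the final assessment, roughly this is all the LMS ever learns:
API.LMSSetValue("cmi.core.score.raw", "80");         // a single number
API.LMSSetValue("cmi.core.lesson_status", "passed"); // pass or fail
API.LMSCommit("");
```

Nothing in that exchange tells you which misconceptions the learner still holds, or what to remediate.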
I see how this could cause problems! Is there a solution?

I think we need to get under this whole issue and ask something fundamental: what should completion be? The answer to this question directly impacts the data we collect and the way we think about score.
So, what should completion mean? That you've answered a bunch of factual questions ten minutes after finishing the course? Or should it mean you have increased capability and changed perceptions?
Kirkpatrick's Four-level Training Evaluation Model states that at level 2, you measure how much your learners' knowledge has increased. But the next level up evaluates how far learners have changed their behaviour - how they apply what they have learned. In most circumstances the objective of learning is to enable you to do something better, to build capability, to prevent you from doing things wrong and get you to do them right. Surely we should be aiming for level 3 rather than level 2 in order to fulfil these objectives?
So I think we should be changing what completion means, and therefore the data we're collecting. Surely it's more valid to look at people's perceptions and behaviours before and after the course and see how they've changed, rather than looking at score.
This is why xAPI is so important. It gives us the ability to do this, where SCORM simply can't.
So what does this actually look like? Instead of focusing on yes/no answers, we could conclude the course with thoughtful questioning. We could survey people after the event and look at what solutions exist now that didn't before, how the culture and environment have changed, and what learners feel could be done to improve things further.
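For readers who haven't seen what that looks like in practice, here's a minimal sketch of how a free-text reflection might be recorded as an xAPI statement and sent to a Learning Record Store. The endpoint, credentials, activity IDs and the example response are placeholders, not taken from any real course.

```typescript
// Recording a post-course reflection as an xAPI statement (sketch only).

interface XapiStatement {
  actor: { mbox: string; name?: string };
  verb: { id: string; display: Record<string, string> };
  object: { id: string; definition?: { name: Record<string, string> } };
  result?: { response?: string };
  timestamp?: string;
}

// A learner's free-text answer to "What can you do now that you couldn't before?"
const statement: XapiStatement = {
  actor: { mbox: "mailto:learner@example.org", name: "Example Learner" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/responded",
    display: { "en-GB": "responded" },
  },
  object: {
    id: "https://example.org/activities/post-course-reflection",
    definition: { name: { "en-GB": "Post-course reflection" } },
  },
  result: {
    response: "I now brief new volunteers on escalation routes before each shift.",
  },
  timestamp: new Date().toISOString(),
};

// Send it to a Learning Record Store (placeholder endpoint and credentials).
async function sendStatement(stmt: XapiStatement): Promise<void> {
  await fetch("https://lrs.example.org/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",
      Authorization: "Basic " + btoa("key:secret"), // placeholder credentials
    },
    body: JSON.stringify(stmt),
  });
}
```

Unlike a raw score, statements like this can be analysed later for themes: what people say they're doing differently, and where the teaching still falls short.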
Can you give examples of where you've changed what completion means?

OK. Let's start with the Red Cross.
We're teaching things here that don't have straight yes/no true/false answers. Red Cross volunteers require a deep understanding of the subject (safeguarding vulnerable adults, caring for young refugees, and so on), so that they can apply their knowledge in the real world. The value of the questions in these courses is in helping people think.
We decided that the most valid definition of completion here was the student feeling comfortable that they'd completed their learning. They are, after all, the person who should be most concerned about what they do or don't know. So the course gives feedback, and they can't complete it until they have answered every question; but whether those answers are right or wrong has no bearing on completion. Completion isn't about pass or fail. Instead, completion is something subjective.
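As a rough illustration of that rule, here's a sketch of what the completion check might look like in course logic: every question must have been answered and the learner must have said they feel ready, but the correctness of the answers never enters into it. The type and function names here are invented for the example.

```typescript
// Illustrative only: completion depends on engagement and the learner's own
// judgement, never on whether their answers were "right".

interface QuestionAttempt {
  questionId: string;
  answered: boolean;   // did they respond and read the feedback?
  // Note: no "correct" flag is consulted when deciding completion.
}

interface LearnerState {
  attempts: QuestionAttempt[];
  feelsReady: boolean; // the learner's own declaration that they've finished learning
}

function isComplete(state: LearnerState): boolean {
  const answeredEverything = state.attempts.every(a => a.answered);
  return answeredEverything && state.feelsReady;
}
```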
Example 2… let's go with the ACCA. In our course about business valuations, we give people challenges that they can only complete by developing a level of competence. After all, there's no such thing as a 'right' business valuation; the only test of whether your valuation is correct is to sell the business and see whether it fetches that price! So how could a pass/fail score, a right/wrong answer, actually benefit anyone here?
In this context, completion simply tells us that somebody has been through the course. We're not scoring comprehension with a pass/fail mark; instead, we can deduce exactly what they comprehended from their responses to certain activities, and we can take from that ways to improve our teaching.
Alright, MC. I'm running low on tea; bring us in to land.
To go back to the beginning: what is our data actually proving? What's it there for? If the data is only there for the regulator, then fine, give them what they need. But if we want people to reach a greater level of competence and understanding, we need to think much more deeply about what we're asking. We need to get back to instructional design basics. Don't ask, 'What do they need to learn and how do we present it?' Ask the deeper, more fundamental questions. What outcome do we want to achieve? What change or impact do we want to see? What should completion look like, here? And as we work from there, it automatically changes the data we collect, and makes our 'score' - whatever that ends up looking like - something meaningful.