Michelle Thorne posted a great round-up of curriculum testing.
Friction-Free Assessment
She asked about friction-free assessment. It doesn't exist. We would be chasing a unicorn named Oxymoron if we spent too much time looking for friction-free assessments.
As Dan Hickey likes to remind us, the introduction of assessment fundamentally alters the motivation for learning. Yet we need to assess learning.
So I guess we shouldn't look for friction-free assessments but for well-oiled ones, and I always thought badging was the major mechanism to ensure evidence of learning was collected. Digital badges create wells of evidence that can be mined for recognition, motivation, and credibility. You just have to drill down behind the badge (did I take the metaphor too far?).
I have been fascinated to watch the diverse perspectives on assessment in Mozilla Learning. You have analytics and design teams who will run a statistical test on a hex-color A/B test to increase unique visitors (not sure why), and we also support the largest open badging platform.
When we were drafting the questions for testing the curriculum, some wanted to get at the learners. Some wanted to get at the mentors. Some wanted to get at the curriculum. I'm not sure we did any of these.
The short (but not easy) answer: "Choose assessments that align to your learning goals and philosophies."
The three open-ended questions we asked measured more the mentors' expectations and biases about what was learned. The analysis of these questions would also be quite time-consuming. For example, how does the question "What are the top three strategies you think will be used by your learners to see if information on a website is credible?" capture growth? Would you count up the frequency of different strategies and run statistical tests to see if the top three strategies changed?
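To make that counting concrete, here is a minimal sketch of what the analysis might look like. The strategy labels and responses are invented for illustration, and in practice someone would first have to hand-code each free-text answer into these labels, which is exactly the time-consuming part:

```python
from collections import Counter

# Hypothetical coded responses: each mentor's free-text answer reduced to
# strategy labels by a (slow, human) coding pass.
pre = ["check author", "cross-reference", "check URL", "check author", "cross-reference"]
post = ["cross-reference", "check date", "check author", "cross-reference", "check date"]

def top_three(coded_responses):
    """Return the three most frequently named strategies."""
    return [strategy for strategy, _ in Counter(coded_responses).most_common(3)]

# Did the set of top-three strategies shift between the two rounds?
shifted = set(top_three(pre)) != set(top_three(post))
```

Even this toy version shows the problem: a changed top-three set tells you the *mix* of answers moved, not whether any individual learner grew.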
If you want the assessment to be fast, you have to use Likert scales, include enough items, and then treat your ordinal data as numerical data (a step many in measurement disagree with).
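The contested step is easy to see in code. A minimal sketch, with invented five-point responses, of averaging Likert items as if they were numbers:

```python
# Hypothetical five-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) for three items answered by four learners.
responses = [
    [4, 5, 3],
    [2, 3, 3],
    [5, 4, 4],
    [3, 3, 2],
]

def scale_score(item_responses):
    """Average the items into one score -- the ordinal-to-numeric leap
    that many measurement folks object to."""
    return sum(item_responses) / len(item_responses)

scores = [scale_score(r) for r in responses]
mean_score = sum(scores) / len(scores)
```

The objection is that the distance between "agree" and "strongly agree" is not demonstrably equal to the distance between "neutral" and "agree", so the arithmetic mean is doing work the data may not support.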
If you want the assessment to be fast and reliable, you would have to spend anywhere from $10K to millions to develop measures that act like traditional tests. This could be done for reading and writing, but participation would be hard to measure. Furthermore, many in the connected-learning camp would argue that these assessments measure very little (Ian and I do have credibility assessments, and UCONN has made its online reading and research assessment available) and that the variance and noise in scores is where real learning happens.
What are we to do?
Align assessment strategies to our philosophy, and it may become evident that the only metric that matters is the number of makes submitted. Then encourage club mentors to have club members submit evidence for badges.
To ensure the credibility of badges, you could then audit a random set of submissions and the evidence included with the credentialing. We have to ensure that the badges get external recognition. I don't think a sampling of badge applications would involve much more work than the coding and analysis of three open-ended questions.
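The audit itself is cheap to set up. A sketch, with invented submission IDs and an assumed 10% audit rate, of drawing a random sample for review:

```python
import random

# Hypothetical pool of badge submission IDs awaiting review.
submissions = [f"submission-{n}" for n in range(1, 201)]

def audit_sample(pool, rate=0.10, seed=None):
    """Draw a random, no-repeats sample of submissions for a credibility audit.

    A fixed seed makes the draw reproducible, so reviewers can verify
    which submissions were selected.
    """
    rng = random.Random(seed)
    k = max(1, round(len(pool) * rate))
    return rng.sample(pool, k)

sampled = audit_sample(submissions, rate=0.10, seed=42)
```

The expensive part is not the sampling but the human review of the evidence behind each sampled badge, which is the point: that review is where credibility comes from.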
The problem I see with badges and the curriculum is that I, as an issuer, do not feel the activities lead to a preponderance of evidence that would leave me comfortable issuing a badge. It may take multiple activities.
We may need more lightweight or level-up badges that mentors can give (but only the final web literacy badges go to the Backpack?).
Instead of friction-free assessments on a global scale, we will need to train mentors, especially the majority who do not come from education, in the principles of formative assessment. They need to know how to take a learning goal, teach curriculum that elicits evidence of growth toward that goal, analyze that evidence while they facilitate learning, and then adjust instruction. If you can figure out easy ways to teach these practices, please tell me so I can steal them.
It is up to us as a community to ensure the badges have value and weight. It is up to us to ensure they are baked into the ecosystem and allow for individual learning pathways.