Other Perspectives

How reliable are calibration values and other monitoring accuracy measures?

One of the major researchers in the field weighs in on this concept in a peer-reviewed analysis of measures of metacognitive monitoring (including our beloved calibration). He identifies three issues that can affect monitoring accuracy data and explains, for each circumstance, what separates reliable data from unreliable data. So, what is important to keep data reliable?

1. Use multiple questions on the same concept
2. Require judgements to be made before or during question consideration
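To make the idea of a "calibration value" concrete, here is a minimal sketch of two measures commonly discussed in the metacognition literature, absolute accuracy and bias. The function names, the example confidence values, and the correctness data are all illustrative, not taken from the tutorial or the cited paper.

```python
def absolute_accuracy(confidences, outcomes):
    """Mean squared deviation between confidence (0.0-1.0) and
    correctness (0 or 1). 0.0 means perfect calibration; larger
    values mean poorer calibration."""
    pairs = list(zip(confidences, outcomes))
    return sum((c - o) ** 2 for c, o in pairs) / len(pairs)


def bias(confidences, outcomes):
    """Signed mean difference between confidence and correctness.
    Positive values suggest overconfidence; negative values
    suggest underconfidence."""
    pairs = list(zip(confidences, outcomes))
    return sum(c - o for c, o in pairs) / len(pairs)


# Hypothetical data: four questions on the same concept, with a
# confidence judgement recorded while each question was considered.
conf = [0.9, 0.8, 0.6, 0.7]   # learner's confidence per question
correct = [1, 1, 0, 1]        # whether each answer was right

print(absolute_accuracy(conf, correct))  # 0.125
print(bias(conf, correct))               # 0.0 (no net over/underconfidence)
```

Note how the two reliability considerations above show up directly in the sketch: the scores are averages over multiple questions on the same concept, and the judgements are assumed to have been collected before or during question consideration.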

Research designed with these considerations in mind has been shown to produce significant and repeatable results. Still, as with all scientific research, definitive conclusions should be met with skepticism and a suspicious mind (cue Elvis!).

Full reference: 

Schraw, G. (2009). A conceptual analysis of five measures of metacognitive monitoring. Metacognition and Learning, 4(1), 33–45. doi:10.1007/s11409-008-9031-3

Do calibration values really measure metacognitive ability?

...or do they just measure test-taking ability? In other words, is the ability to make metacognitive judgements context-bound to multiple-choice testing environments? Do gains in calibration from this kind of testing actually extend to other areas of a student's life? What about their study habits? These are all important questions, and ones that researchers will continue to try to answer in future research.

Think you have the important concepts of this tutorial down? How confident would you be to complete a review quiz on them? Continue to the next section of the tutorial by clicking "Review Tutorial" or, as always, move to the next button in the navigation menu at the top of the page!

bottom of page