Encouraging Students to Monitor and Control their Learning

Instructors can encourage students to monitor and control their learning at different stages in the learning process.  In this section of the guide, we focus on monitoring and control during three stages of learning: (a) while a student is preparing for an exam, (b) while a student is taking an exam, and (c) when a student is evaluating their preparation after taking an exam.  Each subsection on monitoring and control summarizes relevant literature pertaining to these three stages.

While preparing for an exam
  • Students who set concrete goals (and evaluate their progress toward meeting them) learn more effectively than students who do not set concrete goals.
  • As students prepare, they can monitor their ongoing progress to provide formative evaluation about what they know well versus know less well.
  • The accuracy of monitoring is essential for making effective monitoring-informed control decisions.
  • It is critical to distinguish between two kinds of judgment accuracy. Relative accuracy (also known as resolution) is the degree to which a student’s judgments discriminate between what they have learned well versus have learned less well. Absolute accuracy is the degree to which the absolute magnitude of the judgments matches the absolute magnitude of performance; thus, to measure absolute accuracy, students must make the judgments on the same scale as performance is measured. (A worked example of both measures follows this list.)
  • Students have difficulties accurately monitoring their progress in many domains and may require support to accurately monitor their progress.
  • To help students accurately monitor, consider providing appropriate practice tests (and feedback) that are administered at a delay after study (i.e., when the studied content must be retrieved from long-term memory).
  • Students can control their studying by (a) choosing which strategies to use while studying (see Supporting Student Learning Strategies node for summary) and (b) deciding when to begin and to terminate studying.
  • Overconfidence can lead to premature termination of study (see Definitions, Underpinnings, Benefits node for summary).
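
To make the two kinds of accuracy concrete, here is a minimal sketch in Python with made-up numbers (not data from any study summarized below). It computes absolute accuracy as bias (mean judgment minus mean performance, which requires a shared scale) and relative accuracy with the Goodman–Kruskal gamma, a pairwise measure commonly used in the metacognition literature.

```python
# A minimal sketch with hypothetical numbers: one student's predictions
# ("How likely am I to get this right?", 0-100) and their actual scores
# on the same items, scored on the same 0-100 scale.

from statistics import mean

judgments = [90, 75, 60, 40, 20]   # hypothetical predictions, one per item
scores    = [100, 100, 0, 100, 0]  # hypothetical performance, same scale

def bias(judgments, scores):
    """Absolute accuracy as bias: mean judgment minus mean performance.
    Positive values indicate overconfidence, negative underconfidence.
    Only meaningful when judgments and performance share a scale."""
    return mean(judgments) - mean(scores)

def gamma(judgments, scores):
    """Relative accuracy (resolution) via Goodman-Kruskal gamma: across
    all item pairs, do higher judgments go with better performance?
    +1 = perfect discrimination, 0 = chance, -1 = perfectly backward."""
    concordant = discordant = 0
    for i in range(len(judgments)):
        for j in range(i + 1, len(judgments)):
            d = (judgments[i] - judgments[j]) * (scores[i] - scores[j])
            if d > 0:
                concordant += 1
            elif d < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0  # all pairs tied; discrimination is undefined
    return (concordant - discordant) / (concordant + discordant)

print(bias(judgments, scores))   # -3.0: slight underconfidence overall
print(gamma(judgments, scores))  # ~0.67: moderate discrimination
```

In this example the student is slightly underconfident overall (bias = -3) yet discriminates moderately well between better- and less-well-learned items (gamma ≈ .67); the two measures can move independently, as the van Overschelde and Nelson (2006) summary below illustrates.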

Thiede, K. W., Anderson, M., & Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95, 66-73. Will improving the relative accuracy of monitoring judgments also improve the efficacy of subsequent restudy and performance? College students studied six text passages and judged how well they had learned each text. After judging the texts, an initial test was administered to measure judgment accuracy. Next, students chose which texts they wanted to restudy, restudied those texts, and then took a final test covering the content of all texts. To manipulate judgment accuracy, the authors had the students generate six key terms for each text prior to judging their learning; some students generated these key terms immediately after reading, whereas others generated them after a delay. Relative judgment accuracy was measured by correlating each student’s judgments with performance on the initial test, and as expected, judgment accuracy was substantially greater for students who generated key terms (and then judged their learning) after a delay than for those who did so immediately after reading. Regardless of their judgment accuracy, most students appeared to use their judgments to select texts for restudy (by choosing those texts that were judged as less well learned). Importantly, performance on the initial test did not differ between the two groups. Final test performance was higher for students who made the more accurate judgments (with delayed keywords), and this boost in performance was linked to being better able to select for restudy those texts that were less well learned (and hence could benefit from restudy). Instructors should understand that students use their monitoring (whether it is accurate or not) to guide their learning, so improving monitoring accuracy can improve regulation.

van Overschelde, J. P., & Nelson, T. O. (2006). Delayed judgments of learning cause a decrease in absolute accuracy (calibration) and an increase in relative accuracy (resolution). Memory & Cognition, 34, 1527-1538. Two measures of judgment accuracy – called relative and absolute – are psychologically and statistically distinct. Measures of relative accuracy (also known as resolution) estimate the degree to which a student’s monitoring judgments discriminate between pairs that are more (vs. less) likely to be recalled on a final test. Measures of absolute accuracy estimate the degree to which the absolute magnitude of the judgments matches the absolute level of performance. The present article is one of many demonstrating that these two kinds of accuracy can be dissociated – that is, they do not measure the same aspect of judgment accuracy. In the present case, 62 college students studied 66 Swahili-English pairs (e.g., Ardhi – soil) and then made a judgment of learning (i.e., What is the likelihood that you will recall the English word when shown the Swahili word in about 10 minutes from now?) for each pair. This judgment was prompted with each cue (i.e., Ardhi – ?) and was made either immediately after studying a pair or after a delay of several minutes (during which other items were studied and judged). Relative accuracy was measured by correlating each participant’s judgments with his or her own final recall; for absolute accuracy, each participant’s mean judgment magnitude was compared to his or her mean level of recall. Relative accuracy was substantially higher when the students made delayed rather than immediate judgments, whereas absolute accuracy was higher when the students made immediate rather than delayed judgments. Instructors who are interested in helping students improve their monitoring should be aware that the amount of time between when students study and when they judge their learning can affect the accuracy of those judgments. Students may be more accurate at deciding which topics require further study if they judge their learning of the topics after a delay.

Rawson, K. A., & Dunlosky, J. (2007). Improving students’ self-evaluation of learning for key concepts in textbook materials. European Journal of Cognitive Psychology, 19, 559-579. Students have difficulties accurately monitoring their progress, and tasks that help students more accurately monitor their learning have one dimension in common – the tasks produce cues that reveal (even if indirectly) how well the judged material has been learned. Given that students are often preparing for exams (and hence accurate monitoring would mean being able to predict exam performance), tasks that simulate the demands of the target exam presumably will produce predictive cues. For instance, Rawson and Dunlosky (2007) had 56 college students study science definitions of key-term concepts (e.g., adaptive thermogenesis refers to when the body expends energy to produce heat in response to a cold environment or as a result of overfeeding) in a laboratory setting. After studying, students were shown each key term (e.g., adaptive thermogenesis) and practiced recalling its definition. Students then judged the likelihood they would recall the correct meaning on the final test, which was administered at the end of the experiment. In this case, the practice test was identical to the final test. Even so, the absolute accuracy of students’ judgments was poor; in particular, they showed overconfidence when they produced commission errors on the practice test – that is, when they did respond with an answer, but that answer was totally incorrect. Such overconfidence presumably arises because students cannot directly evaluate the quality of their (completely incorrect) answers. By contrast, when students were provided feedback (i.e., the correct answer) while they were making their judgments, their overconfidence in the answers they produced was reduced. The feedback also improved subsequent recall of the concepts. Based on these outcomes, instructors can help their students monitor their learning by (a) providing them with tasks that simulate how students will be tested over their knowledge (i.e., relevant practice tests) and (b) providing appropriate feedback so students can better evaluate the quality of their answers.

Gagnon, M., & Cormier, S. (2019). Retrieval practice and distributed practice: The case of French Canadian students. Canadian Journal of School Psychology, 34, 83-97. How often do undergraduates use practice tests to prepare for exams? And, when they do take practice tests, why do students think they are useful? To answer these questions, 1,371 undergraduates completed a survey about how they studied for exams. Consistent with prior surveys, many (about 62%) but not all of the students reported using tests (e.g., taking quizzes at the end of chapters) while they studied. For those students who did use practice tests, do they view tests as a monitoring tool or as a mnemonic tool (or both)? The results from the survey confirm conclusions from prior studies: the majority of students who use testing largely use it as a means to monitor their progress (a monitoring tool), whereas few students prefer to use it as a way to learn new material (i.e., as a mnemonic tool). Thus, many students understand the value of using practice tests for monitoring their learning, so developing relevant practice tests (with feedback; see the Rawson and Dunlosky (2007) summary) will likely be well received and used by students. In general, however, instructors should realize that students may be underusing practice tests to prepare for exams – many do not consistently use this powerful tool, and those who do may use it exclusively to monitor their progress even though it can also boost their learning.

While taking an exam
  • Students can monitor and judge their performance at different levels: whether they answer a test question correctly, how well they performed on a set of questions that tapped a particular concept, and how well they performed across all questions on an exam.
  • The accuracy of students’ judgments of their performance is not perfect but can be improved.
  • Students use their item-level monitoring to decide when to change answers, so better judgment accuracy can lead to better test performance.
  • Changing answers (control of question answering) can improve exam performance, but doing so is not always beneficial.

de Carvalho Filho, M. K. (2009). Confidence judgments in real classroom settings: Monitoring performance in different types of tests. International Journal of Psychology, 44, 93-108. Although numerous investigations have focused on students’ confidence in their test answers, few have reported the relative accuracy of those judgments. One exception is reported by de Carvalho Filho (2009). College students enrolled in a developmental psychology course took four exams during the course, and each exam consisted of short-answer and multiple-choice questions. After answering each question on an exam, students judged the correctness of their answer on a scale from 0 (not sure at all) to 100 (maximum confidence). Relative accuracy of the judgments was measured by correlating each student’s judgments with his or her own performance across exam questions (separately for the two kinds of test question). Correlations close to zero would indicate chance levels of relative accuracy, whereas correlations closer to 1.0 would indicate close-to-perfect accuracy. The mean relative accuracy (across individual correlations) ranged across exams from .38 to .57 for multiple-choice questions and from .62 to .71 for short-answer questions. These outcomes are consistent with a great deal of laboratory research demonstrating that the accuracy of students’ confidence judgments is typically above chance (means greater than 0) but still far from perfect. Instructors should realize that students’ confidence in their answers will rarely be a perfect indicator of their relative performance across exam questions.

Händel, M., Harder, B., & Dresel, M. (2020). Enhanced monitoring accuracy and test performance: Incremental effects of judgment training over and above repeated testing. Learning and Instruction, 65, 101245. Can students learn to improve the absolute accuracy of their confidence judgments? College students enrolled in a psychology course could also sign up for an additional course that accompanied the lectures. Students who took this opportunity were assigned (based on their schedules) either to a version of the additional course that involved practice tests or to a version that involved practice tests and metacognitive training; students who did not sign up for the additional course served as a control group. All students received a pre-test prior to the course, and the final exam served as the post-test of learning gains. On these exams, after answering each multiple-choice question, students judged whether their answer was correct or incorrect. All students enrolled in the additional course took three mock exams across the semester that targeted previous course content, and they later received feedback that included their answers, the correct answers, and the correctness of their answers. For those who also received metacognitive training, the following procedure was used (for details, see Händel et al., 2020). In the first session, a lecture was given on the importance of accurate monitoring; students were also informed that overconfidence is common and were asked to elaborate on the dangers of overconfidence. During the mock exams, they also made confidence judgments and received feedback about the accuracy of their judgments. Performance and judgment accuracy on the pre-test were similar for all groups. All groups enjoyed a similar increase in exam performance from pre-test to post-test (about 22%), but differential improvement occurred for the measures of judgment accuracy. In particular, whereas students in the control and testing-practice groups showed an equivalent level of overconfidence (bias) on the final exam, students who had metacognitive training were more accurate and demonstrated little or no overconfidence. Moreover, one measure related to relative judgment accuracy was better for those who received metacognitive training. Thus, instructors should realize that, with instruction and practice aimed at making more accurate confidence judgments, students can reduce their overconfidence about their exam performance.

Cogliano, M. C., Kardash, C. A. M., & Bernacki, M. L. (2019). The effects of retrieval practice and prior topic knowledge on test performance and confidence judgments. Contemporary Educational Psychology, 56, 117-129. Cogliano et al.’s (2019) research is consistent with prior research demonstrating the benefits of retrieval practice in a classroom setting. Students enrolled in an educational psychology course received low-stakes practice questions (five for each chapter) that were repeated on the high-stakes exams. On the exams, students rated their confidence in each answer on a 0 to 100 percent (likelihood of being correct) scale. The students were also prompted to discuss the potential impact of the practice tests. Several outcomes are noteworthy. First, the practice items did not cover all content on the exams, so the impact of the practice tests could be evaluated. Consistent with the literature on the testing effect, taking practice tests subsequently improved performance on the class exam, especially when students had little prior knowledge about the content. Second, the absolute accuracy of students’ confidence judgments was better for exam questions they had previously practiced than for those they had not practiced. Thus, practice tests not only can boost exam performance but can also improve students’ ability to judge the quality of their knowledge. Finally, all the students reported that the practice tests were helpful, and their explanations about why the practice tests were helpful resonate with conclusions from the larger literature: most (77%) noted that the practice tests helped them to monitor what they were or were not understanding, some (41%) appreciated that the practice tests provided examples of the structure of the high-stakes exam, and a few (22%) believed the practice tests improved their learning. Thus, instructors should know that administering low-stakes practice test questions during class promises to improve both students’ performance and the absolute accuracy of their confidence judgments.

Koriat, A., & Goldsmith, M. (1996). Monitoring and control processes in the strategic regulation of memory accuracy. Psychological Review, 103, 490-517. Koriat and Goldsmith (1996) demonstrated that test performance is partly influenced by how accurately one evaluates the quality of answers that come to mind while taking an exam. To do so, they had students engage in a two-phase experiment: during the forced-report phase, participants were required to provide answers to general-knowledge questions and made a confidence judgment about the correctness of each answer. The second phase was free report: they were asked every question again, but now they could withhold responses if desired. In one experiment, the authors manipulated the accuracy of the confidence judgments across groups (by varying the degree to which the general-knowledge questions were deceptive) and measured confidence accuracy by correlating each participant’s judgments with his or her own performance during the first phase. The manipulation worked well, with relative accuracy being substantially greater for one group (M = .90, which is close to perfect accuracy) than for the other (M = .26). As expected, when participants responded with answers during the free-report phase, those who were more accurate at evaluating their answers usually responded with correct answers (about 75% correct), whereas those who were less accurate at evaluating their answers were far less likely to respond correctly (21% correct). Based on these outcomes, instructors should note that (a) students will use their confidence judgments to decide how to respond on exams, so (b) if their evaluations (i.e., confidence judgments) are not accurate, they will make poor decisions when taking tests.
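
The control policy described above can be made concrete with a small sketch. The numbers and the report criterion below are hypothetical (they are not Koriat and Goldsmith’s materials or data); the point is simply that the same withholding rule serves a student whose confidence tracks correctness and fails a student whose confidence does not.

```python
# A minimal sketch with hypothetical numbers (not Koriat & Goldsmith's
# materials): the free-report control policy - volunteer an answer only
# when confidence in it meets a report criterion.

def free_report_accuracy(items, criterion=70):
    """items: (confidence 0-100, answer_is_correct) pairs from forced
    report. Returns the proportion correct among volunteered answers,
    or None if the student withholds everything."""
    volunteered = [correct for conf, correct in items if conf >= criterion]
    if not volunteered:
        return None
    return sum(volunteered) / len(volunteered)

# A student whose confidence tracks correctness well (high resolution)...
good_monitor = [(95, True), (90, True), (80, True), (40, False), (30, False)]
# ...and one whose confidence is unrelated to correctness (low resolution).
poor_monitor = [(95, False), (90, True), (80, False), (40, True), (30, True)]

print(free_report_accuracy(good_monitor))  # 1.0: withholding filtered out the errors
print(free_report_accuracy(poor_monitor))  # ~0.33: the same criterion filtered poorly
```

The same withholding rule produces very different free-report accuracy for the two hypothetical students, mirroring the 75% versus 21% correct reported above: control decisions are only as good as the monitoring that informs them.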

Stylianou-Georgiou, A., & Papanastasiou, E. C. (2017). Answer changing in testing situations: The role of metacognition in deciding which answers to review. Educational Research and Evaluation, 23, 102-118. Students can control their test taking in numerous ways depending on the test format – for a multiple-choice format, they can elect to answer a question before examining the alternatives, and for a short-answer format, they can outline their answer before adding details. A common control decision that is likely driven in part by students’ confidence judgments is whether to change an answer. If students’ confidence judgments are highly accurate, they may be able to use the judgments to identify incorrect answers that could be changed so as to enhance their performance. Note, however, that students’ judgments are not perfectly accurate, so changing answers may not boost performance. Is changing answers a good test-taking strategy? In a prior meta-analysis by Waddell and Blankenship (1994), students on average tended to benefit from changes, but the benefits were small. Stylianou-Georgiou and Papanastasiou (2017) expanded on these earlier findings by examining college students’ answer-changing behavior on a final examination in an educational psychology course. The students first circled their answer for each multiple-choice question using a pen and judged their confidence in the correctness of each answer. They could then go back and change answers by crossing out the original answer and selecting another. The majority (84%) of the 120 students changed at least one answer, and the average number of incorrect-to-correct changes (M = 1.62) was slightly higher than the average number of correct-to-incorrect changes (M = 1.26). In this case, changes did not significantly boost overall performance, but they did not reduce performance either. Students were also more likely to change answers in which they were less confident, which suggests that they used their confidence judgments to inform their changes. Thus, one reason why the performance benefit of changing answers was small is that the relative accuracy of the judgments appeared relatively low, which would have constrained the benefits of using the judgments to decide which answers to change. Based on these outcomes, and in contrast to a common misconception, instructors should realize that changing answers on an exam does not necessarily hurt performance and, on average, tends to improve it slightly.

After taking an exam (Evaluating)
  • After completing a learning task such as an assignment or an exam, students can evaluate the effectiveness of (1) their individual strategies for learning and (2) their overall study plans.
    • For example, a student may plan to use three individual strategies to prepare for an exam: typing class notes after each lecture, answering practice questions each week, and meeting with a study group the day before the exam.  After enacting this plan and taking the exam, through evaluation the student may conclude that the individual strategies of answering practice questions and meeting with a study group were helpful, while typing class notes was not.  While evaluating their overall study plan, the student may conclude that together these strategies helped with understanding concepts, but the plan did not allow for application of concepts.
  • Undergraduate life science students evaluate in response to novel challenges.
    • Students are not likely to use metacognition for easy assignments and exams.  Instructors who want to promote metacognitive development should consider offering challenging first assignments and exams to help students use metacognition early in a course.  Students may be more likely to use metacognition when they are being asked to do something new.  For example, students who are being asked to predict outcomes of experiments may use metacognition if this type of prediction is a new skill.
  • Students evaluate individual learning strategies based on what they want their strategies to allow them to do.
    • For example, a student who wants to recall information may evaluate typing class notes as effective, whereas a student who wants to apply concepts may evaluate this same strategy as ineffective.
  • When determining the effectiveness of individual strategies, advanced undergraduates may use their knowledge of how people learn to evaluate effective strategies, whereas beginning students may consider how well a study resource aligns with the exam.
  • Undergraduate students tend to evaluate overall study plans based on performance, and many use their feelings of confidence or feelings of preparedness to inform their evaluation.
  • After an assignment, students can evaluate their learning using enhanced answer keys and reflection questions if they are given direct instruction on how to use these tools to improve their evaluation.

Dye, K. M., & Stanton, J. D. (2017). Metacognition in upper-division biology students: Awareness does not always lead to control. CBE—Life Sciences Education, 16(2), ar31. In this paper, the authors investigated metacognition in upper-division biology students to understand when, why, and how undergraduate students use evaluation skills. Evaluation includes the ability to determine the effectiveness of individual learning strategies and to appraise and adjust overall study plans. The authors analyzed data from two post-exam self-evaluation assignments (n=126) to identify upper-division biology students with high metacognitive regulation skills. They collected data from those students using semi-structured interviews (n=25) and analyzed the data using content and thematic analysis. The authors found that students did not evaluate in high school because they performed well in their science classes without studying. Students evaluated their approaches to learning when their undergraduate science courses presented novel challenges. For example, in organic chemistry, students had to learn through non-math-based problem solving for the first time. Most students evaluated in response to an unsatisfactory grade, but some evaluated when they monitored their understanding using study tools such as practice exams. Students evaluated the effectiveness of their learning strategies based on what their strategies allowed them to do. Some students wanted their strategies to allow them to obtain and recall information, whereas others wanted their strategies to allow them to monitor understanding. Still others evaluated their strategies based on their goal of being able to use and apply concepts. Based on the results, instructors should note that: (1) many life science students come to college with little experience with evaluation, (2) students may benefit from a mini-exam early in a course to encourage them to evaluate their approaches to learning, and (3) students may evaluate the same strategy differently when they have different goals for learning.

Stanton, J. D., Dye, K. M., & Johnson, M. S. (2019). Knowledge of learning makes a difference: A comparison of metacognition in introductory and senior-level biology students. CBE—Life Sciences Education, 18(2), ar24. In this paper, the authors directly compared introductory and senior-level biology students’ ability to evaluate, as well as the reasoning behind their evaluations, to gain insight into how metacognitive skills develop over time (n=315). The authors coded student responses to post-exam self-evaluation assignments for evidence of evaluating. They found that introductory and senior students did not differ in their ability to evaluate their individual strategies, but senior students were better at evaluating their overall plans. The authors examined students’ reasoning and found that senior students use knowledge of how people learn to evaluate effective strategies, whereas introductory students consider how well a strategy aligns with the exam to determine its effectiveness. Senior students consider modifying their use of a strategy to improve its effectiveness, whereas introductory students abandon strategies they evaluate as ineffective. Both groups use performance to evaluate their plans, and some students use their feelings to inform their evaluations. These results reveal differences between introductory and senior students that suggest ways metacognition can develop over time. Based on the results, instructors should note that: (1) introductory students may need help learning different ways to enact a strategy, and (2) students may need help evaluating their study plans based on criteria other than their performance or feelings. Instructors can provide direct instruction on how to enact a strategy, such as specific activities to do in a study group. Instructors can also provide students with questions to evaluate their study plans, such as: “How well did your plan help you understand concepts?” and “How well did your plan help you apply concepts and make connections between ideas?”

Sabel, J. L., Dauer, J. T., & Forbes, C. T. (2017). Introductory biology students’ use of enhanced answer keys and reflection questions to engage in metacognition and enhance understanding. CBE—Life Sciences Education, 16(3), ar40. How can we support students’ evaluation after they complete an assignment? In this study, the authors investigated the potential for enhanced answer keys and reflection questions to help introductory biology students learn after completing a graded assignment. Enhanced answer keys included the best answers to the questions, the instructor’s explanations of those answers, and common mistakes students made in answering the questions. Reflection questions probed students about their understanding of the concepts and their perception of the intelligibility, plausibility, and wide applicability of the answers on the key. The authors also studied the role of direct instruction in the use of the enhanced answer keys and reflection questions. Survey and interview data were analyzed in addition to course performance. The authors concluded that enhanced answer keys with reflection questions can help students be more metacognitive and achieve higher course grades, if students receive direct instruction on how to use these tools. Based on the results, instructors should note that: (1) enhanced answer keys with reflection questions can support student learning, but (2) students will likely require direct instruction on the value and use of these tools in order to engage with them productively.

Exam Self-Evaluation Assignments. Supplemental materials for Dye, K. M., & Stanton, J. D. (2017). Metacognition in upper-division biology students: Awareness does not always lead to control. CBE—Life Sciences Education, 16(2), ar31. Two exam self-evaluation assignments designed for undergraduate science students are freely available in the Supplemental Materials section of this paper.


Cite this guide: Stanton JD, Sebesta AJ, and Dunlosky J (2021). Evidence Based Teaching Guide: Student Metacognition. LSE. Retrieved from https://lse.ascb.org/evidence-based-teaching-guides/student-metacognition/