Instructional Interactions

  • To encourage participation via incentives:
    • Low-stakes grading incentives, in which correct and incorrect answers receive equal or nearly equal credit, produce more robust exchanges of reasoning and more even contributions from all members of the group, whereas high-stakes grading incentives tend to let a single group member dominate the discussion.
    • Social incentives can also shape peer discussion; randomly calling on groups to explain the reasoning behind an answer, rather than asking for volunteers, increases exchanges of reasoning during peer discussion.
  • Implementation options during peer discussion:
    • Variation in instructor behavior provides students with opportunities to engage in a range of scientific activities.
      • The instructor can stay within earshot of students without engaging with them during peer discussion to promote autonomy; at other times, the instructor may answer student questions or discuss possibilities with small groups.
      • During discussion of the solution, the instructor may sometimes describe the solution and at other times encourage students to jointly describe and evaluate it.
      • Instructors can also encourage students with low self-efficacy to engage more fully in discussions, potentially increasing those students’ confidence and improving their learning. Instructor cues that encourage students to explain their reasoning shape classroom norms and student behavior, with such prompts leading to higher-quality peer discussions. These prompts can also be valuable during learning assistants’ interactions with student groups.
    • Showing the histogram of responses
      • Traditional implementation of PI involves displaying the histogram of student responses after students answer individually but before peer discussion. This practice, however, may bias students toward the most common answer and reduce the value of peer discussion.
  • Implementation options during whole-group discussion:
    • Instructor explanation of the correct answer after peer discussion produces benefits beyond peer discussion alone. The effect of peer discussion is greater for strong students, while the impact of instructor explanation is greater for weak students.
  • The use of personal response devices does not appear to impact student learning in PI, although students may exhibit higher participation and enjoyment when using clickers compared with hand-raising.

Accountability

James MC (2006). The effect of grading incentive on student discourse in peer instruction. Am J Phys 74, 689–691. James examined the impact of grade incentives on student participation in two introductory astronomy classes: a “high-stakes” course in which clicker questions accounted for 12.5% of the course grade, with incorrect answers earning one-third the credit of a correct response, and a “low-stakes” course in which clicker questions accounted for 20% of the grade, with correct and incorrect answers earning equal credit. Students discussed each clicker question with a neighbor before responding individually. James audio-recorded conversations between students in each course. In the high-stakes course, conversations tended to be dominated by a single partner, and that partner was typically the student who earned the higher course grade. Further, partners in the high-stakes course gave the same answer more than 90% of the time. In the low-stakes course, the discussion tended to be more balanced between the two partners, who gave different answers approximately 35% of the time. These results are consistent with the possibility that grading for correct answers on clicker questions incentivizes students to seek the correct answer rather than using discussion to engage in self-explanation and meaning-making.

James MC, Barbieri F, Garcia P (2008). What are they talking about? Lessons learned from a study of peer instruction. Astron Educ Rev 7, 37–43. The authors investigated whether grading incentive impacted clicker question-prompted peer discussion in two sections of an introductory astronomy course. During one section of this course, taught in the fall, the instructor used a high-stakes grading scheme in which clicker questions accounted for 12.5% of the course grade and incorrect answers received one-third the credit of correct answers. During the spring, student responses to the clicker questions again accounted for 12.5% of the course grade, but incorrect answers received 90% of the credit awarded for correct answers. The same instructor taught the two sections of the course, and the same clicker questions were used. From each section of the course, selected student discussions were recorded and transcribed. In the high-stakes semester, discussions tended to be dominated by a single partner; this bias was significantly reduced in the low-stakes semester. Further, partners recorded different responses to the clicker questions significantly more often in the low-stakes semester (17%) than in the high-stakes semester (8%). These results may suggest that students in the low-stakes class focused discussion on understanding rather than on finding a correct answer.

Willoughby SD, Gustafson E (2009). Technology talks: clickers and grading incentive in the large lecture hall. Am J Phys 77, 180–183. The authors investigated the impact of grading incentives on learning gains and discussion patterns in sections of a high-enrollment astronomy class for nonmajors. Clicker questions were worth 4% of the course grade and were discussed by groups of four before students answered individually. In the high-stakes sections, only correct answers were awarded credit, while in the low-stakes sections, all answers were awarded credit. Learning gains were calculated using course grades and the Astronomy Diagnostic Test (ADT). In one semester, randomly selected student discussions were recorded for analysis. There was no difference in learning gains between the low-stakes and high-stakes sections. The recorded discussions, however, revealed that the high-stakes section demonstrated significantly more block voting (i.e., instances in which all students in a group voted identically) and more correct answers, as well as less robust discussions. Since the larger fraction of correct answers did not produce evidence of learning gains by other measures, this result may indicate that students in high-stakes classrooms focus on identifying the right answer rather than using discussion to promote understanding. This result is consistent with other work by James (2006, 2008) but was not replicated in the second semester of this study.

Len P (2007). Different reward structures to motivate student interaction with electronic response systems in astronomy. Astron Educ Rev 5, 5–15. This study followed the impact of different reward structures for answering clicker questions in class on student attitudes. Students took pre- and post-course attitude and concept surveys and completed a Self-Report of Clicker Use Survey on the last day of class, categorizing themselves as either self-testers or collaborators. Students answered two kinds of clicker questions during class: “introduction” questions, intended to elicit student responses with no grading, and “review” questions, which were also ungraded but offered students the potential to score double participation points if the class achieved at least 80% correct. Thirteen students reported as self-testers and 23 as collaborators for introduction questions; all but one student reported as a collaborator for review questions. The self-testers and collaborators did not differ from each other on either the pre- or post-course concept assessment. In general, students scored significantly higher on review questions than on introduction questions, demonstrating an effect of collaborating, but no difference was found between self-tester and collaborator individual scores. Self-testers did not change their attitudes from pre to post. However, collaborators showed significant decreases from pre to post on two of the attitude subscales. On a post-course survey asking students to rate how course components helped with understanding the course material, self-testers gave higher ratings than collaborators to both introduction and review questions, and significantly higher ratings for lecture content. The authors suggest that since only students who identified themselves as collaborators had significant downward trends in attitudes, self-testers may be more comfortable with science (have more positive attitudes) and thus more likely to self-test than to collaborate.

Knight JK, Wise SB, Sieke S (2016). Group random call can positively affect student in-class clicker discussions. CBE Life Sci Educ 15, 1–11. In PI, students are often asked to share their reasoning about clicker questions after peer discussion. In this study, the authors investigated whether randomly calling on groups to explain their reasoning, rather than asking for volunteers, could provide an incentive that promotes robust discussion in an introductory biology course for majors. Students received credit for participating, were reminded to explain their reasoning during peer discussions, and did not see a histogram of student responses until after whole-class discussion of the question. Discussions were recorded from six groups in each treatment (random call and volunteer call), transcribed, and analyzed for discussion characteristics (e.g., making claims, requesting reasoning, providing different levels of reasoning quality). Students in the random call section exchanged significantly more turns of speech per discussion, and groups in the random call condition were significantly more likely to use exchanges of quality reasoning, to request information, and to request feedback during peer discussion than groups in the volunteer condition. There was no difference between the two sections in performance on clicker questions or in attitudes. These results suggest that use of random call after peer discussion may promote more robust exchanges of reasoning than requesting volunteers.

Chou C-Y, Lin P-S (2015). Promoting discussion in peer instruction: discussion partner assignment and accountability scoring mechanisms. Br J Educ Technol 46, 839–847. In this study, 84 computer science majors enrolled in a computer programming course answered questions in class, first individually and then again following peer discussion, using a web-based classroom response system called ClassHelper. For the first half of the study, students were assigned to random seats and different discussion partners each week. After peer discussion and revoting, students were asked to report on whether they had collaborated with their partners. Points were awarded for in-class participation based on the proportion of students within the group who answered the question correctly, weighted such that credit reflected both the correctness of the individual and the correctness of their partner (one possible form of this rule is sketched below). Under these circumstances, students reported collaborating 82% of the time. For the second half of the study, students were allowed to sit anywhere and to choose whether to collaborate; their scores were individual only. In this format, students reported collaborating only 60% of the time. Most students reported that the scoring system and the discussion prompting stimulated discussion, but fewer than half liked the collaborative scoring system, preferring individual scoring. Students also reported that discussions with classmates helped them clarify concepts and enhanced their understanding. The authors conclude that a scoring system that incentivizes all students to get the answer correct fosters positive interdependence among group partners, encouraging them to work together and reap the benefits of peer discussion even if they do not like the scoring system.
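
To make the accountability mechanism concrete, the sketch below shows one way a participation score could reflect both partners’ correctness. Chou and Lin do not publish an exact formula, so the equal weighting in this Python function is a hypothetical illustration, not their implementation.

```python
# Hypothetical collaborative scoring rule in the spirit of Chou & Lin (2015).
# The paper does not specify exact weights; the 50/50 split below is an
# illustrative assumption, not the authors' published method.

def participation_score(own_correct: bool, partner_correct: bool,
                        own_weight: float = 0.5) -> float:
    """Return credit reflecting both a student's own correctness and
    their discussion partner's correctness."""
    partner_weight = 1.0 - own_weight
    return own_weight * float(own_correct) + partner_weight * float(partner_correct)

# A student who answers correctly while their partner does not earns only
# partial credit, so both partners gain by helping each other understand.
print(participation_score(True, True))    # 1.0
print(participation_score(True, False))   # 0.5
print(participation_score(False, False))  # 0.0
```

Under any rule of this shape, full credit requires that both partners answer correctly, which is the positive interdependence the authors describe.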

Instructional Cues

Turpen C, Finkelstein ND (2010). The construction of different classroom norms during peer instruction: students perceive differences. Phys Rev ST Phys Educ Res 6, 020123. The authors investigated whether differences in the ways that instructors in three physics classes implemented PI led to discernibly different classroom norms. The authors used classroom observations to characterize instructor actions and instructor-student and student-student interactions around clicker questions. Each type of interaction provided the instructor and the students with different possible roles (e.g., rebutting a peer’s physics ideas, presenting a question, listening to the instructor’s explanation). One instructor varied the way he implemented clicker questions but typically promoted student-student discussion. The other two instructors used a more limited range of faculty-student interactions; one of them explicitly promoted student-student discussion, while the other did not. Students in the more varied class reported more peer discussion and more frequent and comfortable collaboration with the instructor. In addition, the researchers examined whether instructors’ implementation practices affected the value students placed on explaining their reasoning and supporting their answers; greater instructor emphasis on sense-making enhanced students’ perception that this was important. The authors conclude that instructional practices influence the norms in a course, determining to what degree students perceive peer discussion and student-instructor collaboration as valuable means for making sense of course concepts. Their description of the seven types of interactions also serves as a valuable resource for instructors, providing concrete descriptions of several ways to implement clicker questions.

Knight JK, Wise SB, Southard KM (2013). Understanding clicker discussions: student reasoning and the impact of instructional cues. CBE Life Sci Educ 12, 645–654. The authors investigated the characteristics of clicker question-prompted peer discussions in an upper-level developmental biology course by recording, transcribing, and analyzing 83 small-group discussions about 34 clicker questions. They focused particularly on student argumentation—making claims, providing evidence, and linking the two—and on the impact of instructor prompts on the use of argumentation. In the discussions analyzed, approximately 40% of student comments focused on explaining reasoning and ~30% on making claims. This percentage was not affected by the fraction of students who initially answered the question correctly and did not correlate with the fraction of students who answered correctly after discussion. Seventy-eight percent of the discussions involved exchanges of reasoning, and higher-quality discussions tended to produce a greater increase in percent correct responses after the discussion. Instructional cues varied, with the instructor asking the students to focus on their reasoning in ~60% of the discussions and on finding the correct answer in the remaining 40%. Importantly, when the instructor used reasoning cues, students engaged in significantly more high-quality discussions. Thus, instructor prompts that focus students on explaining reasoning may have a positive impact on the quality of peer discussion.

Turpen C, Finkelstein ND (2009). Not all interactive engagement is the same: variations in physics professors’ implementation of peer instruction. Phys Rev ST Phys Educ Res 5, 020101. The authors investigated the implementation of PI in six high-enrollment introductory physics classes, developing a system for describing and measuring classroom practices that contribute to different classroom norms. They observed four types of questions (logistical, conceptual, algorithmic, and recall) but relatively little variation in the extent to which the six instructors used each type. They also observed that student-student interaction around clicker questions was consistent across the six classrooms. They did observe significant variation, however, in instructors’ interactions with students during the question response period, with some instructors remaining in the stage area and others moving into the classroom and interacting extensively with students. Further, they saw variation in the clicker question solution discussion stage in two ways: first, some instructors always addressed incorrect responses during the discussion, while others sometimes eliminated this step; second, student contribution to the class-wide explanation varied, with the average number of students contributing ranging from 0 to 2.4. In addition, the way that instructors interacted with students varied significantly, resulting in different classroom norms around discussion of reasoning. The authors report that differences in instructor practice produce variations in students’ opportunities to practice conceptual reasoning, talk about the subject matter, exercise agency, and engage in scientific inquiry.

Knight JK, Wise SB, Rentsch J, Furtak EM (2015). Cues matter: learning assistants influence introductory biology student interactions during clicker-question discussions. CBE Life Sci Educ 14, 1–14. Instructor-student interactions around clicker questions impact peer discussion, with prompts to explain reasoning promoting richer discussions and enhanced learning. The authors investigated how peer coaches affected students’ use of reasoning in their discussions. Peer discussions in an introductory biology course were analyzed for argumentation practices (e.g., use of analogy or questioning) in the presence and absence of peer coaches. Groups that interacted regularly with a peer coach used less reasoning and more questioning than groups that did not interact with a peer coach during discussion, but they also spent a larger percentage of the allotted time in productive discussion. When comparing discussions within a given group with and without a peer coach, the researchers observed that groups had longer, more productive discussions and requested more feedback but less information when the peer coach was present. An analysis of peer coach prompts revealed that direct requests for groups to explain their reasoning were effective, but that peer coaches often provided the reasoning for the groups. These results suggest that peer coaches, as well as instructors, should use specific cues to prompt students to articulate their reasoning during peer discussions.

Brooks BJ, Koretsky MD (2011). The influence of group discussion on students’ responses and confidence during peer instruction. J Chem Educ 88, 1477–1484. This study related written student explanations for answers before and after group discussion to student performance and confidence. The authors also studied whether displaying the voting histogram for individual answers affected students’ answer choices after discussion. Two cohorts of students in a chemical thermodynamics course answered the same five question pairs, each on a different topic, using the typical PI cycle. After each vote, students were given time to write an explanation of their answer choice and reported their confidence on a 5-point Likert scale. In both cohorts, significantly more students changed from incorrect to correct than from correct to incorrect when the most common answer was correct, but not when the most common answer was incorrect. More students also changed their answer to match the consensus answer than from the consensus to another answer. This held true regardless of whether the consensus answer was correct and whether or not the histogram of votes was shown. In examining student explanations of their answers, explanation scores increased from the individual vote to after discussion for students who answered correctly both times and for those who went from incorrect to correct. No significant correlation was found between confidence ratings and either correctness or consensuality. The authors conclude that group discussion during peer instruction helps students construct deeper explanations in all circumstances. Students who chose the correct but not the consensus answer after discussion were less confident in their responses than other students but wrote the best explanations. The authors point out that instructor-led discussion may be critical in situations in which most students answer the question incorrectly.

Perez KE, Strauss EA, Downey N, Galbraith A, Jeanne R, Cooper S (2010). Does displaying the class results affect student discussion during peer instruction? CBE Life Sci Educ 9, 133–140. The authors investigated one element of PI, asking whether displaying a graph of student responses before peer discussion influences students to adopt a popular answer independently of the discussion. To address this question, the authors varied the display of a student response graph before peer discussion for 18 clicker questions in eight sections of an introductory biology course. Students were significantly more likely to switch to the most common answer when the student response graph was displayed prior to peer discussion. These results suggest three possibilities: the graph biases students toward the most common answer and potentially reduces the value of peer discussion; the graph provides a talking point that enhances the value of the peer discussion; or the graph prompts students to reevaluate and find flaws in incorrect answers. A post hoc analysis suggested that the effect was enhanced for more difficult questions, supporting the possibility that it is due to bias. The authors therefore suggest care in displaying student response graphs before peer discussion.

Miller K, Schell J, Ho A, Lukoff B, Mazur E (2015). Response switching and self-efficacy in peer instruction classrooms. Phys Rev ST Phys Educ Res 11, 010104. This study collected response data from 91 students in an introductory electricity and magnetism course who engaged in PI for 83 different clicker questions over the semester. Students were grouped into categories based on whether and how their answers changed from their individual vote to their vote after discussion (these categories are sketched below). In particular, the authors were interested in “negative” switching, in which students switched from a correct to an incorrect answer, or from an incorrect answer to another incorrect answer, after peer discussion. Switching data were then correlated with the difficulty of the clicker question and with students’ scores on a self-efficacy survey. Students switched their answers 44% of the time, usually in a positive direction (73% of switches were from wrong to right). Students with low self-efficacy were significantly more likely to switch their answers negatively than students with high self-efficacy, even when incoming knowledge was controlled for with a pretest. Students were also more likely to engage in a negative switch on difficult items than on easier items. In addition, females reported lower self-efficacy than males and were more likely to engage in negative switching. The authors conclude that the strong correlation between switching and self-efficacy calls for interventions to help students have positive experiences during peer instruction, including building mastery, providing modeling, and reducing stressful in-class situations.
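
The switching categories in this study can be made concrete with a short sketch. The function below is a hypothetical reconstruction from the prose of how a pair of votes might be classified; the authors describe their coding only in words, so the labels and logic here are illustrative assumptions.

```python
# Hypothetical sketch of the answer-switching categories described in
# Miller et al. (2015): an illustrative reconstruction, not the authors' code.

def classify_switch(before: str, after: str, correct: str) -> str:
    """Classify a student's vote change across the PI discussion cycle."""
    if before == after:
        return "no switch"
    if after == correct:
        return "positive switch"  # wrong -> right
    # correct -> incorrect, or incorrect -> a different incorrect answer
    return "negative switch"

# Example: a student moves from answer "B" to "C" when "A" is correct.
print(classify_switch("B", "C", correct="A"))  # negative switch
```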

Instructor Explanation & Modeling

Smith MK, Wood WB, Krauter K, Knight JK (2011). Combining peer discussion with instructor explanation increases student learning from in-class concept questions. CBE Life Sci Educ 10, 55–63. The authors asked whether peer discussion, instructor explanation, or a combination of both led to more conceptual change in genetics courses. Student responses to paired, isomorphic clicker questions were used to compare the conditions. For each question pair, students voted on the first question individually. In the first condition, students then discussed the question with peers and revoted prior to learning the correct answer with no explanation. In the second condition, students volunteered reasons for their answers and heard the instructor’s explanation. In the third condition, students engaged in peer discussion, revoted, volunteered reasons for their answers and heard the instructor’s explanation. In all three conditions, students then voted individually on a second, isomorphic question (Q2).  Both peer discussion and instructor explanation significantly improved student performance on Q2, and the combination of peer discussion and instructor explanation produced larger learning gains than either alone. When the authors compared the effects of the treatments for weak, medium, and strong students, they found that strong students derived the greatest benefit from peer discussion while weak, nonmajor students benefited most from instructor explanation.

Zingaro D, Porter L (2014). Peer instruction in computing: the value of instructor intervention. Comput Educ 71, 87–96. This study was carried out in an introductory computer science course with 113 students. The authors used isomorphic pairs of questions to determine whether students improved more using peer discussion alone or peer discussion combined with instructor intervention (combined mode). They also asked whether gains were greater for harder or easier questions and whether strong or weak students made higher gains. They analyzed clicker performance data for 12 isomorphic questions in each condition for which initial performance was below 80%. The results demonstrate that the combined mode promotes significantly higher score improvement than peer discussion alone. In addition, strong students improved more than other students, but all groups of students made progress, even when the initial question was difficult. When questions were more difficult, the combined mode was again more effective at improving student performance than peer discussion alone. Thus, although peer discussion helps students learn, instructor-led discussion before a second question provides an additional boost to learning for all groups of students.

Technology

Zayac RM, Ratkos T, Frieder JE, Paulk A (2016). A comparison of active student responding modalities in a general psychology class. Teach Psychol 43, 43–47. The authors asked whether the method of responding to in-class questions—electronic response systems, response cards, hand-raising, or no response—affected student exam performance and whether one of these methods was preferred by students in a general psychology course. The study was implemented in four sections of the course using an alternating-treatment design. Specifically, for each unit of the course, students used one type of response before taking an exam; for the next unit, they shifted to another type of response. The different ways of responding to questions did not result in a difference in exam performance, although all three methods of responding resulted in significantly higher scores than when students did not respond to questions. Seventy-eight percent of students preferred the electronic response system, with 56% noting that it helped them understand and kept them engaged more than the other modes of answering. While this study does not address peer discussion and did not provide an opportunity for peer discussion around the questions, student preference for clicker response is notable.

Stowell JR, Nelson JM (2007). Benefits of electronic audience response systems on student participation, learning, and emotion. Teach Psychol 34, 253–258. The authors asked whether using clickers, hand-raising, or response cards to answer in-class questions led to different levels of participation and self-reported positive emotion. A total of 140 undergraduates enrolled in an introductory psychology class were assigned to one of four conditions: a standard lecture that included open-ended, informal questions, or one of three conditions in which the standard lecture was complemented with multiple-choice review questions that students answered by raising their hands, holding up response cards, or using clickers. No formal opportunity for peer discussion was included. Students completed the Academic Emotions Questionnaire (AEQ) before and after the class session, as well as a 10-item quiz after the lecture. Class sessions were videotaped to allow assessment of participation and correctness in conditions in which answers were not recorded. The clicker and response card groups exhibited significantly higher participation than the hand-raising group. Students in the clicker condition performed most poorly on in-class review questions, but there was no significant difference among the groups in post-lecture quiz performance. Students in the standard lecture condition exhibited the lowest enjoyment and most boredom as measured by the AEQ, but there was no measurable difference among the other three groups.

Cite this guide: Knight JK, Brame CJ. (2018) Evidence Based Teaching Guide: Peer Instruction. CBE Life Sciences Education. Retrieved from http://lse.ascb.org/evidence-based-teaching-guides/peer-instruction/