Question Structure and Purpose

  • Questions may be logistical, recall, algorithmic, or conceptual. Within the general category of conceptual questions, instructors may use application questions, case-study questions, or procedural questions, among others.
  • Questions are often one-best-answer multiple choice, but multiple true-false, free response, and questions that promote drawing can provide benefits.
  • Questions that require either lower order or higher order cognitive skills can promote robust peer discussion. The Blooming Biology Tool can help instructors characterize the cognitive level of their questions.
  • Questions that uncover misconceptions may have particular benefits, and common misconceptions may vary for biology majors and nonmajors.
  • Questions should be challenging enough to provoke interest and discussion. More difficult questions can lead to the greatest learning gains.
Turpen C, Finkelstein ND (2009). Not all interactive engagement is the same: variations in physics professors’ implementation of peer instruction. Phys Rev ST Phys Educ Res 5, 020101. The authors investigate the implementation of peer instruction (PI) in six high-enrollment introductory physics classes, developing a system for describing and measuring classroom practices that contribute to different classroom norms. They observed four types of questions (logistical, conceptual, algorithmic, and recall) but relatively little variation in the extent to which the six instructors used each type. They also observed that student-student interaction around clicker questions was consistent across the six classrooms. They did observe significant variation, however, in instructors’ interactions with students during the question response period, with some instructors remaining in the stage area and others moving into the classroom and interacting extensively with students. Further, they saw variation in the clicker question solution discussion stage in two ways: first, some instructors always addressed incorrect responses during the discussion, while others sometimes eliminated this step; second, student contribution to the class-wide explanation varied, with the average number of students contributing ranging from 0 to 2.4. In addition, the way that instructors interacted with students varied significantly, resulting in different classroom norms around discussion of reasoning. They report that these differences in instructor practice produce variations in students’ opportunities to practice conceptual reasoning, talk about the subject matter, exercise agency, and engage in scientific inquiry.

Beatty ID, Leonard WJ, Gerace WJ, Dufresne RJ (2006). Question driven instruction: teaching science (well) with an audience response system. In Banks DA (Ed), Audience Response Systems in Higher Education: Applications and Cases (pp 96–115). Hershey, PA: Idea Group Inc. This paper describes the organization of so-called “question-driven instruction,” in which a cycle of questioning, discussion, and feedback features prominently during in-class meetings. The authors argue that using audience response systems (especially in a large course where individualized instruction is difficult) can transform the classroom dynamic, allowing for a student- and assessment-centered approach. In addition, because students frequently exchange ideas with each other, this format encourages a community-based approach to learning that enhances student communication and understanding. They point out that the act of articulating one’s thoughts to others has value both for oneself and for other students, and that learning how to generate an argument to support one’s ideas is also highly valuable. The authors go on to suggest that such instruction should address 12 habits of mind, which include predicting, categorizing, planning, and justifying, as well as more metacognitive tasks such as monitoring, refining, re-evaluating, and reflecting. They suggest designing questions to explicitly address these habits of mind, focusing on skills that students can practice and demonstrate as a result of working on these types of questions. The authors also note that in-class questions can be ambiguous (while exam questions cannot), and that ambiguity or multiple correct answers can stimulate discussion and thus learning. The paper goes on to discuss suggestions for classroom management and best practices for modeling and coaching.

Agile Learning. Derek Bruff’s blog on Teaching and Technology. This blog includes a section on question types, listed as Application Questions, Case Study Questions, Conceptual Questions, Free-Response Questions, Monitoring Questions, Multiple-Mark Questions, On-the-Fly Questions, One-Best-Answer Questions, Prediction Questions, Procedural Questions, Recall Questions, and Student Perspective Questions.

Turn to Your Neighbor: The Official Peer Instruction Blog. This blog includes sections on clicker questions written by Julie Schell.

Hubbard JK, Potts MA, Couch BA (2017). How question types reveal student thinking: an experimental comparison of multiple-true-false and free-response formats. CBE Life Sci Educ 16, 1–13. The authors compare student responses to matched sets of multiple true-false (MTF) and free-response (FR) questions to determine whether students answer the two formats similarly, both in terms of correctness and in terms of the partial or mixed conceptions revealed. A crossover design was used to administer either MTF or FR questions to 405 introductory biology students on three unit exams. Students also answered the same multiple choice and MTF questions (control questions) on each exam to provide a means of comparing the groups of students. In the MTF format, students responded correctly most of the time, while in the FR format, far fewer provided completely correct conceptions: 55% of students provided responses that were categorized as unclear, in that they did not address conceptions given in the MTF statements. When students were divided into quartiles by overall exam performance, students with overall scores in the bottom quartiles gave a significantly higher proportion of unclear responses. In the MTF format, most students had mixed conceptions (answering statements both correctly and incorrectly), while in the FR format, more students had partial conceptions, i.e., correct conceptions accompanied by additional unclear conceptions. Thus, in both question formats, the majority of student answers reflected some type of incomplete understanding of the conceptions related to a question, differing in the way the incomplete ideas were expressed. The authors suggest that the MTF format is better at diagnosing specific incorrect conceptions, while the FR format is a more authentic representation of student thinking but does not necessarily reveal ideas about specific incorrect conceptions.

Quillin K, Thomas S (2015). Drawing-to-learn: a framework for using drawings to promote model-based reasoning in biology. CBE Life Sci Educ 14, 1–16. This essay summarizes previous studies regarding the benefits of drawing on learning. Evidence suggests that generating an internal model of how something works benefits student learning, but studies are contradictory regarding the additional benefit of generating an external, drawn representation. In part, the contradictions could be due to the increased cognitive load of drawing or to student unfamiliarity with using drawing to represent one’s thinking. The authors share a framework for distinguishing between pedagogical goals, including whether drawings are intended to be representational or abstract, and whether they are intended as formative or summative exercises. Thus, drawing exercises can be used either as a way of helping students learn or as a way of helping them communicate information, but likely not both at the same time. The authors suggest and elaborate on three kinds of interventions using drawing, each intended to improve a different element likely to impact student learning: affect (attitude, value, self-efficacy); visual literacy (facility with the visual language of the discipline); and model-based reasoning (drawing, using, modifying, and evaluating one’s models).

Knight JK, Wise SB, Southard KM (2013). Understanding clicker discussions: student reasoning and the impact of instructional cues. CBE Life Sci Educ 12, 645–654. The authors investigated characteristics of clicker question-prompted peer discussions in an upper-level developmental biology course by recording, transcribing, and analyzing 83 small-group discussions about 34 clicker questions. They focused particularly on student argumentation (making claims, providing evidence, and linking the two) and on the impact of instructor prompts on the use of argumentation. In the discussions analyzed, approximately 40% of student comments focused on explaining reasoning and ~30% on making claims. These percentages were not affected by the fraction of students who initially answered the question correctly and did not correlate with the fraction of students who answered correctly after discussion. Seventy-eight percent of the discussions involved exchanges of reasoning, and higher-quality discussions tended to produce a greater increase in the percentage of correct responses after discussion. The authors found that questions requiring either lower order or higher order cognitive skills had the potential to elicit discussions that involved exchanges of reasoning. Instructional cues varied: the instructor asked students to focus on their reasoning in ~60% of the discussions and on finding the correct answer in the remaining 40%. Importantly, when the instructor used reasoning cues, students engaged in significantly more high-quality discussions. Thus, instructor prompts that focus students on explaining reasoning may have a positive impact on the quality of peer discussion.

Crowe A, Dirks C, Wenderoth MP (2008). Biology in bloom: implementing Bloom’s taxonomy to enhance student learning in biology. CBE Life Sci Educ 7, 368–381. The Blooming Biology Tool (BBT) was designed and tested during the ranking of 600 science questions from life science exams and standardized tests. The authors established guidelines for placing questions into different categories and for distinguishing between lower-order cognitive skill (LOCS) and higher-order cognitive skill (HOCS) questions. In addition to being a useful tool for instructors, the authors show that the BBT can also be used by students to identify the level of exam questions. The authors also created the Bloom’s-based Learning Activities for Students (BLASt), a tool that directs students on how to strengthen study skills at each Bloom’s level, allowing them to focus on improving their skills at particular levels of questions. The authors give examples of using the BBT in three different environments: a laboratory course, a large lecture course, and a workshop setting. They demonstrate student improvement on HOCS questions as a consequence of specific interventions to address such skills, facility in student identification of different Bloom’s levels, and effective training of students to use the BBT. In summary, by clearly defining and giving examples of questions at each Bloom’s level, the authors provide a tool that can be used by faculty to create questions at appropriate levels and by students to identify which levels of questions are particularly challenging.

Modell H, Michael J, Wenderoth MP (2005). Helping the learner to learn: the role of uncovering misconceptions. Am Biol Teach 67, 20–26. This paper reports on a meeting in which faculty from multiple science disciplines were convened to discuss misconceptions and how they can be addressed in courses. The instructors agree that students have mental models that are often flawed; these models may cause students to struggle to understand a discipline or to add new knowledge on top of flawed foundations. Three examples in the paper highlight the types of struggles students often display in thinking about a concept: absent or incorrect links between elements of a model, failure to examine the implications of a model, and misunderstanding of biological language and/or system complexity. Instructors are encouraged to explicitly address misconceptions by helping students recognize that their models are incomplete or incorrect. If misconceptions can be uncovered as a first step in a diagnostic process, the instructor can gain insight into student thinking and design material to help students overcome their misconceptions. Additionally, for students, realizing they have a misconception is the first step in modifying their mental models.

Coley JD, Tanner K (2015). Relations between intuitive biological thinking and biological misconceptions in biology majors and nonmajors. CBE Life Sci Educ 14, 1–19. The authors explored biology majors’ and nonmajors’ responses to misconception statements about six basic biological ideas, with the intent of understanding how students use cognitive “construals”: ways of thinking that are teleological, essentialist, or anthropocentric. Students read six misconception statements, each of which drew on one form of cognitive construal, indicated whether they agreed with the statement on a 1–5 Likert scale, and then provided a written explanation of their rating. Student explanations were then coded for the presence of the three types of construals. Students agreed most strongly with misconceptions that were teleological statements, next most with anthropocentric statements, and least with essentialist statements. Nonmajors had significantly higher mean agreement with all three types of misconception statements than did majors. In written explanations, essentialist construals were most common for biology majors, followed by anthropocentric and then teleological construals, while for nonmajors both essentialist and anthropocentric construals were more common than teleological construals. The only significant difference between majors and nonmajors was in the use of anthropocentric construals, which were more common among nonmajors. For biology majors only, there was a strong relationship between their written use of construals and the misconception statements with which they agreed. The authors suggest this difference could be due to students’ experiences in secondary biology education, which may drive or reinforce these relationships.

Smith MK, Wood WB, Adams WK, Wieman C, Knight JK, Guild N, Su TT (2009). Why peer discussion improves student performance on in-class concept questions. Science 323, 122–124. In PI, students answer conceptual multiple choice questions individually, discuss the questions with their neighbors, and then revote before the instructor explains the correct answer. Typically, the number of students providing a correct response increases after the peer discussion. To determine whether peer discussion promotes understanding or simply persuades students to vote for the answers chosen by their peers, the researchers used 16 paired clicker questions in a high-enrollment genetics class. Students voted on the first question individually, discussed it with peers and revoted, and then voted on a second, similar question before the instructor’s explanation. As expected, the percentage of students answering the first question correctly increased after discussion. Importantly, the percentage of students answering the second, similar question correctly was significantly higher than for the first question, indicating that students’ conceptual understanding had increased. Peer discussion improved student understanding for questions of different difficulty levels, but the greatest benefit was observed for the most difficult questions. Statistical analysis suggested that the improvement extended to groups in which none of the students initially understood the concept. Further, students who answered the first question incorrectly both before and after discussion still answered the second question correctly at a better-than-chance rate, indicating that discussion benefited even students who did not arrive at the correct answer. Thus, peer discussion is an essential element for deriving benefit from clicker questions.

Lemons PP, Lemons JD (2013). Questions for assessing higher-order cognitive skills: it’s not just Bloom’s. CBE Life Sci Educ 12, 47–58. In this paper, Lemons and Lemons describe the discussions of biology instructors as they wrote and reviewed assessment questions. In particular, the study focuses on how instructors characterize higher-order questions, including by using Bloom’s taxonomy, the time required, question difficulty, and student experience. Participants grappled with the relationship between higher-order questions and level of difficulty, often assuming that a higher-order question should be difficult and yet, under scrutiny, realizing that the two are not the same. Another issue involved the assumption that questions should have a single correct answer, which led to difficulty in characterizing some of the questions. The authors generate a framework for further research and propose that biologists need to consider their assumptions about student knowledge, as well as difficulty and Bloom’s taxonomy, when writing higher-order questions.


Cite this guide: Knight JK, Brame CJ (2018). Evidence-Based Teaching Guide: Peer Instruction. CBE Life Sciences Education. Retrieved from http://lse.ascb.org/evidence-based-teaching-guides/peer-instruction/