Measuring Graph Competence
Following the principles of backward design gives instructors clearly identified targets for student learning and strategies for capturing evidence of that learning and of developing competence. Articulating cognitive and non-cognitive learning outcomes, and using assessments aligned with them, then allows instructors to design learning experiences for students.
- As with concept-oriented learning objectives and other complex practices, instructors should clearly identify and articulate measurable learning outcomes for graphing skills to guide assessment and teaching as students develop competence. Defined learning outcomes are an inclusive means of explicitly communicating expectations to students and unpacking the subcomponents of complex disciplinary practices such as graphing.
- Consensus documents (e.g., Vision and Change) and validated frameworks (e.g., the BioSkills Guide) provide broad targets for a competency-based approach to teaching graphing skills, from which instructors can develop more targeted objectives relevant to their course and student contexts.
Clemmons, A. W., Timbrook, J., Herron, J. C., & Crowe, A. J. (2020). BioSkills guide: Development and national validation of a tool for interpreting the Vision and Change core competencies. CBE—Life Sciences Education, 19(4), ar53. The 2011 Vision and Change consensus report for undergraduate biology education articulated sets of core concepts and competencies in biology to guide instruction and assessment of student learning. The BioSkills Guide expands the broadly described Vision and Change core competencies by outlining what they mean for undergraduate biology majors (specific activities and knowledge) and by providing measurable learning outcomes that students should be able to complete prior to graduation. The authors engaged in cycles of drafting, validity evidence collection, and revision to produce a set of competencies that reflects the priorities and practices of a diversity of undergraduate biology instructors and student populations, institution types, course levels, and biology subdisciplines from across the United States. This validated competency set includes creating and using data visualizations, including graphs, in the areas of Quantitative Reasoning (create and interpret informative graphs and other visualizations) and Process of Science (analyze data, summarize resulting patterns, and draw appropriate conclusions). The guide is a valuable starting point for instructors and researchers who wish to document and improve student graphing practices in the context of biology.
Pelaez, N., Gardner, S. M., & Anderson, T. (2022). The problem with teaching experimentation: Development and use of a framework to define fundamental competencies for biological experimentation. In N. Pelaez, T. Anderson, & S. M. Gardner (Eds.), Trends in Teaching Experimentation in the Life Sciences. Springer. Experimentation is one process by which new knowledge is generated and involves an ongoing, cyclical set of activities. To guide instruction and assessment of experimentation competence by undergraduate biology students, the ACE-Bio Network (Advancing Competence with Experimentation in Biology) has broken experimentation in biology into seven competency areas, each with several concept-skill statements. These competencies emerged from collaborative work between practicing basic science researchers and biology education specialists with expertise in biology, biology teaching, and education research. The seven competency areas are (in no particular order): plan, analyze, conduct, question, communicate, identify, and conclude. The use of visualizations, including graphs, is woven throughout the competencies, with particular emphasis on using data visualizations when planning investigations and during data analysis and communication. Within these areas are concept-skill statements that can be used to frame learning objectives to guide teaching and assessment. These statements are at a fairly large grain size and in need of elaboration, but they serve as a valuable starting point for instructors and researchers involved in teaching and evaluating student competence with graphing.
Aikens, M. L., & Dolan, E. L. (2014). Teaching quantitative biology: Goals, assessments, and resources. Molecular Biology of the Cell, 25(22), 3478-3481. In this essay, the authors provide an overview of teaching quantitative biology through a backward design framework, in which overarching learning goals and assessment strategies are considered before instructional practices are selected. Aikens and Dolan begin by outlining goals for teaching quantitative biology, arguing for the importance of developing students' cognitive quantitative skills in parallel with positive attitudes toward quantitative work in support of desirable behavioral outcomes (e.g., engagement in learning activities, completion of quantitative degree programs). The authors highlight seven attitudinal goals that emphasize promoting students' interest, perceptions (relevance, importance), self-efficacy, and emotional response in quantitative work, as well as increased intentions to pursue further coursework or careers in the field. Next, the authors discuss the key role of assessment aligned with identified learning goals in documenting the impact of quantitative biology instruction, and they highlight existing measurement tools as well as formative assessment techniques (e.g., metacognitive prompts) for this activity. Finally, the authors provide examples of published curricular approaches to teaching quantitative biology in introductory and advanced courses, along with web and journal resources for interested instructors. Aikens and Dolan conclude by calling for collaborations between education specialists and quantitative biologists in designing and testing assessment tools and instructional resources.
Writing and Using Learning Objectives. This evidence-based teaching guide outlines a framework for how instructors can design learning objectives that support teaching and student learning.
AP Biology Course and Exam Description. The AP Biology curriculum framework explicitly focuses on six science practices that develop skills fundamental to the discipline. Three of the practices (Visual Representations, Representing and Describing Data, and Statistical Tests and Data Analysis) highlight skills relevant to graphing (see pages 190-191), with clearly defined competency benchmarks as well as sample instructional activities demonstrating how these practices can be integrated with general biology course content. This framework may be of particular use to instructors designing introductory majors and nonmajors courses.
- Rubrics are effective tools for assessing student graphing skills and for guiding targeted instruction in the display and interpretation of graph data. As an inclusive teaching practice, rubrics clearly communicate expectations and offer students the opportunity for self-assessment.
- Instructors should align assessment design with their instructional goals; this alignment plays a critical role in the effective measurement of graphing skills.
- Closed-response graphing tasks, such as multiple-choice questions in concept inventories, are useful for measuring granular, targeted practices (e.g., identifying a data trend) and are easy to grade; however, they offer limited insight into students' higher-order graph thinking.
- Open-ended tasks, while more laborious to evaluate, can engage students in authentic graphing practice in the context of data analysis and can readily reveal their decision-making and reasoning in the use and display of graph data.
- Authentic assessments provide students the opportunity to demonstrate their competence through “real world” tasks valued in their own right within the field of practice. Within biology, authentic measures of students’ graphing skills would often occur in the context of data analysis and communication during scientific inquiry or experimentation.
- Providing students with opportunities to reflect as they graph can reveal the reasoning underlying their responses and products and can point instructors to important areas of instructional need. Reflections can be easily incorporated within formative and summative assessments (cross-reference: Designs in Action and DU).
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research, and Evaluation, 2, Article 2. This brief essay outlines the importance of authentic assessment practices in measuring student competencies. While the article addresses general assessment practices in education, its discussion of the value and design of authentic assessment (direct measures of one's performance on intellectually challenging tasks) compared with traditional assessment (measures that rely on indirect "proxy" items to gauge proficiency) is relevant to how instructors can effectively evaluate students' graph competencies. In particular, the author highlights several technical considerations for developing authentic assessments, including: (a) requiring students to use prior knowledge (e.g., disciplinary, graphing, contextual) in completing tasks, (b) engaging students in tasks that align with the priorities and challenges of the discipline, (c) involving complex or "ill-structured" tasks comparable to the types of real-world problems faced by practitioners, (d) providing students "open" space to give thorough and justified responses, and (e) achieving validity and reliability by identifying and standardizing appropriate criteria for scoring varied responses (e.g., rubrics). The author concludes by arguing that the benefits of high-quality assessment (e.g., direct insight into student abilities to improve performance, increased "test validity") offset its perceived limitations (i.e., time and energy).
Berg, C., & Boote, S. (2017). Format effects of empirically derived multiple-choice versus free-response instruments when assessing graphing abilities. International Journal of Science and Mathematics Education, 15(1), 19-38. In this paper, the authors investigate how the format of an instrument measuring students' graphing skills can influence the test's validity. To address this question, the authors studied the responses of 736 7th-12th grade students to three multiple-choice (M-C) or free-response (F-R; graph drawing) items situated in common kinematic scenarios and data sets. Six versions of the instrument were used in the study, which, in addition to item format (open or closed), varied by the presence or absence of (a) a drawing of the scenario, (b) a written description of the scenario, and (c) instructions to add marks demonstrating data transitions. Results indicated that the item format of a graphing instrument can significantly affect student responses, influencing measurement validity. Based on multiple lines of evidence, as well as other preexisting work, the authors conclude that F-R items asking participants to represent data provide a more valid measure of graphing skills than M-C instruments, which were found to be highly limited in their ability to assess graphing competence effectively. The authors further point out that M-C measures raise issues of testing fairness, as "low classroom performers" are often primed to select responses associated with graphing misconceptions, whereas F-R assessments offer a more valid and productive measure of their graphing knowledge and competence.
Stanhope, L., Ziegler, L., Haque, T., Le, L., Vinces, M., Davis, G. K., … & Overvoorde, P. J. (2017). Development of a biological science quantitative reasoning exam (BioSQuaRE). CBE—Life Sciences Education, 16(4), ar66. The effective assessment of quantitative skills is integral to guiding pedagogical and programmatic decisions that support students in developing competencies essential for success in the sciences. This article focuses on the development and validation of the Biological Science Quantitative Reasoning Exam (BioSQuaRE), a discipline-specific measure, designed in line with recommendations from national reform documents, that provides feedback on college biology students' quantitative skills. The article details the multistep, iterative instrument design and testing process, which can serve as a model for others hoping to develop similar tools. The final instrument, available to instructors upon request, consists of 29 multiple-choice items, of which approximately half contain a graph in the task or response options. Analyzing student test data (n = 555) collected from five institutions of varying types (e.g., public/private, mission), the authors found the tool able to assess quantitative skills in a biological context for students with a wide range of abilities. The authors conclude that the instrument has a range of potential applications, including measuring changes in introductory students' quantitative skills to inform curriculum choices, serving as a diagnostic tool to identify quantitative areas for student improvement, and supporting programmatic assessment.
Deane, T., Nomme, K., Jeffery, E., Pollock, C., & Birol, G. (2016). Development of the statistical reasoning in biology concept inventory (SRBCI). CBE—Life Sciences Education, 15(1), ar5. Statistical reasoning, the way one reasons with statistical ideas (e.g., distribution, variability) and makes sense of numerical data, is widely recognized as a competency needed to succeed in scientific endeavors and to navigate the varying forms of data encountered every day. Further, statistical knowledge and reasoning are important in the creation and reading of graphs of biological data. This article outlines the development and validation of the Statistical Reasoning in Biology Concept Inventory (SRBCI), an assessment that aims to capture undergraduates' conceptions in statistical reasoning specific to biology experimentation. Contextualized in experimental scenarios, the 12-item multiple-choice tool was designed to let instructors easily characterize their students' statistical reasoning skills, identify potential learning areas for targeted instruction, and document conceptual gains resulting from curricular innovations. Within an expert-novice paradigm, the authors focused the measure on four core themes that reflect non-expert-like conceptions in statistical reasoning commonly held by undergraduates: (1) variation in data, (2) repeatability in results, (3) hypotheses and predictions, and (4) sample size. The authors found the SRBCI effective for assessing students' conceptual ability in statistical reasoning in populations of students at different stages of their degree programs. Further, the authors contend that performance on individual concept inventory items provides preliminary insight into students' transition toward more expert-like thinking during their undergraduate training, which may be of interest to other education researchers as well as to faculty and administrators seeking to assess programmatic effectiveness.
The authors conclude that the SRBCI can provide instructors with useful information about students' statistical reasoning conceptions and can inform the design of teaching interventions that promote statistical reasoning in biology, an important part of creating and reading graphs.
Angra, A., & Gardner, S. M. (2018). The graph rubric: Development of a teaching, learning, and research tool. CBE—Life Sciences Education, 17(4), ar65. This article fills an assessment gap in graphing by describing the multistep development of an evidence-based analytic graph rubric designed to facilitate the teaching and evaluation of graphs, provide formative and summative feedback to students, and allow education researchers to evaluate graphing artifacts. The rubric is informed by literature from the learning sciences, statistics, and research on representations, as well as by feedback on, use of, and validation of the rubric by a variety of users (undergraduate students, graduate students, education researchers, and biologists). The rubric consists of categories essential for graph choice and construction: graph mechanics (descriptive title, axis labels, units, scale, and key), graph communication (aesthetics and take-home message), and graph choice (graph type, data displayed, and alignment to the research question and hypothesis). Each category can be evaluated at three levels of achievement (excellent, present but needs improvement, and absent or inappropriate). Graph mechanics is weighted less than the other two categories because of its lower cognitive difficulty. The rubric has the potential to provide formative feedback to students and to allow instructors to gauge and guide learning and instruction.
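For instructors who score graphing artifacts in bulk, the rubric's structure of weighted categories scored at three achievement levels can be modeled as a simple weighted scoring scheme. The sketch below is illustrative only, assuming hypothetical weights and point values; the category and criterion names follow Angra and Gardner (2018), but the specific numbers are not from the paper.

```python
# Hypothetical sketch of an analytic graph rubric as a weighted scoring scheme.
# Category and criterion names follow Angra & Gardner (2018); the weights and
# point values below are illustrative assumptions, not values from the paper.

LEVELS = {"excellent": 2, "needs improvement": 1, "absent/inappropriate": 0}

# Graph mechanics carries a lower weight than communication and choice,
# reflecting its lower cognitive difficulty.
RUBRIC = {
    "graph mechanics": {"weight": 1, "criteria": [
        "title", "axis labels", "units", "scale", "key"]},
    "graph communication": {"weight": 2, "criteria": [
        "aesthetics", "take-home message"]},
    "graph choice": {"weight": 2, "criteria": [
        "graph type", "data displayed", "alignment to question/hypothesis"]},
}

def score_graph(ratings):
    """Compute a weighted total score from {criterion: level} ratings."""
    total = 0
    for category, spec in RUBRIC.items():
        for criterion in spec["criteria"]:
            total += spec["weight"] * LEVELS[ratings[criterion]]
    return total

# One hypothetical student's graph, rated criterion by criterion.
example = {
    "title": "excellent", "axis labels": "excellent",
    "units": "needs improvement", "scale": "excellent",
    "key": "absent/inappropriate",
    "aesthetics": "needs improvement", "take-home message": "excellent",
    "graph type": "excellent", "data displayed": "excellent",
    "alignment to question/hypothesis": "needs improvement",
}
print(score_graph(example))
```

A per-category breakdown, rather than one total, is closer to the rubric's formative intent, since it shows a student which area (mechanics, communication, or choice) needs work.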
McKenzie, D. L., & Padilla, M. J. (1986). The construction and validation of the test of graphing in science (TOGS). Journal of Research in Science Teaching, 23(7), 571-579. This article explains the creation and validation of a multiple-choice test of graphing skills (TOGS), specifically line graphs, for science students in grades 7-12. Nine learning objectives were developed, covering skills important for either the construction or the interpretation of line graphs: for example, selecting appropriate axes, locating points on a graph, drawing lines of best fit, interpolating, extrapolating, describing relationships between variables, and interrelating the data displayed on two graphs. Fourteen multiple-choice items were written for the five learning objectives assessing graph construction, and twelve were written for the four learning objectives assessing graph interpretation. The questions themselves did not include scientific concepts or jargon, but instead used everyday variables, such as the amount of gasoline used for a trip. Content validity was established by having experts review and score the questions, and reliability coefficients of 0.81 and 0.83 indicate that TOGS is a reliable instrument.
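Reliability coefficients of this kind are commonly estimated with Cronbach's alpha on dichotomously scored (correct/incorrect) items; the annotation does not specify which coefficient McKenzie and Padilla used, so the sketch below is a generic illustration of the computation, with a small fabricated response matrix rather than TOGS data.

```python
# Minimal sketch of Cronbach's alpha, an internal-consistency reliability
# coefficient commonly reported for multiple-choice tests. The response
# matrix below is fabricated for illustration; it is not TOGS data.

def cronbach_alpha(scores):
    """scores: one list per student of item scores (1 = correct, 0 = incorrect)."""
    k = len(scores[0])  # number of items

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([s[i] for s in scores]) for i in range(k)]
    total_var = variance([sum(s) for s in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Fabricated responses: 4 students x 4 items.
responses = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
]
alpha = cronbach_alpha(responses)
```

Values above roughly 0.8, like the 0.81 and 0.83 reported for TOGS, are conventionally read as adequate internal consistency for group-level test use.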
Gormally, C., Brickman, P., & Lutz, M. (2012). Developing a test of scientific literacy skills (TOSLS): Measuring undergraduates’ evaluation of scientific information and arguments. CBE—Life Sciences Education, 11(4), 364-377. This article describes the development, testing, and validation of the Test of Scientific Literacy Skills (TOSLS). We recommend it for readers interested in assessing introductory students’ skills related to major aspects of scientific literacy (recognizing and analyzing the use of methods of inquiry that lead to scientific knowledge, and the ability to organize, analyze, and interpret quantitative data and scientific information), which includes several graph-based tasks.