Using Formative Assessments to Individualize Instruction and Promote Learning

Like most terms in education, assessment brings to mind a wide range of conceptions and emotions. Today's trend toward using summative testing for accountability highlights the disparate understandings of assessment, learning, and the purpose of schooling. If you ask middle school students why they are learning about electricity, the response may relate to passing a test or doing well enough to move on to the next grade. Although it is important that students be able to demonstrate the knowledge they have gained from classroom instruction, the larger implication of focusing solely on this kind of response is that learning is for something outside of themselves. To bring the focus back to learning for understanding, formative and informal assessments need to be part of the instructional process.

Each learner brings different experiences and expertise to the classroom. How can one teacher respond to such diversity? How can one teacher probe different interests when they have summative assessments scheduled by the district? Is it even worth the effort to try? The exploration of one teacher’s use of formative assessment may shed some light on these questions.

Background

Gathering information on student understanding, analyzing that information, and then using it to guide instruction is a teaching practice promoted by the National Science Education Standards (NSES) (National Research Council, 1996, p. 33; see also Trimble, Gay, & Matthews, 2005). Although the NSES (1996) called for assessments that probe students' understanding, such probing has not been the norm (Duschl & Gitomer, 1997). Daily worksheets and quizzes have typically been used to gather points for grades rather than to provide the daily feedback teachers and students need to determine the next day's focus. Formative assessment is a tool teachers can use to probe student understanding, inform instructional decisions, and develop relationships. Bell and Cowie (2001) defined formative assessment as "the process used by teachers and students to recognize and respond to student learning in order to enhance that learning, during the learning." This rich description explicitly states the process and the players, but not the difficulty of implementation when district calendars promote coverage of material.

Probing understanding
Recognizing and responding to learning is not as easy as it sounds. The video A Private Universe (Harvard-Smithsonian Center for Astrophysics, 1987) shows the difficulties teachers have in trying to detect student understanding. Just because a student can answer a teacher's questions does not mean the student carries the understandings the teacher assumes. Access to student reasoning is essential to instruction (Duschl & Gitomer, 1997). Interactions between students, teachers, and content provide the catalyst for deepening understanding. Within these interactions, teaching, learning, and assessment intermingle. [Editor's Note: See Bintz & Williams (2005). They address superficial student understanding from the perspective of teacher questioning.]

Pedagogy
Teaching that is guided by formative assessment centers on the learner and on principles of learning. This learner-centered approach focuses on factors that are under the control of the learner, while taking into account the learner's interaction with the environment and context. A detailed description of this process can be found at www.apa.org/ed/lcpnewtext.html. The pedagogy involved in implementing formative assessment is meant to move students toward mastery goals as opposed to performance goals (Brophy, 1983; Dweck, 1986; Tunstall, 1996). Students motivated by mastery goals are interested in learning content, noting improvement, and acting on that information to learn more. Formative assessment helps students interpret feedback as a means of learning rather than as punishment or reward (Tunstall, 1996). Although we acknowledge the importance of performance, especially on standardized tests, student motivation for learning is more closely tied to formative assessment. Rather than performing to get a grade, the focus is on learning to understand. Dialogue plays a key role in determining student motivation (Tunstall, 1996). A teacher who responds with comments such as "good" or "nice picture" focuses attention on approval. Responding to student work with comments such as "explain what you mean by …" or "describe in detail" focuses students' efforts on understanding the content more deeply.

Relationships
Through this dialogue, or conversation, relationships form. Students begin to trust that they do not need to copy from the book to match what the teacher wants to hear. They can write down their thoughts, however ill-fashioned, and know that future comments will direct the focus of learning. Through these comments, students see that the teacher is interested in their thoughts and cares about them (Treagust, Jacobowitz, Gallagher, & Parker, 2001). By reflecting on comments given to individual students, the teacher can identify trends in misunderstanding and competence, which then feed into instruction. Students come to understand that their learning is a priority, because the teacher is listening and responding to their needs.

Formative assessment serves the dual purpose of giving the teacher information on the effectiveness of the lesson and giving students information on the current state of their learning. Such information can guide future instructional decisions.

Our Study

This research was part of a larger study on inquiry. Through the inquiry project, this particular sixth grade teacher identified the need to better assess what her students understood. Action research begins with a teacher's practical question and proceeds in a spiral of planning, action, observation, and reflection, so that teachers learn from personal experiences and make those experiences accessible to others (Kemmis & McTaggart, 1988). Even though designing and conducting a study is time-consuming, action research can be more time efficient than learning varied approaches and trying to apply them without knowing their effects. Action research has the advantage of being context specific and should be judged by its own standards (Zeichner, 2001). A university professor, a graduate research assistant, and the classroom teacher formed a collaborative inquiry team. This set the stage for solving the teacher's practical problem and implementing a plan of action.

Recognizing learning
For this sixth grade science teacher, improving learning was a top priority. Understanding what learning looked like was an evolutionary process that began with the examination of student work. The existing practice was to correct worksheets; if a worksheet was filled out correctly, the student was assumed to know the information. However, through student interviews, we discovered that students had difficulty explaining that information. They could copy information and, at times, insert vocabulary, but they made few connections with the bigger concepts.

Figure 1
First quiz showing student performance without understanding

Each week for nine weeks, students were interviewed about the concepts the teacher taught. The team met to discuss what the teacher thought the student work showed. The interview information was then shared with the teacher and compared to her perceptions. The inconsistencies showed the teacher where she needed more information about student learning. For example, one student received 11/11 on her quiz (see Figure 1). The teacher assumed the student held a solid understanding of landforms, elevation, relief, and spheres.

However, when asked in an interview what elevation was, the student could not respond. She also could not explain the connection between the pictures and the spheres. When the interviewer asked, "What do you think about when you hear the word elevation?" the student responded, "Like the stuff. … I forgot some of this." When asked to explain the bottom section of spheres, the student responded, "This is water (pointing to hydrosphere). And this one is rock (pointing to lithosphere). No, the rock fits atmosphere better, but I'm not sure." The interviewer then asked why balloons and a spider were included. The student responded, "I'm not sure. I don't know." Upon hearing this, the teacher saw that students' ability to match pictures with words did not mean they understood that the spider represented all living things, which make up the biosphere. The 100% showed performance without understanding.

Through reflection and collaborative decision making, the teacher continued to develop new worksheets until her perceptions of what students knew matched the information from the student interviews. Her final product provided enough structure to guide student thinking and enough openness so as not to stifle responses. Organizing it around the unit's big ideas with a science vocabulary word bank proved to be a key decision in aiding the development of connections. Leaving a blank space next to the written section allowed students to add drawings to explain their ideas more completely. Because all of the big ideas were together on one sheet, the students could easily see the relationship between the different concepts and the vocabulary. Since the sheets were used throughout the unit, initial conceptions, modifications, and expanded ideas could easily be seen. What began with adherence to ready-made materials evolved into an empowering realization that the teacher could create more valid assessments of student learning (see Figure 2).

Figure 2
Big Ideas Worksheet—Teacher-made formative assessment

Responding to learning
Once the learning was being accurately measured, the teacher could respond to it. Previously, lessons had been driven by the district calendar: all sixth grade teachers needed to be done with their units on a particular date, and the pressure to cover material was constantly at the forefront. But now that the teacher had accurate information, she could use her time more efficiently. Instead of going back over an entire section, she could pinpoint where specific students had specific questions. She could re-teach specific concepts that the majority of the students did not understand. She could probe individual students at all different levels, because the process she defined involved an individual, ongoing conversation. For example, when one student answered that erosion is "moving by gravity," the teacher responded, "Are there other causes of erosion?" Working on the same sheet, another student answered that erosion is "the movement by wind, running water, glaciers, gravity and waves." The teacher responded with a different probing question: "What is created by erosion?" These two students expressed differing levels of understanding, which were addressed individually.

The interaction
This individual, ongoing conversation began with the "Big Idea" open-ended worksheet. This sheet was used to record working ideas of the main concepts in the chapter. Students could record initial conceptions, and the word bank encouraged them to use scientific vocabulary in their responses (Figure 3). The teacher would make individual comments such as, "Describe what you mean by …" or "Explain how this can happen." This differed from assigning end-of-chapter questions, because it met individual students where they were; end-of-chapter questions could serve only as general starting points for oral discussions. This "assessment conversation" (Duschl & Gitomer, 1997) mirrored the student interviews conducted earlier by the graduate assistant. It enabled the teacher to identify individual misconceptions and probe more deeply, reaching individual students rather than teaching to the middle. Instead of correcting all sheets looking for the same answers, the teacher probed surface and deep thinking at different levels, based on where each student was at the time of her response. The sheet became a written record of the students' learning and the teacher's probing. It also became a review sheet for the district criterion-referenced test. Now, instead of correcting worksheets, recording grades, and moving on, the teacher had an accurate idea of what the students did and did not understand.

Figure 3
An example of the interaction between student responses and teacher probing.

Costs and benefits
But this record did not happen immediately. There was a huge investment of time and trust during the first two months. Teaching students about formative assessment was crucial. Students were used to receiving a grade for things they turned in, and turning in a sheet with incomplete thoughts was a risk. They had to be taught that this sheet was their personal conversation with the teacher; no grade would be given. They also had to see it as worthwhile: without it "counting," why should students invest the time? Once they took the risk, they found out that the teacher listened to them. A relationship was established between teacher and student because of this listening and trust, one that the teacher referred to as real. Students began to see learning as more than percentage points.

The teacher also had to take risks. She risked her established routine, local support, student cooperation, and a substantial investment of time. She still had a district calendar of assessments to follow. But her concern for the students, her commitment to improving student understanding, and her openness to risk taking provided the impetus for implementing this new approach. The teacher gained a new definition of learning. Before, learning was seen through the answering of isolated recall questions; now, learning involved connecting concepts across the whole unit (see Figure 4). This process renewed her enthusiasm for teaching because she felt she was developing "real" learning. It fostered professional growth and a desire to assist others in using formative assessment.

Figure 4
A comparison of assessment practices

Concluding Thoughts

This action research process provided a structure for the teacher to critically examine her existing practices. Because she identified a need for change and could see the potential benefits, she was able to take the risk to improve learning. Her question about accurately assessing student understanding was answered through a collaborative, ongoing process of planning, action, observation, and reflection. She now saw good pedagogy not as linear starts and stops but as an intermingling of interactions between content, teacher, and student. Real learning for this teacher involved students defining, redefining, revising, and expanding on initial conceptions. It involved a mastery orientation rather than performance goals and an ongoing process rather than merely a product to grade and record. Although the teacher’s “Big Idea” sheet was used in conjunction with other publisher worksheets, the importance of continuing to develop ideas was now being emphasized. Formative assessments did not replace summative assessments but provided a tool to validly recognize and respond to student learning. Through the use of formative assessment, this teacher was able to meet students where they truly were.

References

Bell, B., & Cowie, B. (2001). The characteristics of formative assessment in science education. Science Education, 85, 536-553.

Bintz, W. P., & Williams, L. (2005). Questioning techniques of fifth and sixth grade reading teachers. Middle School Journal, 37(1), 45-52.

Brophy, J. E. (1983). Fostering student learning and motivation in the elementary school classroom. In S. Paris, G. Olson, & H. Stevenson (Eds.), Learning and motivation in the classroom (pp. 283-305). Hillsdale, NJ: Erlbaum.

Duschl, R., & Gitomer, D. (1997). Strategies and challenges to changing the focus of assessment and instruction in science classrooms. Educational Assessment, 4(1), 37-73.

Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040-1048.

Harvard-Smithsonian Center for Astrophysics (Producing Organization), & Shapiro, I. (Principal Investigator). (1987). A private universe [Video recording]. (Available from Annenberg/CPB, P.O. Box 2345, S. Burlington, VT 05407-2345).

Kemmis, S., & McTaggart, R. (1988). The action research planner (3rd ed.). Geelong, Victoria, Australia: Deakin University Press.

National Research Council. (1996). National science education standards. Washington, DC: National Academy Press.

Treagust, D., Jacobowitz, R., Gallagher, J., & Parker, J. (2001). Using formative assessment as a guide in teaching for understanding: A case study of a middle school science class learning about sound. Science Education, 85, 137-157.

Trimble, S., Gay, A., & Matthews, J. (2005). Using test score data to focus instruction. Middle School Journal, 36(4), 26-32.

Tunstall, P. (1996). Teacher feedback to young children in formative assessment: A typology. British Educational Research Journal, 22, 389-395.

Zeichner, K. (2001). Educational action research. In P. Reason & H. Bradbury (Eds.), Handbook of action research: Participative inquiry and practice (pp. 273-285). Thousand Oaks, CA: Sage.

Juliann M. Kaftan is a research assistant in the Department of Teaching, Learning, and Teacher Education at the University of Nebraska, Lincoln. E-mail: kaftan6@msn.com

Gayle A. Buck is an associate professor of teaching, learning, and teacher education at the University of Nebraska, Lincoln. E-mail: gbuck@unlnotes.unl.edu

Alysa Haack is a middle school teacher in Lincoln, Nebraska. E-mail: alanning@lps.org

Published in Middle School Journal, March 2006.