Informal Classroom Assessment Techniques
The College of New Jersey
Mary Goldschmidt, Ph.D.
“Learning can and often does take place without the benefit of teaching—and sometimes even in spite of it—but there is no such thing as effective teaching in the absence of learning. Teaching without learning is just talking. . . . Classroom Assessment focuses the primary attention of teachers and students on observing and improving learning, rather than on observing and improving teaching.”
–Thomas A. Angelo and K. Patricia Cross, Classroom Assessment Techniques: A Handbook for College Teachers. Second Edition. San Francisco: Jossey-Bass Publishers, 1993. 3-4
Often we think of “Assessment” (with a capital A) at the programmatic or institutional level, and it conjures up thoughts of statistical analysis and lots of paperwork. Classroom assessment techniques (or CATs), however, are much simpler ways of monitoring our students’ learning so that we can “refocus [our] teaching to help students make their learning more efficient and more effective” (Angelo and Cross 3). There are important linkages between CATs and writing, for although not every CAT uses writing (some are geared toward kinesthetic learners and involve group acting), many can do double duty as “write-to-learn” activities, and today’s bulletin will feature some of these exercises.
First, however, indulge me in a slight tangent. Over the summer, as some of you may remember, Don Vandergrift shared on the faculty listserv an article by Craig Fox whose subtitle was “How soliciting more criticism can boost your course ratings.” I didn’t get a chance to read it until I was back from vacation, and it just so happened that at the same time I was reading Angelo and Cross’s Classroom Assessment Techniques, which affirms a similar conclusion, although it arrives there via a very different pathway!
In “The availability heuristic in the classroom,” Craig Fox applies the availability heuristic to a setting that is meaningful and practical for faculty: how the design of course feedback forms might influence course evaluations. For those of you who, like me, had never heard of this heuristic before, it refers in its simplest form to the phenomenon whereby “people sometimes judge the frequency of events in the world by the ease with which examples come to mind” (86). Fox built his investigation, however, on studies that have examined in more depth the ramifications of how participants are asked to recall examples, and he is especially interested not in frequency-based predictions but in evaluation-based predictions.
This is what he did: a one-page mid-semester evaluation form was administered to two sections of a business course in Duke University’s MBA program. Students were given one of two forms, and the only way in which the forms differed was whether students were asked to list 2 or 10 ways in which the course could be improved (in other words, the number of critical comments solicited was the defining difference).
This is what he found: “course evaluations were higher among students who were asked to list 10 ways in which the course could be improved” (88). But because fewer than a third of the students who were given the opportunity to list 10 criticisms actually listed more than 2, Fox decided to examine the “independent effects of the number [of critiques] provided and the number solicited” (89). Here, he discovered that no matter what statistical analysis he tried, “the association between course ratings and the ratio of criticisms produced to solicited remains strong and significant” (88, emphasis added). [Interested in reading the article? Here’s the link that Don provided: http://journal.sjdm.org/jdm06020.pdf]
Fox’s findings affirmed a practice that I have used periodically over the years: a mid-semester classroom “barometer” designed to get anonymous feedback from students, which includes the following statements for students to complete: “What has happened in this class so far is . . .”; “How I feel about it is . . .”; “What I have learned so far is . . .”; “What has been good is . . .”; “What I would like to see change is . . .”; and “What might help is . . .” While I have never measured whether my final course evaluations were higher in courses where I used the barometer than in courses where I did not, I always had the sense that my barometer classes had gone better, if only because students felt that their perspectives had been solicited and they had some agency in the course. Without question, however, the most important information I receive tells me what they have absorbed, what has had an impact, what has gone over their heads, and what I need to spend more time on.
This, really, is the essence of a CAT. Improvement in our course evaluations aside, soliciting some form of feedback from students is a tremendously powerful way to help improve our students’ learning. Many of us probably already practice some version of CATs without necessarily calling it a classroom assessment technique. Take, for example, what is often called the “Muddiest Point”: asking your students to write in a sentence or two what the muddiest point was for them in a lecture, an assigned reading, or a unit. A similarly easy technique is the one-sentence summary, which tells faculty how well students have understood the material and how well they are able to synthesize large amounts of information. These kinds of assessments take little preparation and yet provide quick access to our students’ learning.
The Classroom Assessment Techniques handbook offers 50 CATs, including application cards, approximate analogies, human tableaus, directed paraphrasing, and memory matrices, to name just a few. Here are two somewhat more complex examples from Angelo and Cross, taken directly from their text (with emphasis added to underscore the use of writing):
# 14 Word Journal (pp. 188-192)
DESCRIPTION: The Word Journal prompts a two-part response. First the student summarizes a short text in a single word. Second, the student writes a paragraph or two explaining why he or she chose that particular word to summarize the text. The completed response to the Word Journal is an abstract or a synopsis of the focus text.
PURPOSE: . . . First, it focuses on students’ ability to read carefully and deeply. Second, it assesses skill and creativity at summarizing what has been read. And third, it assesses the students’ skill at explaining and defending, in just a few more words, their choice of a single summary word. Practice with this CAT helps students develop the ability to write highly condensed abstracts and to “chunk” large amounts of information for more effective storage in long-term memory.
SUGGESTIONS FOR USE: . . . It works especially well in courses that focus on primary texts rather than textbooks.
# 21 Documented Problem Solutions (pp. 222-225)
DESCRIPTION: . . . prompts students to keep track of the steps they take in solving a problem—to “show and tell” how they worked it out.
PURPOSE: Documented Problem Solutions have two main aims: (1) to assess how students solve problems and (2) to assess how well students understand and can describe their problem solving methods. Therefore, the primary emphasis of the technique is on documenting the specific steps that students take in attempting to solve representative problems—rather than on whether the answers are correct or not. As they respond to the assessment, students benefit from gaining more awareness of and control over their problem solving routines.
SUGGESTIONS FOR USE: . . . is especially useful for assessing problem solving in highly quantitative courses . . . [as well as] other fields that teach structured approaches to problem solving, fields such as logic, tort law, organic chemistry, transformational grammar, and music theory.
The book provides step-by-step directions for designing each CAT, extremely practical guidelines and tips, explanations for how to use the data you collect, specific examples from a vast array of disciplines, pros and cons of each CAT, and contexts in which each CAT is most (or least) useful or applicable. Please feel free to stop by my office to see the lists of CATs and make copies.