For years I have noticed how hard my students work on my course during examination weeks and have toyed with the idea of giving weekly quizzes as a means of motivating them to work that hard more consistently. The material Intermediate Accounting covers is extensive and technical. Students can ill afford to fall behind. When they do, they’re likely to find it much more difficult to catch up than they expect. Perhaps as important, class meetings can be woefully lackluster and nonproductive when significant numbers of students don’t know what’s going on.

I’ve taught two sections of Intermediate Accounting, a required course for majors, every fall semester for 15 years. I spend about equal time identifying and explaining new material and having students work in small groups going over solutions to homework problems. While students are working in the groups, I circulate, answering questions and providing check figures for problems. I usually teach back-to-back sections that meet three times a week. There are two evening exams during the semester, each two to three hours long, which both sections take simultaneously, and a comprehensive final. The exams are made up of multiple-choice questions and short problems compiled from a test bank provided by the textbook author. The exam items are tied to specific goals spelled out for students in a class handout. Competition for grades is keen and motivation is high, especially during the three exam weeks.

Method

My investigation of the impact of weekly quizzes on examination performance involved two distinct phases. During the first, I gave three weekly quizzes to one section (selected by the flip of a coin) and not the other, then compared scores on the first common evening examination. In the second phase I repeated the same procedure but reversed the treatments. By the end of the second common exam, then, each section had taken one exam preceded by three quizzes and one exam not preceded by quizzes. All quiz items were selected from the textbook author’s test bank, with a mix of multiple-choice and short-answer problem questions. Finally, after the second exam, I polled students about whether they wanted additional quizzes over the final third of the course.

Since many of my students knew each other outside of class and exchanged information about class happenings, I kept both sections aware of the purpose and design of the experiment. I also told them that the quiz grades would be averaged into the ten percent of their grade I describe as “class participation.” After each quiz, I posted a copy of the questions and solutions outside my office.

Results and Discussion

The results of the first phase of my experiment were pretty much as I had expected. The quiz section scored an average of about two points higher than the non-quiz section on the first exam – a modest but statistically insignificant difference. After the second exam, however, I found the opposite of what I expected: the same section (now the non-quiz section) again scored an average of two points higher than the other section. Interestingly enough, even knowing these results, more than 80% of the students in both sections voted to continue the weekly quizzes for the final third of the course. I did give quizzes to both sections over the remainder of the semester. The same section scored higher on the final exam.

I spent a good deal of time and effort after the second exam trying to explain the inconclusive, even contradictory, results. I solicited possible explanations from the students without success. I did an item analysis of the second exam results without finding any single problem or topical area to account for the differences in grades. I found that excluding the most extreme grades did not alter the results. In the end, I looked up the grade point averages of students in the two sections and discovered that the section with the higher exam scores had an average GPA of 2.87 (where A = 4.0) compared to the other’s 2.70.

Even though that GPA difference was also statistically insignificant, perhaps the explanation is simply that one class was made up of better students than the other and weekly quizzes were too modest an intervention to have any effect. The common textbook, identical homework assignments, and the fact that quizzes and solutions were posted for all to see probably dampened any potential impact of quizzing on exam scores as well. Still, I was very pleased with student reaction to the quizzes. After everyone had gone through one round of weekly quizzes and an exam, and a round without quizzes, more than 80% voted to continue the quizzes. They apparently felt that the quizzes helped them stay on top of things and kept them informed about their readiness to answer the kinds of questions likely to be on the exams. I agree, although I think the incentive value wore off a bit late in the semester as pressures from other classes increased. I have decided, therefore, to continue the weekly quiz policy, perhaps with some changes in the grade values and frequency of the quizzes. In addition, I may replicate my experiment next fall.


From To Improve the Academy: Resources for Student, Faculty, and Institutional Development, Vol. 7. Edited by J. Kurfiss, L. Hilsen, S. Kahn, M.D. Sorcinelli, and R. Tiberius. POD/New Forums Press, 1988.