Seldin, P. (1997). Using student feedback to improve teaching. In D. DeZure (Ed.), To Improve the Academy, Vol. 16 (pp. 335-346). Stillwater, OK: New Forums Press and the Professional and Organizational Development Network in Higher Education. Key Words: Student Evaluation of Instructor Performance, Evaluation, Evaluation Methods.

Student feedback has become the most widely used—and, in many cases, the only—source of information to evaluate and improve teaching effectiveness. Some instructional developers use the approach effectively while others do not. This paper discusses important new lessons learned about what works and what doesn’t, key strategies, tough decisions, latest research results, and links between evaluation and development.

The only direct, daily observers of a professor’s classroom teaching performance are the students in the classroom. Students are thus a potentially valuable source of information about their professors’ teaching.

Why is such judgmental information needed? There are two reasons: first, to improve teaching performance, and second, to provide a rational and equitable basis for personnel decisions. This chapter will focus on the use of student evaluation for improving teaching.

In truth, there is no better reason to evaluate than to improve performance. Evaluation provides data with which to assist the faltering, to motivate the tired, to encourage the indecisive. College and university professors are hired by institutions in expectation of first-class performance. To help faculty members hone their performance is nothing more than a logical extension of this expectation. Just as students need feedback and guidance to correct errors, faculty members require feedback and helpful direction if they are to improve their performance.

Of course, as Seldin (1995) points out, some teachers fail to recognize the need for improvement in their own teaching. They think that they are already doing a good job in the classroom, a perception that reduces their interest in strengthening their performance. For example, in a survey of nearly 300 college teachers, Blackburn et al. (1980) found that 92 percent believed that their own teaching was above average. For Angelo (1994, p. 5), the findings evoked Garrison Keillor’s Lake Wobegon, “a place where all the women are good-looking, all the men are strong, and all the children are above average.”

In our non-Lake Wobegon world, no matter how effective a particular professor is in the classroom, he or she can improve. No matter how effective a particular teaching method is, it can be enhanced. These are postulates in higher education.

A word of caution: It would be imprudent to limit an appraisal of classroom performance to the students. A fair and reasonably accurate assessment is more likely when additional sources of information are included, for example, classroom observation, self-appraisal, samples of instructional material, and videotaped classroom sessions.

Despite the clear value of using multiple sources of information, student feedback is the most widely used—and, in many cases, the only—source of information on teaching effectiveness.

Form of the Questionnaire

Is there a single questionnaire suitable to every course, department or institution? Probably not, because different instruments are needed to evaluate different courses and produce different information.

It is virtually impossible to design a single student evaluation questionnaire that is equally effective for a large lecture, a seminar and a laboratory course. On the other hand, meaningful comparative data is generated when a common instrument is used to assess a range of teaching styles and subjects.

At some institutions (the University of Washington and SUNY College at Brockport, New York, for example) faculty members select the one of several versions of a questionnaire most suitable for their course and their way of teaching it. Each version contains general questions that are common to all forms, but one section of the questionnaire is designed to generate diagnostic feedback and contains questions applicable to different learning environments: (a) lecture/discussion, emphasis on content; (b) lecture format with a minimum of class participation; (c) seminar/discussion report; (d) lecture/discussion format, emphasis on process; and (e) format for student self-study or mediated courses.

Regardless of the version of the questionnaire used, students should not be asked questions that they cannot properly answer. They should not be expected to judge whether the materials used in a course are up to date or how well the instructor knows the subject matter. These judgments require more professional background and are best left to the teacher’s colleagues.

On the other hand, Seldin (1993) points out that students’ appraisals of courses and professors can be invaluable when the questions asked are appropriate. Students should be asked to assess what they’ve learned in a course and to report on such things as a professor’s ability to communicate at their level, the teacher’s professional and ethical behavior in the classroom, student-teacher relationships, and ability to stimulate interest in the subject matter.

A cautionary note: It is a mistake to read meaning into small differences in student ratings on individual questions. A professor who receives a rating of 4.13 on one question is not a significantly better teacher in that area than in an area where his or her rating is 4.05.
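This caution can be made concrete with a rough back-of-the-envelope check. The sketch below is illustrative only and not part of the chapter: the class sizes, the rating lists, and the crude two-sample z-test are all assumptions, meant simply to show how quickly small differences in mean ratings disappear into sampling noise.

```python
import math

def mean_se(ratings):
    """Return the mean and standard error of a list of 1-5 scale ratings."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return mean, math.sqrt(var / n)

def differ_significantly(a, b, z=1.96):
    """Crude two-sample z-test: is the gap larger than ~2 standard errors?"""
    mean_a, se_a = mean_se(a)
    mean_b, se_b = mean_se(b)
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    return abs(mean_a - mean_b) > z * se_diff

# Two hypothetical classes of 30 students whose mean ratings differ by 0.1
class_a = [4, 4, 5, 4, 4, 5, 4, 3, 5, 4] * 3   # mean 4.2
class_b = [4, 4, 4, 5, 4, 3, 4, 5, 4, 4] * 3   # mean 4.1
print(differ_significantly(class_a, class_b))  # → False: a 0.1 gap is noise here
```

With 30 raters per class, the standard error of each mean is roughly 0.1, so a difference of 0.08 or 0.1 points falls well inside ordinary sampling variation.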

If student ratings are to be used to improve classroom performance, it is especially advisable to include several open-ended questions so that students can respond more expansively and in their own words. Examples: “List the three traits you liked most about the instructor.” “If you were teaching this course, what is the first thing you would change?” “What was the most significant aspect of this course for you?” “How could the faculty member improve as a teacher?” “In what ways did this course meet or fail to meet your expectations?”

Diagnostic questions calling for perceptions or evaluations of specific teaching behaviors and specific aspects of the course are likely to be more useful than general or global questions about overall teaching effectiveness. The questions should spotlight 20 to 30 particular teaching behaviors (“The professor uses frequent examples in class.” “The professor calls students by name.”) and course characteristics (“The assigned reading is too difficult.”). In order to yield the specificity needed for improving teaching, the questions should be presented on a scale, rather than calling for a yes or no response.

When should the rating form be issued to students? Experience suggests four to five weeks into the term in order that the professor’s performance can then be monitored and deficiencies corrected, so that the current students are the beneficiaries.

Effects of Student or Instructor Characteristics

In general, factors that might be expected to influence student ratings have scant or no effect. Arreola (1995) and Seldin (1993) report that little or no consistent relationship, for example, has been uncovered between student ratings and the instructor’s rank, sex, or research productivity. There appears to be no significant link between the amount of assigned work or grading standards and ratings. Further, little or no relationship has been found between students’ age, sex, year in college, or grade-point average and their ratings of instructors. Ratings are marginally higher in small classes (under 13 students), discussion classes, and classes in the humanities, but the differences are not statistically significant.

Even when significant relationships between extraneous variables and student ratings are obtained, they account for only 12 to 14 percent of the variance between positive and negative ratings. To put it another way, 86 to 88 percent of the variance between positive and negative ratings cannot be attributed to extraneous variables (Marsh, 1984).

Perhaps because hundreds of studies have determined that student ratings generally are both reliable (yielding similar results consistently) and valid (measuring what the instrument is supposed to measure), both the American Academy of Arts and Sciences and the Carnegie Commission on Higher Education support the use of student evaluation as an important and trustworthy measure of teaching performance.

The Need for Consultation

Do student ratings lead to the improvement of teaching? An early study by McKeachie (1975) found that they do, but he also found that the improvement was dependent on specific influences. First, it depended on whether the ratings revealed something new to the teacher. Second, it depended on whether the teacher was motivated to improve. Third, it depended also on whether the teacher knew how to improve.

It seems clear that ratings will more likely produce a salutary effect when discussed with the teacher by a sympathetic, knowledgeable faculty member or teaching improvement specialist who can reassure the professor that his or her problems are not insurmountable and offer appropriate counsel on ways to improve instruction (L’Hommedieu, Menges, & Brinko, 1990; McKeachie, 1996; Marsh & Roche, 1993; Paulsen & Feldman, 1995; Seldin, 1997).

The reason that student feedback plus skillful consultation often leads to instructional gains is that the consultant is able to interpret student ratings in specific behavioral terms and to recommend specific behavioral change strategies. Murray (1991) and Paulsen and Feldman (1995) believe that the need for instructional consultation can be mediated somewhat if more appropriate diagnostic feedback forms are used which include specific, low-inference behavioral items and clear prescriptions for remedial action. For example, low ratings on items like “moves about while lecturing” or “maintains eye contact with students” provide the instructor with clear signals as to what is wrong and what remedial action to take.

The University of California at Berkeley has developed an inexpensive yet effective approach tailored to the needs of individual faculty members. Called Personal Teaching-Improvement Guides, they are based on a twenty-four-item student rating form which probes particular teaching behaviors as a starting point. The guides include very specific descriptions of successful teaching practices, matched to the instructor’s lowest-rated items. Thus, faculty members are supplied with simple, proven, practical suggestions that can be used immediately to improve their teaching (Wilson, 1987).

Additional Ways to Obtain Student Feedback

Although rating forms are the most popular method of obtaining student feedback on teaching, other methods are also available (Angelo & Cross, 1993; Arreola, 1995; Paulsen & Feldman, 1995; Seldin, 1997).

Interviews. The class interview begins with a written request from the professor for an instructional consultant to conduct an interview with the class. The consultant is given a list of questions or concerns by the professor. During the class interview, which lasts 30 minutes, students are asked to indicate by discussion and a show of hands whether they agree with, disagree with, or feel neutral about each concern. Then the results are recorded. For example, “Students would like more essay questions on exams: 70 percent agree, 20 percent disagree, 10 percent are neutral.” After the interview, the instructional consultant writes a report on the topics discussed and provides response percentages as well as specific (but anonymous) student comments. Lastly, the consultant discusses the results with the professor and establishes the needed objectives and strategies for improving instruction.

Student Evaluation Committees. In this approach, a small student group (usually 3 to 5) forms an evaluation committee for the class. Because service on the committee can be a time-consuming task, some professors drop 1 or 2 class assignments for members. The committee meets regularly outside of class to discuss such things as work load, appropriateness of assignments, and availability of the professor outside of class. Through formal and informal means, input from other students is encouraged and solicited. During the semester, the committee meets periodically with the professor to share its findings.

Quality-Control Circles. The purpose of the circle is to provide a vehicle for the systematic collection of student feedback on the teaching and learning that is taking place in the classroom, and how each can be improved. The professor begins by explaining the purpose of the circle to the class and asks for volunteers. The resulting circle is then introduced and other students in the class are encouraged to seek out members of the circle to provide comments, criticism, or suggestions about the course for discussion with the instructor at regular meetings with the members of the circle. One professor at Penn State University found that his students offered valuable suggestions which helped him fine-tune his instruction. The suggestions ranged from depositing a copy of his lecture notes in the library to ways to make more effective use of the blackboard (Kogut, 1984).

Small-Group Instructional Diagnosis. This process, which is usually done at the mid-semester point, begins with a meeting between the professor and an instructional consultant in which specific instructional concerns are identified. Next, the consultant visits the classroom and divides the students into groups of 6 to 8 students. Each group is given about 10 minutes to reach consensus on 3 questions: “What do you like about the course?” “What improvements do you think can be made?” “What strategies do you suggest for producing these improvements?” (The last question is particularly important because it confronts the students with the realization that some changes may be difficult, if not impossible.)

The consultant records each group’s responses, clarifying them if necessary. The responses are summarized and discussed with the professor. Some professors then discuss the results with the class. Taking the extra step enables the professor to respond to criticisms and suggestions and to demonstrate to the students that their views are taken seriously.

Student-Visitor Program. In this unusual approach to receiving feedback from students, the professor invites students into his or her classroom who are not “official” members of the class but who are trained in classroom observation. The student-visitors observe the instructor, talk with students in the class, and provide specific suggestions to enhance the professor’s effectiveness in helping students learn. This technique was pioneered at Brigham Young University (Utah).

Talking With Faculty About Their Teaching

How does the consultant talk with faculty members about improving their teaching? Most important is that the consultant include praise for the faculty member’s achievements. Because teaching demands a monumental investment of self, it predisposes professors to sensitivity toward criticism. Thus, it is critically important that teaching weaknesses be discussed in a framework of accomplishment. The challenge to the consultant is to accomplish change without disturbing the professor’s integrity and self-esteem (Seldin, 1996).

Accomplishing this challenge is not easy. It requires the mental clarity of a Zen master, the determination of Atlas, the stamina of a marathon runner, the tenderness of St. Francis, the creativity of Picasso, the skill of a surgeon. How might it be done? The following guidelines and strategies are based on years of practical experience and the work of Brinko (1993), Seldin (1996), and Weimer (1987).

  1. Place all improvement activities under the instructor’s control. Allow (even encourage) the instructor to select the method of student feedback and to target areas of improvement.

  2. Encourage professors to comment on student ratings in the context of their teaching methods and goals. Suggest that they add 2 or 3 questions to the rating form that are tied to their personal teaching methods and objectives. Why? In order to get information of particular interest to the faculty member which is not covered in the standard student rating form.

  3. Analyze teaching continuously. To improve their instruction, professors need a clear record of where they have been and how they are progressing. Because teaching improvement is often painstakingly slow, continuous progress checks are needed to justify the substantial time and effort invested in modifying classroom behaviors.

  4. Determine that the student feedback, and the consultant delivering it, are perceived by the professor as credible, knowledgeable, and well-intentioned. Unless they are viewed in that light, the professor will likely be unresponsive, or worse, even argumentative, in responding to the information and suggestions of the consultant. At all times, be respectful, supportive, and empathic.

  5. Focus discussion on teaching behaviors. Avoid dealing with teaching in the abstract. Talk about what effective or ineffective teachers do or do not do. Be specific and focused.

  6. De-emphasize strong judgmental conclusions. The discussion about teaching should focus on instruction, not evaluation. Recognize that the effects of instruction will vary from one student to another. In truth, students differ in experience, ways of thinking, and motivation. For that reason, no single method of instruction is equally effective for all students. Also, students react to instructors individually and some individuals, for one reason or another, may be irritated by something an instructor does or says.

  7. Present carefully confined conclusions about teaching effectiveness. The comprehensiveness of the conclusions must be consistent with the available data. For example, don’t take the ratings from one class and apply them to other classes.

  8. Give professors remedies for classroom problems. Just providing the diagnosis of classroom difficulties is not enough. Propose specific ways of being more effective in the classroom. Offer alternatives. Encourage active participation in teaching-development activities.

Conclusion

Student feedback on teaching will never be a panacea for all of the ailments of higher education, but it can heighten instructional effectiveness and thereby improve the quality of education.

There is enough empirical evidence to indicate that student feedback provides reliable and valid information. There is also enough empirical evidence to indicate that the likelihood of improvement increases when the professor can turn to the expertise of a consultant to: (1) interpret the student feedback, (2) discuss specific teaching behaviors open to improvement, and (3) recommend specific behavioral change strategies.

Student feedback on teaching falls far short of a complete assessment of a professor’s teaching contribution. But if teaching is to be improved, a systematic measure of student views can hardly be ignored. To put it another way: the opinion of those who eat the dinner should be considered if we want to know how it tastes.

References

  • Angelo, T. A. (1994). From faculty development to academic development. American Association for Higher Education Bulletin, 46, 3-7.
  • Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college faculty. San Francisco: Jossey-Bass.
  • Arreola, R. A. (1995). Developing a comprehensive faculty evaluation system. Bolton, MA: Anker Publishing Company.
  • Blackburn, R.T., Bober, A., O’Donnell, C., & Pellino, G. (1980). Project for faculty development program evaluation: Final report. Ann Arbor, MI: University of Michigan, Center for the Study of Higher Education.
  • Brinko, K., (1993). The practice of giving feedback to improve teaching. Journal of Higher Education, 64, 574-593.
  • Kogut, L. S. (1984). Quality circles: Japanese management in the classroom. Improving College and University Teaching, 32, 123-127.
  • L’Hommedieu, R., Menges, R. J., & Brinko, K. T. (1990). Methodological explanations for the modest effects of feedback from student ratings. Journal of Educational Psychology, 82, 232-241.
  • McKeachie, W. J. (1975). Assessing teaching effectiveness: Comments and summary. First International Conference on Improving University Teaching, Heidelberg, Germany.
  • McKeachie, W. J. (1996). Teaching portfolios and student evaluations: Tools for faculty development. AACSB national videoconference.
  • Marsh, H. W., & Roche, L. (1993). The use of students’ evaluations and an individually structured intervention to enhance university teaching effectiveness. American Educational Research Journal, 30, 217-251.
  • Marsh, H. W. (1984). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76, 707-754.
  • Murray, H. G. (1991). Effective teaching behaviors in the college classroom. In J.C. Smart (Ed.), Higher education: Handbook of theory and research (pp. 135-172). New York: Agathon Press.
  • Paulsen, M. B., & Feldman, K. A. (1995). Taking teaching seriously: Meeting the challenge of instructional improvement (ASHE-ERIC Higher Education Report No. 2). Washington, DC: The George Washington University.
  • Seldin, P. (1993, July 21). The use and abuse of student ratings of professors. The Chronicle of Higher Education, p. A40.
  • Seldin, P. (1995). Improving college teaching. Bolton, MA: Anker Publishing Company.
  • Seldin, P. (1996). Improving college teaching: What works, what doesn’t? Paper presented at the 16th Annual Lilly Conference on College Teaching, Oxford, OH.
  • Seldin, P. (1997). Evaluating college teaching. Paper presented at the National Conference for Department Chairs, American Council on Education, San Diego, CA.
  • Weimer, M. G. (1987). How to talk with faculty members about improving teaching skills. Workshop presented at the Conference on Academic Chairpersons, Orlando, FL.
  • Wilson, R. C. (1987). The personal teaching-improvement guides program: User’s manual. Berkeley: Office of Research on Teaching Improvement and Evaluation, University of California.
Contact:
Peter Seldin
Lubin School of Business
Pace University
Pleasantville, N.Y. 10570
914-773-3305

Peter Seldin is Distinguished Professor of Management at Pace University. He is the author of eleven books, including most recently The Teaching Portfolio, Second Edition and Improving College Teaching. He has contributed numerous articles on the teaching profession, student ratings, faculty evaluation, and academic culture to such publications as The New York Times, The Chronicle of Higher Education, and Change Magazine. Peter has won awards both as an educator and as a grower of cherry tomatoes.