Colleagues as Catalysts for Change in Teaching
Colleagues can provide potentially powerful information for improving teaching. This paper discusses ways information from colleagues might be fed back to college teachers and considers how such feedback affects subsequent teaching.
Underlying the discussion is an analogy with chemistry. This analogy compares colleagues and catalysts, reminding us of the “chemistry” between teacher and student. It implies that teacher/student “reactions” can be productively affected by the presence of a colleague. As catalyst, a colleague may speed up the reaction or otherwise alter its effect. Other features of this analogy are discussed later in the paper.
Research on feedback to teachers relies primarily on student ratings as the source of feedback information. Several studies also include information from peers or teaching specialists who serve as consultants. A meta-analysis of 30 of these studies concludes that there is a modest effect on subsequent student evaluations from student ratings feedback alone. These results show that if the average teacher receiving no feedback is at the 50th percentile, the average recipient of student ratings feedback is at the 59th percentile. The effect is enhanced considerably when student ratings are augmented with consultation and/or other forms of feedback. Under those circumstances, the average teacher’s post-feedback ratings are at the 86th percentile (Menges & Brinko, 1986; see also Cohen, 1980).
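These percentile figures can be related to standardized effect sizes in the usual way: if post-feedback ratings are roughly normally distributed, the percentile of the average feedback recipient within the no-feedback distribution equals the normal cumulative probability of the effect size. A minimal sketch of that conversion appears below; it assumes normality, uses hypothetical helper functions, and is offered only to make the size of the reported differences concrete, not as a computation taken from the meta-analysis.

```python
# Illustrative only: relating the percentile shifts reported above to
# standardized mean differences (Cohen's d), assuming roughly normal
# distributions of ratings. Function names are hypothetical.
from scipy.stats import norm

def percentile_to_d(percentile):
    """Effect size implied when the average feedback recipient falls at
    this percentile of the no-feedback distribution."""
    return norm.ppf(percentile)

def d_to_percentile(d):
    """Percentile of the average feedback recipient implied by effect size d."""
    return norm.cdf(d)

print(round(percentile_to_d(0.59), 2))  # ratings-only feedback: d of about 0.23
print(round(percentile_to_d(0.86), 2))  # ratings plus consultation: d of about 1.08
```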
Reasons for this strong consultation effect are unclear, since research reports give few details about the consultation process or about the characteristics of those who serve as consultants. In this paper I discuss colleagues and consultation, drawing on qualitative studies in postsecondary education and on conceptualizations from other fields, such as organizational behavior. The paper has five sections: the feedback seeker, the relationship between feedback giver and feedback seeker, the feedback message, the costs of consultation activities, and some programmatic issues.
THE FEEDBACK SEEKER
The more actively feedback is sought, the more effective it is likely to be. To view consultants as experts who work with “clients” is counterproductive since those terms imply dependence. Even more objectionable is the image brought to mind by a recent article whose authors describe the first step in a faculty development program as “setting up a surveillance system” (Fuller and Evans, 1985, p. 32). The term “victim” would perhaps be most appropriate in that situation.
Instead, let initiatives rest with the faculty member. As college teachers, we are naturally inclined to ask how we are doing, to scan the environment for information about the consequences of what we do, and to adjust what we do based on that information. In other words, college teachers are naturally disposed toward seeking and using feedback. That natural inclination may be easily destroyed by teaching improvement programs, however well intentioned, where participation is imposed. Such programs come to be perceived as serving purposes which are primarily institutional. Feedback then functions more as a resource for the organization than as a resource for individuals.
The active role of colleagues is conveyed by the notion of “feedback-seeking behavior” (Ashford and Cummings, 1983). To acknowledge that colleagues are active feedback seekers means dealing with them as designers of programs and of services rather than solely as objects of programs and consumers of services. Decisions about participation in teaching improvement programs then appropriately rest with the faculty member. Let us think of our faculty colleagues neither as objects of assistance nor as recipients of services nor as clients of consultants but primarily as active feedback seekers.
THE RELATIONSHIP BETWEEN FEEDBACK GIVER AND FEEDBACK SEEKER
The feedback giver/feedback seeker relationship has several dimensions. Among the most salient dimensions are expertise, companionability, and status.
How much expertise about teaching and learning the feedback giver should have, relative to the feedback seeker, varies with training and experience. The feedback giver, responding to a request for feedback, may use that experience and expertise in several ways: by offering precise terminology for talking about teaching and learning, by sharing new perspectives from which to view issues, and by suggesting new strategies which contribute to change. Expertise in excess is intimidating, however, so caution is advisable about how it is displayed and applied. While some faculty prefer a fairly formal professional relationship with a trained specialist, others prefer a close and informal relationship with a peer.
An easy, informal relationship can be especially supportive if it encourages sharing frustrations as well as successes. In their discussion of staff development for precollege teachers, Joyce, Hersh, and McKibbin (1983) note that one important function of faculty teams is to provide companionship. Companionability, at least around teaching activities, is all too rare among professors, and it can open the way to greater satisfaction and stimulation from one’s teaching role.
A relationship where the feedback giver is of higher status is to be avoided. One study with precollege teachers (Tuckman & Oliver, 1968) found that feedback from students improved teacher performance, but feedback from supervisors actually decreased performance. Presumably the information from students had greater credibility since students come to class more regularly than supervisors. When administrators, colleagues of more senior rank, or others involved in institutional review processes provide feedback, faculty are likely to feel threatened and their natural feedback seeking behaviors may be inhibited.
Each of these characteristics of the feedback giver/feedback seeker relationship—expertise, companionability, and status—deserves more systematic study as part of teaching improvement programs.
THE FEEDBACK MESSAGE
What content is likely to maximize the impact of a feedback message? Research is silent on this question except for several studies where classroom interaction data comprise part of the feedback information (see, for example, Roland, 1983; Sorge, 1971). Each of these studies finds a significantly positive impact on teaching after information about classroom interaction is fed back to college teachers. Such feedback has several distinctive qualities: interaction data are seen as precise, objective, irrefutable, and as carrying implicit prescriptions. Thought might be given to how information on other aspects of instruction might be made similarly persuasive.
It can be argued that the most valuable feedback from colleagues is not about teaching activities but rather about teaching materials such as syllabi, assignments, and examinations. No one is better able than a colleague to make knowledgeable comments about the accuracy and currency of teaching materials. The closer the colleague is to the specialty of the course, the more credible such feedback will be.
When feedback deals with teaching activities, on the other hand, a colleague’s detailed knowledge of course content may hinder rather than help. Conversations tend to focus on substantive details which are less pertinent than data about teacher or student behavior. One task of colleague observers is to take the role of naive learner, but it is even more difficult for a colleague from the same discipline to assume that role than it is for one from a distant discipline.
With regard to form of the feedback message, it appears that faculty give greater credibility to information which comes in discursive form, such as student written comments, than to information reported quantitatively, such as student ratings (Ory & Braskamp, 1981; Clark & Bergstrom, 1983). Research on communicating results of program evaluations to decision makers finds that nonwritten messages (audiotaped or videotaped) are more persuasive than those which are in written form alone (Ripley, 1985).
To incorporate such findings into the teaching improvement process, a feedback giver might select student comments and present them orally to a colleague. Of course, the selected comments must be representative, and they should be presented along with suggestions for change. A complete set of written comments should also be provided.
COSTS
The major cost of activities which rely on feedback from colleagues is contributed time, and that cost may translate into many dollars. Other costs are likely to be relatively low in financial terms. Ashford and Cummings (1983) distinguish three categories of costs for feedback programs in organizations: effort costs, face loss costs, and inference costs.
Effort costs depend primarily on the availability of information. Student evaluations usually carry low effort costs because they are readily available. If professors’ questions about the effects of their teaching are answered by student evaluations, additional effort to seek feedback from colleagues is unlikely. On the other hand, if their questions concern matters about which colleagues are better informed than students, the effort required to seek colleagues’ feedback may be seen as worthwhile. For example, if the issue is solving a teaching problem, colleagues may be able to suggest a greater variety of solutions than students can.
Costs in loss of face, that is, costs in personal embarrassment, have both objective and subjective correlates. Some faculty are objectively more vulnerable, for instance those who are nearing tenure review. They naturally want to minimize risks from any activity which may expose weaknesses; for them, loss of face costs are high. Risk of personal embarrassment also depends on less evident subjective factors. One’s estimate of self-confidence and level of assertiveness are subjectively defined. The expectation that others view a request for feedback as signaling weakness is a subjective factor which raises the costs of seeking feedback. Loss of face costs are reduced or offset in proportion to the trust existing between colleagues; embarrassment can be risked with someone who is trusted. Confidentiality is also important. Consulting a colleague about teaching is less risky when that relationship is confidential and completely separated from the institution’s personnel review procedures.
Inference costs have to do with the accuracy and ease of interpreting feedback information. Teachers ordinarily receive feedback from end-of-course evaluations, occasional student comments, test scores, and so on. If these sources of information are contradictory, a colleague may be helpful in weighing and interpreting the data, thus reducing inference errors. Some information for feedback, such as classroom behavior, is almost impossible to gather while one is teaching. Using a colleague as observer and information recorder increases the amount and interpretability of such information, again reducing inference costs.
Costs in effort, loss of face, and inference are not easily budgeted, but they may be critical for program success or failure. They are also related to the reward structure for colleague consultation. Reducing any of them will likely lead to corresponding increases in the motivation of participants.
PROGRAMMATIC ISSUES
To establish colleague consultation in the best of all possible worlds, qualified faculty would volunteer and alternate in the feedback giver/feedback receiver roles. In the real world, however, a number of practical issues arise: some professors are notably better at one or both roles than others are; many are reluctant to participate; and all need some reinforcement, either intrinsic or extrinsic, if their participation is to continue. Issues of recruitment and selection, training, and incentives and rewards can be approached in a variety of ways, as illustrated by the following examples.
Three Program Examples
Pairs. An approach used at several institutions pairs faculty with one another (Katz, 1985). All participants are volunteers, but members of the pair do not necessarily come from the same department. One member visits meetings of the partner’s class, and each of them interviews students in the class about their previous and current course experiences. Students complete a thinking styles inventory (the Omnibus Personality Inventory), and some are selected for interviews because of the similarity or dissimilarity of their thinking styles profile with that of the instructor. The pair meets regularly to compare impressions from interviews and classroom visits. In a subsequent term, roles are reversed and the observer becomes the observed.
The greatest value of this approach seems to lie in its dramatic demonstration of the variety of students who are present in every classroom, particularly the diversity of cognitive styles and levels of development. Individual students become much more vivid to the professor. Because the activity requires a fair amount of faculty time, it is sometimes difficult to recruit participants. Once involved, however, they rarely drop out, presumably because the issues are so intellectually challenging. Some faculty realize for the first time that teaching involves intellectual puzzles as formidable as the puzzles they face in research.
Triads. Groups of three faculty were organized for feedback purposes at the University of Cincinnati and other institutions (Sweeney and Grasha, 1979). After a training session, the triad holds a meeting where each member shares two or three major goals for the class session to be observed, goals which previously had been written out. Members then determine what activities will be observed and for which behaviors feedback will be provided. Within a week after the classroom visit, the triad assembles to reconstruct events of the class meeting and to discuss both positive and critical features. The person observed then chooses problem areas to work on before the next observation. The plan calls for each member to be observed twice during the term.
This approach works best, according to the authors, when participants are volunteers, when they are carefully trained, and when someone outside the triad is responsible for monitoring the group’s progress.
Teams. At Texas Tech University, faculty participate in teams which include the professor to be observed, a team leader (preferably someone with previous experience on a team), one to three other professors, and perhaps a graduate student (Skoog, 1980). The role of the observed rotates through the team as observation cycles are repeated. The preobservational conference is under the control of the person to be observed, and the major outcome of that session is a contract which covers the objectives, content, and circumstances of the observation visit. At the appointed time, team members visit the class for 15 to 20 minutes, an adequate duration since each observer has a different task. During the postobservation conference, observers describe their data, initially making no value judgments and emphasizing primarily the strengths of the teacher. Conversation then moves toward formulating a plan for strengthening a pertinent area of teaching.
Throughout, the team critiques its own performance as a group and as individuals, sometimes meeting without the member who has been observed. “By observing, critiquing, and planning strategies for the improvement of the teaching of colleagues, faculty members acquire knowledge, insights, and strategies useful for self-supervision and self-improvement” (p. 24).
Training
Classroom observation is not necessarily a productive technique, especially in the absence of training. Scriven (1981) discusses classroom visits as an example of how not to evaluate teaching. He notes that the presence of visitors is likely to alter classroom events, that one or two visits do not comprise a sufficient sample of teaching, and that teachers require considerable training to minimize the bias which their individual prejudices introduce into observational data. Further, he contends that there is little research evidence to show that observable teaching behaviors are reliably related to measures of student learning.
Some of these points are less pertinent when visits are used for the purpose of improving teaching than when they contribute data for tenure and promotion decisions. Nevertheless, attempts should be made to help faculty become more objective and reliable observers. Consideration might also be given to other ways of assessing classroom events. It is considerably less costly to rely on tape recordings or on the teacher’s own reports or on student evaluations than to spend time visiting the classroom. For some purposes those low effort data may be sufficient for successful consultation with a colleague.
When classroom visits are indicated, some time should be invested in improving observation and feedback skills. Training should cover, among other areas, how to use appropriate paper-and-pencil forms to organize observations, how to select feedback information which is new to the person being observed, how to differentiate descriptive from judgmental comments while giving feedback, and how to deal with a colleague if the situation becomes stressful. Role playing is a helpful technique for this training, and role play sessions might be stimulated by videotapes of teachers who are not members of the group.
Sample instruments for observation and guidelines for giving feedback can be found in Fuhrmann and Grasha (1983) and in Bergquist and Phillips (1975) as well as in the files of many practitioners around the country who have developed their own approaches.
Rewards
Faculty participate in consultation activities for many reasons, but primarily because of their inclination toward active feedback seeking and their intrinsic motivation to become more sensitive and flexible teachers. Rewards for participation are also important and should not be overlooked, although such rewards need not be large. A modest cash stipend may suffice, or provision might be made for something which would make teaching more effective or simply make it easier, for example, library or clerical assistance, instructional hardware or software, participation in an off-campus workshop or conference, and so on. Since the value of such rewards/incentives varies with the individual’s situation, they are most effective when selected by each participant.
CONCLUSION
Effectiveness of colleagues as consultants in the teaching improvement process has yet to be validated against criteria of student learning. As far as faculty participants are concerned, however, findings are clear: participants report high satisfaction, more interaction with other faculty members, increased motivation, and renewed interest in teaching.
Returning to the chemistry analogy, we can now identify some conditions associated with situations where colleagues are most likely to affect “reactions” between faculty and their students. The teacher is an active agent in these reactions, that is, an active feedback seeker. The environment is favorable for the reaction; for example, costs are not prohibitive, participants are appropriately trained, and rewards include both intrinsic and extrinsic consequences.
Finally, it is important to note that when the reaction occurs it affects not only the teacher and students but the feedback giver as well. A colleague enters into the reaction and is inevitably changed by that experience. The fact that both feedback giver and feedback seeker are changed contradicts the common meaning of catalyst; that is, the catalyst is a substance which remains unaltered in the reaction. Although effects on the colleague may weaken the analogy with chemistry, their occurrence is a definite bonus for programs to improve teaching.
REFERENCES
- Ashford, S. J., and Cummings, L. L. (1983). Feedback as an individual resource: Personal strategies of creating information. Organizational Behavior and Human Performance, 32, 370-398.
- Bergquist, W. H., and Phillips, S. R. (1975). A handbook for faculty development. Washington, DC: Council for the Advancement of Small Colleges.
- Clark, D. C., and Bergstrom, S. J. (1983). Type and perception of feedback and teacher change. Paper presented at the American Educational Research Association, Montreal.
- Cohen, P. A. (1980). Effectiveness of student-rating feedback for improving college instruction: A meta-analysis of findings. Research in Higher Education, 13, 321-341.
- Fuhrmann, B. S., and Grasha, A. F. (1983). A practical handbook for college teachers. Boston: Little, Brown.
- Fuller, J. A., and Evans, F. J. (1985). Recharging intellectual batteries: The challenge of faculty development. Educational Record, 66(2), 31-34.
- Joyce, B. R., Hersh, R. H., and McKibbin, M. (1983). The structure of school improvement. New York: Longman.
- Katz, J. (Ed.) (1985). Teaching as though students mattered. San Francisco: Jossey-Bass.
- Menges, R. J., and Brinko, K. T. (1986). Effects of student evaluation feedback: A meta-analysis of higher education research. Paper presented at the American Educational Research Association, San Francisco. (ED 270 408).
- Ory, J. C., and Braskamp, L.A. (1981). Faculty perceptions of the quality and usefulness of three types of evaluative information. Research in Higher Education, 15, 271-282.
- Ripley, W. K. (1985). Medium of presentation: Does it make a difference in the reception of evaluation information? Educational Evaluation and Policy Analysis, 7, 417-425.
- Roland, C. (1983). Style feedback for trainers: An objective observer system. Training and Development Journal, 37, 76-81.
- Scriven, M. (1981). Summative teacher evaluation. In J. Millman (Ed.), Handbook of teacher evaluation (pp. 244-271). Beverly Hills, CA: Sage.
- Skoog, G. (1980). Improving college teaching through peer observation. Journal of Teacher Education, 31, 23-25.
- Sorge, D. H. (1971). Effectiveness of Interaction Analysis feedback on verbal behavior of college teachers and attendant effect on pupil attitude and achievement. Dissertation Abstracts International, 32, 2984A. (University Microfilms No. 72-1956)
- Sweeney, J. M., and Grasha, A. F. (1979). Improving teaching through faculty development triads. Educational Technology, 19(2), 54-57.
- Tuckman, B. W., and Oliver, W. F. (1968). Effectiveness of feedback to teachers as a function of source. Journal of Educational Psychology, 59, 297-301.