Implementing Peer Review Programs: A Twelve Step Model
Webb, J., & McEnerney, K. (1997). Implementing peer review programs: A twelve step model. In D. DeZure (Ed.), To improve the academy, Vol. 16 (pp. 295-316). Stillwater, OK: New Forums Press and the Professional and Organizational Development Network in Higher Education. Key Words: Faculty Development, Feedback, Peer Evaluation.
Nationally, universities and colleges are expressing increased interest in peer review of teaching in response to public calls for accountability from academe. Further motivation comes from within campuses themselves as they respond to an increasingly non-traditional student body. Based on our experience with a peer observation program at California State University-Dominguez Hills, we identified twelve steps for planning and implementing a peer review process. In this article we discuss each of the twelve steps, presenting a rationale and sharing our experiences.
Teaching is a scholarly activity, with all that implies... if faculty do not take charge of ensuring (and setting the standards for) the quality of teaching, bureaucratic forms of accountability from outside academe will surely rule the day (Hutchings, 1996, p. 3).
Peer review is an accepted scholarly responsibility for faculty members in all post-secondary institutions. As Seldin (1995) indicates, although universities and colleges talk about the importance of teaching, they evaluate faculty primarily on evidence of scholarship, not effective teaching. Nationally, universities and colleges are expressing increased interest in peer review of teaching. For example, in 1994, the American Association for Higher Education implemented an initiative which involved twelve research institutions (Hutchings, 1994). In spring 1995, the California State University system supported pilot peer review programs on five campuses, culminating in a two-day conference attended by faculty and administrators from the (then) 22 campuses in the system.
Why this increased interest by the academy? Kennedy (1995) cites public dissatisfaction with higher education and financial constraints as two causes of change in higher education. In response to voters, federal and state legislators are increasingly concerned about the education of students in publicly supported colleges and universities and are requesting, even requiring, accountability from the faculty.
Beyond this external pressure, further motivation comes from within campuses. An increased demand for education by a larger segment of the population has produced a more diverse student body, “often of an age with which the system is still relatively unfamiliar, and...from family circumstances and patterns of work commitment vastly different from our past experience” (Kennedy, 1995, p. 11). These students have different needs and expectations than traditional students, and faculty now recognize that they can no longer teach the way that they themselves were taught.
In response to these changes, many campuses are introducing faculty development programs which promote the transfer of teaching “from private to community property” (Shulman, 1993, p. 6-7). However, in order for teaching to become community property, it must be valued as a form of scholarship. Boyer (1990) identifies teaching as one of the scholarly activities in which faculty participate, the others being integration, discovery, and application. Both Shulman and Boyer advocate peer review of teaching and its products in order for the scholarship of teaching to be recognized by the academy.
In 1993, faculty at California State University-Dominguez Hills (CSUDH) developed a formative peer review/support program that consists of reciprocal classroom observations and periodic discussions on teaching and learning. This program, called TOPS (teacher observation / peer support), has since been fully institutionalized. Participation in the program is voluntary and has included more than fifty part-time and full-time faculty whose teaching experience ranges from less than one to more than twenty-five years. To date, about 20% of the full-time faculty have participated in the program (Webb & McEnerney, 1995).
As the program evolved, we realized how little we knew initially about the peer review of teaching. To help faculty from other campuses plan their own peer review programs, we developed a list of first six, then ten, and now twelve steps (McEnerney & Webb, 1995) we feel are important to the successful implementation of a peer review program (see Figure 1). We wish we had known these when we piloted our own program. In this article, we will present each of the twelve steps, then discuss how it is addressed in TOPS, sharing our experiences and rationale.
1. Create a statement of purpose that clearly identifies program leaders, type of review, disciplinary focus, participants, rewards, and expected outcomes.
The purpose will inform all other aspects of the peer review program; therefore, it must be clear and comprehensive. It should be developed in consultation with faculty and administrators, consistent with the campus mission, and responsive to campus and faculty needs. Campus needs might include a response to legislation or accreditation agencies, changing student demographics, a need for faculty development, or required faculty review. Faculty needs might include recognition of the use of new teaching/learning strategies or inclusion in a community in order to decrease “pedagogical solitude” (Shulman, 1993).
Because campus and faculty needs vary, peer review/support takes many forms, including classroom observations, review of teaching materials or artifacts (syllabi, examinations, assignments, etc.), and classroom research tools. Teaching portfolios are an increasingly popular strategy for peer review because they include multiple assessments of teaching, such as student and self-evaluation in addition to peer review. Such a portfolio may be developed for a variety of purposes and may take many different forms (Seldin, 1997; Anderson, 1993).
The mission of CSUDH, as stated in the University Catalog, indicates that we are a “teaching and learning community.” In response to this mission, the original purpose of TOPS was to improve teaching and learning for those faculty who voluntarily participated. We soon realized that we could not document “improved” teaching and keep the program formative because such documentation would change the nature of the program. In addition, several administrators started recommending that faculty who “needed” improvement should participate (as though peer review were remedial). Most important, the TOPS faculty told us that the real value of the program was the opportunity to reflect and to discuss teaching as a scholarly activity in a supportive community. We responded to the nature of the program (formative) and the values of the faculty while still supporting the campus mission by revising the purposes of TOPS. The goals of TOPS are now to: (1) support a diverse community of teacher/scholars; (2) promote reflection that will enhance the teaching and learning environment; and (3) foster the scholarship of teaching. These purposes are reviewed and refined yearly during a faculty retreat so that the program remains flexible and responsive to faculty and campus needs and expectations. Appendix A is a statement of purpose and fact sheet for potential participants.
2. Identify program leaders consistent with the purpose of the peer review process.
Strong philosophical and fiscal support from higher administration is essential to a program’s success because faculty need to see that the program is valued by administrators. However, a peer review program needs strong faculty-based leadership because the leaders will affect the faculty’s acceptance of the program, thereby impacting its success. Sorcinelli and Aitken (1995), in recommending faculty-based leadership, note that “the nature of academe is such that faculty will generally resist administrative leadership” (p. 313). Thus, program direction should be set by faculty leaders in consultation with faculty participants, not by administrators.
Campuses might consider selecting a team of two or three faculty rather than one individual to lead the program. Thus, the program would not be closely linked with a single person (e.g., Bob’s program) whose absence might cause the program to lose impetus. Further, faculty would have a choice of individuals with whom to work. Each campus should consider which leadership team would be most effective, considering faculty discipline, years of experience, gender, diversity, and other issues which may impact the program. On our campus, we found that a team of two faculty works well because the leaders model the reciprocal partnership which is central to the TOPS program.
3. Differentiate between formative and summative review.
Peer review programs can be broadly classified as formative or summative depending on a program’s purpose. Formative review is generally used to provide feedback for professional growth and development. Summative review is used to make personnel decisions (Centra, 1993; Arreola, 1995). Centra states that “the key word is... use—not intended use but actual use” (p. 5). Program participants will ask, and should know, how the data will actually be used.
Formative and summative assessment should be independent of each other but related such that each reinforces the other—the formative process leads to improved summative assessments and the summative evaluation reflects efforts made in the formative process. Although the two types of assessment should be conducted separately, they may have the same focus, such as enhancing overall effectiveness of instruction at an institution (Weimer, 1990).
Formative review is usually confidential and non-judgmental; its goal is self-motivated change. Participation in formative programs is often voluntary. Information gathered is confidential and private and, therefore, not used for formal evaluation. If formative data are used to make a decision about a faculty member, the data become summative and may undermine the purpose of the program (Keig & Waggoner, 1994). Because TOPS is a formative program, participation is voluntary and separate from the reappointment, tenure, and promotion process.
Summative assessment is used to collect reliable and relevant data to make personnel decisions, such as hiring, tenure, and promotion, which will be made public. It is judgmental by nature, formally written, and carries legal implications. Therefore, summative assessment must be consistent with relevant contracts, memoranda of understanding, and accreditation requirements, and should include an appropriate appeal or grievance procedure. Even though the stated goals of a summative process may be motivation and improvement, faculty often perceive summative evaluation as threatening (Seldin, 1984).
Many faculty assume that peer review of teaching is the same as peer observation, but that is not necessarily true. Peer review of teaching might include evaluation of course materials, review of student evaluations, or outcomes, such as student preparation for a subsequent course or passing rates on an external examination. If peer review of teaching includes classroom observation, it is critically important to train each observer (Centra, 1993; Braskamp & Ory, 1994; Keig & Waggoner, 1994). Braskamp and Ory (1994) recommend observations by more than one colleague and state that “at least three classroom observations for a given class over a single semester or quarter are recommended to ensure adequate representation” of teaching behaviors (p. 205). A single classroom visit may be likened to a snapshot, while the entire course might be compared to a feature-length movie complete with action, sound, and special effects. The former (snapshot) provides a very limited perspective, while the latter (movie) requires a great investment of time and effort. After all, who among us has not had a bad day in the classroom (or a bad snapshot)? None of us would want that single experience to represent our professional expertise.
4. Determine whether the program will be cross-disciplinary or discipline-based.
Peer review programs are either cross-disciplinary, including peers from a variety of departments and schools, or discipline-based, within a single academic unit. The disciplinary focus of peer review is controversial. Shulman (supported by Keig and Waggoner, 1994) argues for discipline-based peer evaluation of teaching because the disciplines are “the basis for our intellectual communities” and faculty experience with peer review of scholarship lies primarily with disciplinary colleagues (Shulman, 1993, p. 6). Kennedy (1995), however, argues that “all academic scholars belong to the same calling” (p. 14). He also suggests that the lines between disciplines are becoming blurred, further supporting the benefits of cross-disciplinary observations and conversations about teaching and learning.
The choice of a cross-disciplinary or discipline-based focus depends on the purpose of the peer review program, faculty needs and values, and the campus culture. The benefit of discipline-specific peer review is the assessment of content validity and currency and of the accurate and effective transformation of content in the classroom. However, content validity and accuracy can be determined through the assessment of teaching materials such as syllabi and assignments. Our experience indicates that classroom visits play only a limited role in determining content accuracy because a professor may need several class sessions (even an entire term) to develop certain concepts. Again, multiple observations by different observers during at least three class sessions are recommended to provide adequate representation of teaching behaviors (Braskamp & Ory, 1994).
The benefits of cross-disciplinary peer review include reduced anxiety, a focus on teaching process rather than content, protection of confidentiality, and the ability to view instruction from the student’s perspective. Another is the opportunity to develop relationships with new colleagues from other disciplines. Quinlan (1995) suggests that conversation among faculty from different disciplines “helps faculty take a fresh look at the assumptions they hold about university education and how to teach their subject matter to their students” (p. 19).
In the Dominguez Hills program, the faculty were initially concerned that colleagues from other disciplines would be unable to understand their teaching strategies without understanding their content. However, we found that content knowledge could actually interfere with observation of classroom teaching. For example, one of our colleagues was so bothered by what he perceived as an omission in a statistical formula on the blackboard, that he couldn’t see beyond this one small fact to note effective teaching behaviors or student responses. His assessment of that professor’s teaching and the student’s learning was clouded by his own experience in teaching that content.
The TOPS faculty now firmly believe that cross-disciplinary partnerships are valuable because they offer insights into teaching strategies that might not occur to a disciplinary colleague. One of the most successful partnerships was between professors in theater arts and physics. The theater arts professor was impressed by the physicist’s use of theatrics to explain theories such as wave motion, while the physicist analyzed the teaching strategy used by his colleague and dubbed it “QuARC” for question, answer, response, commentary. They continue to consult with each other about teaching strategies even though their disciplinary content differs widely.
5. Identify all potential participants and determine how best to communicate with them.
The purpose of a peer review program should identify potential participants. For example, a program may be designed specifically for part-time or untenured faculty within a school, for all faculty, or for post-tenure review. Formative peer review programs are frequently voluntary (Finkelstein, 1995), while summative programs are usually mandatory for targeted faculty. Whether or not the program is voluntary, all participants, including those reviewing or being reviewed, should be fully aware of the program purpose, criteria and expectations (Braskamp & Ory, 1994). If participation is mandatory, targeted faculty should be informed in writing about the selection process, the peer review procedure, the responsibilities of participants, expected outcomes, and consequences of non-participation.
Peer review programs which invite rather than require faculty participation may need a recruitment strategy to ensure program success. Such a strategy should include a clear definition of the target audience, a dissemination plan which identifies the program purpose and intended outcomes, and processes for continued communication with the campus community. If the target audience for a formative program is tenured faculty, we recommend that faculty be recruited initially from among campus leaders with reputations for outstanding teaching. In this way, participation in the program becomes an honor. Personal contact by a colleague is also an effective way to draw others into the program. Above all, administrators should be discouraged from directing individual faculty who “need improvement” into voluntary peer review programs because doing so implies that such programs are remedial, thereby making recruitment of strong faculty difficult.
6. Develop a process for selecting appropriate peer reviewers.
A peer review program should identify not only the target faculty for review but also a process for selecting the peer reviewers, and it should specify the nature of their relationship (e.g., mentor/mentee, detached observer, etc.) (Millis, 1992). Most summative programs rely on committees of reviewers, usually elected from eligible faculty on campus. The nature of their relationship is usually clearly specified in campus policies, including the confidentiality of discussions and findings and the format of written recommendations. If summative peer review includes classroom observation, all members of the committee should be appropriately trained and participate in the observations. As indicated previously, each faculty member being reviewed should be observed multiple times by more than one colleague (Braskamp & Ory, 1994).
Miller (1987) recommends that peer observation for summative purposes be conducted by a visiting team composed of two or three colleagues, at least one from the same discipline as the professor being reviewed, and another who is a “respected tenured faculty member from another discipline” (p. 77). This team should be selected by the dean in consultation with the department chair and the professor to be observed.
Many formative programs rely on single reviewers in a mentor or partner relationship. Selection of peer reviewers for formative programs should therefore be a personal choice; ideally, faculty will select their own partners. However, program leaders should be prepared to facilitate effective partnerships to ensure the program’s success. In our experience, the two most important factors in effective partnerships are schedule compatibility and personal compatibility. Although it is easy to facilitate the former by collecting and sharing teaching schedules, we have yet to find a formula to guarantee personal compatibility. TOPS faculty are also starting to identify compatibility in teaching strategies as a third component of effective partnerships. For example, faculty are seeking partners with specific interests, such as the use of distance learning techniques, classroom assessment, or cooperative learning.
7. Identify criteria for peer review consistent with program goals.
The criteria for peer review of individuals should be consistent with the goals of the program, clearly stated, and understood by all participants (Seldin, 1984). In formative programs, these criteria are usually established by the faculty member being reviewed in consultation with the reviewer. Explicit criteria help the reviewer focus on the needs of the faculty member, making feedback more useful. This differs from the criteria for summative review which are established with wide consultation, are applicable to a broad targeted audience, and often have legal implications. In either case, criteria are established prior to the review and are known to reviewers and reviewees.
In TOPS, we provide a pre-observation form (see Appendix B) and lists of teaching behaviors which guide the partners in discussing course goals and strategies and their expectations of the process. Each partner reviews the lists of behaviors and identifies two to four, which then become the criteria for that observation. The pre-observation form helps faculty focus on those criteria and leads to reflection and rich discussions about values and teaching philosophies. In our experience, criteria used for formative peer observation are most effective when they are discussed and agreed upon by the partners before the observation. In addition, the reviewer must respect those criteria as boundaries set by his/her colleague.
8. Develop appropriate training strategies for all participants and reviewers.
Training is essential for the success of peer review programs, both formative and summative (Millis, 1992). We believe that training is the single most important element of a peer review program, so critical to effective peer review that it should be mandatory for both reviewers and reviewees. Further, the training should be consistent with the program’s purpose and process. For example, if the process is portfolio development, those preparing the portfolio need to know which information and documentation to include, and the criteria for their review (Richlin & Manning, 1995). Training might include a workshop in which faculty analyze sample portfolios to establish criteria for unsatisfactory, adequate, and excellent teaching, or development and review of a mock portfolio. Regardless of the method of training, both reviewers and reviewees need the same experience so that they have the same expectations and understanding of the process.
Training is particularly important in peer observation programs because of the potential for ineffective and inappropriate evaluation. As faculty, we are experienced as critical evaluators of students and peers, and our first instinct is often to identify weaknesses; thus, we highlight specific problems while only generalizing about strengths. We have often heard an untrained observer say, “That was very good, but...” This kind of statement ignores specific strengths and focuses on problem areas, thereby raising defenses and setting barriers to useful communication.
The training process for our own peer observation program has evolved considerably since the program started. Originally, training consisted of discussing videotapes of “outstanding” professors from other campuses. However, we found it necessary to model the complete process because our faculty were critical rather than constructive in the absence of the professors to clarify strategies and actions. We now use a “moderated training” session in which we conduct a mock class presentation, with a pre-observation conference, a completed pre-observation form (Webb & McEnerney, 1995), a live or videotaped presentation, and a post-observation conference. Participants review and discuss behaviors consistent with good teaching and determine how best to give feedback on specific behaviors. This encourages reflection about teaching and learning, which may lead to change. Our training program is still evolving as we explore other strategies such as case studies and narratives.
We feel strongly that training programs that include modeling by program leaders are far more effective than programs with a “Do as I say, not as I do” approach. When leaders present their own work, such as a sample portfolio or mini-lectures, for review by their peers (program participants), they accept the same risk as other participants.
9. Identify rewards for participation or consequences for non-participation.
Traditionally, universities reward faculty more for their research than for their teaching (Keig & Waggoner, 1994). Therefore, involvement of faculty in either summative or formative peer review of teaching requires rewards for participation or consequences for failure to participate. The rewards may be extrinsic or intrinsic and should be identified for both reviewers and reviewees. Again, the purpose will identify the rewards of participating in the peer review program. For example, all participants in the reappointment, tenure, and promotion process (reviewers and reviewees) understand that the ultimate outcome is the awarding or denial of tenure or promotion.
Although faculty reviewed in a summative process may receive intrinsic rewards, they are generally more motivated by the extrinsic ones (e.g., tenure, honors, etc.). These extrinsic rewards might be as substantial as promotion, stipends, or release time, or as minimal as textbooks or public recognition. Of equal concern are the consequences of not participating in a summative review, which may be far more important than the rewards for participating. For example, failure to participate may result in no promotion, delayed tenure, or even termination. If the peer review is used to award honors or merit salary increases, failure to participate would remove the faculty member from consideration.
Intrinsic rewards are founded in individual values; they include personal satisfaction, membership in a community of teacher/scholars, intellectual challenge or “fulfillment that comes from helping students learn” (Angelo, 1994, p. 5). Because of the confidential nature of formative review, participants may be motivated more by intrinsic than extrinsic rewards. In fact, giving substantial extrinsic rewards for participating in formative peer review has some disadvantages. First, the cost of the rewards will limit the number of participants. Second, when the support ceases, the program generally ends because faculty are unwilling to participate “for free” when they know that former participants were paid (Angelo, 1996). Further, faculty may suspect that a formative program with substantial monetary support from administration may become obligated to that same administration such that confidentiality is violated. Finally, by emphasizing the intrinsic value of a formative review program, leaders can recruit individuals who truly value teaching and learning as scholarship.
Attention should also be given to rewarding reviewers. For example, most senior faculty engaged in a summative review of junior faculty are motivated to retain colleagues who will enhance the department’s research or teaching reputation. However, faculty who participate only as reviewers in a formative process may have less motivation than faculty who are full participants as in a reciprocal process.
10. Establish a time commitment for participants that is commensurate with rewards or consequences.
Time and rewards for peer review are closely related. Whether the review process is summative or formative, the time commitment should be perceived by faculty as manageable or at least justified by the rewards. According to Keig and Waggoner (1994), “Faculty will find time for any professional activity if they are convinced it is valuable to themselves and/or if they are rewarded for it” (p. 107). Program leaders should make a realistic estimate of the time commitment in any peer review program and communicate it to all potential participants. The total time should include training, preparation for the review activity, the review itself, and any subsequent reports or meetings. Participants in TOPS spend twelve to fifteen hours per semester.
11. Separate results of an individual’s review from programmatic outcomes.
To ensure individual academic freedom and program integrity, program outcomes must be completely separate from results of individual reviews. For example, if the results of a faculty member’s observation were used to measure whether a formative program had improved teaching or learning, both the individual’s academic freedom and the program’s confidentiality could be violated, even if the participant’s name was not used. Remember Centra’s statement regarding use of data: “The key word... is use, not intended use but actual use...” (1993, p. 5).
The use of results of individual peer reviews should be stated in the program’s purpose because a summative review may lead to a personnel action while a formative review may uncover problems which need attention. Participants should know how materials produced in the process will be used and by whom because public dissemination of individual results may have serious consequences (Seldin, 1984). Written materials that result from an individual peer review should be separate from program outcomes. This separation is particularly important to ensure the confidentiality of formative programs, but it is also important in summative programs. Thus, while actions taken (e.g., awarding of tenure) may be publicized, the materials themselves (e.g., personnel files, student evaluations, peer observations) usually remain private.
12. Link outcome measures with the program’s purpose.
Programmatic outcome measures should be clearly and explicitly linked to program purpose to determine whether the program is fulfilling its purpose and to guide the program direction. Not only does this contribute to program viability, but it is also required by accrediting agencies (Western Association of Schools and Colleges, 1992). Program leaders should consult with faculty and administrators to determine what outcome measures are needed, who will provide the information, how it will be gathered, and how it will be used. Some information may be used purely for internal consideration and not dissemination, particularly in formative programs, and confidential information about individual faculty members should never be used to document program outcomes. However, documentation through reports and publications should be a responsibility of both formative and summative peer review programs. Like the program’s purpose, outcome measures cannot be implied or assumed but must be explicit and specific.
As previously described, an early assessment of the TOPS program prompted us to revise its original purpose from “improvement of teaching and learning” to “support of reflection, community, and scholarly work in teaching and learning.” We made this change because we realized that assessment of the former might violate the confidentiality of the program and change it from formative to summative. In reality, the program’s purpose had not really changed, only the way it was articulated. Now, our purpose and outcomes are closely linked, which has proven valuable in documenting outcomes to the administration and ensuring continued support.
Later assessments prompted revision of the training process, of forms and program materials, and addition of case studies to focus the discussion sessions. We currently gather descriptive information, including subjective comments from participants, and quantitative demographic data, and we work with our faculty and administrators to identify other assessment tools consistent with the program’s purpose.
Conclusion
Teaching is a scholarly, creative activity that requires intellectual processes analogous to those used in research. However, teaching will not be considered scholarly until it undergoes a peer review process analogous to that for research (Boyer, 1990; Shulman, 1993). Ultimately, if teaching is valued by peers as a scholarly endeavor, particularly through campus reward structures, faculty will put more effort into their classroom activities, improving both teaching and learning in the process.
References
- Anderson, E. (Ed.). (1993). Campus use of the teaching portfolio: 25 profiles, with an introduction by P. Hutchings. Washington, DC: American Association for Higher Education.
- Angelo, T. A. (1994). From faculty development to academic development. American Association for Higher Education Bulletin, 47, 3-7.
- Arreola, R. A. (1995). Developing a comprehensive faculty evaluation system. Bolton, MA: Anker Publishing Co., Inc.
- Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.
- Braskamp, L. A., & Ory, J. C. (1994). Assessing faculty work. San Francisco: Jossey-Bass.
- Centra, J. A. (1993). Reflective faculty evaluation. San Francisco: Jossey-Bass.
- Finkelstein, M. J. (1995). College faculty as teachers. The NEA 1995 almanac of higher education.
- Hutchings, P. (1996). Making teaching community property. Washington, DC: American Association for Higher Education.
- Hutchings, P. (1994). Peer review of teaching. American Association for Higher Education Bulletin, 47, 3-7.
- Keig, L., & Waggoner, M. D. (1994). Collaborative peer review: The role of faculty in improving college teaching (ASHE-ERIC Higher Education Report No. 2). Washington, DC: The George Washington University, School of Education and Human Development.
- Kennedy, D. (1995). Another century’s end, another revolution for higher education. Change, 27, 8-15.
- McEnerney, K., & Webb, J. (1995). TOPS (teacher observation / peer support) at California State University, Dominguez Hills. In V. Buck (Ed.), Peer review as peer support: Individual and common good. Papers from CSU Peer Review Conference, Long Beach, CA.
- Miller, R. I. (1987). Evaluating faculty for promotion and tenure. San Francisco: Jossey-Bass.
- Millis, B. (1992). Conducting effective peer classroom observations. In D. H. Wulff & J. D. Nyquist (Eds.), To improve the academy, Vol. 11 (pp. 189-201). Stillwater, OK: New Forums Press and the Professional and Organizational Development Network in Higher Education.
- Quinlan, K. M. (1995). Faculty perspectives on peer review. The NEA Higher Education Journal, 11, 5-22.
- Richlin, L., & Manning, B. (1995). Improving a college/university teaching evaluation system: A comprehensive two-year curriculum for faculty and administrators. Pittsburgh, PA: Alliance Publishers.
- Seldin, P. (1997). The teaching portfolio (2nd ed.). Bolton, MA: Anker.
- Seldin, P., & Associates. (1995). Improving college teaching. Bolton, MA: Anker.
- Seldin, P. (1984). Changing practices in faculty evaluation. San Francisco: Jossey-Bass.
- Shulman, L. (1993). Teaching as community property. Change, 25, 6-7.
- Sorcinelli, M. D., & Aitken, N. D. (1995). Improving teaching: Academic leaders and faculty developers as partners. In W. A. Wright & Associates (Eds.), Teaching improvement practices: Successful strategies for higher education. Bolton, MA: Anker.
- Webb, J., & McEnerney, K. (1995). The view from the back of the classroom: A faculty based peer observation program. Journal on Excellence in College Teaching, 6(3), 145-160.
- Weimer, M. (1990). Improving college teaching. San Francisco: Jossey-Bass.
- Western Association of Schools and Colleges. (1992). Achieving institutional effectiveness through assessment. Oakland, CA: Western Association of Schools and Colleges.
Contact:
Jamie Webb
Director of Faculty Development
California State University-Dominguez Hills
1000 E. Victoria Street
Carson, CA 90747-0005
310-243-3387
310-217-6966 FAX
Kathleen McEnerney
Department of Clinical Sciences
California State University-Dominguez Hills
1000 E. Victoria Street
Carson, CA 90747-0005
310-243-3979
310-516-3865 FAX
Jamie Webb has been a professor of Earth Sciences since 1975 at California State University-Dominguez Hills. As Director of Faculty Development, she co-developed the TOPS (teacher observation/peer support) program in Spring 1993. Webb is currently Project Director of the University’s Title III Strengthening the Institution grant from the U.S. Department of Education and co-coordinator of faculty development for the Los Angeles Collaborative for Teacher Excellence, a National Science Foundation project. Webb and McEnerney have co-authored several publications and presentations on the development and implementation of peer review programs.
Kathleen McEnerney has been professor of Clinical Sciences at California State University-Dominguez Hills since 1981 and is chair of the department. She is currently Coordinator of the TOPS program, which she co-developed. McEnerney has lectured and published on a wide variety of clinical laboratory science topics and on teaching and learning strategies. Webb and McEnerney have co-authored several publications and presentations on the development and implementation of peer review programs.
Appendix A
TOPS Fact Sheet
T O P S
(Teacher Observation/Peer Support) at California State University-Dominguez Hills
FACT SHEET
TOPS is a formative peer review program which supports faculty in their roles as teachers and facilitators of learning. TOPS combines reciprocal peer observation with discussions and workshops on teaching and learning. Faculty participation is voluntary and separate from the reappointment, tenure, and promotion process. TOPS faculty partner with colleagues from different schools and disciplines to observe each other’s teaching. Participants select their own partners and direct the focus of the observation by selecting appropriate criteria. The goals of the TOPS program are to: (1) support a diverse community of teacher/scholars; (2) promote reflection that will enhance the teaching and learning environment; and (3) foster the scholarship of teaching.
The TOPS faculty form a community of teacher/scholars who reflect on their teaching, share classroom experiences, discuss and experiment with teaching strategies, facilitate student learning, and engage in the scholarship of teaching. Between 15 and 20 faculty have participated in TOPS each semester, some continuing for several semesters. Over 60 faculty have participated since the program began in Spring 1993. TOPS faculty spend an estimated 12-15 hours per semester for all activities:
1 hour: introductory meeting
2 hours: training
3-6 hours: observations
  1-2 hours: pre-observation conference
  1-2 hours: observation
  1-2 hours: post-observation conference
4-6 hours: discussion meetings / workshops (3-4 during semester)
4-6 hours: retreat (optional)
Training is required prior to peer observation. The “moderated training” is a model observation, with a pre-observation conference, a completed pre-observation form, a live or videotaped presentation, and a post-observation conference. Participants review and discuss behaviors consistent with good teaching and determine how best to give feedback on specific behaviors. The following TOPS faculty comments are typical:
“We frequently tell our students not to study in isolation yet we continue to teach in isolation. The TOPS program helps decrease that isolation by providing an informal structure for discussion of teaching and promotion of collegiality.”
“I have taken ideas for content materials from completely different disciplines because they reflect teaching strategies.”
“Participation in such a program models the kind of behavior I would like my students to exhibit—being lifelong learners and discussing professional growth.”
“TOPS gives me immediate strategies that I can use in my classes.”
“TOPS gave me the opportunity to take time and make a conscious effort to think about technique instead of just content.”
“TOPS meetings provoke highly reflective thinking that filters into the classroom.”
Appendix B
Pre-observation Conference Form
Faculty:
Peer Observer:
The purpose of the pre-observation conference is to review the instructor’s teaching strategies and discuss the role of the observer during and after the observation.
What knowledge, skills, and/or attitudes do you expect from the students?
a. Course
b. Lecture(s)
Type of course and role in curriculum
a. lecture / activity / seminar / laboratory / other
b. required / general education / elective / personal interest
c. developmental / lower division / upper division / graduate
d. role in degree program (critical / introductory)
e. technology: computers / distance / other
f. student population (e.g., number of students, mix, other)
g. role of observed class in course or program (previous, future)
h. length of lecture / times per week
i. recent changes in program / course / student outcomes
j. other
Role of instructor in course
a. number of times course previously taught
b. primary method / strategies of teaching
c. special problems / constraints
Observation format
a. date, time, length, place
b. planned / unplanned
c. one course / several courses
d. one session / several sessions
e. relationship of observer to students (detached/involved/introduced)
Teaching behaviors to be observed (be specific, be focused)
Post-observation conference scheduled for: