16 Effective Peer Evaluation in Learning Teams
Evaluating student performance in learning teams is challenging. This chapter reviews the literature on student learning teams and peer evaluation, and the authors share the results of their experience using four rubrics for peer evaluation in student learning teams. In this model, students are formed into semester-long teams to enhance active learning, and a portion of the course grade is dedicated to team quizzes, activities, and projects. The authors conclude that peer evaluation data should be used both formatively and summatively to enhance team cohesion and accountability, and they present their preferred rubric for the peer evaluation process. The use of forced differentiation in peer evaluation is discussed, and a mathematical formula is presented for calculating the impact of peer evaluations on course or team project grades.
In an effort to increase students’ level of active learning, faculty often use learning teams. Evaluating student performance in student learning teams is particularly problematic, and both students and professors may have differing perceptions that spark questions:
To what degree can the professor truly determine the functionality of the team?
To what degree have all the students in each team earned the team grade?
How reliable is student feedback without requiring students to differentiate among team members’ performance?
How reliable is student feedback about team members’ performance without self-evaluation?
How should the professor use student feedback to determine the percentage of a team grade to award individual team members?
Students’ major criticism of “working in groups” is a lack of mutual accountability and fairness. They detest a perceived lack of peers’ participation—the “free rider” syndrome (Clary, 1997)—and are suspicious of the instructor’s assigning of grades, which may or may not accurately reflect their performance (Woods, 1994, 1997). Further, many are unwilling to differentiate among the performances of peers in their teams, and some are reluctant to assign low evaluation scores to their peers. Instructors who teach using learning teams recognize that we must solicit student feedback, but how do we obtain that feedback, how do we prepare our students to give it, and how do we use it productively?
THE CHALLENGE
Research addresses issues of team cohesiveness as well as individual and team accountability (Michaelsen, Jones, & Watson, 1993). The literature also suggests that student participation in practical classroom team projects significantly predicts increased job performance (Millis & Cottell, 1998). However, in the area of student peer evaluation in student learning teams, research is limited. As instructors, we must not only be clear about our reasons for conducting such evaluation and for allowing it to affect students’ grades, but also be aware of the implications of timing and procedure.
The authors of this collaborative study represent mathematics, English, agribusiness, instructional design, theology, and communication. We identified shared interests in teaching effectively with learning teams, following a learning teams workshop conducted by Larry Michaelsen of the University of Oklahoma. We found adaptations of Michaelsen’s model for learning teams helpful, though problematic in some areas. While we resolved many problems through our discussions, we all encountered challenges that we could not easily resolve, centered on rubrics for evaluation and the use of the evaluation data for assigning student grades. As a result of these mutual challenges, we began working collaboratively to solve these problems.
LITERATURE REVIEW
Learning Teams: An Overview
Before exploring these areas further, we will describe our terms and the general learning team process on which we base our classes. Learning teams, as an active learning strategy, draw from several theory bases: nonfoundational social construction theory, group dynamic theory, management sciences, and psychology (see especially Bruffee, 1993; Smith, 1996). Each of these disciplines differentiates between more loosely structured groups and semester-long teams. In a learning teams classroom, the class is restructured so that students participate in semester-long teams limited to four to seven participants in which each member individually assumes responsibility for his or her own learning and hones application skills through cooperative learning experiences (Michaelsen & Black, 1994; Michaelsen, Black, & Fink, 1996; Michaelsen, Fink, & Knight, 1997; Michaelsen, Knight, & Fink, 2002). In group work that is not grounded in learning team theory, a common expectation among professors is that the groups meet outside of class to complete group projects. While some class time might be devoted to a discussion of project parameters, often little class time is devoted to project generation. In contrast, a key component of learning teams supported in the literature is the professor’s expectation that students learn course principles inside class and complete individual projects outside of class. Thus, work on team projects occurs during the class period under the instructor’s guidance.
The value of learning teams for students’ intellectual and social development can be found in discussions of student development theory. Chickering and Reisser (1993) suggest that the value of learning teams for students lies in their development of interpersonal skills, group management skills, inquiry skills, conflict prevention, mediation and resolution skills, and presentation skills. Students work together to identify and utilize different team members’ abilities, to learn to analyze and assign tasks, to apply principles in accomplishing tasks, and to write and/or report collaboratively on findings.
Bruffee (1993) explains that the difference between teams and groups can be described as a difference between cooperation and collaboration. The latter is a loose interaction in which students work together to submit a group project generated with the “divide and conquer” method, lacking the cohesive revision needed for a quality project and frequently lacking total team input. Conversely, in a cooperative environment, students interact to complete a task in which they are concerned both about the quality of the performance and about each other’s learning processes.
After leaving the academy, students should find that they have matured in their ability to make connections between theory and practice, as well as in competence, management of emotions, movement from autonomy to interdependence, and the development of personal identity and integrity (Chickering & Reisser, 1993). While growth and interconnectedness sound good, to actualize them, students must be held accountable for their performance as individuals and as team members.
Peer Evaluation Literature
Though much literature can be found on cooperative learning and learning teams, fewer studies actually address issues of peer evaluation. Chickering and Ehrmann (1996) advocate structuring cooperative teams to provide a basis for peer evaluation so that all teams and team members can succeed. The process, however, is complex. In discussing this complexity, Woods (1994, 1997) advocates examining the degree to which students are equipped in task and social skills to perform well in a team. He suggests using a form that reflects qualities that would characterize a valued team member. For example, students are asked about various behaviors of their peers, such as whether or not a teammate attended to both morale and task components, helped the chairperson be effective, assumed the roles the team needed, and informed others of complications.
The use of characteristics as criteria for evaluation is supported by Ohland and Layton’s (2000) research. They compared the reliability of two different evaluation procedures. One procedure adapted a rubric from Brown (1995) that used general behavioral descriptors as criteria (“excellent,” “very good,” and “satisfactory”) to evaluate each individual’s contributions to the team’s projects. With this procedure, the labels were assigned numerical values. When all evaluations were completed, the professor translated the descriptor feedback into a numerical total. The individual student’s weighting factor was the individual’s average rating as determined by his or her peers, divided by the team average rating. The individual student grade would be the team grade multiplied by this weighting factor.
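To make this weighting concrete, here is a minimal sketch in Python (not part of Brown’s or Ohland and Layton’s materials); the member names and ratings are hypothetical, and the descriptor labels are assumed to have already been translated to numbers:

```python
def autorating_grades(team_grade, ratings_received):
    """Brown-style weighting as summarized above: each member's grade is
    the team grade scaled by (member's average rating / team average rating).

    ratings_received: hypothetical mapping of member -> list of numeric
    ratings received from peers.
    """
    averages = {m: sum(r) / len(r) for m, r in ratings_received.items()}
    team_average = sum(averages.values()) / len(averages)
    return {m: team_grade * avg / team_average for m, avg in averages.items()}

# Hypothetical ratings on a numeric translation of the descriptors: A rates
# above the team average, so A's grade rises above the 85 team grade.
print(autorating_grades(85, {"A": [5, 4, 5], "B": [4, 4, 4], "C": [3, 3, 2]}))
```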
The second procedure involved using a rubric from Ohland and Layton (2000) that provided ten categories of contribution. Students were asked to assign a numerical rating from 0 to 5 for each category of each teammate’s behavior. Some of the behavioral characteristic categories included attending meetings regularly, contributing to decisions, having good communication skills, taking responsibilities seriously, and completing tasks on time. Results from both rubrics were normalized to a common 0-100 scale for comparison. Results modestly supported Ohland and Layton’s conclusion that focusing students on behavioral characteristics as opposed to Brown’s behavioral descriptors (excellent, good, etc.) provided more useful feedback and thus could improve the meaningfulness of peer evaluation.
Zigon (1998) asserts that criteria should be known to the team and individuals, and that criteria should be linked to organizational measures or objectives. Clary (1997) and Foster et al. (1999) agree, explaining that students take evaluation more seriously when they assume the role of judging the individual contributions of their peers—a role they can more logically fill than can their professor. Clary further points out an additional benefit the evaluation process affords: students learn how to be accountable, along with valuable lessons about the learning process and teamwork that can transfer into useful job skills.
Based on the literature, we began our examination of possible rubrics and methods for using the feedback.
THE EVALUATION PROCESS: INSTRUMENTS FOR FAIR EVALUATION
To adequately determine the degree to which each student participated in the team activity, both individual and team performance must be measured.
Rubric 1: Team Evaluation With Differentiation and Comments; No Self-Evaluation
“Oh brother!” a student in the back muttered. He slouched further into his seat upon hearing the announcement of the use of teams. When asked about his reaction, he replied, “All group work is the same: one works, two help some, and one totally slacks, but all get the ‘A.’ The teacher never has a clue.” In an attempt to “have a clue” and assess student performance, the authors—only one of us had previously sought to evaluate team effectiveness as a means to assign grade percentages—used the rubric from Michaelsen’s workshop. This rubric asks students to assess their team members and reward those who worked hard, with the caveat that assigning similar scores to everyone hurts those who worked hard and helps those who did not. The evaluator distributes an average of ten points per teammate, with no two members receiving exactly the same score (forced differentiation). For example, a member of a five-person team distributes 40 points among four teammates, a member of a six-person team distributes 50 points among five, a member of a seven-person team distributes 60 points among six, and so on. To guarantee at least some differentiation, students are instructed to give at least one score of 11 or higher (with a maximum of 15) and at least one score of nine or lower.
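As an illustration of these constraints (our reading of the workshop rubric; the check itself is not part of any form), a single evaluator’s point distribution can be verified with a short Python function:

```python
def valid_distribution(scores, max_score=15):
    """Check one evaluator's scores for the n-1 teammates (self excluded):
    points must average 10 per teammate, no two teammates may receive the
    same score, at least one score must be 11 or higher (capped at 15),
    and at least one must be 9 or lower."""
    return (
        sum(scores) == 10 * len(scores)       # e.g., 40 points across 4 teammates
        and len(set(scores)) == len(scores)   # forced differentiation: no ties
        and any(s >= 11 for s in scores)
        and any(s <= 9 for s in scores)
        and all(0 <= s <= max_score for s in scores)
    )

print(valid_distribution([12, 11, 9, 8]))    # True: differentiated, sums to 40
print(valid_distribution([10, 10, 10, 10]))  # False: everyone scored alike
```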
The rubric has several problems. First, it is structured as a reward/punishment system. Clary (1997) explains that this reward/punishment system is oriented more toward academe; thus, students do not view this evaluation experience as giving them practice in personnel evaluation, which they will be called on to do in the workplace. Clary’s research states that when students see the connection between team evaluation and workplace evaluation, they participate more willingly and thoughtfully in the process. Additionally problematic in this rubric is the failure to include self-evaluation points in the point distribution; thus, the distribution might be misleading. Finally, the mathematical formula actually privileges larger teams: because members of larger teams receive points from more evaluators, their totals will, on average, be higher than those of members of smaller teams.
As we reviewed this rubric, we were concerned about student reaction to the process. One colleague was confronted by a student who tearfully complained about the differentiation process, explaining that it undermined the team’s previous efforts to bond. In other words, a team might not want to differentiate because of exemplary performance by all members. However, even when problems occur in team performance, team members may still resist differentiation. Clary’s (1997) research explains this reaction, noting that even though students dislike and complain about free riders, they are often unwilling to penalize them, suggesting that this reluctance is, in part, due to the presentation of peer review as “punishment.”
To address these problems, we worked toward increasing student buy-in of peer assessment, developing a rubric with self-evaluation, and understanding the mathematical impact of peer evaluation data on the student’s individual grades.
Rubric 2: Team Evaluation, Inclusive of Self-Evaluation and Differentiation
During this time, we discovered additional colleagues who were also wrestling with student evaluation issues. They introduced us to the idea that peer evaluation could be used formatively before being used summatively. Like Michaelsen, they argued that teams should be encouraged to work out differences rather than being reconstituted throughout the semester (Cooke, Drennan, & Drennan, 1997). At this point, we began using an evaluation rubric for both formative and summative purposes.
Our colleagues’ rubric, or sociogram (Cooke et al., 1997), called for differentiation in the point distribution but did not ask for self-evaluation. Building on their work, we created a rubric that asked students to include themselves in the evaluation process to ensure fairness; thus, if student A “pads” his or her scores, the padding will be evident when compared with the scores assigned by other team members. This rubric asks students to rate each team member and themselves on factors specific to team coherence, using a Likert scale (1-5). The factors include carrying a fair share of the workload, sharing responsibility rather than taking charge of every activity, and respecting the ideas of other team members, as well as attendance, preparation for the quizzes and tests used to determine individual knowledge of course principles, and so on.
Additionally, we adapted an academic hiring model for evaluating job candidates. In this model, students are asked to differentiate pair-wise between all team members, themselves included. The rubric is formatted as a grid with all team members’ names in alphabetical order along both the X and Y axes. As the evaluator considers the pairings of peers, he or she must decide which of the two in each pairing performed more effectively.
This rubric was not as effective as desired because it was lengthy, the students hated such overt differentiation, and the Likert scale used in the first part could easily be completed without being taken seriously. Students themselves pointed out during the formative peer evaluation that using the forced differentiation model tended to create antagonism between team members rather than foster greater team cohesion. Consequently, we still struggled with the use of the evaluation data in determining individual grades.
Rubric 3: Team Evaluation With Individual Comments; No Self-Evaluation
“So, Professor, exactly how do our evaluations affect our grades? Do they affect the project grade or the class grade? What if someone in my team doesn’t like me? What if someone who didn’t really work is afraid of being graded down by the team and gives herself a really high score?” The student folded his arms and leaned back in his seat with an air of “Let’s see you get out of this one, Professor.” The students waited expectantly.
To address these questions while shifting more toward evaluation as a job skill, we continued searching for evaluation rubrics. The next one we used was designed by Dee Fink (1998), director of the instructional development program at the University of Oklahoma and a colleague of Larry Michaelsen. This rubric asked for assessment of team members based on each member’s contribution to the group. The student evaluator was to base this assessment on such areas as preparation, contribution, respect for others’ ideas, and flexibility. Instructions to students stated that the assessment would determine the number of the group’s points to be given to each member. The evaluator was to distribute 100 points among the team members, assigning points to each team member for each assessment criterion and giving more points to those who contributed more to the team.
This rubric differed from the others we had tried in ways we believed significant. Each evaluator had to justify the point distribution for each individual in the team, by adding comments about the points they assigned to each team member on each criterion. Responses remained confidential, but the form had to be validated with a signature.
Yet even with these significant differences, we found that this rubric had the same problems as the first rubric we tested—problems of being presented as a reward/punishment system as well as mathematical problems when using the data to determine grade percentages.
Rubric 4: Differentiation, Self-Evaluation, and Lessened Resistance—Our Solution
“Well, I think it’s important for everyone to get here prepared. I mean groups make it too easy to say, ‘I don’t have time to read tonight, but I’m sure someone in the group will have done the homework.’ And the teacher never knows. Only the group knows.” The student sits back, satisfied with her revelation.
“Okay, your team should put that down as a criterion. If you other teams think it works for you, you can add it to your list of criteria, also,” responds the professor. “Another area for accountability?”
While the rest of us wrestled with versions of team evaluation instruments, one colleague built student input into the evaluation process—both formative and summative—by asking each team to articulate its own criteria for evaluation (cf. Clary, 1997). At the beginning of the course, he guided students in considering areas they should evaluate and in framing evaluation in a real-world, job-related context. Each team began with a generic rubric template designed by the instructor, which included ten suggested evaluation criteria. From the generic template, each team established its own distinct criteria for evaluating team members, thus increasing team buy-in and reducing resistance to the process. Twice during the semester the teams performed formative evaluation, and at the end of the semester they performed summative evaluation using their team’s rubric. Evaluation involved distributing 100 points among team members, excluding the evaluator. After completing an evaluation of his or her team members, the evaluator also self-evaluated, relative to his or her distribution of points to the other team members. We experimented with this approach and collaboratively designed the rubric we have all adapted to be discipline-specific for our respective courses (see Appendix 16.1). Self-evaluation scores provide an introspective opportunity for team members and alert the instructor to evaluation anomalies.
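The rubric leaves the detection of such anomalies to the instructor’s judgment. As one hypothetical screen (our illustration; the helper and its tolerance are not part of the rubric), an instructor might flag members whose self-score strays far from what their peers awarded them:

```python
def self_evaluation_anomalies(peer_scores, self_scores, tolerance=10):
    """Flag members whose self-assigned score differs from the average of
    the scores their peers gave them by more than `tolerance` points.

    peer_scores: member -> list of points received from teammates
    self_scores: member -> the member's own self-evaluation score
    """
    flags = {}
    for member, received in peer_scores.items():
        peer_average = sum(received) / len(received)
        gap = self_scores[member] - peer_average
        if abs(gap) > tolerance:
            flags[member] = gap   # a large positive gap suggests "padding"
    return flags

# Hypothetical five-member team: each evaluator split 100 points among four
# teammates, then rated himself or herself relative to that distribution.
peers = {"Susan": [26, 27, 25, 27], "Lavonne": [18, 19, 18, 19]}
selves = {"Susan": 26, "Lavonne": 32}
print(self_evaluation_anomalies(peers, selves))  # {'Lavonne': 13.5}
```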
Formative peer evaluation is imperative because it aids in norming team members’ feedback. The formative evaluation process gives teams experience in conducting peer evaluation and helps them identify and address problems of cohesion and individual responsibility, toward the goal of all members participating fully. Our experience demonstrates that teams who have learned how to evaluate their peers through the formative evaluation process are more effective when they complete summative peer evaluation.
We advocate this last rubric (see Appendix 16.1) because it can be used for both formative and summative evaluation, requires self-evaluation, and allows for differentiation. However, it also permits teams that performed equitably to distribute peer evaluation scores evenly. Several of us also distribute the peer evaluation via email, thus ensuring confidentiality. The next section describes a mathematical formula for accurately determining individual grade percentages on team projects.
THE EVALUATION PROCESS: USING THE DATA IN DETERMINING GRADES
If the instructor decides to use the summative peer evaluation data in determining a percentage of the final course grade or of team project grades, a mathematically sound formula is critical. When the data affect the course grade, they are allotted a percentage of the final grade: the instructor uses the rubric (Appendix 16.1) to develop a chart (see Figure 16.1), divides each team member’s total peer score by the highest individual total peer score, and weights the resulting percentage into the final course grade. Since many professors use a spreadsheet, this procedure is relatively simple. Applying peer evaluation data to determine the percentage of the team project grade that an individual will receive is more complex.
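In spreadsheet terms, this is a single division per student. A minimal Python sketch, using hypothetical peer totals (Figure 16.1 itself is not reproduced here):

```python
def peer_component_percentages(peer_totals):
    """Divide each member's total peer score by the highest individual total
    in the team; the resulting percentage is then weighted into the final
    course grade like any other graded component."""
    highest = max(peer_totals.values())
    return {member: total / highest for member, total in peer_totals.items()}

# Hypothetical totals: the top scorer earns 100% of this component.
print(peer_component_percentages({"Susan": 105, "Yukiko": 110, "Lavonne": 74}))
# {'Susan': 0.9545..., 'Yukiko': 1.0, 'Lavonne': 0.6727...}
```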
After working with the different mathematical formulas that accompanied each of the four rubrics we tried, we made adjustments and devised a formula for our current rubric that can be used with or without including self-evaluation scores.
Using our suggested rubric, students are asked to distribute 100 points among all team members, excluding themselves. Figure 16.1 illustrates a possible point distribution for Team 1 in a specific class. Team 1 has five members: Susan, Yukiko, Jon, Lavonne, and Ian; the figure shows the points awarded by all team members, without self-evaluation scores.
This team scored 85% on its team project. Logically, the team member with the highest total peer score should receive the entire 85% as a grade. Those with lower total peer scores should receive a lower final team project grade. How much lower is determined proportionally.
First, the instructor determines the maximum deduction allowed between the highest and lowest grades within a team. In essence, how much can peer evaluations negatively impact an individual’s grade on the team project? The highest grade anyone on this team can make is 85%, because the team project grade is 85%. The instructor decides that the lowest team project grade anyone on this team can make is 60% (minimum passing grade). Thus, the 85% team project grade minus the 60% gives a 25-point maximum deduction. Susan’s individual team project score is calculated below:
25 (maximum points deductible) × 0.95 (105/110: Susan’s total peer score divided by the highest individual total peer score) = 23.75, truncated to 23 (Susan’s peer proportion score)
To figure Susan’s percentage of the team grade, subtract the maximum deductible points from the team project grade, and add back Susan’s peer proportion score:
85 (team project score) - 25 (maximum deductible) + 23 (Susan’s peer proportion score) = 83 (Susan’s individual team project score is 83)
In the case of Lavonne, her scores indicate that all team members perceived her to be the weakest member. Her assigning equal scores to all individuals is typical of weaker students who often make end-of-project/semester attempts to persuade the other team members that everyone deserves the same grade. Lavonne’s individual team project score is figured below:
25 (maximum points deductible) × 0.67 (74/110: Lavonne’s total peer score divided by the highest individual total peer score) = 16.75, truncated to 16 (Lavonne’s peer proportion score)
The percentage of the team project score Lavonne would receive is figured below:
85 (team project score) - 25 (maximum deductible) + 16 (Lavonne’s peer proportion score) = 76 (Lavonne’s individual team project score is 76)
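Pulling the steps together, the whole calculation fits in a few lines of Python. Susan’s and Lavonne’s totals (105 and 74) and the highest total (110) come from the worked examples above; the remaining totals, and the assumption that Yukiko holds the highest total, are hypothetical stand-ins for Figure 16.1.

```python
import math

def individual_project_grades(team_grade, minimum_grade, peer_totals):
    """Individual grade = team grade - maximum deduction + peer proportion
    score, where maximum deduction = team grade - lowest allowed grade and
    peer proportion score = maximum deduction * (member's total peer score /
    highest individual total), truncated to a whole number."""
    max_deduction = team_grade - minimum_grade   # 85 - 60 = 25 in this example
    highest = max(peer_totals.values())
    return {
        member: team_grade - max_deduction
        + math.floor(max_deduction * total / highest)
        for member, total in peer_totals.items()
    }

# Susan's 105, Lavonne's 74, and the highest total of 110 come from the text;
# Yukiko, Jon, and Ian carry hypothetical totals.
team_1 = {"Susan": 105, "Yukiko": 110, "Jon": 98, "Lavonne": 74, "Ian": 90}
print(individual_project_grades(85, 60, team_1))
# Susan -> 83 and Lavonne -> 76, matching the hand calculations above;
# Yukiko, with the highest total, keeps the full 85.
```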
The project grades now reflect the students’ perception of each individual’s level of participation. If the instructor has guided students carefully through the process of evaluation, using formative feedback first, he or she can feel secure about the equity of these grades.
To determine the individual score with self-evaluation scores included, the process would be the same except that the team members’ total peer scores would be higher because of the additional self-evaluation number.
FINAL THOUGHTS
We have offered information on rubric design, process, and use of peer evaluation data. In a future article, we will examine factors in ensuring student buy-in to the evaluation process. However, many other areas of peer assessment remain to be studied. For example, in combined undergraduate/graduate courses, how would graduate learning teams differ from undergraduate learning teams in evaluation issues? What happens to the mathematical validity of evaluation data when attrition drops the number of team members to three—two strong students and one weak—and how can this be compensated for? To what degree can principles of peer evaluation be useful for faculty peer evaluation? As we address these and other issues sparked by peer assessment in learning teams, our goal is optimally effective and equitable learning teams that provide excellent cooperative learning experiences for students and meaningful feedback to faculty concerning the team experience.
REFERENCES
- Brown, R. W. (1995). Autorating: Getting individual marks from team marks and enhancing teamwork. Proceedings of the Frontiers in Education Conference. Pittsburgh, PA: IEEE/ASEE.
- Bruffee, K. (1993). Collaborative learning: Higher education, interdependence, and the authority of knowledge. Baltimore, MD: The Johns Hopkins University Press.
- Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles: Technology as lever. AAHE Bulletin, 49(2), 3–6.
- Chickering, A. W., & Reisser, L. (1993). Education and identity (2nd ed.). San Francisco, CA: Jossey-Bass.
- Clary, C. R. (1997). Using peer review to build project teams: A case study. NACTA Journal, 42(3), 25–27.
- Cooke, J. C., Drennan, J. D., & Drennan, P. (1997, October). Peer evaluation as a learning tool. The Technology Teacher, 23–27.
- Fink, L. D. (1998). Improving the peer evaluation process in learning teams. Presentation to Abilene Christian University, Abilene, TX.
- Foster, D., Green, B., Lakey, P., Lakey, R., Mills, F., Williams, C., & Williams, D. (1999, March). Why, when and how to conduct student peer evaluations in learning teams: An interdisciplinary exploration. Paper presented at the annual convention of the American Association for Higher Education, Washington, DC.
- Michaelsen, L. K., & Black, R. H. (1994). Building learning teams: The key to harnessing the power of small groups in higher education. In S. Kadel & J. Keehner (Eds.), Collaborative learning: A sourcebook for higher education (Vol. 2, pp. 65–81). Syracuse, NY: National Center on Postsecondary Teaching, Learning, and Assessment.
- Michaelsen, L. K., Fink, L. D., & Black, R. H. (1996). What every faculty developer needs to know about learning groups. In L. Richlin & D. DeZure (Eds.), To improve the academy: Vol. 15. Resources for faculty, instructional, and organizational development (pp. 31–58). Stillwater, OK: New Forums Press.
- Michaelsen, L. K., Fink, L. D., & Knight, A. (1997). Designing effective group activities: Lessons for classroom teaching and faculty development. In D. DeZure & M. Kaplan (Eds.), To improve the academy: Vol. 16. Resources for faculty, instructional, and organizational development (pp. 373–398). Stillwater, OK: New Forums Press.
- Michaelsen, L. K., Jones, C. F., & Watson, W. E. (1993). Beyond groups and cooperation: Building high performance learning teams. In D. L. Wright & J. P. Lunde (Eds.), To improve the academy: Vol. 12. Resources for faculty, instructional, and organizational development. Stillwater, OK: New Forums Press.
- Michaelsen, L. K., Knight, A. B., & Fink, L. D. (2002). Team-based learning: A transformative use of small groups. Westport, CT: Praeger.
- Millis, B. J., & Cottell, P. G. (1998). Cooperative learning for higher education faculty (ACE Series on Higher Education). Phoenix, AZ: Oryx Press. [Now distributed through Greenwood Press].
- Ohland, M., & Layton, R. (2000). Comparing the reliability of two peer evaluation instruments. Proceedings of the American Society of Engineering Education. Washington, DC: ASEE.
- Smith, K. A. (1996). Cooperative learning: Making “groupwork” work. In C. Bonwell & T. Sutherland (Eds.), Using active learning in college classes: A range of options for faculty (pp. 71–82). New Directions for Teaching and Learning, No. 67. San Francisco, CA: Jossey-Bass.
- Woods, D. R. (1994). Problem-based learning: How to gain the most from PBL. Waterdown, Ontario, Canada: D. R. Woods.
- Woods, D. R. (1995). Problem-based learning: Resources to gain the most from PBL. Waterdown, Ontario, Canada: D. R. Woods.
- Zigon, J. (1998). Measuring the hard stuff. Teams and other hard-to-measure work. Retrieved March 25, 2003, from http://www.zigonperf.com/articles/hardstuff.html
Contact:
Debbie Williams
Department of English
Abilene Christian University
ACU Box 28252
Abilene, TX 79699
Voice (915) 674-2405
Fax (915) 674-2408
Email [email protected]
Doug Foster
Graduate School of Theology
Abilene Christian University
ACU Box 29429
Abilene, TX 79699
Voice (915) 674-3795
Fax (915)674-6180
Email [email protected]
Bo Green
Department of Mathematics and Computer Science
Abilene Christian University
ACU Box 28012
Abilene, TX 79699
Voice (915) 674-2008
Fax (915) 674-6753
Email [email protected]
Paul Lakey
Department of Communication
Abilene Christian University
ACU Box 28156
Abilene, TX 79699
Voice (915) 674-2292
Fax (915) 674-6966
Email [email protected]
Raye Lakey
Adams Center for Teaching Excellence
Abilene Christian University
ACU Box 29201
Abilene, TX 79699
Voice (915) 674-2880
Fax (915) 674-2834
Email [email protected]
Foy Mills
Department of Agriculture and Environment
Abilene Christian University
ACU Box 27986
Abilene, TX 79699
Voice (915) 674-2276
Fax (915) 674-6936
Email [email protected]
Carol Williams
Graduate School
Abilene Christian University
ACU Box 29140
Abilene, TX 79699
Voice (915) 674-2354
Fax (915) 674-6717
Email [email protected]
Debbie Williams is Assistant Professor of English at Abilene Christian University. Her PhD is from Purdue University. She is a leader in the utilization of learning teams and learning communities. Her specialties include rhetoric and composition theory and pedagogy.
Doug Foster is Professor of Bible and Director of the Center for Restoration Studies at Abilene Christian University. A PhD from Vanderbilt University, he is a church history scholar, particularly American and Stone-Campbell studies.
Bo Green is Professor of Mathematics at Abilene Christian University. His doctorate is from Purdue University. He is known for his development of mathematical problems and brain challenges. He has made numerous presentations concerning peer evaluation and teams.
Paul Lakey is Professor of Communication and Scholar-in-Residence in the area of faculty development at Abilene Christian University. His PhD is from the University of Oklahoma with emphases in organizational and intercultural communication. His research interests include conflict management, leadership, and active learning.
Raye Lakey is Director of Instructional Development and Faculty Development and Associate Director of the Adams Center for Teaching Excellence at Abilene Christian University. Her research interests include active learning, cultural diversity, and quality assessment.
Foy Mills is Professor and Chair of Agriculture and Environment at Abilene Christian University. A PhD from Texas Tech University, his interests include peanut quality and marketing, team-based learning in the classroom, and experimental economics in the classroom.
Carol Williams is Professor of Mathematics and Acting Assistant Provost for Research and Acting Dean of the Graduate School, Abilene Christian University. Her PhD is from the University of California, Santa Barbara. She specializes in the use of learning teams and mathematics education.
APPENDIX 16.1
STUDENT PEER EVALUATION
Peer evaluation serves several essential purposes in a team-based or collaborative learning classroom. The evaluation provides a measure of team accountability (i.e., how each member participates in group processes). It also allows the professor to judge how well the group is actually working together as a team.
As you evaluate your peers, your judgments should reflect each team member’s contribution to group activities, in areas such as:
Attendance: Were team members present and on time for class?
Preparation: Were team members prepared for assignments?
Contribution: Did they contribute productively to team discussions and assignments?
Respect: Did they respect others’ ideas and encourage participation?
Cooperation: Were they willing to work through disagreements for the good of the team?
Attitude: Did they demonstrate a positive attitude towards the team and its work?
Please take this responsibility seriously. This evaluation will be a factor in your grade. Evaluating the work of your team members should not be done hastily or carelessly. Please be honest in your appraisal. This evaluation will be kept confidential.
I. Based on these suggested criteria, distribute 100 points among the members of your team. Give more points to those who contributed more to the team efforts. You must write a brief explanation of why you gave each person the number of points you did. If you believe that all members participated equally, divide the points equally and explain your decision.
| List each team member | Points |
|---|---|
| 1. _______________ Comments: | _______ |
| 2. _______________ Comments: | _______ |
| 3. _______________ Comments: | _______ |
| 4. _______________ Comments: | _______ |
| 5. _______________ Comments: | _______ |
II. Assess your personal contribution to the team. Give yourself a score in comparison to the scores you gave your peers. Then write a brief explanation of why you assigned this number of points. Your self-evaluation will be compared to your peers’ evaluation of your contribution.
_______________ (your name)
_______ (points)
Comments: