Using the SGID Method for a Variety of Purposes
Black, B. (1998). Using the SGID method for a variety of purposes. In M. Kaplan (Ed.), To Improve the Academy, Vol. 17 (pp. 245-262). Stillwater, OK: New Forums Press and the Professional and Organizational Development Network in Higher Education. Key Words: assessment, classroom assessment, consultation, course evaluation, evaluation methods, faculty development, feedback, formative evaluation, instructional effectiveness, instructional improvement.
The Small Group Instructional Diagnosis (SGID) process (Redmond & Clark, 1982) has been used for consultation purposes at the Center for Research on Learning and Teaching at the University of Michigan since 1990. Since then it has become a multi-purpose tool with far-reaching results. This article describes a variety of ways we have used this process: to provide feedback to individual faculty and teaching assistants on their teaching, to inform coordinators of large multi-sectioned courses on how the course is working as a whole, to inform coordinators of TA training on the effectiveness of their programs, to advocate for better classroom design, and to get feedback and inform changes in curriculum design.
I first heard about the Small Group Instructional Diagnosis (SGID) process almost two decades ago when Bill McKeachie (University of Michigan) called me into his office to hear about a fascinating process that he thought might have potential. Joe Clark (University of Washington), who was briefly in the area for personal reasons, had made a “drop-in” visit to the Center and intrigued us with the description of a process that he felt was making a big difference in teaching on his campus. He left us a detailed description of how to facilitate the SGID process and went on his way. Since I had no experience with the process, and not many instructors were clamoring for our help, I filed the description away in my “things to try” file, where the idea lay dormant for several years.
As our staff at the Center grew, and we started to receive more and more calls for consultations, a group of us became interested in improving our skills in providing services to teaching assistants and faculty. In 1989, with the encouragement and guidance of Arye Perlberg (a visiting scholar from the Technion-Israel Institute of Technology), five staff members at CRLT developed and participated in a series of activities designed to help us learn consultation skills: how to take non-judgmental, objective observational notes and how to give feedback in a non-threatening, non-directive manner. This process is driven by the instructor’s goals for the course, with the data (collected through observation) analyzed and reflected on by the instructor to see where there is disjuncture between the goals and actions in the classroom (see Hofer, Black, & Acitelli, 1997, for a more detailed description). This training was the backbone of our development as consultants and has had far-reaching implications for how we work with faculty and GSIs. Another source of information and help in this area was the annual POD Conference. As we exchanged ideas on observation and feedback with other members, we started to hear interesting stories about the SGID process, which had already become “old hat” for many faculty developers. In 1990 I got up the courage to try conducting an SGID. I went to the file where, so many years before, I had deposited the description, and read it in earnest. The next time someone called for a consultation on their teaching, I suggested using the process. I was pleased with the results, and thus began a love affair with the possibilities of the SGID.
Our training in observation and non-directive feedback complemented the facilitation of the SGID method, resulting in a five-step process: (1) getting acquainted with the instructor and his or her goals for the course before visiting the class, (2) taking objective data during the observation, (3) collecting small-group feedback from students, (4) providing feedback to the instructor in a non-directive way that encourages him or her to clarify goals and reflect on classroom data (from the students and from the observation), and (5) discussing possible ways to respond to the feedback. The combination has served the Center well. We have molded and modified the SGID process for many different purposes. This article will describe a variety of ways we have used this process to provide feedback to instructors, programs, and departments.
The Basic Process
The basic process that we use is a variation of that described by Redmond and Clark (1982). After a pre-meeting with the instructor to discuss the process and his or her goals and concerns for the course, the consultant arrives at the beginning of the designated class period and observes until there are approximately 25 minutes left. At that time, the instructor turns the class over to the consultant and leaves the room. The consultant explains the procedure and its purpose and then divides the class into groups of four or five students. Each group receives a sheet with the following questions (patterned after those used by the Center for Instructional Design and Research at the University of Washington):
1. List the major strengths in this course. (What is helping you learn in the course?) Please explain briefly or give an example for each strength.
2. List changes that could be made in the course to assist you in learning. Please explain how suggested changes could be made.
Students are asked to come to a consensus in their groups on responses to each of the questions. After about eight minutes, the groups share their responses. The consultant posts the responses on an overhead transparency (or the chalkboard when equipment is unavailable) where they can be discussed, clarified, and developed into a common response from the whole class. We find that the use of the overhead transparency increases the efficiency and accuracy of the process.
Soon after the feedback session (preferably before the class convenes again), the consultant meets with the instructor to share the students’ comments and the data taken during the observations (the latter often include a map of the classroom interactions). After the instructor analyzes and reflects on the data, the instructor and consultant discuss possible actions the instructor might take in response to the feedback.
Evaluation of the Process
Both faculty and GSIs like this process and evaluate it highly. For example, out of the 20 new faculty in the College of Literature, Science, and the Arts who received SGIDs during the Fall 1997 semester, 16 sent back their evaluations of the process. All of the respondents strongly agreed with the statement “Overall, I feel that the service was valuable.” All 16 also said they would recommend the service to colleagues in their departments. Typical comments on evaluations include:
A great process! I have been getting end-of-course ratings for 30 years and I never got as much helpful information as I did using this process.
The SGIDs were very helpful. Students tell those CRLT people far more than they will ever tell us instructors.
Getting midterm feedback from students was particularly helpful in my development as an instructor.
Students also like the process because their comments at midterm have the potential to change the class for them. I have had many students come up to me after they participate in the process and ask how they could get it done in another class where there were “really” problems.
Training
Since the Center gets more requests for this process than the regular staff can handle, we have, for several years, hired and trained upper-level graduate students (who have excellent teaching records) to facilitate the process for TAs. Our use of graduate students has evolved into a Graduate Student Associate program, with six upper-level graduate students who are awarded a position at the Center for a year to help out with a number of our programs, including the Midterm Student Feedback process. We also train departmental faculty and graduate student mentors to facilitate the process in their departments.
In preparation for facilitating feedback sessions, consultants read several articles about SGIDs and participate in two training sessions of two and a half hours each. The first training session focuses on observational skills and has two goals: (1) to give participants an understanding of and practice in recording nonjudgmental, objective data while observing a class; and (2) to teach participants to give feedback to the instructor in a nonthreatening, nondirective manner. To achieve these goals, we discuss methods for taking objective data and give participants practice in taking data while observing a short segment of a videotaped class. Using data from the same video, facilitators role-play a feedback session that is observed and critiqued by the participants. Using other videotaped classes, participants, in groups of three, use role-plays to practice the process (each participant gets a chance to be an instructor, a consultant, and an observer of the process). After each role-play, we discuss the results and talk about issues and questions that came up during the role-play (for a more detailed description of this training session, see Black & Gates, 1997).
The second session is designed to help participants understand how to conduct SGIDs in the classroom. We have participants act as the “students,” first showing them a segment of a videotape of “their” class, and then having a facilitator conduct an SGID about the class with them. This is followed by a reflection about and discussion of the process. Using data collected from the “students” as well as the observation of the same videotaped class, the facilitators then role-play a feedback session. Finally, we discuss the process, concerns, and issues.
The discussion continues on a one-to-one basis after each participant observes an experienced consultant facilitating the process in an actual classroom. After everyone has facilitated the process in a few classes, another group meeting is held to reflect on the process and answer any questions. In addition, we send an evaluation form to each instructor who receives an SGID. Completed forms are shared with the consultant who facilitated the SGID.
SGIDs for a Variety of Purposes
CRLT has found many uses for the SGID process to provide feedback to individual instructors, faculty in charge of multi-section courses, and departments on various programs. This section describes the variety of ways in which we have used this process.
Individual Consultations
Individual faculty and TAs can request an SGID from the Center, and many instructors do so over and over again. Instructors find the feedback especially useful when teaching for the first time. Some use the service to diagnose difficulties. I had an instructor call last week who had been one of my first “guinea pigs” as I learned how to use the process. He said, “Beverly, you helped me several years ago when I was having problems with a class, and now I have another class that is giving me problems. Can you come in?” Some instructors have used the process when developing a new course, to get feedback on how it was working for the students. One instructor who seems to thrive on developing new courses likes to have the method facilitated during the last week of the course. He loses the benefit of being able to change the course for the students who provide the feedback, but, according to him, he gets a better sense of the whole course than the usual end-of-course evaluations provide.
New Faculty Members
The College of Literature, Science, and the Arts (LS&A) became so enamored with the midterm student feedback process that they tried to require it for all the new faculty in the College. In response to the furor that erupted on our very decentralized campus, the College agreed not to require it but to make it available to all new faculty members. The College contributes resources to the Center to make the service available and sends each of the new faculty members a description of the process and a letter encouraging them to take advantage of the service. They provide us with the names of the new instructors and the courses they are teaching. This gives staff at the Center a good excuse for making contact with new faculty members, introducing ourselves and our services, and explaining the process to them. Although not all of the instructors take advantage of this service, many do, and because of the early contact, we see a larger percentage of new faculty participating in some of our other programs, whether or not they choose to use the early feedback process.
In the College of Engineering, all new faculty receive midterm student feedback as part of their Faculty Fellows program that focuses on teaching. The College funds one of the staff members at the Center to provide this service.
Departmental Instructional Development Programs
The SGID process is used by some departments as a part of their instructional development programs. Mathematics, for example, requires an SGID in at least one section for every instructor who teaches for the first time in the Reformed Calculus Program. This includes all new faculty, postdocs, and TAs as well as tenured faculty who are new to teaching the reformed courses. New TAs (and other instructors who have received low end-of-course student ratings) have another SGID facilitated in their class during the second semester of their teaching as well. CRLT facilitated the process for the first two years of this intense usage of SGIDs. Since then, Math faculty and graduate students who are on the departmental “training team” for the year take part in CRLT’s annual SGID training workshops to learn the process. The training team does all of the visits (with the exception of some done by me as the instructional consultant for the courses).
Use of SGIDs by Groups of Instructors Who Wish to Improve Their Teaching
Groups of faculty members or TAs sometimes decide they will all have the SGID process done in their classes. For example, a Faculty Peer Teaching Group in the Chemistry Department (as part of the AAHE Peer Evaluation Program) decided to have the process facilitated in their classes and then shared and discussed the results with each other. This helped to create an atmosphere of collegiality around teaching that eventually allowed faculty to start examining the curriculum and how their courses did (or did not) fit together.
Combining SGIDs with the Videotaping of a Class
In the Department of Communication Studies, as part of a course on teaching, all new TAs receive midterm student feedback coupled with the videotaping of their classes. In the feedback session, the TAs and consultant both view and discuss the tape as well as address the feedback from students. For the last session of their training, the TAs show and discuss about five minutes of their videotapes (chosen by the TAs), and during the process they often share with each other some of the students’ feedback. In this atmosphere of sharing, they have thoughtful discussions about various aspects of teaching and learning and many of their concerns are addressed.
Large Lectures
Facilitating the midterm student feedback process in large lectures has been a difficult, evolving process. Classes with over 80 students require a different tactic than those with fewer students. Because of the nameless, faceless aspects of large lectures, when the process is facilitated at the end of the class period, there tends to be a rush for the doors as soon as the instructor leaves. For those who stay, the small groups work well, but it is difficult to get representative statements from the whole group. We have had some success by asking the small groups to choose one of their members to stay and represent the group’s views and letting the remaining members leave the room. This brings the group down to a manageable size, resulting in better discussions.
We have also tried facilitating the SGID during the first part of the lecture period and stopping after collecting the written responses from the small groups. The data is later collated and organized into trends, which we can then discuss with the instructor. However, it takes a tremendous amount of time to collate and analyze the students’ comments. I tried to do this with a lecture of 500, and it took me many hours to figure out the trends.
A process we are just now experimenting with seems to hold promise. In this scenario, the consultant visits the class ahead of the scheduled feedback session to get a general sense of the class, students’ attitudes, etc. On the appointed day, the instructor introduces the consultant and then helps the consultant divide the class into groups (often the TAs help as well). The instructor and the TAs then leave the room. Each small group gets a numbered sheet for their responses. Groups are asked to agree on and write down their three most important responses to the questions and to rank their comments. The facilitator then calls out a number, and the group with that number on their response sheet provides their “most important comment,” which is put on the overhead transparency. (If the group’s number one comment has already been stated, the group gives its second comment.) The facilitator chooses a sampling of groups (e.g., every fifth group) to call on, and when there are no new responses, the facilitator asks the whole class if anyone has something important that hasn’t been stated. In this way the feedback session becomes more orderly, and it is easier to address the issues raised. When the process is finished, the instructor comes back into the room and goes on with his or her agenda for the day, while the consultant observes the class.
Large Courses With Many Sections
Another use of the SGID process is to gather feedback on large courses with many sections. This could be a large course divided into small sections that use the same syllabus and have common midterms and finals. The instructors teaching the sections are responsible for the day-to-day teaching of the class. Another course coming under this rubric is a large lecture taught by a faculty member with many discussion or lab sections taught by TAs. In both cases the “client” is the faculty member who is in charge of the large course, and she or he asks all of the instructors in the course to have SGIDs facilitated in their sections. The individual feedback is confidential between the instructor teaching the section and the consultant; however, the written feedback from across sections is analyzed by a CRLT staff member to find patterns and course-wide information.
When CRLT staff were first starting to conduct SGIDs, we decided to look for a multi-section, large course in which the faculty member wanted midterm feedback for all the TA-led sections. This would give us a chance to practice the process, give us similar experiences that could deepen our discussions about the use of the process, and help one of the large courses at the same time. Our chance came when students in introductory calculus started writing letters to the student newspaper complaining about the quality of teaching in the course. In response, the student paper came out with a long article about the “terrible” first-year calculus courses.
This seemed like a great opportunity. I called the faculty member in charge of the first-semester calculus course and told him that CRLT had a new method of getting student feedback that could help TAs become better teachers. I also said that we were looking for a large course with many sections to try out the process. He agreed, but only if we could complete the process over a one-week time span and visit all of the sections (taught by both faculty and TAs). In one week, six of us visited 51 sections. As we talked to each other about the process and our visits, we found a pattern of responses across sections, and we realized that our findings had implications for the course as a whole. We compiled all of the data we collected and analyzed it for patterns of responses to the sections as well as feedback on the course as a whole. A report of the results was given to the faculty member in charge of the course, who was delighted to receive the information. (This started a long-standing partnership between CRLT and the Department of Mathematics to improve the quality of teaching in that department.)
This process can give the faculty member a general sense of how the course is going for the students. It can identify areas of concern about the book, the syllabus, the course-wide exams, etc. It can also identify areas of miscommunication between the course coordinator and the instructors, and reveal whether students and instructors understand the goals of the course.
In the case of the small discussion sections (or labs) connected to a large lecture, consultants facilitate an SGID in at least one section for each of the TAs. This process can tell the faculty in charge of the course whether or not the lectures and the labs or discussions are working together. For example, in a course with several sections of lectures and six labs connected to each lecture, students did not understand the connection between the lectures and the labs. In response to this feedback, the course coordinator made it easy for the instructors to demonstrate how concepts related to lab projects by using the MAPLE graphics system to project three-dimensional graphs on a screen. In addition, the lecturers and TAs were encouraged to make explicit connections for students between the lectures and the lab work wherever possible.
In another course, major changes were made when the student feedback across sections indicated that most of the TAs teaching the discussion sections were mostly lecturing, reviewing highlights of the large lectures. As a result, the faculty member in charge of the course developed a training session for the TAs to help them learn discussion skills. She also developed an activities file that TAs could use in planning their classes. Weekly meetings moved from a discussion of the material to discussions and demonstrations of activities TAs could use to help students understand the material.
This process has had interesting repercussions in one department. Because the TAs were being required to receive feedback in their classes, and because they found it very useful, they wanted to be able to give feedback in some of their graduate classes. As a group, they went to the department and requested that all of the instructors teaching the core group of graduate courses be required to get midterm feedback from the students in the course (i.e., the graduate students making the request). In response, the department made the request to the faculty; however, only a couple of them took advantage of the service. Graduate students are continuing to try to get the department to make it mandatory.
Implications for Departmental GSI Training
During the past two years the College of Literature, Science, and the Arts (LS&A) has required all departments to provide training for new GSIs equivalent to a one-credit course on teaching. The College has designated CRLT as a resource for the departments to use as they develop, conduct, and evaluate their programs. One of the services CRLT has provided to departments is the use of the SGID method for getting feedback from the TAs who participate in the training programs. The results have been useful to the departments that have received the feedback. One department learned that the training was too general and needed to be focused more on what the TAs were teaching. TAs in another department asked for more help in facilitating discussions: how to begin them, how to maintain them, and how to summarize at the end. Another department learned that the TAs were adamant about getting out at the designated time. They didn’t mind if the sessions ran for two hours instead of ninety minutes, as long as they were told ahead of time what to expect and the sessions finished on schedule so they could make plans.
We have also used SGIDs to gather feedback from graduate student mentors. The College of LS&A gives funds to the departments to hire upper-level graduate students who serve as mentors (one mentor for every ten new TAs) to help with the TA training programs and to work with individual TAs. In one large department, the graduate student mentors (GSMs) asked for an opportunity to give formal feedback to the department through the SGID method. Consequently, the department asked CRLT to come in and facilitate two feedback processes: one with the mentors and one with the TAs (who were asked for feedback on the mentoring program). As a result, the department learned that they needed to make the mentoring positions more credible both to the GSIs and to the faculty with whom the mentors were working. They also found that they needed to give more guidelines to new mentors and to communicate better with GSIs about how they could take advantage of the mentors.
Advocacy for Better Rooms
In the Department of Mathematics the second-year calculus courses (with about 100 students in each of several lecture sections) are taught in two long, narrow rooms with the seats all on one level. As an instructional consultant to the Department, I was asked to facilitate SGIDs in all of the large lectures and do an analysis across all the sections so the Department could get a sense of how the courses were going for the students. In every section, those students seated behind the first few rows complained that they could not see and, unless the faculty member had a big booming voice, they could not hear. While observing, I sat in different parts of the room, both to watch the students and to observe the instructor. My observations backed up the students’ comments. Beyond the first five rows, some students were craning their necks to see what was written on the board, and others waited until the board was raised before copying it down, putting them several paces behind the lecture. Also, the room had an echo, and even when one could hear, it was sometimes difficult to understand what the lecturer was saying (especially for those instructors who had an accent).
Although faculty had complained about these rooms, nothing had happened. After I did an analysis of the feedback from the students across sections, I wrote a letter to the Chair, the faculty member in charge of the course, and the Associate Chair for Undergraduate Education, emphasizing the problems. I also included quotes that I obtained from the students during the SGID process. (Students had been very graphic about the difficulties they were having in seeing and hearing.) The Chair of the Department immediately sent the letter to the Dean of the College and the person in charge of facilities.
Student voices got action. A group made up of Math faculty, myself, one of the University’s architects, and the person in charge of facilities has been meeting this semester to redesign the two rooms to be more learner-friendly. Student voices and my observations have played a big part in the discussions. The changes are planned for the Summer of 1998.
Curriculum Changes
Some departments have used the SGID process to learn whether or not curriculum changes were working for the students. For example, over the past six years the Mathematics Department has made curriculum changes in the introductory calculus courses. The new courses require a different type of learning from the students: they are responsible for reading the book (the material is not all covered in class); they work difficult, open-ended, real-world problems in homework teams; and they explain their answers in writing, work cooperatively with other students in class, and use graphing calculators to help them understand the concepts. The goals of the courses are to help the students learn how to think about mathematics and to learn analytical and problem-solving skills, as well as learn calculus and how it relates to the real world.
For the first few years of this project, SGIDs were conducted in all of the experimental sections of the course (see the section on multi-section courses for the method). When a problem was identified in several sections, it called for an adjustment in the course materials and/or training. For example, we found that many students were frustrated with the new course and did not understand why it was not set up like the calculus courses in high school: they were accustomed to calculus being rote learning and the manipulation of symbols. Many of them had emphatically stated, “I know what calculus is, and this isn’t calculus!” It was obvious from the data that students did not understand the goals of the course and why it was set up the way it was. In response, the course organizers worked with the instructors so they would explicitly discuss the course goals with the students throughout the semester and encourage the students in their learning of the skills necessary to succeed in the course. In addition, the goals of the course were more clearly laid out in the students’ course pack.
There were many other areas of concern that were discovered through the students’ feedback. For example, the homework groups were not working as well as they should, and many students were frustrated because they were not sure what or if they were learning from the cooperative activities in class. As a result, guidelines for working in groups were added to the students’ course pack, and a session on “Helping Students Learn How to Work in Groups” was added to the Professional Development Program before classes began in the fall.
Facilitating an SGID with the instructors of the course. In addition to facilitating SGIDs with students in calculus, we conducted modified SGIDs with the instructors of the first-year courses. From these feedback sessions, we learned faculty’s and TAs’ perceptions of what was working in the courses and what was not, and how we could better support them as instructors. For example, we learned that weekly meetings needed to be more organized and focused on pedagogical issues; instructors were still uncomfortable facilitating cooperative learning activities in class; and they were concerned because some of the homework groups were not working. They asked for more help in these areas. They also made suggestions that helped us make the common midterms and final run more smoothly. We also learned which parts of the book they thought were good and which parts they felt uncomfortable using.
The information we gathered from both students and instructors through the SGID process was invaluable in helping the courses get off to a good start. We used the process for gathering information until the new curriculum was in place and working across all 60-plus sections. Since then, SGIDs have been used solely for giving feedback to individual instructors. This semester, however, now that the course has been running for several years, we are again facilitating feedback sessions in all of the second-semester sections to take the pulse of the course.
Some Disadvantages
As you can see from the above descriptions, there are many advantages to using the SGID process. However, it would be a disservice to leave readers with the impression that this is a foolproof method. There are drawbacks to using SGIDs, and knowing about them will allow you to minimize their impact.
SGIDs take a lot of time for both the consultant and the instructor. We figure the process takes an average of about four hours, including the pre-meeting, facilitating the process in class, analyzing and organizing the data, and then meeting with the instructor. However, the process sometimes takes more time, and we try to leave our schedules open so the meeting with the instructor can go longer if necessary.
SGIDs take more class time than some instructors want to give up. In this case a questionnaire at midterm for each student during the last ten minutes of class might be more suitable. Another possibility is to have the class write a two-minute paper at the end of class, including what is going well and suggestions for change.
Training to conduct the process and an ongoing evaluation of the process are essential. We feel that the training we do is generally adequate in helping the consultants become effective in facilitating this process. However, we have occasionally misjudged a graduate student’s ability to let go of his or her “expertise” in the classroom and learn to listen and respond in light of the instructor’s goals and desires. Consequently, it is important to send out evaluations immediately after the process is done in order to identify problems early.
Using the SGID process can also backfire. The act of verbally exchanging ideas on how to make a class better seems to create an expectation in the students that something will change for them (more so, it seems, than does an individual written evaluation). If an instructor ignores students’ suggestions, students become disgruntled and the class atmosphere can disintegrate. It is essential that the consultant and the instructor explicitly discuss how the instructor is going to respond to students’ suggestions: what he or she is going to do differently, what will not be changed and why, and how the instructor is going to communicate this to the students. Also, if an instructor is really having problems and students are generally unhappy with everything in the course, their negative comments can “snowball,” picking up speed as students share their frustrations. This happened to me on one occasion, and it was a very difficult situation. Since then I have learned that if, when observing, I notice a high degree of dissatisfaction among students (or tension between the instructor and students), I have students write individual responses to the questions and then gather, collate, and analyze them for patterns. I have had to do this very rarely and recommend it only in extreme circumstances.
Because small groups are asked to reach a consensus, individual voices are sometimes lost. There may be some instances where the instructor is more interested in the full diversity of responses than in consensus building. In such cases, open-ended surveys or focus groups would be more appropriate.
Conclusions
The use of the SGID method to provide feedback has opened many doors for the staff at CRLT. It has also stretched us to our limits. During those prime few weeks in the middle of the semester, everyone is spread thin: fewer meetings are scheduled during that time, and other work is postponed. Everyone involved in the facilitation of midterm feedback sessions agrees that we have learned an incredible amount about teaching and learning by being involved with this process. We have learned a lot about listening, observing, and withholding judgment. We have learned respect for the thoughtfulness and integrity of both the instructors and the students as they work together to make this complicated process of teaching and learning successful. The only advice I have for those of you who have ideas tucked away in “things to try” files is to take the leap and try them.
References
- Black, B., & Gates, B. (1997). Training TAs as consultants at the University of Michigan: Workshop for peer mentors. In K. Brinko & R. Menges (Eds.), Practically speaking: A sourcebook for instructional consultants in higher education. Stillwater, OK: New Forums Press.
- Clark, D. J., & Bekey, J. (1979). Use of small groups in instructional evaluation. POD Quarterly, 1(2), 87-95.
- Hofer, B., Black, B., & Acitelli, L. (1997). Reflecting on practice: Observing ourselves consulting. In K. Brinko & R. Menges (Eds.), Practically speaking: A sourcebook for instructional consultants in higher education. Stillwater, OK: New Forums Press.
- Redmond, M. V., & Clark, D. J. (1982). A practical approach to improving teaching. AAHE Bulletin, 34(6), 8-10.
- Wulff, D. H., & Nyquist, J. D. (1986). Using qualitative methods to generate data for instructional development. To Improve the Academy, 5, 37-46.
- Wulff, D. H., Stanton-Spicer, A. Q., Hess, C. W., & Nyquist, J. D. (1985). The student perspective on evaluating teaching effectiveness. ACA Bulletin, 53, 39-47.
Contact:
Beverly Black
Center for Research on Learning and Teaching
3300 School of Education Building
University of Michigan
Ann Arbor, MI 48109-1259
(734) 764-0505
(734) 647-3600 FAX
Beverly Black has been an instructional consultant at the Center for Research on Learning and Teaching at the University of Michigan for twenty-five years. She has worked with both faculty and teaching assistants, conducted workshops, and headed the team that works with teaching assistant programs. During the past four years she has worked half time in the Department of Mathematics to help integrate cooperative learning into the introductory courses and to develop an extensive training program and support system for graduate students and faculty teaching in the courses.