The purpose of this chapter is to review recent literature on instructional development in higher education. More specifically, it defines and illustrates instructional development as a major component of faculty development. Next, it reviews research on how development activities are associated with teaching and learning. Finally, it argues there is a critical need for additional research and offers suggestions for accomplishing that research agenda.

INTRODUCTION AND LIMITATIONS

The focus of this review is to present key concepts, research findings, resources, and suggestions for future research related to instructional development and the role it plays in teaching and learning in higher education. Due to the volume of research and changes in the field, this review concentrates on material published primarily since 1995. In addition, though it can be argued that reward structures are part of faculty and instructional development, research on this topic is not included here. Finally, though there is literature on development efforts designed specifically for new faculty, nontenure-track faculty, and teaching assistants, the focus is on development efforts for tenured or tenure-track faculty or for all instructors.

WHAT IS INSTRUCTIONAL DEVELOPMENT?

For the purposes of this review, instructional development is one component (a large one) of more general faculty development and refers to a wide range of activities at a variety of levels that aim to improve teaching and, thus, to enhance student learning. These activities can be at the individual (e.g., an instructor keeps a private teaching journal and reflects on it), dyad (e.g., an instructor works with a peer or a consultant), group (e.g., faculty members join a teaching circle to discuss their teaching), or organizational levels (e.g., department, college, or university-wide activities such as teaching workshops; national teaching conferences). The structure of such activities ranges from very informal and decentralized (such as faculty who are teaching the same course meeting over coffee to discuss teaching issues) to more formal and centralized (as in institutes offered by university teaching centers), and from short-term (e.g., a one-hour panel on teaching) to long-term (e.g., a year-long peer mentoring relationship).

Many types of instructional development activities or services are noted in the literature. These include, but are not limited to, the following: efforts to obtain formative feedback such as classroom assessment or research, peer observation or review, teaching circles, newsletters, web sites, handbooks, resource rooms, workshops, institutes, symposia, videotaping and microteaching, mentoring, small grants, team teaching, student group instructional diagnosis, one-on-one consultations, awards, and technology support (Chism & Szabo, 1996; Paulsen & Feldman, 1995; Seldin, 1995; Weimer, 1990; Wright, 1995; Wright, 2000).

Several writers have discussed models of, or the processes involved in, faculty or instructional development (Caffarella & Zinn, 1999; Emery, 1997; Licklider, Schnelker, & Fulton, 1997; Middendorf, 1998; Paulsen & Feldman, 1995; Robertson, 1999; Smith & Geis, 1996; Weimer, 1990). In one of the earliest of these, the authors describe a five-step process focusing on individual instructors: 1) developing instructional awareness, 2) gathering information, 3) making choices about changes, 4) implementing the alterations, and 5) assessing the alterations. In a more recent and somewhat complex model, Caffarella and Zinn (1999) discuss many personal (e.g., health, life transitions, personal values, self-confidence), interpersonal (e.g., mentoring, level of support from department chairperson, collaboration), and institutional (e.g., resources, policy statements, competitive or cooperative climate, time for professional development) factors that enable or impede the professional development of faculty.

THE RELATIONSHIP BETWEEN INSTRUCTIONAL DEVELOPMENT AND TEACHING/LEARNING

In discussions of the characteristics of institutions that strongly support teaching and learning, one characteristic often included is the existence of faculty or instructional development efforts (Feldman & Paulsen, 1999; Paulsen & Feldman, 1995; Patrick & Fletcher, 1998; Smith, 1998; Woods, 1999; Wright, 1996). For example, Paulsen and Feldman (1995) note the following empirically based characteristics of institutional cultures that support teaching and teaching improvement.

  • Support from key administrators

  • Faculty shared values and ownership

  • A broad definition of scholarship

  • A teaching demonstration as part of the faculty hiring process

  • A strong faculty community

  • Supportive department chairpersons

  • A strong connection between valid evaluation of teaching and personnel decisions

  • A faculty development program or campus teaching center

Much of the research on the relationship between instructional development and teaching and learning is program evaluation research. The impact or effectiveness of instructional development efforts may be defined and measured in a variety of ways, including assessing the level of use of instructional development activities, general perceptions of developers, satisfaction of participants, cost effectiveness, impact on teacher attitudes or behaviors, and impact on student perceptions, behaviors, or learning. According to Chism and Szabo (1997), the most common types of information actually obtained are data on use and satisfaction.

Methods used to gather the various types of assessment data include anecdotal observations, simple counts (e.g., workshop attendance), self-evaluation by staff or participants, questionnaire studies of staff or participants, interviews of participants, satisfaction scales or indexes for specific events or programs, reviews of a program by outside experts, quasi-experiments, and follow-up studies (e.g., observations or interviews) of participants after services (see also Chism & Szabo, 1997; Weimer, 1990). Recent examples of some of this work follow.

Using reviews of the writing of others, anecdotal evidence, observations, and personal expertise, some authors have proposed key characteristics or features of successful faculty or instructional development programs (Menges, 1997; Paulsen & Feldman, 1995; Seldin, 1995; Weimer, 1990; Wright, 1995). Successful programs are often implicitly defined as development opportunities that are actually used by faculty, appreciated by faculty, and both fit and are supported by the institution. Paulsen and Feldman (1995) suggest that effective services involve collaboration, consultation, feedback, and targeting new and junior faculty. Seldin (1995) lists several strategies for successful development programs: fit the program to the institution’s culture, focus on long-term impact, obtain visible support from top-level administrators, use advisory boards, start small, take a positive and inclusive approach, involve faculty in as many ways as possible, obtain feedback, and recognize and reward excellence in teaching.

Another topic of study in this area is the view of faculty developers and faculty members about the role and impact of instructional development. Researchers have conducted interview and questionnaire studies on faculty needs, use of instructional improvement, satisfaction with services, perceptions of effective opportunities, and similar issues (Chism & Szabo, 1996; Eleser & Chauvin, 1998; Sandy, Meyer, Goodnough, & Rogers, 2000; Stanley, 2000; Woods, 1999; Wright & O’Neil, 1995).

For example, Wright and O’Neil (1995) report a survey of instructional developers in four countries (Canada, United States, United Kingdom, and Australia). They found that more than 60% of the institutions represented have centers or units devoted to the improvement of teaching. In addition, they report the practices these developers believe are most likely to improve the quality of teaching at their schools. A few of the top-rated activities are recognition of teaching in tenure and promotion decisions, deans and department heads who foster the importance of teaching, centers to promote effective instruction, mentoring programs and support for new faculty, and grants to faculty to devise new approaches to teaching.

Another example of this type of research is a study of faculty in 13 institutions in New Hampshire (Sandy, Meyer, Goodnough, & Rogers, 2000). The researchers found that 40% of faculty were satisfied with the quantity and quality of development activities offered at their schools. Faculty reported the following as positively impacting their satisfaction: the presentation of relevant and interesting topics, support for the unit by the institution, supportive colleagues, and the dedication of monetary resources. In addition, they found that faculty at institutions with instructional development units rated the importance of the quality of teaching to campus administrators higher than did faculty at institutions without such units.

Analyzing survey data from approximately 100 teaching centers, Chism and Szabo report that “. . . the average program reaches 82 percent of its client base with publications, 47 percent through events, 11 percent through consultation, and 8 percent through mentoring programs” (1996, p. 125). In a study of faculty development resources and services at Research I and II institutions, Wright (2000) asked respondents what new services or opportunities they planned to offer or increase in the next five years. The initiatives noted were development opportunities related to instructional technology (21%), graduate student programs (21%), assessment services (9%), peer review (9%), and preparing future faculty (9%).

Stanley (2000) offers a different approach to looking at faculty perceptions and impact on faculty. She interviewed ten faculty members who had made repeated use, over time, of consultations and other center services. She found that faculty experiment with new teaching strategies and ideas, and that they are often motivated by personal reflection and/or a situation in the classroom. These faculty members often worked alone on teaching but recognized the value of faculty development. They felt their consultations with faculty developers were helpful, and that they could speak freely and obtain resources through the faculty development unit.

Finally, there are research studies and reviews on the nature and outcomes of specific forms of instructional development programs or activities. Generally this research describes the program/activity and attempts to evaluate the program or activity using one or more methods. For example, Eison and Stevens (1995) discuss workshops and institutes. Several authors look at various mentoring or peer collaboration/review programs (Anderson & Carta-Falsa, 1996; Bernstein, Jonson, & Smith, 2000; Goodwin, Stevens, Goodwin, & Hagood, 2000; Scott & Weeks, 1996; Sweidel, 1996; Wildman, Hable, Preston, & Magliaro, 2000). Stanley, Porter, and Szabo (1997) surveyed instructional development clients about the outcomes of a consultation. Others discuss and assess the use of formative student input through classroom assessment or small group instructional diagnosis (Black, 1998; Farmer, 1999). Recent work reviews development activities in support of the use of instructional technology (Kitano, Dodge, Harrison, & Lewis, 1998; Lieberman & Reuter, 1996; Taber, 1999). Some of the literature includes reports on specific programs that incorporate a mix of several different development activities (Fulton & Licklider, 1998; Middendorf, 1998; Paulsen & Feldman, 1995; Rauton, 1996; Seldin, 1995; Wright, 1995).

What do these studies show? For much of this evaluation research of instructional development, a variety of beneficial outcomes are reported. An increased sense of collaboration and community about teaching and high levels of participant satisfaction are outcomes reported in most of the studies. A third fairly common finding is that instructors’ attitudes about teaching change. For example, instructors become more self-confident, more positive, and more concerned with students. In a few of these studies, changes in teaching behaviors are reported. These include, for example, trying one or more new teaching techniques, creating a more positive classroom environment, increased use of instructional technology, and taking more time to reflect on teaching (e.g., Bernstein, Jonson, & Smith, 2000; Eison & Stevens, 1995; Farmer, 1999; Fulton & Licklider, 1998; Kitano, Dodge, Harrison, & Lewis, 1998; Stanley, Porter, & Szabo, 1997; Sweidel, 1996; Wildman, Hable, Preston, & Magliaro, 2000). Rarely are student perceptions or outcomes measured, but there are exceptions. For example, Lieberman and Reuter (1996) and Kitano, Dodge, Harrison, and Lewis (1998) look at student reactions to the use of instructional technology (after faculty attended technology institutes or other forms of training) and report mixed and positive student reactions, respectively. In addition, Bernstein, Jonson, and Smith (2000) report a detailed, longitudinal, multimeasure study of the impact of peer review on faculty attitudes and behaviors as well as student attitudes and learning. They found “uneven” effects of peer review on teaching practices and student learning.

Thus, much research on instructional development, and its relationship to teaching and learning outcomes, exists. Most of the work, however, is descriptive or correlational in nature, and focuses on instructor use of services, instructor satisfaction with services, changes in instructors’ attitudes, and instructor perceptions of behavior changes.

WHERE DO WE GO FROM HERE? SUGGESTIONS FOR FUTURE RESEARCH

Over a decade ago, Weimer (1990) identified limitations of, and suggestions for, research on the effectiveness of faculty development. Based on the recent literature reviewed here, it appears many of her conclusions hold today. We, as faculty and/or developers, understand these limitations. Designing reliable and valid research on this topic is difficult. For instance, concepts such as learning are complex and context specific, making operationalization a challenge. There are many practical and ethical constraints to conducting experimental research and drawing causal conclusions. In addition, much of this research is program evaluation conducted by development staff. Staff members report the following difficulties in evaluating their own programs: “lack of time, lack of resources, problems in designing studies, difficulties in getting the cooperation of the program users, and lack of appreciation or requirements for performing program evaluation” (Chism & Szabo, 1997, p. 60).

Thus, in many ways, it is not surprising that much of the writing in this area is still descriptive (“how to” and “best practice” suggestions, anecdotal evidence, detailed case studies) or correlational (cross-sectional, questionnaire studies of developers or faculty clients). It is also not surprising that dependent variables tend to be attitudinal and/or about faculty, rather than behavioral and/or about student learning. Data and analysis of the type already obtained are essential, but other research is desperately needed.

Despite all this useful research, we really know very little about the impact of instructional development on teaching practices and student learning. As someone with the responsibility for such development, I want to know, I need to know, much more. I want to know if, how, when, and for whom such services contribute to changes in teacher behaviors (e.g., trying a new way to present material, changing evaluation techniques, adjusting the amount of content or pace of the course, using formative assessment techniques, etc.). I want to know if, how, when, and for whom changes in teacher behaviors are related to changes in student learning and development. I want to know which instructional development practices will have the greatest impact on teacher behavior and student learning under what circumstances and at what cost.

So, where do we go from here? We need more studies attempting to measure the impacts of instructional development on teacher behaviors and on student learning. We need more longitudinal research of both a qualitative and quantitative nature following programs, instructors, and students over time. For example, we might follow a group of instructors over a period of at least two years, measuring, at regular intervals, their involvement in all types of instructional development activities and assessing key teaching attitudes and practices via interviews, questionnaires, observations, or content analysis of syllabi. We could look at class or program evaluation and assessment data for these instructors’ students over time as well.

More frequent use of well-designed quasi-experiments would add to the knowledge base. For example, two of my colleagues have instituted a change in their courses and are conducting their own quasi-experimental classroom research to assess the impact of graded versus ungraded homework on student participation and learning. Perhaps we could select two groups of faculty, attempting to match the groups on as many potentially relevant variables (discipline, class level and size, teaching experience, etc.) as possible. As a pre-test, we compare these two groups on a range of teaching attitudes and practices and on measures of their students’ learning. We then encourage (and measure), through a variety of means, high levels of use of instructional development in one group over an academic year. Finally, we compare the groups again on a post-test.

We should conduct additional research on development efforts that are any of the following: cooperative, interdisciplinary, interinstitutional, and in support of instructional technology. Furthermore, studies that link specific features of development activities to specific types of outcomes and impact are needed. For example, we need more work comparing the impact of workshops versus consultations versus small grants on changes in instructor attitudes and teaching behaviors. Thus, we need more studies that follow instructors who have participated in a particular type of development over time, observing classes, analyzing materials, and interviewing faculty and students to assess the relationship of the development activity to teacher behaviors and student outcomes. Finally, we must continue to assess what services are needed by, and are effective for, specific groups or types of instructors (e.g., by discipline or by years of instruction).

Assessing how and why instructional development works in different types of institutions is essential. What features of the institutional structure and culture interact in what ways with instructional development to change teaching and learning on campus? For example, more research is needed on the role of the academic department and department chairperson in instructional development as well as on various reward structures as a form of instructional support. We might compare, via interviews and analysis of documents, the climate in similar or related departments (e.g., Are faculty encouraged to use development services? Are development activities rewarded in annual review? To what extent are teaching and learning discussed in faculty meetings or teaching brown bags?). These departments could then be compared on instructor teaching attitudes and behaviors and student measures such as engagement or faculty contact.

How can we accomplish this, admittedly, very difficult work? We can do some of this research as part of an effort to support the scholarship of teaching and learning (SoTL) on our campuses and cooperatively with other campuses or groups (e.g., The Carnegie Foundation for the Advancement of Teaching or the Professional and Organizational Development Network (POD) or disciplinary associations). We need to draw upon the research expertise of faculty colleagues in departments such as education, psychology, and sociology. Offices and staff in instructional development can reach out to staff in college or university grant and research offices on their campus looking for ways (grants, joint projects, cosponsored workshops or internal grant opportunities) to work together on SoTL and program evaluation. For some campuses, partnerships with institutional research or assessment staff could yield useful research. Given external (e.g., parents, legislators, board members) and internal pressures to improve instruction, as well as increasing institutional and personal investments (e.g., staff, equipment, space, funds, participants’ time) in instructional development, we (faculty developers, faculty, and administrators) must find the time, resources, and strategies to encourage and support this research as well as to conduct more of it ourselves.

ACKNOWLEDGMENTS AND NOTES

A different version of this paper was written for the American Sociological Association’s summer workshop on the Scholarship of Teaching and Learning, August 2000. Thanks to Nancy Bragg, K. Patricia Cross, and Nancy A. Diamond for comments on earlier drafts of this paper.

REFERENCES

  • Anderson, L. E., & Carta-Falsa, J. S. (1996). Reshaping faculty interaction: Peer mentoring groups. Journal of Staff, Program, and Organization Development, 14, 71-75.
  • Bernstein, D. J., Jonson, J., & Smith, K. (2000). An examination of the implementation of peer review of teaching. New Directions for Teaching and Learning, No. 83. San Francisco, CA: Jossey-Bass.
  • Black, B. (1998). Using the SGID method for a variety of purposes. In D. DeZure & M. Kaplan (Eds.), To improve the academy: Vol. 17. Resources for faculty, instructional, and organizational development (pp. 245-262). Stillwater, OK: New Forums Press.
  • Caffarella, R. S., & Zinn, L. F. (1999). Professional development for faculty: A conceptual framework of barriers and supports. Innovative Higher Education, 23, 241-254.
  • Chism, N. V. N., & Szabo, B. (1996). Who uses faculty development services? In L. Richlin & D. DeZure (Eds.), To improve the academy: Vol. 15. Resources for faculty, instructional, and organizational development (pp. 115-128). Stillwater, OK: New Forums Press.
  • Chism, N. V. N., & Szabo, B. (1997). How faculty development programs evaluate their services. Journal of Staff, Program, and Organization Development, 15, 55-62.
  • Eison, J., & Stevens, E. (1995). Faculty development workshops and institutes. In W. Alan Wright (Ed.), Teaching improvement practices: Successful strategies for higher education (pp. 206-227). Bolton, MA: Anker.
  • Eleser, C. B., & Chauvin, S. W. (1998). Professional development how to’s: Strategies for surveying faculty preferences. Innovative Higher Education, 22, 181-201.
  • Emery, L. J. (1997). Interest in teaching improvement: Differences for junior faculty. Journal of Staff, Program, and Organization Development, 15, 29-34.
  • Farmer, D. (1999). Course-embedded assessment: A catalyst for realizing the paradigm shift from teaching to learning. Journal of Staff, Program, and Organization Development, 16, 199-211.
  • Feldman, K. A., & Paulsen, M. B. (1999). Faculty motivation: The role of a supportive teaching culture. New Directions for Teaching and Learning, No. 78. San Francisco, CA: Jossey-Bass.
  • Fulton, C., & Licklider, B. L. (1998). Supporting faculty development in an era of change. In D. DeZure & M. Kaplan (Eds.), To improve the academy: Vol. 17. Resources for faculty, instructional, and organizational development (pp. 51-66). Stillwater, OK: New Forums Press.
  • Goodwin, L. D., Stevens, E. A., Goodwin, W. L., & Hagood, E. A. (2000). The meaning of faculty mentoring. Journal of Staff, Program, and Organization Development, 17, 17-30.
  • Kitano, M. K., Dodge, B. J., Harrison, P. J., & Lewis, R. B. (1998). Faculty development in technology applications to university instruction: An evaluation. In D. DeZure & M. Kaplan (Eds.), To improve the academy: Vol. 17. Resources for faculty, instructional, and organizational development (pp. 263-290). Stillwater, OK: New Forums Press.
  • Licklider, B. L., Schnelker, D. L., & Fulton, C. (1997). Revisioning faculty development for changing times: The foundation and framework. Journal of Staff, Program, and Organization Development, 15, 121-133.
  • Lieberman, D. A., & Reuter, J. (1996). Designing, implementing and assessing a university technology-pedagogy institute. In L. Richlin & D. DeZure (Eds.), To improve the academy: Vol. 15. Resources for faculty, instructional, and organizational development (pp. 231-249). Stillwater, OK: New Forums Press.
  • Menges, R. J. (1997). Fostering faculty motivation to teach: Approaches to faculty development. In J. L. Bess (Ed.), Teaching well and liking it (pp. 407-423). Baltimore, MD: Johns Hopkins University.
  • Middendorf, J. K. (1998). A case study in getting faculty to change. In D. DeZure & M. Kaplan (Eds.), To improve the academy: Vol. 17. Resources for faculty, instructional, and organizational development (pp. 203-224). Stillwater, OK: New Forums Press.
  • Patrick, S. K., & Fletcher, J. J. (1998). Faculty developers and change agents: Transforming colleges and universities into learning organizations. In D. DeZure & M. Kaplan (Eds.), To improve the academy: Vol. 17. Resources for faculty, instructional, and organizational development (pp. 155-170). Stillwater, OK: New Forums Press.
  • Paulsen, M. B., & Feldman, K. A. (1995). Taking teaching seriously: Meeting the challenge of instructional improvement. ASHE-ERIC Higher Education Report No. 2. Washington, DC: The George Washington University, School of Education and Human Development.
  • Rauton, J. T. (1996). A home-grown faculty development program. Journal of Staff, Program, and Organization Development, 14, 5-9.
  • Robertson, D. L. (1999). Professors’ perspectives on their teaching: A new construct and developmental model. Innovative Higher Education, 23, 271-294.
  • Sandy, L. R., Meyer, S., Goodnough, G. E., & Rogers, A. T. (2000). Faculty perceptions of the importance of pedagogy as faculty development. Journal of Staff, Program, and Organization Development, 17, 39-50.
  • Scott, D. C., & Weeks, P. A. (1996). Collaborative staff development. Innovative Higher Education, 21, 101-111.
  • Seldin, P. (1995). Improving college teaching. Bolton, MA: Anker.
  • Smith, B. (1998). Adopting a strategic approach to managing change in learning and teaching. In D. DeZure & M. Kaplan (Eds.), To improve the academy: Vol. 17. Resources for faculty, instructional, and organizational development (pp. 225-242). Stillwater, OK: New Forums Press.
  • Smith, R. A., & Geis, G. L. (1996). Professors as clients for instructional development. In L. Richlin & D. DeZure (Eds.), To improve the academy: Vol. 15. Resources for faculty, instructional, and organizational development (pp. 129-153). Stillwater, OK: New Forums Press.
  • Stanley, C. A. (2000). Factors that contribute to the teaching development of faculty development center clientele: A case study of ten university professors. Journal of Staff, Program, and Organization Development, 17, 155-169.
  • Stanley, C. A., Porter, M. E., & Szabo, B. L. (1997). An exploratory study of the faculty-client relationship. Journal of Staff, Program, and Organization Development, 14, 115-123.
  • Sweidel, G. B. (1996). Partners in pedagogy: Faculty development through the scholarship of teaching. In L. Richlin & D. DeZure (Eds.), To improve the academy: Vol. 15. Resources for faculty, instructional, and organizational development (pp. 267-274). Stillwater, OK: New Forums Press.
  • Taber, L. S. (1999). Faculty development for instructional technology: A priority for the new millennium. Journal of Staff, Program, and Organization Development, 15, 159-174.
  • Weimer, M. E. (1990). Improving college teaching. San Francisco, CA: Jossey-Bass.
  • Wildman, T. M., Hable, M. P., Preston, M. M., & Magliaro, S. G. (2000). Faculty study groups: Solving good problems through study, reflection, and collaboration. Innovative Higher Education, 24, 247-263.
  • Woods, J. Q. (1999). Establishing a teaching development culture. In R. J. Menges & Associates (Eds.), Faculty in new jobs: A guide to settling in, becoming established, and building institutional support (pp. 268-290). San Francisco, CA: Jossey-Bass.
  • Wright, D. L. (1996). Moving toward a university environment which rewards teaching: The faculty developer’s role. In L. Richlin & D. DeZure (Eds.), To improve the academy: Vol. 15. Resources for faculty, instructional, and organizational development (pp. 185-194). Stillwater, OK: New Forums Press.
  • Wright, D. L. (2000). Faculty development centers in research universities: A study of resources and programs. In M. Kaplan & D. Lieberman (Eds.), To improve the academy: Vol. 18. Resources for faculty, instructional, and organizational development (pp. 291-301). Bolton, MA: Anker.
  • Wright, W. A. (1995). Teaching improvement practices: Successful strategies for higher education. Bolton, MA: Anker.
  • Wright, W. A., & O’Neil, M. C. (1995). Teaching improvement practices: International perspectives. In W. A. Wright (Ed.), Teaching improvement practices: Successful strategies for higher education (pp. 1-57). Bolton, MA: Anker.

Contact:

  • Kathleen McKinney

  • Center for the Advancement of Teaching

  • Box 3990

  • Illinois State University

  • Normal, IL 61790-3990

  • (309) 438-5943

  • Email: [email protected]

Kathleen McKinney is Professor of Sociology and Director of the Center for the Advancement of Teaching at Illinois State University. In addition, she supervises the university assessment office. Her current research interests include the scholarship of teaching and learning, sexual harassment in higher education, and personal relationships. McKinney is active in organizations and research related to both sociology and faculty development.