In this review of 138 studies on the impact of educational development practices, we discuss the idea of impact and summarize previous studies of this topic. We then present an overview of findings on impact from current studies of typical educational development activities: workshops, formal courses, communities of practice, consultation, mentoring, and awards and grants programs. We conclude that although the studies vary in quality, the sheer volume of results offers guidance for development practice.

Calls to improve the assessment of the impact of educational development have permeated the literature on this topic from early studies, such as Hoyt and Howard (1978), to the present (Stes, Min-Leliveld, Gijbels, & Van Petegem, 2010). Advice to new centers emphasizes the collection of information on effectiveness for improvement as well as accountability. The literature on educational development stresses the scholarship of practice, the serious study of educational development work that can establish credibility, and sound warrants for specific development approaches (Baume, 2002). Nearly fifty years after the formal establishment of educational development centers in North America, these calls continue, yet the consensus among developers is that documentation of impact is still lacking (Weimer, 2007).

The Idea of Impact in Educational Development

Educational developers and training specialists more generally have suggested several levels at which the impact of their work might be assessed. The most cited of these is Kirkpatrick (1998), who, along with Guskey (2000), listed four levels that can be adapted for this context: satisfaction of the participant, learning, application, and results for the organization and its mission. Chism and Szabo (1998) listed similar levels but included a focus on the learner rather than the organization: immediate satisfaction of the participant, change in teaching beliefs or knowledge, change in teaching behaviors, and change in student learning. Smith (2004) added a focus on the career trajectory, suggesting that impact can occur at the level of the individual participants, their careers, their students’ experiences, and the effect on the teacher’s department and the institution.

More comprehensive lists that build on the work of these researchers have been provided by three overviews. Kreber and Brook (2001) asserted that impact can be assessed at six levels: (1) participants’ perceptions of and satisfaction with the intervention, (2) their beliefs about teaching and learning, (3) their teaching performance, (4) students’ perceptions of participants’ teaching performance, (5) students’ learning, and (6) the culture of the institution. Steinert et al. (2006) developed six similar levels, adding to this list a separate category for changes in teacher knowledge and skills but not dividing students’ perceptions and learning. Finally, Stes et al. (2010), working from Steinert et al.’s (2006) framework, eliminated the satisfaction level and added four levels of learning change for teachers (attitudes, conceptions, knowledge, and skills), three levels for student impact (perceptions, study approaches, and learning outcomes), and levels for teacher behavior change and institutional impact.

Such frameworks are important in conceptualizing studies, and advances in their development are significant steps for the study of impact. Overviews of the findings and methods used in the literature on impact have also continued to accrue, a development that has helped to describe patterns that are useful for practitioners and researchers alike.

Past Overviews of the Impact Literature

Overviews of educational development impact activity and findings span more than thirty years and have become substantially more detailed with time. Chism and Szabo (1998) found support for the belief that the attention of most educational development assessment efforts is at the level of participant satisfaction rather than impact on actions or thinking, and that measurement of impact at the level of the learner or the overall organization is quite rare and complicated. Using categories similar to those of the Chism and Szabo study, Hines (2009) found that the same situation continues to exist. These studies follow general reviews in the literature (Hoyt & Howard, 1978; Levinson-Rose & Menges, 1981; Weimer & Lenze, 1991), as well as overviews of activity in certain spheres (see, for example, Schonwetter & Nazarko, 2009, on activities for new faculty) that concluded that rigorous evaluation of faculty development programs is rare. In a summary review, McKinney (2002) listed the main overall patterns from these studies: they document high levels of satisfaction, elevated levels of collaboration and community, and conceptual change in teachers associated with interventions. She found that few studies measure student perceptions and called for more systematic research.

Three more recent studies looked at trends in current literature, all from different perspectives. In the first, Bamber (2008) examined a selection of studies evaluating the results of academic development programs. She focused explicitly on how theory was used and on the stance of the evaluation. She concluded that large-scale studies (cross-institutional, systematic) offer political advantages and the opportunity to use statistical techniques designed for large populations, but that small-scale studies, conducted in local settings with research designs that probe depth and supply detail, are especially useful. While emphasizing that evaluation is a complex and uncertain activity, she offered hope that patterns emerging from locally designed and theory-based studies will offer important insights for practice.

Kucsera and Svinicki (2010) applied five standards of rigorous research to their examination of 750 studies that appeared in seven leading journals from 1997 to 2001. They excluded studies that did not meet seven criteria, including a focus on teaching and learning, a focus on faculty, an activity initiated by a faculty developer, and a description of methods; this screening eliminated all but ten studies. On the basis of this review, the authors agreed with prior surveys of the literature, finding that rigor is lacking; they issued a general call for more rigorous work, stating that qualitative in-depth approaches are needed.

Steinert et al. (2006) examined fifty-three papers that explored results of educational development interventions for faculty in medicine and presented detailed summaries of eight exemplary studies. The studies generally found beneficial effects from educational development activities, citing changes in teachers’ attitudes, knowledge, skills, and behaviors, as reported by teachers and their students. They found that few studies focused on the levels of student learning or organizational impact and that the strongest results were associated with interventions that included experiential learning activities, used a variety of pedagogical approaches and good instructional design, provided frequent feedback, and created a positive social context for the faculty learners.

The extensive review by Stes et al. (2010) was based on an initial eighty sources identified using teaching development descriptors in the ERIC database in 2008. After applying criteria for inclusion, such as postsecondary context, intentional initiative, focus on impact, and empirical data, they selected thirty-six studies for further review on the basis of a scan of the abstracts. All of the studies, except one that was inconclusive, found positive effects on at least one of the areas of impact that they studied, although several could not identify an impact on all of those areas. The authors concluded that designs from 2000 to the present do not differ significantly from earlier ones. They acknowledged the challenges of studying impact and suggested that future research focus more on actual behavioral outcome measures instead of self-reported outcomes, stating that mixed-methods studies, quasi-experimental designs, and use of standard instruments would improve the quality of future research.

Despite this critical history of past findings, the volume of studies that have looked at the impact of various common faculty development approaches, often within the context of a single case, continues to grow. The overview of these studies that follows presents findings by type of intervention, with the goal of informing those in practice settings of the potential efficacy of choices they make in allocating their efforts and resources. Our review thus differs from other recent ones that have focused more on methodological issues with the literature than on implications for practice.

Methods

To provide an overview of the findings of existing research studies, we searched literature from 2000 to the present in the following international publications: To Improve the Academy, International Journal for Academic Development, Journal of Faculty Development (and its predecessor, Journal of Staff, Program, and Organizational Development), Journal of Graduate Teaching Assistant Development, College Teaching, and International Journal of Teaching and Learning in Higher Education. We also examined the results produced through a search of popular databases in education and reference lists in the articles that were originally found. We kept information on most sources in a Zotero database, with the exception of some that were contextual in nature. We used these criteria for identifying articles:

  • About a developer-led activity: Developer is defined as one who leads an activity designed to support or improve the practice of another.

  • About teaching: Can be about instructional strategies, design work, curriculum development or other aspects.

  • Aimed at anyone who teaches: Can be graduate students or full-time or part-time faculty.

  • About any disciplinary context or from any country.

  • Within a higher education context (as opposed to staff development in K-12 schools or corporate settings).

We coded each piece by type of intervention, target of intervention, disciplinary (or general) context, and the method the researchers used to study impact. We wrote a summary of each study, included the abstract (where available), and attached a PDF (when further reference would be needed). The three team members coded several pieces together at the start of the process to establish consistency of coding.

In this way, we compiled information on 138 studies in our database. Table 9.1 displays the type of interventions studied and the methods used to study them. We did not attempt to select studies based on perceived quality or specific criteria for the type of research methods that were used, although we eliminated studies that did not detail the use of any data collection methods that warranted their claims. Our results, then, are descriptive of a broad cross-section of impact evaluation.

Results

In the following sections, we present highlights of the results, focusing only on one or two exemplary studies in each section. We intentionally chose examples from across disciplines and methods used to show variety, as well as examples that depicted representative outcomes across the range of studies on a given intervention. An annotated bibliography with full information on all the studies, including the structural change category that was omitted here due to space constraints, is available on the POD Network’s Wikipodia page. The results are presented here by type of intervention. General statements about a specific type of intervention carry the caveat that the statement holds true only for implementations that are well designed and well executed.

Table 9.1 Intervention Studied and Method of Analysis Used in Research.

Type of Development Activity (Number of Studies)
  Workshops: 62
    Workshop series: 13
    Institutes of one day or more (11) or shorter workshops (38): 49
  Course: 20
  Communities of practice: 29
    Faculty learning community: 17
    Project: 12
  Consultation: 18
    With professional consultant: 5
    With peer: 8
    Involving student evaluations: 5
  Grants and awards: 7
  Structural change: 6
  Mentor: 4
  Other: 3
  Total: 149(a)

Method of Analysis (Number of Studies)
  Debrief or informal interview: 11
  Document analysis (for example, syllabi): 45
  Interview: 30
  Observation: 19
  Scores on standard instruments: 26
  Survey: 75
  External evaluation: 3
  Other: 6
  Total: 215(a)

(a) Total unique studies: 138. Some studies included more than one intervention or method.

Workshops

In general, workshops are used to elevate the visibility of professional development units or activities or to interest faculty in more intensive interventions. Workshops are also used in combination with other instructional development approaches, such as coaching, in which faculty members attend an initial workshop to introduce them to a concept and then receive follow-up coaching to implement the approach. However, in this section, we review studies of workshops as the major approach being studied. In our presentation of the sixty-two studies we examined, we differentiate institutes of one day or more (eleven), shorter workshops (thirty-eight), and workshops in a series (thirteen).

INSTITUTES OF ONE DAY OR MORE.

Workshops delivered over the course of one day or more on a focused topic or theme have been shown to have positive effects on teaching attitudes and changes in teaching practices. For example, through a follow-up e-mail survey, Kahn and Pred (2001) found that forty-six of the fifty participants in a one-day institute on using instructional technology reported that they were using technology effectively in their teaching four months later. The faculty members were also more motivated to seek further development.

In addition to campus-based institutes, we found a growing literature on institutes sponsored by professional associations and organizations (Walstad & Salemi, 2011, for example, report on a program funded by the National Science Foundation).

SHORTER WORKSHOPS.

For many teaching centers, workshops on various topics are often offered in short, one-time offerings. In a one-hour workshop teaching a specific skill (for example, the one-minute preceptor technique in medical clinical teaching), Furney et al. (2001) found that residents who were taught the technique reported changes in their behavior and appreciation for the strategy on a follow-up survey. Students of the residents showed improvements in their skills and greater motivation to do outside reading compared with students of control-group residents.

Assuming quality is high, there appear to be moderate improvements in demonstrated teaching behaviors as the length of the workshop increases. For example, a study of a four-hour workshop for medical faculty on using more interactive classroom approaches found, through surveys and videotapes of participants and a control group of nonparticipants, that participants both reported and were observed using more interactive techniques that increased student participation (Nasmith & Steinert, 2001).

WORKSHOPS IN A SERIES.

A growing number of studies examine the effectiveness of workshop series, in which participants attend multiple workshops over time, often based on their personalized needs. Such series, which blend elements of traditional workshops with communities of practice, have been shown to be highly relevant and to contribute to changes in faculty practice. For example, Ho, Watkins, and Kelly (2001) charted the growth over time of participants in a four-session conceptual change course for college teachers against a control group, finding that perceptions of teaching, teaching behaviors, and students’ approaches to study all changed favorably in the treatment group (see also Ho, 2000).

Formal Courses in Teaching

Formal courses on teaching offered to faculty members over a term are quite common in the United Kingdom and countries modeled on its system, where faculty who seek promotion are often required to document completion of these courses. In North America, weeklong institutes or a sequence of full-day sessions over a period of time are more likely to be the format of choice. In general, research on these extended experiences finds that they influence teachers’ thinking about teaching but concludes that documenting effects on student learning is difficult.

There are some exceptions. Gibbs and Coffey (2000) studied the effects of teaching development courses that aimed at developing more student-centered approaches, finding effects on students’ learning. Lawson, Fazey, and Clancy (2007) found positive change in teachers’ beliefs and approaches and students’ approaches to studying following a teaching development course. Stes (2008) found that a 140-hour course in general teaching methods had significant positive effects on how teachers approached their work, but did not find definitive evidence that students learned better in the classes of teachers who took the course. She estimated that the way a course is taught influences only about 6 percent of the outcome, given the importance of student motivation, time on task, and other learner factors.

Although these studies vary in the interventions they studied and their results, they suggest that longer, more intensive learning experiences have beneficial effects on faculty teaching beliefs and behaviors, which sometimes can be linked to enhanced student learning. With the exception of those working in countries where formal courses are mandatory for promotion, developers do not often have this format available to them since faculty are unwilling to invest extended time in formal learning about teaching. A more realistic approach that may lead to similar impacts would be to support meetings of a cohort over time, a format exemplified by various types of communities of practice.

Communities of Practice

In our literature base, we found twenty-nine studies of faculty communities of practice, defined here as groupings of a cohort of faculty members engaged in dialogue about teaching for a semester or more. These are organized around a variety of topics and may involve several elements, such as course revision or inquiry projects, as well as regular discussion among participants. Communities of practice can be formally designated as faculty learning communities (Cox, 2003) or teaching and learning circles (Erklenz-Watts, Westbay, & Lynd-Balta, 2006) or can simply be general series of project- or dialogue-based meetings. Commitment of time and mutual reinforcement of learning among participants are key features of these interventions. Generally, writers in the literature have found positive effects on teaching development associated with communities of practice. The combination of facilitated peer exchange, sharing of questions and solutions, and the task-oriented nature of the regular gatherings is found to advance teaching knowledge and behaviors.

In sum, research on communities of practice has blossomed over the last decade, perhaps because such sustained interventions provide a more practical context for collecting data and the promise of more recognizable impact accruing from the time invested by the participants. Although there is great variation among these types of activities, the studies document solid gains for participants; some even are able to trace these to impacts on student learning.

The examples that follow highlight studies of general and specialized communities of practice.

GENERAL COMMUNITIES OF PRACTICE.

Many communities of practice focus on conceptual change. Qualters (2009) evaluated Dialogues, a sustained program that engaged thirty-one participants at two institutions in jointly examining their assumptions about teaching. Through analysis of transcripts and notes of meetings and participant survey results, she concluded that participants were better able to think about the assumptions behind their practice and made plans for or enacted change in their teaching.

PROJECT-BASED COMMUNITIES.

Other communities of practice are project based, often involving course redesign. O’Meara (2005) collected pre-, during-, and postprogram self-ratings; interviewed faculty; and observed meetings of a program for early-career science and technology faculty that spanned an academic year, in which participants attended sixteen dinner seminars and completed a course redesign project. She found gains in the impact of the program on teaching careers (commitment, satisfaction, teaching skills), participants’ understanding of how students learn, and their understanding and use of assessment. She concluded that the project component of the program was crucial for participants’ self-knowledge and their understanding of how their actions influenced student learning, a finding that Gusic et al. (2010) strongly endorse within the context of medical education.

SCHOLARSHIP OF TEACHING AND LEARNING COMMUNITIES.

Scholarship of teaching and learning is often the focus of communities of practice. In her analysis of a year-long scholarship of teaching and learning (SoTL) program in which eight faculty members participated, Schroeder (2005) used products of faculty scholarship to document "transformational learning" among the participants. Their ability to articulate assumptions, reflect critically, and take action to implement new practices was associated with participation in the program.

Mentoring

Although there is a considerable literature base about mentoring in general, we found only four educational development studies devoted solely to its impact. Three of these centered on increasing technological skills through mentoring by a more experienced colleague. In one study of a more general and common use of mentoring, Miller and Thurston (2009) examined the impact of a new faculty mentoring program over the course of nine years of operation through formative annual surveys, summative surveys at the fifth and ninth years, and interviews with administrators. Based on responses from a group of twenty-nine mentors and twenty-three mentees, the authors found that 55 percent of the respondents said that the program aided their transition, while 27 percent said that the program influenced their teaching and research and 34 percent said that the mentoring influenced their ability to publish and present their research.

Across the studies, authors found that successful mentoring programs are those in which mentees had flexibility in shaping both the topics and the ways in which they interacted with mentors. They often cited reciprocal benefits for mentors and those mentored, including increased confidence, improvements in specific targeted teaching skills, and richer conceptualization of learning.

Consultation

Piccinin (1999) and Piccinin and Moore (2002) found that instructional consultation helped improve the student ratings of younger faculty within a year and those of older faculty within one to three years. Their findings echo the results of major meta-analyses of consultation with feedback, such as Cohen (1980) and Menges and Brinko (1986). We encountered only eighteen studies of consultation in our search, and several of these involved other interventions as well.

Although studies of the beneficial effects of consultation and feedback have focused primarily on discussions of student ratings, some literature on consultation in general also documents resulting changes. McShannon and Hynes (2005) reported on a semester-long program for engineering and science faculty that involved weekly classroom observations and discussions with a consultant. Their results are based on sixty-two faculty members who participated for a semester during the five years of the program that were studied. In addition to finding that the faculty reported greater use of active learning methods and increases in student learning, the authors found small increases in the number of students receiving grades of A, B, and C, as well as gains in the numbers of students remaining in science and technology programs.

In addition to these examples of consultation with an educational developer, the literature documents positive effects of consultation with a peer. For example, Bell and Mladenovic (2008) found that peer observers were able to help faculty improve their recognition of strengths and weaknesses and develop motivation to make changes. They used results of the observations, a survey, and focus group data to evaluate the effectiveness of the program.

In general, these studies of the effects of consultation suggest that talking with an expert or a knowledgeable peer about a particular teaching context is associated with changes in teaching knowledge and behaviors when the consultation is done skillfully. The literature supports the case that those who establish a consulting relationship with faculty members are likely to be able to support their transition to successful implementation of teaching change.

Awards and Grant Programs

The most common extrinsic rewards educational developers use in promoting faculty change are awards and grants. Studies of faculty awards programs as a developmental approach have largely failed to identify much impact beyond generally reinforcing the value the institution places on teaching. Chism and Szabo (1997) were not able to locate studies documenting that awards either prompted award recipients to further their own growth or encouraged others to make improvements in order to gain the awards. Chism (2006) analyzed existing awards programs and found that very few had established criteria or systematic review processes. In a community college context, Peterson (2005) found "ambivalent attitudes, even hostility and anger, toward the formalized nomination process for awards as well as the way in which awards are disseminated" (p. 157).

Less is known about the impact of grants on faculty change. Seven of the 138 studies reviewed used grants as the primary intervention. Grant programs were commonly combined with other types of interventions, such as instructional coaching coupled with course releases. For example, Morris and Fry’s (2006) study of a small grants program tied to SoTL found that recipients reported growth in their understandings of practice. They cited the opportunity to reflect on and develop new teaching skills and expertise, their development of partnerships, and interactions with peers on teaching and learning issues as most beneficial.

The studies reviewed generally showed both short-term and continuing improvement in teaching practices and reflective thinking. However, the combination of grants with other strategies makes it difficult to determine their effectiveness alone. For example, Cox (2003) found that small stipends or grants associated with faculty learning communities were an important motivator. In all of the studies examined for this review, the support offered along with the grant was highlighted over the monetary incentive itself.

In sum, the literature suggests that grants may be an initial motivator, inspiring some faculty to engage in teaching development activities, but the support associated with the grant might be more important. Strong beneficial effects of awards without accompanying development programs have not been extensively documented for those who receive them, their peers, or the culture of the institution.

Discussion

Based on this review of the literature on the impact of educational development, we offer several observations:

  • Studies are increasing in quantity. A disproportionate number (one-third) of the studies we coded have been published in the past two years. The 138 we examined are those that were readily retrievable; we believe that there are many more not available in online databases. We found that most published studies describing interventions now have an assessment section. In addition, we found growth in the documentation of faculty development efforts in discipline-specific areas, such as medicine and science, technology, engineering, and mathematics fields.

  • Studies are assessing other levels of outcomes beyond participant satisfaction. Although most focus on knowledge change, many assess changes in teacher behavior, and some are attempting to explore student learning changes tied to these teacher behavioral changes. Studies of institutional impact are still infrequent.

  • The methods used in studies vary widely. While survey research and self-reported data are still the prevailing mode, quasi-experimental design is frequently employed (often in medical education), and qualitative methods such as observation, interview, and analysis of documents are on the upswing.

  • Authors of the overwhelming majority of the studies report specific, positive results; some are not able to demonstrate clear outcomes. There are few studies of failures.

  • The presence of increased detail regarding context and program design of assessed activities is enhancing opportunities for developers to judge the transferability of results to their own settings.

Questions of quality pervade reviews of research on educational development impact. First, there is the quality of the research itself. Across the studies, we encountered research that was well described and methodologically sophisticated; there were also studies that were less detailed in their descriptions and used more informal methods. A second issue is the quality of the intervention itself. In this collection of published articles, we read descriptions of interventions and how they were designed and delivered, but we do not know whether the workshop facilitator was skilled, whether the faculty attended all sessions of the course or learning community, whether the consultant was experienced or new, and a host of other factors that may have affected the quality of the intervention that was assessed. In most cases, contextual factors are described, but the analysis often does not test whether different outcomes were found for participants by gender, discipline, or other variables. Most studies are of interventions located at only one institution and were performed by researchers associated with the unit in which the program was located. Finally, all of the studies found positive results on at least some of the dimensions of impact that were explored.

Conclusion

Despite these limitations, the body of accrued literature on impact is now substantial enough to reveal patterns that can provide guidance for decision making within educational development programs, as well as providing support for the efficacy of development practice. Believing that the establishment of a systematic scholarship is essential for the field of educational development, we are heartened by this exploration of the literature on impact and urge that this line of research continue. We also call for increased efforts, such as postings to Wikipodia, that will enable the educational development community to collaborate in collecting and analyzing studies in ways that make them easily accessible for use in practice and further study.

References

  • Bamber, V. (2008). Evaluating lecturer development programmes: Received wisdom or self-knowledge? International Journal for Academic Development, 13, 107-116. doi:10.1080/13601440802076541.
  • Baume, D. (2002). Scholarship, academic development and the future. International Journal for Academic Development, 7(2), 109-112.
  • Bell, A., & Mladenovic, R. (2008). The benefits of peer observation of teaching for tutor development. Higher Education: The International Journal of Higher Education and Educational Planning, 55(6), 735-752.
  • Chism, N. (2006). Teaching awards: What do they award? Journal of Higher Education, 77(4), 589-617.
  • Chism, N., & Szabo, B. (1997). Teaching awards: Assessing their impact. In D. DeZure (Ed.), To improve the academy: Resources for faculty, instructional, and organizational development, Vol. 16 (pp. 181-199). San Francisco, CA: Jossey-Bass/Anker.
  • Chism, N., & Szabo, B. (1998). How faculty development programs evaluate their services. Journal of Staff, Program, and Organization Development, 15(2), 55-62.
  • Cohen, P. (1980). Effectiveness of student rating feedback on the improvement of instruction: A meta-analysis of findings. Research in Higher Education, 13, 321-341.
  • Cox, M. D. (2003). Proven faculty development tools that foster the scholarship of teaching in faculty learning communities. In C. Wehlburg & S. Chadwick-Blossey (Eds.), To improve the academy: Resources for faculty, instructional, and organizational development, Vol. 21 (pp. 109-142). San Francisco, CA: Jossey-Bass/Anker.
  • Erklenz-Watts, M., Westbay, T., & Lynd-Balta, E. (2006). An alternative professional development program: Lessons learned. College Teaching, 54, 275-280. doi:10.3200/CTCH.54.3.275-280.
  • Furney, S. L., Orsini, A. N., Orsetti, K. E., Stern, D. T., Gruppen, L. D., & Irby, D. M. (2001). Teaching the one-minute preceptor: A randomized controlled trial. Journal of General Internal Medicine, 16(9), 620-624.
  • Gibbs, G., & Coffey, M. (2000). Training to teach in higher education: A research agenda. Teacher Development, 4(2), 31-44.
  • Gusic, M. E., Milner, R. J., Tisdell, E. J., Taylor, E. W., Quillen, D. A., & Thorndyke, L. E. (2010). The essential value of projects in faculty development. Academic Medicine, 85(9), 1484-1491. doi:10.1097/ACM.0b013e3181eb4d17.
  • Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Sage.
  • Hines, S. R. (2009). Investigating faculty development program assessment practices: What’s being done and how can it be improved? Journal of Faculty Development, 23(3), 5-19.
  • Ho, A. (2000). A conceptual change approach to staff development: A model for program design. International Journal for Academic Development, 5(1), 30-41.
  • Ho, A., Watkins, D., & Kelly, M. (2001). The conceptual change approach to improving teaching and learning: An evaluation of a Hong Kong staff development programme. Higher Education, 42, 143-169.
  • Hoyt, D. P., & Howard, G. S. (1978). The evaluation of faculty development programs. Research in Higher Education, 8, 25-38.
  • Kahn, J., & Pred, R. (2001). Evaluation of a faculty development model for technology use in higher education for late adopters. Computers in Schools, 18(4), 127-150.
  • Kirkpatrick, D. (1998). Evaluating training programs: The four levels (2nd ed.). San Francisco, CA: Berrett-Koehler.
  • Kreber, C., & Brook, P. (2001). Impact evaluation of educational development programmes. International Journal for Academic Development, 6(2), 96-108.
  • Kucsera, J. V., & Svinicki, M. (2010). Rigorous evaluations of faculty development programs. Journal of Faculty Development, 24(2), 5-18.
  • Lawson, R. J., Fazey, J. A., & Clancy, D. M. (2007). The impact of a teaching in higher education scheme on new lecturer’s personal epistemologies and approaches to teaching. In C. Rust (Ed.), Improving student learning through teaching. Oxford, UK: Oxford Centre for Staff Development.
  • Levinson-Rose, J., & Menges, R. J. (1981). Improving college teaching: A critical review of research. Review of Educational Research, 51(3), 403-434.
  • McKinney, K. (2002). Instructional development: Relationships to teaching and learning in higher education. In D. Lieberman & C. Wehlburg (Eds.), To improve the academy: Resources for faculty, instructional, and organizational development, Vol. 20 (pp. 225-237). San Francisco, CA: Jossey-Bass/Anker.
  • McShannon, J., & Hynes, P. (2005). Student achievement and retention: Can professional development programs help faculty GRASP it? Journal of Faculty Development, 20(2), 87-93.
  • Menges, R., & Brinko, K. (1986, April). Effects of student evaluation feedback: A meta-analysis of higher education. Paper presented at the annual meeting of the American Educational Research Association, Washington, DC.
  • Miller, T. N., & Thurston, L. P. (2009). Mentoring junior professors: History and evaluation of a nine-year model. Journal of Faculty Development, 23(2), 35-40.
  • Morris, C., & Fry, H. (2006). Enhancing educational research and development activity through small grant schemes: A case study. International Journal for Academic Development, 11(1), 43-56. doi:10.1080/13601440600579001.
  • Nasmith, L., & Steinert, Y. (2001). The evaluation of a workshop to promote interactive lecturing. Teaching and Learning in Medicine, 13(1), 43-48.
  • O’Meara, K. (2005). The courage to be experimental: How one faculty learning community influenced faculty teaching careers, understanding of how students learn, and assessment. Journal of Faculty Development, 20(3), 153-160.
  • Peterson, C. (2005). Is the thrill gone? An investigation of faculty vitality within the context of a community college. In S. Chadwick-Blossey & D.R. Robertson (Eds.), To improve the academy: Resources for faculty, instructional, and organizational development, Vol. 23 (pp. 144-161). San Francisco, CA: Jossey-Bass/Anker.
  • Piccinin, S. (1999). How individual consultation affects teaching. In C. Knapper & S. Piccinin (Eds.), New directions in teaching and learning: No. 79. Using consultants to improve teaching (pp. 71-84). San Francisco, CA: Jossey-Bass.
  • Piccinin, S., & Moore, J. P. (2002). The impact of individual consultation on the teaching of younger versus older faculty. International Journal for Academic Development, 7(2), 123-135.
  • Qualters, D. M. (2009). Creating a pathway for teacher change. Journal of Faculty Development, 23(1), 5-13.
  • Schonwetter, D. J., & Nazarko, O. (2009). Investing in our next generation: Overview of short courses, and teaching and mentoring programs for newly-hired faculty in Canadian universities (part 2). Journal of Faculty Development, 23(1), 54-63.
  • Schroeder, C. (2005). Evidence of the transformational dimensions of the scholarship of teaching and learning: Faculty development through the eyes of SOTL scholars. In S. Chadwick-Blossey & D. R. Robertson (Eds.), To improve the academy: Resources for faculty, instructional, and organizational development, Vol. 23 (pp. 47-71). San Francisco, CA: Jossey-Bass/Anker.
  • Smith, H. J. (2004). The impact of staff development programmes and activities. In D. Baume & P. Kahn (Eds.), Enhancing staff and educational development (pp. 96-117). Oxford, UK: Routledge-Falmer.
  • Steinert, Y., Mann, K., Centeno, A., Dolmans, D., Spencer, J., Gelula, M., & Prideaux, D. (2006). A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Medical Teacher, 28(6), 497-526.
  • Stes, A. (2008). The impact of instructional development in higher education: Effects on teachers and students. (Doctoral dissertation.) Antwerp: University of Antwerp.
  • Stes, A., Min-Leliveld, M., Gijbels, D., & Van Petegem, P. (2010). The impact of instructional development in higher education: The state-of-the-art of the research. Educational Research Review, 5, 25-49. doi:10.1016/j.edurev.2009.07.001.
  • Walstad, W. B., & Salemi, M. K. (2011). Results from a faculty development program in teaching economics. Journal of Economic Education, 42, 283-293.
  • Weimer, M. (2007). Intriguing connections but not with the past. International Journal for Academic Development, 12(1), 5-8.
  • Weimer, M., & Lenze, L. F. (1991). Instructional interventions: A review of the literature on efforts to improve instruction. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 7, pp. 294-333). New York: Agathon Press.