While autobiographical narratives and case study reflections remain vital to faculty development research, we must also make substantive efforts to build theory in our field. Researchers making claims about collective meanings of observed behaviors and the mechanisms that underlie them (i.e., theoretical claims about social behavior) must be disciplined in how they identify and organize the evidence they use to support those claims. Such systematic, inductive theory-building in the social sciences is called “grounded theory” research. This chapter presents the basics of grounded theory research, describes a grounded theory research program currently being executed by faculty developers, and offers practical tips for faculty developers conducting their own grounded theory research.

As faculty developers, we are constantly drawn into complex stories and relationships at our institutions, and it is natural for us to begin forming hypotheses about “how things work here.” Formalizing our hypotheses and investigating them systematically is the scholarly next step that most of us are trained to take. For these investigations, many seek out numerical data like course evaluation scores, GPAs, institutional research data such as retention-attrition rates, and so on. These quantitative data can be informative, but when it comes to collecting data directly from the teachers or administrators with whom we work, the often delicate nature of our professional relationships can make the qualitative data collection process more compatible with our faculty development goals. A trusting, collaborative relationship with our clients is the foundation on which effective faculty development rests (Gillespie, Hilsen, & Wadsworth, 2002), and the interpersonal experience of a qualitative interview or observation can result in more trust-building than filling out a questionnaire.

Our field has benefited greatly from many kinds of qualitative research, and so have a wide array of our client populations, including new and untenured faculty (e.g., Boice, 1992; Mullen & Forbes, 2000; Olsen & Sorcinelli, 1992; Whitt, 1991), graduate students (e.g., Austin, 2002; Damron, 2003; Nyquist et al., 1999; Smith, 2001), women and minorities in academia (e.g., Taylor & Antony, 2000; Ward & Wolf-Wendel, 2004), scholars of teaching and learning (e.g., Schroeder, 2005), and interdisciplinary faculty (e.g., Lattuca, Voigt, & Faith, 2004; Stein & Short, 2001). Moreover, qualitative research has given us insights into our own circumstances as faculty developers and how best to operate within them (e.g., Austin, Brocato, & Rohrer, 1997; Carusetta & Cranton, 2005; King, 2004; Mullinix, 2006).

Given this rich collection of precedents, faculty development will likely continue to benefit from qualitative data and analysis. However, the rigor with which any research is conducted is the measure by which we must judge the knowledge claims it produces. Although autobiographical narratives and case study reflections are vital to the field, researchers making claims about the collective meanings of observed behaviors and the mechanisms that underlie them (i.e., theoretical claims about social behavior) must be disciplined in how they identify and organize the evidence they use to support those claims. Systematic, inductive theory-building in the social sciences is called “grounded theory” research, and its goal of producing theoretical models distinguishes it from “thick description” qualitative methods, which have the more documentary, anthropological goal of rendering new social settings intelligible to the reader. Within the larger sphere of qualitative research—and even within grounded theory itself—there are many approaches to the task of building nonquantitative knowledge. Even a brief overview of this diverse landscape is beyond the scope of this chapter, but several good resources exist to inform and guide qualitative research in education (e.g., Bogdan & Biklen, 2003; Camic, Rhodes, & Yardley, 2003; LeCompte, Millroy, & Preissle, 1992). One such resource, Strauss and Corbin (1998), describes a rigorous grounded theory approach in no-nonsense terms, and the methods reviewed here draw primarily from that source.

This chapter, therefore, has three goals: 1) to introduce the basics of grounded theory research, 2) to describe the steps taken in a grounded theory research program currently being executed by faculty developers, and 3) to offer practical real-world tips to faculty developers conducting grounded theory research.

Grounded Theory Research: The Basics

All research generally includes data inputs, analytical procedures, and some form of reporting output. In this way, qualitative and quantitative methods are no different. Important differences between the two methods lie in the nature of the data, the forms of analysis, and the criteria used to judge the reports they ultimately generate. In terms of input, quantitative methods only accept numerical data, while qualitative methods accept many forms of non-numerical data: observations, interviews, focus groups, videotapes or transcripts of social events, and documents of many kinds. In terms of output, quantitative and qualitative methods differ in the criteria they use to legitimate knowledge claims. Quantitative research bases its knowledge claims on the extent to which measured observations are significant, reproducible, generalizable, and so on. However, these criteria do not always make sense when applied to research on social phenomena occurring in a natural setting. For example, it is unrealistic to consider complex real-world social events perfectly reproducible. Instead, criteria for judging qualitative research include (among others) the extent to which observations are credible, confirmable, and detailed enough for the reader to judge the transferability of findings from one context to another (Lincoln & Guba, 1985).

Of most practical interest here, however, are the actions between data collection and theoretical output—the actual steps of analysis. Where statistical analysis is used in quantitative social science research, the analysis used in grounded theory research is called “coding” and occurs in a few distinct steps: open coding, axial coding, and selective coding. Grounded theory research is often referred to as a constant-comparative method because coding involves continually comparing new data to old data in pursuit of an ever more accurate description of the explanatory schemas that underlie the observations. This recursive process takes place across the lifespan of the study, up to and through selective coding.

Open Coding

Open coding is a provisional first pass at the data, to identify data that seem important and possible meanings those data might have. This process can be as microscopic as a word-by-word analysis of a transcript, but in practical terms it frequently takes the form of circling words and phrases in transcripts, jotting notes in the margin, and writing reflective “memos” to oneself about possible interpretations of specific data. As these notes accrue, concepts begin to emerge—certain things may be repeated, described with great energy, or obvious grounds for some kind of decision. These concepts become more clearly characterized by their properties and the dimensions along which they vary.

To illustrate the coding process, we will use a phenomenon familiar to most readers of this volume: the academic conference. Often, conversations about a conference begin with “Where is the conference next year?” Conversations about a winter conference held in Honolulu, Hawaii, would unfold in somewhat predictably different ways than if the same winter conference were held in Fairbanks, Alaska. So “location” is obviously a concept that is important to the phenomenon of the academic conference, with one property of location being seasonal weather (varying along the dimension of pleasantness: perhaps from glorious to nasty).
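For readers who think in code, a minimal sketch may make this vocabulary concrete. The short Python model below represents the concept-property-dimension structure just described; the class names and fields are our own illustrative assumptions, not part of any grounded theory software.

from dataclasses import dataclass, field

@dataclass
class Property:
    name: str          # e.g., "seasonal weather"
    dimension: str     # what the property varies along, e.g., "pleasantness"
    endpoints: tuple   # the extremes of that dimension

@dataclass
class Concept:
    name: str
    properties: list = field(default_factory=list)

# The "location" concept from the conference example:
location = Concept("location", [
    Property("seasonal weather", "pleasantness", ("nasty", "glorious")),
])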

Axial Coding

Axial coding involves relating concepts into categories and subcategories and identifying sequences of cause → action/interaction → consequence. To extend our academic conference example, the conceptual category of “conference presentation” has several subcategories (poster, roundtable, paper, symposium, plenary) and a built-in sequence (proposal, acceptance, scheduling, delivery, evaluation). Within these sequences, meaning is created and negotiated by motivated subjects acting/interacting with context-specific results. For the graduate student who has just finished presenting a paper session, seeing the “big name” scholar clearly approaching with the intent to engage may cause a great deal of anxiety. Because the impending interaction means so much to the graduate student, it will likely have significant emotional consequences in the moment and possibly longer-term professional consequences as well.

Selective Coding

In selective coding, we take the concepts first identified in open coding and related into categories and sequences during axial coding, and we use them to build theory. Selective coding is the process of identifying a category that can be considered core in the sense that one can relate all of the other major categories to it. Core categories must be described abstractly enough to explain many kinds of data. Perhaps, as the conference approaches, we observe a greater frequency of conversation about it, a dramatic increase of anxiety among those who are scheduled to present, and even more anxiety among those responsible for coordinating the event. These observations might lead us to become curious about the sources of this anxiety—the emotional stakes for the people involved in the conference. Upon reviewing a travel reimbursement form, the checkbox under “Reason for travel” labeled “increases the university’s reputation” might catch our eye. This could lead us to consider the possibility of “reputation” as the basis for a core category: personal for the presenters, institutional for the schools that send them, and organizational for the body that offers the conference. We may begin to think of our data in terms of whether “Reputation marketplace” or “Intersections of prestige” seem to have explanatory power for much or all of the meanings associated with the academic conference. This step is selective in the sense that it leads us to ask some specific questions and seek specific kinds of data, so we can try on labels for the core category to see how well they fit. Coming to recognize a core category can be the most difficult part of generating grounded theory, and the core category criteria described by Glaser (1978) are extremely helpful in this regard. As the core category is selected and refined, the study begins to reach theoretical saturation—that is, the point at which all new data are compared to previously collected data and found to fit into existing categorical schemes without need to adjust or reinvent those categories. This marks the natural point of conclusion for a grounded theory study.

As with any research, there are many agonizing decisions to be made throughout, frustrating setbacks, and dead ends. But for our purposes, we hope the preceding overview prepares the reader for a description of our faculty-focused grounded theory research program that is currently under way.

The following section briefly describes the research goals, data sources, analytical steps, and some preliminary findings that were presented at the annual conference of the American Psychological Association in 2006. For brevity’s sake, we have omitted the literature review situating our research amidst existing educational psychology literature, but the interested reader can find it online at https://webspace.utexas.edu/ms4l6453/TeacherMistakesAPAPaper.pdf. Importantly, this particular study is incomplete: We remain in pursuit of our core category, but we thought “looking under the hood” of a still unpolished piece of grounded theory research could give the reader a realistic sense of the process while it is still under way.

A Live Example of Grounded Theory Research: Teacher Mistakes–A Window into Teacher Efficacy?

Any experienced teacher knows that teaching is not a coldly logical process of problem solving and rationally choosing among clear alternatives. Instead, it may have the same motivational and affective tone that “hot” cognition (Pintrich, Marx, & Boyle, 1993) has for student choices in the classroom, with cognition and emotion intertwined. Indeed, Palmer (1998) says, “As we try to connect ourselves and our subjects with our students, we make ourselves, as well as our subjects, vulnerable to indifference, judgment, ridicule” (p. 17).

If teaching indeed has an emotional charge, it seems one of the most intense experiences for teachers—especially new teachers—may come when they make a “mistake” in the classroom. Unfortunately, little exploratory, much less experimental, work has been done to guide teachers faced with this situation. Although research on student attitudes and reactions to failures has flourished (e.g., Firmin, Hwang, Copella, & Clark, 2004; Linnenbrink & Pintrich, 2002; Perry, Hladkyj, Pekrun, Clifton, & Chipperfield, 2005), research has seldom explored the causes and effects of teacher attitudes and reactions toward their own mistakes. Drawing from Kegan and Lahey’s (2001) notion that “competing commitments” and underlying “big assumptions” can be revealed in negative reactions to events, we wanted to conduct exploratory research into how teachers describe, categorize, and react to their own mistakes. Ultimately, we hope to generate both theoretical and practical findings useful to those helping new faculty members acclimate more smoothly to their teaching roles. Through a better understanding of how to interpret and manage their own inevitable “mistakes,” we hope to help new faculty accept more openly the new pedagogical strategies and tools that will become available to them throughout their careers.

In our discussions about what constitutes a mistake and how we could study mistakes, we could not reach agreement on, or even guess at, what the boundaries of “mistakes” might be for faculty. This was the perfect situation for the use of a grounded theory approach: We needed to let the words of those most intimately involved—the teachers themselves—become the data on which to build our understanding of mistakes.

Data Collection

In the initial phase of our exploratory study, we conducted interviews with demographically representative faculty members at the University of Texas at Austin regarding how they define and react to teaching mistakes they make in the classroom. Faculty participants were identified initially on the basis of personal prior contact or participation in various faculty development opportunities offered on campus. Subsequent faculty participants were identified using a cascading procedure of asking each participant to recommend one other person, who was then contacted individually by a member of our research team.

During interviews, participants were asked to identify an incident that they considered to be a “mistake” they made during class, and to describe their thoughts and feelings during and after that time. Qualitative data from the interviews were coded into categories centering on faculty members’ definitions of mistakes as well as their reactions and coping strategies.

Interviews and Transcription

A pilot interview was conducted with a volunteer, and the team discussed possible themes that might emerge in later interviews as well as the procedures for conducting those interviews (the interview protocol). After developing our initial interview protocol and getting institutional review board (IRB) approval, we began conducting our interviews. The interviews were semi-structured, digitally recorded, and later transcribed by the interviewer. Steps were taken to ensure that the interviewer was not acquainted with the faculty member prior to the interview. A total of 19 faculty were interviewed by six different interviewers.

Open Coding

To train and calibrate their coding, all members of the team open coded one transcript, then compared the concepts they saw emerging between that transcript and the pilot interview. Interpretive parameters around emerging concepts were negotiated. Of the themes that began to emerge, the team chose three on which to focus first: what constitutes a mistake, how teachers described their emotional reaction, and how the teachers described their behavioral response.

Two raters (neither of whom was the original interviewer) open coded each transcript separately, then compared coding. When the two raters both agreed on the meaningfulness of a statement, it was coded; when they did not agree, they returned to the original audio of the interview. If they still could not agree on a coding, then the statement was dropped from analysis. Interviews were coded by four coding teams, and coded statements were entered into a spreadsheet by interview number, line number, coding team, and coding label.
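To make the decision rule explicit, here is a minimal sketch, in Python, of the reconciliation logic just described. The function and variable names are hypothetical, and the human steps (returning to the audio, negotiating meaning) are judgments no script can make; the sketch only shows how agreements and disagreements were sorted.

def reconcile_codings(rater_a, rater_b):
    """Keep statements both raters coded identically; queue disagreements
    for a joint return to the audio (statements still contested after that
    review were dropped from analysis)."""
    agreed, needs_audio_review = {}, []
    for line in sorted(set(rater_a) | set(rater_b)):
        code_a, code_b = rater_a.get(line), rater_b.get(line)
        if code_a is not None and code_a == code_b:
            agreed[line] = code_a              # both raters agree: coded
        else:
            needs_audio_review.append(line)    # disagreement: relisten
    return agreed, needs_audio_review

# Example: rater A coded transcript lines 12 and 40; rater B, lines 12 and 55.
a = {12: "emotional reaction", 40: "definition of mistake"}
b = {12: "emotional reaction", 55: "coping strategy"}
agreed, review = reconcile_codings(a, b)
# agreed == {12: "emotional reaction"}; review == [40, 55]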

Axial Coding

Once the open coding was completed, each coding team’s entries were assigned to a different coding team to identify themes across the concepts that had emerged. For example, of the 19 teachers interviewed, teams considered whether patterns emerged among teachers who defined “mistakes” in a certain way, who felt certain emotions about those mistakes, or who responded or coped in certain ways. These thematic categories were intended to capture the structures and sequences of the teachers’ experiences of their mistakes: in other words, to capture the collective stories that these teachers were telling us.

Selective Coding

The process of selective coding is still under way. Axial coding has proved fruitful enough to justify and guide a second round of interviews, but because theoretical saturation has not yet been attained, a core category cannot yet be determined.

Preliminary Findings

As the faculty members in this study told their stories about mistakes they made in the classroom, three major categories of “mistake” emerged, which we labeled structural/design mistakes, procedural/execution mistakes, and relational/self mistakes. Interestingly, the Ohio State Teacher Efficacy Scale, which was found to be the most psychometrically sound among eight commonly used teacher efficacy scales (Tschannen-Moran & Hoy, 2001), is organized into three efficacy factors (instructional strategies, classroom management, and student engagement) that appear congruent with the three domains that emerged in our interviews.

Structural/design mistakes included the preparation for and organization of the class material, usually either at the beginning of the semester or the beginning of a class period. Underestimating the amount of time for class topics, too rigidly planning the syllabus, creating unclear learning objectives, and designing poor tasks and tests were just some of the examples taken from the interviews. As one faculty member described:

That’s another . . . mistake that I have made over time. In assigning topics for papers. I mean you have to have a middle ground between telling them exactly what they are supposed to do, and giving them a completely open-ended type of assignment. And I think sometimes I haven’t made very good paper assignments.

The emotional reactions related to this type of mistake included feeling thoughtful, regretful, discouraged, fearful, and frustrated. However, in general, faculty members who described a structural/design mistake eventually exhibited a behavioral response to correct it, and most were willing to adapt by rethinking the material, taking feedback into consideration, preparing better, being more flexible, and reaffirming themselves.

Procedural/execution mistakes focused more on mistakes during a class and involved class procedures, such as moving through material at an inappropriate pace for students, being inflexible, giving poor instructions, boring students, and being inefficient. Specifically, some faculty members provided examples of giving factual errors to students, going too quickly through PowerPoint slides, and managing classroom behavior inadequately. One faculty member in particular commented on managing students in the classroom: “I think it’s a mistake for faculty to let bad behavior go by in a lecture. Because I’ve been in classes where there’s a lot of disruption . . . people moving in and moving out.” Emotional reactions to this type of mistake appeared much more ambiguous than those in the other two domains—reactions were not always clearly directed at one particular thing or person; they seemed much more diffuse and sometimes directed at various aspects of the learning environment. These ambiguous emotional reactions included feeling insecure, uneasy, confused, anxious, perturbed, and uncomfortable. In one interview, a faculty member mentioned, “And I think it may have been exactly this kind of insecurity that comes from dealing with new material.” Perhaps one explanation for these more complex feelings comes from the lack of proper faculty training about effective classroom procedures (Menges & Austin, 2001). Similar to the structural mistakes, behavioral responses or coping strategies remained positive and encouraging for the faculty members who discussed procedural mistakes. In general, the teachers became more flexible, reconsidered their objectives, and attempted different methods in future situations.

Relational/self mistakes referred largely to the faculty member’s social interactions with students in the classroom. Faculty who experienced this type of mistake recounted stories of being unprofessional, offending students, shutting down students, and getting angry or losing their tempers with students. Sometimes the faculty member did not even realize the impact of the mistake until much later, as was the case in one interview:

I think that one of the first mistakes that I made probably was making light of a student in one of the early sessions in a case discussion . . . with a lot of numbers in it. The student hadn’t done any numbers and I said lightly, “Well, what’s the matter, was your calculator broken?” And it was the second or third session and I was trying to establish rapport with the class and all that. And I learned later that student’s feelings were hurt greatly.

In general, the emotional reactions faculty members reported in response to these kinds of mistakes were much more intense than those reported in the other two domains. They reported immediate feelings of anger, devastation, and shame when handling mistakes concerning social interactions with students, and as one faculty member experienced after a mistake, “. . . but that was devastating. I thought ‘Oh, sh—.’ That’s almost the worst thing I can imagine . . .” The fact that relational mistakes made the most severe emotional impression lends support to Borich’s (1999) description of the teacher-student relationship as one between “significant others.” Even in the face of this intensity, however, faculty members mostly demonstrated promising behavioral responses to cope with the mistakes. They turned mistakes into learning opportunities for both themselves and the students, became more sensitive to students, spoke to students individually, and tried to find alternatives to avoid making the same mistakes in the future.

At this point in the process, the research team is considering how to gather more data that will help tie these various themes together. In a research paradigm of this sort, the appropriate strategy is to return to the data already gathered with this question in mind, to return to the informants for more insights, or both. That is where we stand.

Practical Tips: Conducting Grounded Theory Research as a Faculty Developer

As faculty developers, we have limited time to devote to any one dimension of our job. The purpose of this section, therefore, is to share the practical tips we have learned so that others may benefit from our experience.

Tips for Collecting Data

Collaborate, collaborate, collaborate. It is difficult to overstate how helpful collaboration can be for the many interpretive tasks involved in qualitative coding. Having several people code the same data, then compare their coding and discuss the meanings each saw in the data, can help calibrate a research team to generate categorical schema with much greater depth, breadth, and discrimination. Beyond the added texture that collaboration can bring to interpretive analysis, it very simply can make a seemingly insurmountable research project possible. Many faculty developers have neither the training nor the staff to carry out a well-designed qualitative study, so collaborating with a qualitative researcher from elsewhere on campus—or even at another campus—can provide the experienced know-how that the faculty developer might lack. Furthermore, graduate and undergraduate students are frequently looking for real experience on research teams, and they are often the ones who ask the innocent questions that reveal important assumptions, tacit knowledge, and overlooked relationships, making their lack of experience a very practical advantage. Of course, each additional team member makes additional person-hours available to the project, whether to collect data at a time when no one else is available or to get a larger step in the project completed much faster, since “many hands make light work.” If working alone is your only option, several methods described by Lincoln and Guba (1985), such as triangulation, audit trails, member-checking, and negative case analysis, can enrich how you understand your data and establish the “trustworthiness” of your findings.

Carefully design—and revisit—your interview protocols. A lesson we learned the hard way was that our interview questions did not elicit from our participants the kinds of reflection we had hoped for. After all the interviews in our first round were completed, we analyzed the data and found that we simply could not yet brainstorm a plausible core category. Any attempt to generate a core category from our existing data would have required us to make inferences far beyond our comfort level. We learned a great deal about the kinds of mistakes faculty experience and how they experience them, but we do not yet have the insights into teacher efficacy we wanted.

We are presently still a considerable distance from theoretical saturation and need to conduct a new round of interviews, this time with similar questions asked in a few different ways. At the very least, we will ask interviewees to think about the topics of our interview questions ahead of time, so we get more reflective and less on-the-spot responses. We may even send our interviewees the entire protocol ahead of time. Doing so would not be giving away anything: If having more than 30 seconds to think about our questions will help them give us richer, more detailed answers, all the better. That said, we would avoid reverting entirely to a virtual email interview because of the difficulty of establishing rapport in a purely electronic medium and the loss of all the para-verbal and nonverbal communication that, in a face-to-face setting, can be very helpful in guiding the interview conversation. This last concern—the nuances of interpersonal communication in a face-to-face setting—is especially significant to faculty developers, given the importance to us of our relationships with the people we serve.

Digital is good. If at all possible, record interviews. There are, of course, conditions under which recording is impossible, inappropriate, or simply too intrusive, but even with excellent note-taking skills, the human memory cannot always be relied on to notice everything important or retain it for long. Among the best decisions we made at the outset of our project was to use a digital (MP3) recorder instead of traditional audio cassettes. In addition to good sound quality, the digital nature of the audio files made storing the data in a secure—but easily accessible—location very easy. We simply created a special area in our university course management system (a Blackboard “organization”) and uploaded the audio files directly to that organization. This gave team members, and only team members, simultaneous, round-the-clock access to all the data. This access enabled us to email each other with requests like, “Hey, I am transcribing Interview 12, but I can’t figure out what she is saying at 4 minutes and 42 seconds. Can someone download it, have a listen, and tell me what you think?” It is not hard to see how much faster and less frustrating this made the entire transcription process. If you do not already have access to a digital recorder (even an iPod with a recording attachment), many campus technology centers now have them available for loan.

Tips for Analyzing Data

Use the technology you already have to the fullest. Software packages like NVivo and ATLAS.ti exist to help facilitate large-scale qualitative research, but our research was not large enough to justify such an investment. We found that some word processing and spreadsheet functions, combined with an LCD projector, were incredibly helpful in organizing and analyzing our data and keeping ourselves calibrated as a team. Specifically, one team member would transcribe an interview in Microsoft Word and add line numbers (Page Setup > Layout > Line Numbers), then upload the Word document to our Blackboard organization. Two other team members would then download the transcript separately and open code it individually. They would then open Microsoft Excel and—still separately—enter their open codings into a spreadsheet with the Excel row numbers corresponding to the line numbers on the transcript. They would then compare their separate spreadsheets, referring to the downloaded audio file when necessary, and produce a combined “interrater” spreadsheet that they would then upload to the Blackboard organization. Finally, when we met as a team, we used the LCD projector to look at all of the coding spreadsheets as a group and discuss what we had found, what categories we felt were emerging, what ambiguities needed resolution, and so on. This made for a very efficient coding process, given the relatively large size of our team. We made liberal use of Excel’s ability to include many sheets in a workbook, to color-code cells, and to hide/reveal columns as it suited us. This gave us the ability to keep all of our data in a single (large) Excel workbook.
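For teams that prefer plain files to a single Excel workbook, the same flat record structure can be kept in CSV form. The Python sketch below uses illustrative field and file names of our own choosing to show the row format described above: one row per coded statement, keyed by interview number and transcript line number.

import csv

# One row per coded statement, mirroring our spreadsheet columns.
FIELDS = ["interview", "line", "coding_team", "code_label"]

rows = [
    {"interview": 12, "line": 47, "coding_team": "A",
     "code_label": "structural/design"},
    {"interview": 12, "line": 112, "coding_team": "A",
     "code_label": "relational/self"},
]

# Hypothetical file name; in practice, one file (or sheet) per interview.
with open("interrater_interview12.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)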

Pay attention to the increasing sensitivity of your coding instruments (you). As you code more, you will become more discriminating at resolving the ambiguity inherent in the task of categorizing real-life data. Category definitions will become more nuanced, as will the criteria you use to include or exclude data from those categories. As you code, make notes about the rationales for why you are coding certain data in certain ways (Excel’s Insert > Comment function is handy for this) and be prepared to go back and recode your first few interviews after you have coded a dozen or so: Your categorical rubric will have evolved enough to make this necessary—but that’s a good thing.

Tips for Managing Relationships

Ethical considerations. Your campus undoubtedly has some form of IRB with procedures in place for research on human subjects (which is what you are doing). IRBs usually require that a description of your project, your plan for acquiring subject consent, and at least a prototype of your research protocol be approved before you can begin collecting data. Though it is easy to chafe at the extra chore of getting IRB approval, these requirements are in place for good reasons, so learn what you need to do to get the IRB’s blessing and do it. Beyond the IRB, one must also consider the ethics of the potential “dual relationship” one can create with a client-turned-research-subject. Here again we see the virtues of having a research team: Teammates allow you to distance yourself as a researcher from any of your own clients who may be participating in the study. If you do not interview your own clients, do not know who interviews them, and do not know which interview number corresponds to them, then you can be considered to be taking good-faith measures to keep the boundaries of your relationship clear. That said, participants must always be informed of their right to withdraw from the study at any time. Remember, research must always play second fiddle to the maintenance of good relationships with one’s clients.

Courtesy considerations. We recruited our participants with personal invitation letters sent from the senior faculty developer on the team. These letters included the rationale for a faculty developer doing research in the first place, as well as a reassurance about measures taken to protect the participants’ privacy. We wrote, “Part of my ability to serve the community is to understand the issues you as instructors face at a deeper level, which means doing some qualitative research. But to maintain confidentiality, I won’t be the one interviewing you, and when I see transcripts they will have been made anonymous and devoid of any specific identifiers.” After transcribing each interview, researchers gave the faculty participant a copy of the transcript and a handwritten thank-you note from the senior faculty developer. (To maintain confidentiality, these thank-you notes were all written generically at the outset of the study and only later addressed to participants by their particular interviewer.) Finally, faculty who expressed interest in our findings were given copies of the American Psychological Association paper that resulted from the data they helped us collect (Roberts et al., 2006).

Conclusion

So, we end where we began—focusing on the importance of maintaining relationships with our faculty clients while doing the qualitative research necessary to serve them better. As the midcourse corrections in our own research illustrate, qualitative research is a constant learning process requiring collaborative adjustments and flexible attempts to try things a new way. At the same time, qualitative research takes the faculty developer into the lived stories of one’s constituents in ways that quantitative research never can—which can make the process both more rewarding and more frustrating—but it is a journey that ultimately makes us better at what we do.

References

  • Austin, A. E. (2002, January/February). Preparing the next generation of faculty: Graduate school as socialization to the academic career. Journal of Higher Education, 73(1), 94–122.
  • Austin, A. E., Brocato, J. J., & Rohrer, J. D. (1997). Institutional missions, multiple faculty roles: Implications for faculty development. In D. DeZure & M. Kaplan (Eds.), To improve the academy: Vol. 16. Resources for faculty, instructional, and organization development (pp. 3–20). Stillwater, OK: New Forums Press.
  • Bogdan, R. C., & Biklen, S. K. (2003). Qualitative research for education (4th ed.). Boston, MA: Allyn & Bacon.
  • Boice, R. (1992). The new faculty member: Supporting and fostering professional development. San Francisco, CA: Jossey-Bass.
  • Borich, G. D. (1999). Dimensions of self that influence effective teaching. In R. P. Lipka & T. M. Brinthaupt (Eds.), The role of self in teacher development (pp. 92–117). Albany, NY: State University of New York Press.
  • Camic, P. M., Rhodes, J. E., & Yardley, L. (Eds.). (2003). Qualitative research in psychology: Expanding perspectives in methodology and design. Washington, DC: American Psychological Association.
  • Carusetta, E., & Cranton, P. (2005, July). Nurturing authenticity through faculty development. Journal of Faculty Development, 20(2), 79–86.
  • Damron, J. (2003). What’s the problem? A new perspective on ITA communication. Journal of Graduate Teaching Assistant Development, 9(2), 81–88.
  • Firmin, M., Hwang, C., Copella, M., & Clark, S. (2004, Summer). Learned helplessness: The effect of failure on test-taking. Education, 124(4), 688–694.
  • Gillespie, K. H., Hilsen, L. R., & Wadsworth, E. C. (Eds.). (2002). A guide to faculty development: Practical advice, examples, and resources. Bolton, MA: Anker.
  • Glaser, B. G. (1978). Theoretical sensitivity: Advances in the methodology of grounded theory. Mill Valley, CA: Sociology Press.
  • Kegan, R., & Lahey, L. L. (2001). How the way we talk can change the way we work: Seven languages for transformation. San Francisco, CA: Jossey-Bass.
  • King, K. P. (2004). Both sides now: Examining transformative learning and professional development of educators. Innovative Higher Education, 29(2), 155–174.
  • Lattuca, L. R., Voigt, L. J., & Faith, K. Q. (2004, Fall). Does interdisciplinarity promote learning? Theoretical support and researchable questions. Review of Higher Education, 28(1), 23–48.
  • LeCompte, M. D., Millroy, W. L., & Preissle, J. (Eds.). (1992). The handbook of qualitative research in education. San Diego, CA: Academic Press.
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.
  • Linnenbrink, E. A., & Pintrich, P. R. (2002). Motivation as an enabler for academic success. School Psychology Review, 31(3), 313–327.
  • Menges, R. J., & Austin, A. E. (2001). Teaching in higher education. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 1122–1156). Washington, DC: American Educational Research Association.
  • Mullen, C. A., & Forbes, S. A. (2000, April). Untenured faculty: Issues of transition, adjustment and mentorship. Mentoring & Tutoring, 8(1), 31–46.
  • Mullinix, B. B. (2006, April). Trends across the HE landscape: The faculty status of faculty developers. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
  • Nyquist, J. D., Manning, L., Wulff, D. H., Austin, A. E., Sprague, J., Fraser, P. K., et al. (1999, May/June). On the road to becoming a professor: The graduate student experience. Change, 31(3), 18–27.
  • Olsen, D., & Sorcinelli, M. D. (1992). The pretenure years: A longitudinal perspective. In M. D. Sorcinelli & A. E. Austin (Eds.), New directions for teaching and learning: No. 48. Developing new and junior faculty (pp. 15–25). San Francisco, CA: Jossey-Bass.
  • Palmer, P. J. (1998). The courage to teach: Exploring the inner landscape of a teacher’s life. San Francisco, CA: Jossey-Bass.
  • Perry, R. P., Hladkyj, S., Pekrun, R. H., Clifton, R. A., & Chipperfield, J. G. (2005, August). Perceived academic control and failure in college students: A three-year study of scholastic attainment. Research in Higher Education, 46(5), 535–569.
  • Pintrich, P. R., Marx, R. W., & Boyle, R. A. (1993, Summer). Beyond cold conceptual change: The role of motivational beliefs and classroom contextual factors in the process of conceptual change. Review of Educational Research, 63(2), 167–199.
  • Roberts, R., Sweet, M., Walker, J., Walls, S., Kucsera, J., Shaw, S., et al. (2006, October). Teacher mistakes: A window into teacher self-efficacy. Paper presented at the annual meeting of the American Psychological Association, New Orleans, LA.
  • Schroeder, C. M. (2005). Evidence of the transformational dimensions of the scholarship of teaching and learning: Faculty development through the eyes of the SoTL scholars. In S. Chadwick-Blossey & D. R. Robertson (Eds.), To improve the academy: Vol. 23. Resources for faculty, instructional, and organizational development (pp. 47–71). Bolton, MA: Anker.
  • Smith, K. S. (2001, Fall). Pivotal events in graduate teacher preparation for a faculty career. Journal of Graduate Teaching Assistant Development, 8(3), 97–105.
  • Stein, R. B., & Short, P. M. (2001, Summer). Collaboration in delivering higher education programs: Barriers and challenges. Review of Higher Education, 24(4), 417–435.
  • Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). Thousand Oaks, CA: Sage.
  • Taylor, E., & Antony, J. S. (2000, Summer). Stereotype threat reduction and wise schooling: Towards the successful socialization of African American doctoral students in education. Journal of Negro Education, 69(3), 184–198.
  • Tschannen-Moran, M., & Hoy, A. W. (2001, October). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17(7), 783–805.
  • Ward, K., & Wolf-Wendel, L. (2004, Winter). Academic motherhood: Managing complex roles in research universities. Review of Higher Education, 27(2), 233–257.
  • Whitt, E. J. (1991, Winter). “Hit the ground running”: Experiences of new faculty in a school of education. Review of Higher Education, 14(2), 177–197.