
Abstract

Evidencing the value of programs and services challenges educational developers to measure a range of outcomes. While direct measures of faculty use of effective teaching behaviors and student learning are desirable, these methods are time-consuming and resource-intensive. We provide a scale that is easy to deploy and can be adapted to different programs. Our psychometrically sound scale measures one facet of faculty learning about teaching—appreciation of pedagogy. The scale measures awareness, knowledge integration, emotions, beliefs, and self-reported behaviors related to the appreciation of pedagogy. We also examine scale correlates, including teaching identity, confidence, and control.

Keywords: assessment, pedagogy, evidencing impact, faculty learning


Numerous scholars have recently called for measuring the impact of educational development on students, faculty, and institutional culture (Beach et al., 2016; Condon et al., 2016; Hines, 2017; Wright et al., 2018). Evidencing the value of programs and services offered by educational developers through robust assessment and evaluation methods serves a variety of purposes. Quality evidence can maximize the use and procurement of resources, support strategic planning, generate institutional reports, develop meaningful connections to stakeholders, and determine the impact of programs and services on students, faculty, and institutional culture. Assessing the impact of educational development programs and services involves moving past methods that measure only participant satisfaction (Chalmers & Gardiner, 2015; Chism & Szabó, 1997; Hines, 2009; Kucsera & Svinicki, 2010). Assessment frameworks, such as the Academic Professional Development Assessment Framework, have moved beyond measurement of inputs and outputs to also include assessment of the process by which faculty develop pedagogical expertise (Chalmers & Gardiner, 2015). In particular, researchers advocate for a process assessment of participants’ beliefs regarding teaching, learning, and assessment, noting the mediating effect of faculty attitudes, interests, values, and expectations on whether new learning transfers into practice (Chalmers et al., 2012). Taking this step forward challenges educational developers to embrace the assessment cycle, which begins with establishing clear and measurable outcomes (Erwin, 1991).

Educational development outcomes must represent the range of programs and services offered by centers and professionals and be mindful of the intended audiences and levels of impact. While the ultimate goal of many educational development programs is to enhance student learning, measuring the impact of interventions on student learning is complicated by the fact that the primary audience for most educational development programs is instructional faculty. Thus, many developers strive to achieve outcomes that prioritize the implementation of evidence-based teaching strategies. However, a growing number of educational development scholars acknowledge that if the outcome of educational development endeavors is to transform teaching practices, developers must attend not only to teaching practices but also to outcomes of professor identity, beliefs, awareness, and conceptions of teaching (Booke & Willment, 2018; Chalmers et al., 2012; Karm, 2010; Trigwell & Prosser, 1996). In their extensive review of the literature, Chalmers et al. (2014) defined this shift in educational development as moving beyond the measurement of satisfaction with educational development programs to measuring outcomes, or the actual changes within faculty: changes in faculty conceptions of teaching and teaching beliefs that undergird changes in teaching practice. In addition, scholars increasingly acknowledge that such changes occur over time (Akkerman & Meijer, 2011; Chalmers et al., 2014; Cilliers & Herman, 2010). Thus, a rich array of assessment instruments, beyond the traditional satisfaction survey, is needed to triangulate evidence of transformation of faculty conceptions of teaching and subsequent changes in teaching practice, often over a longitudinal time course.

A recent comparison of instructor self-report instruments reveals that many instruments used to document teaching only explore the use of select teaching practices, offering limited, if any, insights on the ways that instructors conceptualize teaching (Williams et al., 2015). For example, the Approaches to Teaching Inventory (Trigwell & Prosser, 2004), the Borrego Engineering Faculty Survey (Borrego et al., 2013), and the Higher Education Research Institute Faculty Survey (Hurtado et al., 2012) primarily include items related to select teaching practices. And although these surveys include some items that explore constructs such as faculty satisfaction with their work, perceptions of student engagement, or reflections on institutional culture, none of these instruments measure the perceived value faculty place on select teaching practices. More importantly, these instruments do not provide insights on the ways faculty learn about teaching.

The direct path of the educational development assessment model proposed by Condon et al. (2016) posits that the action of participating in professional development leads to faculty learning, which results in improvements in teaching. Improved teaching then leads to more in-depth or effective learning processes for students. This model assumes that (a) faculty learning occurs during professional development, (b) this learning leads directly to improvements in teaching, and (c) implementation outcomes are the preferred evidence that educational developers seek when measuring the impact of development experiences. Many variables influence whether lasting faculty learning occurs and whether this learning translates into improved teaching that positively impacts student learning. But establishing that faculty learning has occurred is an important first step to understanding downstream impacts on student learning. One way to assess this learning is through measures associated with appreciation of pedagogy. Supporting this idea, the theory of planned behavior holds that people’s intentions to enact particular behaviors (e.g., effective teaching behaviors) are significantly shaped by their attitudes toward those behaviors and by subjective norms (Ajzen, 1991). In this regard, the development of appreciation of pedagogy can shape attitudes about teaching-related behaviors as well as subjective norms associated with teaching (e.g., teaching is a valued and important enterprise). Without a mechanism for determining the nature of the learning that occurs during educational development experiences, developers are left wondering whether their efforts influenced teaching improvements and whether any resulting changes in student learning outcomes were connected to their specific programs or services.

The Faculty Learning Outcome Assessment Framework attempts to fill this gap by defining a set of learning outcomes that span four areas related to faculty work—academic culture, teaching, scholarship, and career development (Hurney et al., 2016). Hurney et al. (2016) applied constructive alignment to formulate faculty learning outcomes they expect faculty to make progress toward by interfacing with educational development programming (Biggs, 2014; Biggs & Tang, 2007; Wang et al., 2013; Wiggins & McTighe, 2005). The resulting outcomes related to faculty learning about teaching encourage educational developers to teach faculty about concepts such as backward course design, pedagogical transparency, effective assignments, and more. Specifically, one of the teaching faculty learning outcomes challenges educational developers to engage faculty in programming that fosters an appreciation of pedagogy—the art and science of teaching and learning—as a significant higher education endeavor (Hurney et al., 2016).

The construct of appreciation refers to the development of knowledge that increases the value of something—an event, a behavior, an object—and the development of a positive emotional connection to that something (Adler & Fagley, 2005). Appreciation focuses on positive attributes, such as what one knows or feels instead of what one does not know or feel. Specific elements of the construct of appreciation include developing awareness, a sense of awe, and habits that promote noticing the value of something, as well as valuing challenges along the way as a means of heightening a sense of worth (Fagley, 2016).

Extending previous work on the Faculty Learning Outcome Assessment Framework, we created a scale to measure faculty learning as it relates to their appreciation of pedagogy (the Faculty Appreciation of Pedagogy Scale) to explore this outcome in the context of educational development. We define the construct of pedagogical appreciation using elements that reflect the cognitive and affective aspects of the construct of appreciation (Fagley, 2016). Cognitive elements of pedagogical appreciation include the development of awareness about pedagogy (knowledge) and the ability to integrate new pedagogical knowledge into the way one thinks about teaching. Affective elements of pedagogical appreciation mirror some of the elements defined by Fagley and others (Adler & Fagley, 2005; Fagley, 2016), including emotions related to teaching, beliefs about teaching, and the frequency of behaviors related to teaching along with the enjoyment and value placed on these behaviors. Thus, the Faculty Appreciation of Pedagogy Scale contains seven elements—awareness, integration, emotions, beliefs, and three dimensions of self-reported teaching-related behaviors (frequency, enjoyment, value). In the sections that follow, we describe the method through which we developed this scale and its psychometric properties, demonstrate the scale’s convergent validity with similar outcomes and discriminant validity with anticipated, unrelated outcomes, and demonstrate the scale’s predictive capability for outcomes associated with educational development processes.

Methods

Scale Development

The Faculty Appreciation of Pedagogy Scale represents the expansion of a single faculty learning outcome from the Faculty Learning Outcome Assessment Framework developed by Hurney et al. (2016): “Faculty will make progress toward appreciating pedagogy—the art and science of teaching and learning—as a significant higher education endeavor.” The current research expands upon this individual item by constructing a questionnaire for use in the assessment of educational development practices, the scholarship of educational development, and the scholarship of teaching and learning (SoTL).

Item development included a review of the literature related to aspects of appreciation and teaching. Specifically, we explored the literature on the theory of reasoned action, perceived behavioral control and self-efficacy (Ajzen, 2002), pedagogical content knowledge (Fernández-Balboa & Stiehl, 1995; Grossman, 1990; Shulman, 1986), teacher effectiveness (Council of Chief State School Officers, 2013), and the construct of appreciation (Fagley, 2016). We also hosted informal gatherings of faculty at our institutions in which we asked them to explain their understanding of pedagogical appreciation and how we would know that they appreciate pedagogy. Next, the authors of this article applied their experience in educational development to identify additional ways that teachers might evidence appreciation of pedagogy not already uncovered in the literature review. The combined insights provided by faculty, the literature, and the authors resulted in the development of a set of 45 total items composing the Faculty Appreciation of Pedagogy Scale (Table 1).

Table 1. Items From the Faculty Appreciation of Pedagogy Scale
Item Subscale category(ies) Response range Response anchors
I am aware of evidence-based strategies used to teach students in my discipline Awareness 1–5 Strongly disagree–Strongly agree
I am aware of a range of strategies that can be used to teach students in my discipline Awareness 1–5 Strongly disagree–Strongly agree
I am aware that there are many ways to teach a particular concept in my discipline Awareness 1–5 Strongly disagree–Strongly agree
I am aware of journals that publish scholarship about teaching in my discipline Awareness 1–5 Strongly disagree–Strongly agree
I am aware of research methods used to study teaching in my discipline Awareness 1–5 Strongly disagree–Strongly agree
When I see a new teaching strategy, I think about if I would use it or not Integration 1–5 Strongly disagree–Strongly agree
When I see a new teaching strategy, I am able to integrate it with how I think about teaching Integration 1–5 Strongly disagree–Strongly agree
I get excited when I think about teaching Emotion 1–5 Not at all–A great deal
I am curious to hear new ideas about teaching Emotion 1–5 Not at all–A great deal
I marvel at the craft of teaching Emotion 1–5 Not at all–A great deal
I feel a sense of awe for what teaching can do for students Emotion 1–5 Not at all–A great deal
I believe that teaching is a learnable craft Beliefs 1–5 Not at all–A great deal
I believe that teaching is a creative process Beliefs 1–5 Not at all–A great deal
I believe that teaching is a complex endeavor Beliefs 1–5 Not at all–A great deal
I believe that teaching is a skill that can improve with practice Beliefs 1–5 Not at all–A great deal
Reading about teaching, just because Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Reading about teaching to improve your craft Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Reading the literature on how learning works (e.g., cognitive learning theories) Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Talk with people about teaching Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Write informally about teaching Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Observe other people teach Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Discover new ways to teach difficult content Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Reflect on my teaching Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Reminisce about prior teaching experiences Frequency, Enjoyment, Value 1–3 Not at all–A great deal
Seek out new ways to teach in your classes Frequency, Enjoyment, Value 1–3 Not at all–A great deal

The first 15 of these items are standalone items that fall into four categories: awareness (e.g., I am aware that there are many ways to teach a particular concept in my discipline), integration into teaching behavior (e.g., When I see a new teaching strategy, I think about if I would use it or not), emotion (e.g., I get excited when I think about teaching), and beliefs (e.g., I believe that teaching is a learnable craft). Each of these was designed to be completed on a 1 to 5 Likert-type response scale. Awareness, integration into teaching behavior, and belief items were designed to be completed with strongly disagree and strongly agree as response anchors, and the emotion items were designed to be completed with not at all to a great deal as response anchors.

The other 30 items were composed of 10 unique item stems, and participants were asked to respond to each stem by answering three questions. First, participants were prompted to respond to “How often do you engage in this behavior?”, then “How much do you enjoy doing this?”, and finally, “How much do you value this?” As an example, participants would read the stem “reading about teaching to improve your craft” and then report how often they engaged in this behavior, how much they enjoyed doing this task, and how much they valued it.

In our data collection tool, responses for frequency, enjoyment, and value of behaviors were placed side by side, such that participants could follow a line horizontally from left to right and respond to all three questions with the same item stem. Primarily because of visual space constraints when using our data collection tool, each of these questions was designed to be completed on a Likert-type response scale from 1 to 3, with response headings of not at all, a moderate amount, and a great deal. Other researchers could use a wider response range, such as 1 to 5, if they see fit.

When possible, it is psychometrically desirable to use questionnaires with more rather than fewer items. The full 45-item Faculty Appreciation of Pedagogy Scale follows this principle, demonstrating strong internal reliability (Cronbach’s α = .90) and a normal distribution. Because the subscales have different response ranges, those wishing to use the full scale for research or program assessment should transform the individual subscales into z-scores before computing an overall mean for the scale, as we have done in this manuscript. Doing so allows all items to be compared on the same metric. Based on our overall analysis, we recommend that other researchers or programmers use either the full scale or the subscales of interest to them.
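As a minimal illustration of this transformation, the sketch below (in Python) standardizes each subscale and averages the results. It assumes subscale scores are stored in a pandas DataFrame; the column names are hypothetical placeholders, not part of the published scale.

    import pandas as pd

    # Hypothetical column names for the seven subscale scores.
    SUBSCALES = ["awareness", "integration", "emotions", "beliefs",
                 "behavior_frequency", "behavior_enjoyment", "behavior_value"]

    def overall_appreciation(df: pd.DataFrame) -> pd.Series:
        """Mean of z-scored subscales, yielding one overall score per participant.

        Standardizing first places the 1-5 and 1-3 subscales on a common
        metric, so no subscale dominates because of its response range.
        """
        z = (df[SUBSCALES] - df[SUBSCALES].mean()) / df[SUBSCALES].std(ddof=1)
        return z.mean(axis=1)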

To validate and examine correlates of the Faculty Appreciation of Pedagogy Scale and its subscales, we constructed a two-item measure of teaching identity (i.e., I enjoy teaching, Being a good teacher is important to me, Cronbach’s α = .66); a three-item measure of control over choices in courses (i.e., I control the selection of content, I develop the assessments [e.g., exams, assignments], I control the teaching strategies, Cronbach’s α = .74); and a three-item measure of confidence in the ability to make choices in courses (i.e., I have confidence in my ability to select content for my courses, I have confidence in my ability to develop assessments [e.g., exams, assignments] for my courses, I have confidence in my ability to implement teaching strategies in my courses, Cronbach’s α = .78).

Participants and Procedure

Our sample included 135 individuals from three separate sources: a faculty sampling resource from the Academic Research Services division of the online survey company Qualtrics and two selective liberal arts institutions we will call “institution A” and “institution B.” The sample from institution A represents a non-randomized sample of faculty from the school. The sample from institution B represents a select group of faculty who participated in a yearlong program focused on inclusive teaching methods funded by a National Science Foundation (NSF) grant (IUSE grant no. 1611713). Of the entire sample, 15 participants reported not having earned a master’s degree or higher, all from the Qualtrics source. Because the teaching experiences of such individuals are likely much different than many teachers in higher education, data from these 15 participants were removed from the sample (though the overall results reported below remain consistent when included). This resulted in a sample of 120 participants (52 from Qualtrics, 49 from institution A, and 19 from institution B) (see Table 2 for additional demographic information).

Table 2. Demographic Information for Examined Participants
Highest degree completed (%)
Master’s 39.2
Doctorate 60.8
Sex (%)
Female 38.3
Male 60.0
Other or missing 1.7
Age (%)
Under 30 5.8
31–35 22.5
36–40 14.2
41–45 5.0
46–50 7.5
51–55 10.8
56–60 7.5
Over 60 8.3
Field (%)
Math/computer science 13.3
Natural and physical sciences 23.3
Social sciences 20.8
Humanities 25.0
Professional or applied fields 5.8
Arts 8.3
Other 3.3
Race (%)
Asian 5.0
Asian American 1.7
Black or African American 4.2
Latino/a or Hispanic 6.7
Middle Eastern 0.8
White 76.7
Other, prefer not to respond, or missing 5.0
Type of institution (%)
Tribal college 0.8
Special focus institution 0.8
Associate’s degree granting college 10.8
Public baccalaureate college or university 6.7
Private baccalaureate college or university 59.2
Public master’s university 5.8
Private master’s university 3.3
Public doctorate university 8.3
Private doctorate university 2.5
Not classified 1.7
Type of appointment (%)
Tenure-track faculty 59.2
Non-tenure-track faculty (full time) 23.3
Non-tenure-track faculty (part time) 13.3
Other or missing 3.2

The participants from institution A were recruited by a mass email sent to all faculty at the school. The participants at institution B were invited to participate in this survey by a faculty member at the institution who serves as the NSF grant coordinator. The faculty from Qualtrics were recruited from a variety of institution types across the country with the intent of equally representing the different academic fields included in the current study. 

Results

Subscale and Scale Psychometric Properties and Descriptive Statistics

Overall, as we report in this section, the Faculty Appreciation of Pedagogy Scale is a valid and reliable measure of its underlying construct, the appreciation of pedagogy. Each subscale category of the questionnaire meets standards for internal reliability, with Cronbach’s alpha values ranging from acceptable (.68) to strong (.84) (Table 3). (Note that in our sample, the first three participants were part of a pilot group that completed the behavioral frequency, enjoyment, and value items using a 1–5 response range. Because all other participants completed these items using a 1–3 range, the first three participants were removed from analyses using these items.) Participant responses to most of the subscales were normally distributed, although some exhibited negative skew, such that the bulk of the responses fell on the high end of the subscale; no extreme ceiling effects emerged. More specifically, the participants’ response ranges on each of the subscales nearly reach the theoretical ranges of the subscales, and mean values are generally high (from about 4.1 to 4.7 on the 1–5 subscales and 2.2 to 2.4 on the 1–3 subscales), but values in these ranges are common on scales with socially desirable responses.
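For readers replicating these checks, the sketch below computes the descriptive statistics reported in Table 3 along with a skewness estimate (negative values indicate responses piled toward the high end of the range). It assumes one subscale’s participant scores are available as a NumPy array; the function name is ours and purely illustrative.

    import numpy as np
    from scipy import stats

    def describe_subscale(scores: np.ndarray) -> dict:
        """Descriptives for one subscale (cf. Table 3) plus a skewness check."""
        return {
            "min": float(scores.min()),
            "max": float(scores.max()),
            "mean": round(float(scores.mean()), 2),
            "sd": round(float(scores.std(ddof=1)), 2),
            "skew": round(float(stats.skew(scores)), 2),  # negative = high-end pile-up
        }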

Table 3. Descriptive Statistics of the Faculty Appreciation of Pedagogy Scale and Subscales
Subscale or questionnaire Theoretical range Response range M SD Number of items Cronbach’s α*
Awareness 1–5 1.00–5.00 4.27 0.64 5 .80
Integration 1–5 1.00–5.00 4.28 0.61 2 .68
Emotions 1–5 2.00–5.00 4.14 0.73 4 .84
Beliefs 1–5 2.25–5.00 4.66 0.47 4 .72
Behavior-Frequency 1–3 1.20–3.00 2.17 0.31 10 .75
Behavior-Enjoyment 1–3 1.20–3.00 2.22 0.35 10 .79
Behavior-Value 1–3 1.60–3.00 2.38 0.34 10 .79
Faculty Appreciation of Pedagogy Scale (z-score) n/a -2.92–1.88 0.00 0.82 45 .90
*Note: Cronbach’s alpha (α) is a measure of internal consistency or reliability used to determine the interrelatedness of a set of items. Conventionally, Cronbach’s α values above .70 are considered acceptable, although questionnaires with fewer items make high Cronbach’s α values difficult to achieve (Kline, 2000).
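For those computing this statistic themselves, a minimal sketch of Cronbach’s alpha follows, assuming item responses are arranged in a respondents-by-items NumPy array; this is the standard formula, not code from our analysis files.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) array of scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item across respondents
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)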

Convergent and Discriminant Validity

We validated the Faculty Appreciation of Pedagogy Scale by correlating the instrument with variables that should, and should not, be associated with it. Presumably, appreciation of pedagogy should be correlated with teaching identity and confidence in one’s ability to make choices in one’s courses but not with the actual control one has over making choices in one’s courses, which in most cases is strongly influenced by factors beyond the faculty member’s control. For both the subscales and the overall Faculty Appreciation of Pedagogy Scale, participants who reported greater appreciation of pedagogy also reported a stronger identity associated with teaching and greater confidence in their ability to make choices in their courses (Table 4). This provides evidence of convergent validity. Providing evidence of discriminant validity, the overall scale and its subscales were uncorrelated with participants’ levels of control over making choices in their courses, a variable that is likely due to factors external to the individual faculty member (e.g., program requirements, department standards).

Table 4. Correlates of the Faculty Appreciation of Pedagogy Scale and Its Subscales With Teaching Identity, Confidence in Ability to Make Choices in Courses, and Control Over Choices in Courses
Subscale or questionnaire Teaching identity Confidence in ability to make choices in courses Control over making choices in courses
Awareness .14 .25** .11
Integration -.00 .14 .03
Emotion .36*** .41*** -.07
Beliefs .18+ .36*** .07
Behavior-Frequency .12 .28** -.06
Behavior-Enjoyment .31** .29** -.08
Behavior-Value .23* .20* -.09
Faculty Appreciation of Pedagogy Scale .33*** .45*** -.03
*** p < .001, ** p < .01, * p < .05, + p < .10
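The values in Table 4 are bivariate correlations. As an illustration of how such a validity check might be run, the sketch below correlates a scale score with each correlate using Pearson’s r; it assumes scores sit in a pandas DataFrame, and all column names are hypothetical.

    import pandas as pd
    from scipy.stats import pearsonr

    def validity_correlations(df: pd.DataFrame, scale_col: str,
                              correlates: list) -> pd.DataFrame:
        """Pearson r and p for a scale against each validity correlate."""
        rows = []
        for col in correlates:
            pair = df[[scale_col, col]].dropna()  # pairwise deletion of missing data
            r, p = pearsonr(pair[scale_col], pair[col])
            rows.append({"correlate": col, "r": round(r, 2), "p": round(p, 3)})
        return pd.DataFrame(rows)

    # Example with hypothetical column names:
    # validity_correlations(df, "appreciation_total",
    #                       ["teaching_identity", "confidence", "control"])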

Predictive Capability

For a newly developed questionnaire to have utility, it should serve as a criterion or outcome of meaningful predictive factors. To provide an example, if an educational developer engaged in programming that was designed to increase attendees’ levels of appreciation of pedagogy, then the Faculty Appreciation of Pedagogy Scale should reflect the influence of such programming. Such an outcome could be examined by comparing institutions that utilize different types of educational development programs or by comparing appreciation of pedagogy before and after a program or series of programs (a pretest-posttest design).

To demonstrate this utility, we examined differences in appreciation of pedagogy based on the source of data collected (i.e., institution A, institution B, Qualtrics sampling). Though a comprehensive report of these statistical tests is beyond the scope and intention of this article, we provide a few examples of how the Faculty Appreciation of Pedagogy Scale and its subscales can be used to predict differences in appreciation, presumably based on educational development background and experiences.

Figure 1. Difference in Awareness Subscale Based on Data Source. ** p < .01, + p < .10

The three different sources of data demonstrated differences in awareness, F(2,117) = 4.78, p = .010, ηp² = .077. Participants from institution A reported lower levels of awareness than did participants from institution B, Fisher’s least significant difference (LSD) p = .003 (Figure 1). Participants from institution B also had marginally higher awareness than the Qualtrics sampling participants, Fisher’s LSD p = .066.

The three different sources of data also demonstrated differences in beliefs, F(2,117) = 13.75, p < .001, ηp² = .190. Qualtrics sampling participants were lower in beliefs than were institution A participants, Fisher’s LSD p < .001, and institution B participants, Fisher’s LSD p = .001 (Figure 2). The latter two were not different from each other, Fisher’s LSD p = .698. However, the three different sources of data did not demonstrate differences in integration, F(2,117) = 1.37, p = .258, ηp² = .023, or emotion, F(2,117) = 0.57, p = .570, ηp² = .010.

Figure 2. Difference in Beliefs Subscale Based on Data Source. *** p < .001

The three different sources of data demonstrated differences in behavioral frequency, F(2,114) = 9.19, p < .001, ηp² = .139. Institution A participants were lower in behavioral frequency than were Qualtrics sampling participants, Fisher’s LSD p < .001, and institution B participants, Fisher’s LSD p = .004. The latter two were not different from each other, Fisher’s LSD p = .940 (Figure 3). The tests for enjoyment, F(2,114) = 5.18, p = .007, ηp² = .083, and value, F(2,114) = 4.99, p = .008, ηp² = .080, also revealed differences among the three sources of data and demonstrated the same pattern of results as behavioral frequency. Finally, the three different sources of data demonstrated differences in the overall Faculty Appreciation of Pedagogy Scale, F(2,114) = 4.78, p = .010, ηp² = .077. Institution A participants did not differ in overall appreciation from Qualtrics sampling participants, Fisher’s LSD p = .112, but they were significantly lower than institution B participants, Fisher’s LSD p = .003. Institution B participants were marginally higher in overall appreciation than the Qualtrics sampling participants, Fisher’s LSD p = .066 (Figure 4).
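These comparisons are one-way ANOVAs followed by unadjusted pairwise tests that use the pooled error term (Fisher’s LSD). A minimal sketch under that reading follows; the group labels and variable names are illustrative, not drawn from our data files.

    import numpy as np
    from scipy import stats

    def fisher_lsd_p(groups: dict, g1: str, g2: str) -> float:
        """Two-sided LSD p-value for one pair, using the pooled ANOVA error term."""
        samples = list(groups.values())
        n_total = sum(len(g) for g in samples)
        k = len(samples)
        # Mean square error: within-group sum of squares over its degrees of freedom.
        sse = sum(((g - g.mean()) ** 2).sum() for g in samples)
        mse = sse / (n_total - k)
        a, b = groups[g1], groups[g2]
        se = np.sqrt(mse * (1.0 / len(a) + 1.0 / len(b)))
        t = (a.mean() - b.mean()) / se
        return 2 * stats.t.sf(abs(t), df=n_total - k)

    # Omnibus test, then a pairwise follow-up (hypothetical labels):
    # groups = {"inst_A": a, "inst_B": b, "qualtrics": q}
    # F, p = stats.f_oneway(*groups.values())
    # p_ab = fisher_lsd_p(groups, "inst_A", "inst_B")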

Figure 3. Difference in Behavioral Frequency Based on Data Source. *** p < .001, ** p < .01
Figure 4. Difference in Faculty Appreciation of Pedagogy Scale Based on Data Source (z-scores). ** p < .01, + p < .10

Discussion

In this manuscript we describe the development of the Faculty Appreciation of Pedagogy Scale and its psychometric properties. We also demonstrate its convergent and discriminant validity with other constructs and its capability to serve as an outcome measure of educational development experiences as faculty make progress toward appreciating pedagogy. The subscales and the overall Faculty Appreciation of Pedagogy Scale demonstrate strong internal reliability and a normal distribution with slight negative skew. As predicted, the scale and its subscales demonstrated convergent validity with related constructs (i.e., teaching identity, confidence in ability to make choices in courses) and discriminant validity with constructs presumably unrelated to appreciation of pedagogy (i.e., actual control over choices in courses). Lastly, the scale and its subscales demonstrated predictive capability as an outcome, likely reflecting previous educational development experiences. That is, appreciation of pedagogy varied based on participants’ institutional background.

We developed the Faculty Appreciation of Pedagogy Scale as a tool to be used for research and programming needs at centers for teaching and learning. We acknowledge that a 45-item measure might be too long to be implemented in some settings. With this in mind, it is worth noting that in the findings reported above, the individual subscales did not always mirror the pattern of the overall scale. This is typical of questionnaires with multiple subscales, and we encourage others to carefully consider which components of the tool they deem most relevant for their work. Any of the five subscale categories we developed for the scale—awareness, integration, emotions, beliefs, and behaviors—can be used independently to gain insights on the ways educational development experiences impact faculty. For example, to examine the outcome of a faculty learning community designed to influence people’s emotions associated with appreciation of pedagogy, educational developers can integrate the emotion subscale items into their evaluation plan. Additionally, educational developers interested in behaviors related to the appreciation of pedagogy could integrate the 10 behavioral stems into a pre-post design to reveal changes in the frequency, enjoyment, and value of these behaviors.

The Faculty Appreciation of Pedagogy Scale can also help educational developers and administrators gain insights on the academic culture of their faculty. Pedagogy sits at the center of the teaching culture, and how faculty view pedagogy impacts more than just their students. For example, faculty who enhance their appreciation of pedagogy may impact the institutional culture when they engage in classroom observations of their peers, review tenure and promotion dossiers, and mentor new faculty. Rather than seeing pedagogy and pedagogical innovations as ways of improving their own teaching, faculty can view pedagogy through a more informed lens, a lens that helps them better understand the ways that pedagogies can support student learning in a variety of learning environments, not just their own courses. Results from the items in the awareness subscale can help inform educational developers in their efforts to develop and disseminate instructional resources and measure the impact of these efforts. 

To authentically transform teaching practice, it is crucial to support the development of a reflective stance that appreciates teaching as a complex process and that challenges instructors to avoid simply rationalizing or defending their current practice (Loughran, 2002) or bypassing the opportunity to link thinking and action (Karm, 2010). The Faculty Appreciation of Pedagogy Scale begins to unpack the construct of faculty learning as an essential outcome for educational developers. While enhancing the appreciation of pedagogy may be necessary for faculty to make substantive changes in their teaching, it is likely one of several aspects of learning about teaching that support lasting pedagogical change. Learning is a complex, iterative, and reflective process, whether students are learning new disciplinary content and skills or faculty are learning to teach more effectively. Thus, the assessment of educational development initiatives must seek not only to measure summative outputs of educational development programs but also to measure the impact of the processes that occur during educational development, such as valuable shifts in faculty thinking, beliefs, and attitudes about teaching and learning (Chalmers & Gardiner, 2015). This shift in thinking is often described as critical reflection: a deep analysis of one’s identity, values, beliefs, and conceptions and how these factors influence subsequent changes in teaching behavior (Brookfield, 1995; Korthagen, 2016; Korthagen & Vasalos, 2005; Loughran, 2002; McAlpine et al., 2006).

One inconvenient truth in educational development is that a traditional competency-based model, focused primarily on pedagogical theory and behavior, fails to acknowledge the personhood of the teacher—including the cognitive as well as the emotional and motivational dimensions of thinking that are necessary for learning about teaching (Kelchtermans & Vandenberghe, 1994; Korthagen, 2017). McAlpine et al. (2006) proposed that developers attend not only to the more cognitive zones of thinking but also to the more abstract and hidden zones, which they term the conceptual zone and the strategic zone. The conceptual zone represents values (personal expressions of the worth of, and commitment to, teaching and learning), whereas the strategic zone involves the consideration of choices for teaching and may include a back-and-forth reflection on options (McAlpine et al., 2006). This joining of cognitive and affective processes describes constructs worthy of closer examination (cf. Ajzen, 2002), such as appreciation.

Overall, the Faculty Appreciation of Pedagogy Scale provides a psychometrically sound assessment tool for educational developers that can be used to evidence the value of programs, consultations, and other educational development experiences. Specifically, the Faculty Appreciation of Pedagogy Scale presents educational developers with insights on the aspects of the appreciation of pedagogy—awareness, knowledge integration, emotions, beliefs, and behaviors—that align with the mission of their work, which can help guide strategic planning efforts, program development, and analysis of the teaching climate at their institutions. More importantly, the scale examines one of the essential ingredients in the quality of higher education—the faculty member (Beach et al., 2016). Ultimately, the Faculty Appreciation of Pedagogy Scale represents the first step in the development of a set of psychometrically sound survey constructs that measure the many aspects of teaching embraced by the Faculty Learning Outcome Assessment Framework.

Acknowledgments

This project was supported in part by three funding sources: Colby College’s Center for Teaching & Learning, start-up research funds from the Office of the Dean of the College of Arts and Sciences at Sewanee: The University of the South, and a National Science Foundation grant awarded to Eastern Mennonite University (Grant No. 1611713).

Biographies 

Carol A. Hurney is the Associate Provost of Faculty Development and Founding Director of the Center for Teaching and Learning at Colby College. She earned her PhD in Biology at the University of Virginia. She taught biology for 19 years at James Madison University, where she also directed a comprehensive faculty development center. Her scholarly interests include learner-centered teaching, active learning, and measuring the impact of educational development. 

Jordan D. Troisi is the Senior Associate Director of the Center for Teaching and Learning at Colby College. He earned his PhD in Social Psychology from the University at Buffalo (SUNY) and before arriving at Colby College, he taught as a professor of psychology for 9 years at Widener University and Sewanee: The University of the South. His disciplinary background in psychology has led to numerous advancements in scholarly work on pedagogy and faculty development, which have been presented and published in numerous venues.

Lori H. Leaman is a faculty member in the College of Education at James Madison University. Dr. Leaman earned her undergraduate degree in Special Education and an EdD in Higher Education Leadership. Her scholarly interests include instruction for diverse learning and cultural needs, faculty efficacy, and teaching and learning as a sociocultural process of identity development.  

References

  • Adler, M. G., & Fagley, N. S. (2005). Appreciation: Individual differences in finding value and meaning as a unique predictor of subjective well-being. Journal of Personality, 73(1), 79–114. https://doi.org/10.1111/j.1467-6494.2004.00305.x
  • Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. https://doi.org/10.1016/0749-5978(91)90020-T
  • Ajzen, I. (2002). Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior. Journal of Applied Social Psychology, 32(4), 665–683. https://doi.org/10.1111/j.1559-1816.2002.tb00236.x
  • Akkerman, S. F., & Meijer, P. C. (2011). A dialogical approach to conceptualizing teacher identity. Teaching and Teacher Education, 27(2), 308–319. https://doi.org/10.1016/j.tate.2010.08.013
  • Beach, A. L., Sorcinelli, M. D., Austin, A. E., & Rivard, J. K. (2016). Faculty development in the age of evidence: Current practices, future imperatives. Stylus Publishing.
  • Biggs, J. (2014). Constructive alignment in university teaching. HERDSA Review of Higher Education, 1, 5–22.
  • Biggs, J., & Tang, C. (2007). Teaching for quality learning at university (3rd ed.). Open University Press.
  • Booke, J., & Willment, J.-A. (2018). Teaching assumptions within a university faculty development program. Transformative Dialogues: Teaching and Learning Journal, 11(1), 1–18.
  • Borrego, M., Cutler, S., Prince, M., Henderson, C., & Froyd, J. E. (2013). Fidelity of implementation of research-based instructional strategies (RBIS) in engineering science courses. Journal of Engineering Education, 102(3), 394–425. https://doi.org/10.1002/jee.20020
  • Brookfield, S. D. (1995). Becoming a critically reflective teacher. Jossey-Bass.
  • Chalmers, D., Cummings, R., Elliott, S., Stoney, S., Tucker, B., Wicking, R., & Jorre de St. Jorre, T. (2014). Australian University Teaching Criteria and Standards Project. Australian Government, Office for Learning and Teaching. https://ltr.edu.au/resources/SP12_2335_Cummings_Report_2014.pdf
  • Chalmers, D., & Gardiner, D. (2015). An evaluation framework for identifying the effectiveness and impact of academic teacher development programmes. Studies in Educational Evaluation, 46, 81–91. https://doi.org/10.1016/j.stueduc.2015.02.002
  • Chalmers, D., Stoney, S., Goody, A., Goerke, V., & Gardiner, D. (2012). Identification and implementation of indicators and measures of effectiveness of teaching preparation programs for academics in higher education: Final report, appendices. https://ltr.edu.au/resources/SP10_1840_Chalmers_appendices_2012_0.pdf
  • Chism, N. V. N., & Szabó, B. (1997). How faculty development programs evaluate their services. Journal of Staff, Program & Organization Development, 15(2), 55–62.
  • Cilliers, F. J., & Herman, N. (2010). Impact of an educational development programme on teaching practice of academics at a research-intensive university. International Journal for Academic Development, 15(3), 253–267. https://doi.org/10.1080/1360144X.2010.497698
  • Condon, W., Iverson, E. R., Manduca, C. A., Rutz, C., & Willett, G. (2016). Faculty development and student learning: Assessing the connections. Indiana University Press.
  • Council of Chief State School Officers. (2013). InTASC model core teaching standards and learning progressions for teachers 1.0: A resource for ongoing teacher development. Interstate Teacher Assessment and Support Consortium. https://ccsso.org/sites/default/files/2017-12/2013_INTASC_Learning_Progressions_for_Teachers.pdf
  • Erwin, T. D. (1991). Assessing student learning and development: A guide to the principles, goals, and methods of determining college outcomes. Jossey-Bass.
  • Fagley, N. S. (2016). The construct of appreciation: It is so much more than gratitude. In D. Carr (Ed.), Perspectives on gratitude: An interdisciplinary approach (pp. 70–84). Routledge.
  • Fernández-Balboa, J.-M., & Stiehl, J. (1995). The generic nature of pedagogical content knowledge among college professors. Teaching and Teacher Education, 11(3), 293–306. https://doi.org/10.1016/0742-051X(94)00030-A
  • Grossman, P. L. (1990). The making of a teacher: Teacher knowledge and teacher education. Teachers College Press.
  • Hines, S. R. (2009). Investigating faculty development program assessment practices: What’s being done and how can it be improved? The Journal of Faculty Development, 23(3), 5–19.
  • Hines, S. R. (2017). Evaluating centers for teaching and learning: A field-tested model. To Improve the Academy, 36(2), 89–100. https://doi.org/10.1002/tia2.20058
  • Hurney, C. A., Brantmeier, E. J., Good, M. R., Harrison, D., & Meixner, C. (2016). The faculty learning outcome assessment framework. The Journal of Faculty Development, 30(2), 69–77.
  • Hurtado, S., Eagan, K., Pryor, J. H., Whang, H., & Tran, S. (2012). Undergraduate teaching faculty: The 2010–2011 HERI Faculty Survey. University of California, Los Angeles, Higher Education Research Institute. https://www.heri.ucla.edu/monographs/HERI-FAC2011-Monograph.pdf
  • Karm, M. (2010). Reflection tasks in pedagogical training courses. International Journal for Academic Development, 15(3), 203–214. https://doi.org/10.1080/1360144X.2010.497681
  • Kelchtermans, G., & Vandenberghe, R. (1994). Teachers’ professional development: A biographical perspective. Journal of Curriculum Studies, 26(1), 45–62.
  • Kline, P. (2000). The handbook of psychological testing (2nd ed.). Routledge.
  • Korthagen, F. A. J. (2016). Pedagogy of teacher education. In J. Loughran & M. L. Hamilton (Eds.), International handbook of teacher education (pp. 311–346). Springer Science & Business Media.
  • Korthagen, F. (2017). Inconvenient truths about teacher learning: Towards professional development 3.0. Teachers and Teaching, 23(4), 387–405. https://doi.org/10.1080/13540602.2016.1211523
  • Korthagen, F., & Vasalos, A. (2005). Levels in reflection: Core reflection as a means to enhance professional growth. Teachers and Teaching, 11(1), 47–71. https://doi.org/10.1080/1354060042000337093
  • Kucsera, J. V., & Svinicki, M. (2010). Rigorous evaluations of faculty development programs. The Journal of Faculty Development, 24(2), 5–18.
  • Loughran, J. J. (2002). Effective reflective practice: In search of meaning in learning about teaching. Journal of Teacher Education, 53(1), 33–43. https://doi.org/10.1177/0022487102053001004
  • McAlpine, L., Weston, C., Timmermans, J., Berthiaume, D., & Fairbank-Roch, G. (2006). Zones: Reconceptualizing teacher thinking in relation to action. Studies in Higher Education, 31(5), 601–615. https://doi.org/10.1080/03075070600923426
  • Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14.
  • Trigwell, K., & Prosser, M. (1996). Changing approaches to teaching: A relational perspective. Studies in Higher Education, 21(3), 275–284. https://doi.org/10.1080/03075079612331381211
  • Trigwell, K., & Prosser, M. (2004). Development and use of the approaches to teaching inventory. Educational Psychology Review, 16, 409–424. https://doi.org/10.1007/s10648-004-0007-9
  • Wang, X., Su, Y., Cheung, S., Wong, E., & Kwong, T. (2013). An exploration of Biggs’ constructive alignment in course design and its impact on students’ learning approaches. Assessment & Evaluation in Higher Education, 38(4), 477–491. https://doi.org/10.1080/02602938.2012.658018
  • Wiggins, G., & McTighe, J. (2005). Understanding by design. Association for Supervision and Curriculum Development.
  • Williams, C. T., Walter, E. M., Henderson, C., & Beach, A. L. (2015). Describing undergraduate STEM teaching practices: A comparison of instructor self-report instruments. International Journal of STEM Education, 2(1), 18. https://doi.org/10.1186/s40594-015-0031-y
  • Wright, M., Horii, C. V., Felten, P., Sorcinelli, M. D., & Kaplan, M. (2018). Faculty development improves teaching and learning. POD Speaks, 2, 1–5. https://podnetwork.org/content/uploads/POD-Speaks-Issue-2_Jan2018-1.pdf