Abstract

As assessment, already well established in higher education, gains attention in the field of educational development (ED), we ask: What does it mean to practice assessment from an ED perspective? In response, we examine four principles that are central to this endeavor: (a) bridging work across communities and multiple institutional levels; (b) collective, collaborative ownership; (c) action-oriented focus on student-centered learning; and (d) intentionality about inclusiveness to recognize diverse experiences of participants and stakeholders. We apply these principles to four examples of assessment practice at different institutions and offer a rationale for why this lens has utility for the improvement of teaching and learning in higher education.

Keywords: assessment, organizational development, student learning outcomes

Assessment is an established and growing field in higher education (Fuller, Skidmore, Bustamante, & Holzweiss, 2016). Although some centers for teaching and learning (CTLs) have been engaged in the enterprise for years, assessment has traditionally been seen as the purview of centralized assessment and institutional research offices. However, according to a recent survey of educational developers, assessment of student learning outcomes is one of the top five areas of programming in CTLs (Beach, Sorcinelli, Austin, & Rivard, 2016). Furthermore, when asked to identify needed directions for the field of educational development (ED), developers identified assessment as the most frequent priority and also named it as a top area toward which the field would move in the next decade. Likely explanations for this movement include increasing calls for accountability, more stringent accreditation standards, the growing involvement of CTLs in federal grants requiring educational evaluation plans, and an increasing emphasis on evidence-based practice in teaching and learning (Ewell, 2009; Handelsman et al., 2004; Lieberman, 2010). All of these trends encourage us to consider a shift toward approaching assessment from an ED perspective.

In turn, ED is a field that includes instructional development; faculty, graduate student, and postdoc professional development; and organizational development in higher education (POD Network, 2016). Developers may have a primary job description that focuses on one, some, or all of these areas, but we share a common commitment to “helping colleges and universities function effectively as teaching and learning communities” (Felten, Kalish, Pingree, & Plank, 2007, p. 93). In this article, we use ED as an inclusive term that encompasses work in all these areas and, we suggest, encompasses the work of assessment as well.

What does it mean to approach assessment from an “educational development perspective”? The literature on assessment in higher education provides extensive, helpful guidance on assessment methods, both qualitative and quantitative, direct and indirect, formative and summative (e.g., Maki, 2004; Suskie, 2009). We propose that approaching assessment from an ED perspective does not necessarily define which of these many methods we use; rather, an ED perspective guides the way we engage our institutions in using assessment to examine and improve our teaching and learning practices. Others have focused on the distinction between assessment for accountability—“to demonstrate to policymakers and the public that the enterprise they fund is effective and worth their continuing support”—and assessment for improvement, or the “use [of] assessment information to enhance teaching and learning” (Ewell, 2008, p. 9). While assessment for accountability is summative, enacted primarily for compliance purposes, assessment for improvement is formative, designed to collect evidence that can be used for course and curricular enhancement. One might argue that assessment for improvement is more resonant with the ED perspective because of the field’s focus on helping higher education institutions function effectively. However, as Ewell notes, these are paradigms, and “almost no existing assessment approach fully conforms to either of them” (p. 11). Indeed, a look at recent assessment job postings on the POD Network discussion lists suggests that some are aligned with accountability efforts, while others are more resonant with the improvement perspective (e.g., one position would “provide needed support for … ongoing work on programmatic accreditation” [POD Network listserv, 2015, November 24], while another would strengthen a center’s “culture of evidence-based teaching and our use of assessment of student learning outcomes to improve program development” [POD Network listserv, 2014, September 8]). Thus, approaching assessment from an ED perspective will mean incorporating a broad spectrum of professional paradigms.

As authors, we bring experience from multiple institutional and organizational perspectives, including a business school role fostering educational innovation, an Associate Vice Provost for Academic Affairs assisting with accreditation and compliance, an Office of Assessment in the Provost’s Office, a Director of Assessment housed in a teaching center with the goal of fostering action-based curricular improvement, and a CTL Director. Despite these varying perspectives, over a series of conversations about our work, we have identified common aspects of our assessment work that are not yet captured by existing scholarship.

In this article, we first define assessment from an ED perspective, rooting core principles in the scholarship of ED and assessment. Next, we present four examples to illustrate key dimensions of this perspective. Finally, we close with recommendations for future research and a rationale for why this lens has broad utility for the improvement of teaching and learning in higher education.

Definition and Core Principles

Functionally, scholarship suggests that there is overlap between ED and assessment. For example, participation in assessment activities (such as curriculum design and portfolio reviews) has an important effect on instructional change and student learning—even though these may not traditionally be seen as “faculty development” (Condon, Iverson, Manduca, Rutz, & Willett, 2016). Indeed, Hutchings (2010, p. 14) argues that “assessment should be central to professional development.” Theoretically, there are also key synergies, identified by the foundational components of these perspectives. Scholars in ED and assessment clearly see their work in similar terms, with many common purposes: institutional improvement, community building and meaning making, and enhancement of teaching and learning (Table 1). We find four principles are central to the merging of these activities and serve as a basis for defining assessment from an ED perspective: (a) bridging work across communities and multiple institutional levels; (b) collective, collaborative ownership; (c) action-oriented focus on student-centered learning; and (d) intentionality about inclusiveness to recognize diverse experiences of participants and stakeholders.

Table 1. Key Definitions
| | Educational Development | Assessment |
| --- | --- | --- |
| Institutional improvement | A “key lever for ensuring institutional quality and supporting institutional change” (Sorcinelli, Austin, Eddy, & Beach, 2005, p. xi) | “inquiry into student learning as a systemic and systematic core process of institutional learning – to improve educational practices and, thus, student learning” (Maki, 2004, p. 2) |
| Community building and meaning making | “helping colleges and universities function effectively as teaching and learning communities” (Felten et al., 2007, p. 93) | A “process of inquiry” (p. 19) that focuses on faculty’s questions of interest to gather data, engage stakeholders in meaning making, and improve teaching and learning (Jonson, Guetterman, & Thompson, 2014) |
| Teaching and learning focus | Actions “aimed at enhancing teaching” (Amundsen & Wilson, 2012, p. 90) | “an ongoing process aimed at understanding and improving student learning” (Angelo, 1995, p. 149) |

Bridging Work across Communities and Multiple Institutional Levels

Increasingly, scholarship points to the “connectivity” work being done by educational developers (Little, 2015; Sorcinelli et al., 2005), with work across multiple organizational tiers (Schroeder, 2010). Similarly, effective assessment often involves work across multiple levels, including courses, programs, and the institution (Miller & Leskes, 2005), and successful assessment involves a “collective commitment,” including college or university leaders, boards, faculty, staff, students, alumni, and employers (Maki, 2004, p. 8). Likewise, the “integrating function” of assessment—or the need to bring together disciplinary and institutional perspectives—is critical for understanding complex, multidimensional learning outcomes, such as critical thinking and civic engagement (Mentkowski & Loacker, 2002). As Wabash National Study researchers Blaich and Wise (2011, p. 12) note, “Done correctly, using assessment to improve student learning is an entirely public process in which people with different levels of experience and different intellectual backgrounds must work together toward a common end.” Practitioners can also build bridges within institutions, spotlighting a department’s unique contributions to student learning and assessment practices that are often otherwise unknown outside the unit.

Specific assessment practices that embody a bridging perspective include curriculum mapping, a visual representation of an academic plan’s objectives, course sequences, and the levels of mastery expected at each stage (Maki, 2004). Others have suggested that participation in curriculum mapping not only spans organizational levels (e.g., course and program levels) but also enhances collaboration and collegiality among participating faculty (Uchiyama & Radin, 2009). Another common practice is the application of rubrics to student papers or syllabi across multiple courses to assess institutional-level student learning outcomes or to provide evidence of the use of high-impact practices (Condon et al., 2016; Stanny, Gonzalez, & McGowan, 2015). Finally, a third example involves bringing departments and faculty together to share examples of assessment approaches, such as campus-wide colloquia (Skinner & Prager, 2015) and poster fairs that showcase public findings that can be used to promote instructional and curricular improvement (Wright, Finelli, Meizlish, & Bergom, 2011).
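
To make the structure of a curriculum map concrete, the sketch below represents one as a simple data structure and then inverts it to show coverage by outcome. This is an illustrative sketch only; the course names, outcome labels, and mastery codes are hypothetical rather than drawn from any institution discussed here.

```python
# A curriculum map as a dictionary: each course is tagged with the program
# outcomes it addresses and the expected mastery level at that point in the
# sequence ("I" = introduced, "R" = reinforced, "M" = mastered).
# All names and labels are hypothetical.
curriculum_map = {
    "PROG 101": {"critical thinking": "I", "civic engagement": "I"},
    "PROG 250": {"critical thinking": "R"},
    "PROG 490": {"critical thinking": "M", "civic engagement": "R"},
}

def outcome_coverage(cmap):
    """Invert the map: for each outcome, list the courses that address it."""
    coverage = {}
    for course, outcomes in cmap.items():
        for outcome, level in outcomes.items():
            coverage.setdefault(outcome, []).append((course, level))
    return coverage

# Viewing the map by outcome makes gaps visible, e.g., an outcome that is
# introduced early but never reaches the "M" level in any course.
for outcome, courses in sorted(outcome_coverage(curriculum_map).items()):
    print(f"{outcome}: {courses}")
```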

Collective, Collaborative Ownership

While the bridging principle involves outreach to educational communities, the principle of collective and collaborative ownership entails dispositions and behaviors guided by those communities’ goals and disciplinary standards of evidence. Studies of how best to cultivate faculty and administrator support suggest that academic leaders must believe that assessment activities stem from an internally driven need (as opposed to external pressures), that they can be personally involved in their design and implementation, and that the activities have the potential for meaningful change (Welsh & Metcalf, 2003b). In other words, assessment initiatives should stem from faculty members’ own “burning questions,” involve collaboration with educational developers, and have implications for teaching and learning. Likewise, faculty need to codefine the questions in order to generate meaningful answers and offer “real results arising from instruction and efforts to improve” (Welsh & Metcalf, 2003b, p. 41).

Similar to an instructional development framework, consulting about assessment from an ED perspective involves a conversational process that “must be client-centered and collaborative if it is to be effective in producing behavior change” (Brinko, 2012, p. 6; see also Brookfield, 1993; Nyquist & Wulff, 2001). In assessment, we similarly seek to facilitate a process that leads to self-directed development and evidence-based change, albeit with organizational clients (e.g., departments, colleges, or entire institutions). Furthermore, disciplines have different epistemological standards; borrowing from the Scholarship of Teaching and Learning, ED assessment professionals must “address field-specific issues if they are going to be heard in [other] disciplines, and they must speak in a language that their colleagues understand” (Huber & Morreale, 2002, p. 2).

Specific assessment practices that embody a collective ownership perspective include flexibility in framing conversations with faculty and academic units, driven by learning goals and the types of evidence that will best foster discussions about curricular change. Excellent spaces for faculty to shape the picture of how and what students learn and to make meaning of the resultant data include retreats, rubric development sessions, curriculum mapping and portfolio reviews, and meetings with faculty committees to define the purpose and scope of a project at the outset. Furthermore, even the language used to frame an assessment project can make a difference in its success. For example, Haugnes, Holmgren, and Springborg (2012) offer “translations” between the worlds of assessment and faculty artists, and approaching assessment as an integrative narrative may be a helpful way to contextualize a project in the humanities (Scobey, 2009).

Action-Oriented Focus on Student-Centered Learning

Nearly all of the definitions of ED and assessment outlined above point to a key outcome: enhancement of student learning. ED-focused assessment projects are action-oriented, with the objective of generating evidence that faculty and administrators can use to improve courses or curricula. Some of this work may be published, but most is designed to be shared in venues such as faculty retreats or department meetings, where key curricular decisions are made. This approach resonates with participatory action research (Lewin, 1997), collaborative research motivated by reform, which is a growing area in the ED field (Amundsen & Wilson, 2012; Cook, Wright, & O’Neal, 2007).

Much of the “closing the loop” literature indicates that acting on assessment is critical to the improvement of student learning outcomes (e.g., Banta & Blaich, 2011). This can be challenging for assessment professionals because scholarship suggests that persuading faculty to use new evidence to improve student learning is often difficult (Blaich & Wise, 2011; Handelsman et al., 2004; Tagg, 2012). However, education research also suggests that the act of assessment can itself contribute to student learning, such as by prompting reflection on key learning goals or even through a testing effect (Karpicke & Roediger, 2008; Mayhew, Pascarella, Trolian, & Selznick, 2015). Therefore, assessment from an ED approach is best conceptualized not only as a measurement of student learning outcomes but also as a contribution to them.

A specific approach that embodies this principle involves selecting assessment tools and measures that intentionally make transparent the key goals of a course, requirement, or initiative (Winkelmes, 2013). A second key practice is student input and involvement. Although there is a robust debate about the value of student self-reports for assessing learning gains (e.g., Pike, 2013; Porter, 2013), a student-centered assessment practice recognizes the value of student input into their learning experiences. For example, several researchers note the value of conversations with students about “what has made a difference” in their college experiences or what makes a class “hard” (Blaich & Wise, 2011; Chambliss & Takacs, 2014), as well as of prompting students to make meaning of their educational experiences (Barber, King, & Baxter Magolda, 2013). Some also note the importance of having students co-collect or co-analyze assessment data, with roles ranging from helping to define the project initially to presenting an assessment report to high-level administrators (Cook-Sather, Bovill, & Felten, 2014).

Intentionality About Inclusiveness to Recognize Diverse Experiences of Participants and Stakeholders

Assessment from an ED perspective strives to inform decisions that can help all learners, not just those who traditionally succeed. As such, assessment practitioners must be intentional about designing processes that are inclusive of all students and about ensuring that resultant changes are based on the voices heard, not on those silenced, ignored, or excluded. Because curricula belong to communities—of faculty and students, but also with implications for employers and others outside the institution—those communities need to be a part of the assessment process. This principle is guided by the concept of universal design (Thurlow, Johnstone, & Ketterlin-Geller, 2013) and by the POD Network values of inclusion, diverse perspectives, advocacy, and social justice (POD Network, 2013).

Specific practices to enhance diversity and inclusion include designing an assessment plan with “systematic efforts to capture participant voices” (Jacobson, 2015, p. 101), including diverse disciplinary perspectives. A second key tool involves the disaggregation of findings to move away from exclusive reliance on the representation of a “mean experience”; a brief sketch of this practice follows this paragraph. For example, some have examined how increasing course structure may differentially impact achievement for underrepresented minorities in STEM classes (Eddy & Hogan, 2014), while others examine how online discussions, compared to in-person conversations, may be more equitable for women and international students in engineering teams (Fowler, 2015). Finally, assessment systems can be framed by values of multiculturalism, diversity, and inclusion. Examples include the honoring of traditional and community knowledge in assessment practices (Michelson, 1997) and the use of assessment activities (e.g., student surveys and workshops to disseminate findings) to better serve culturally diverse student populations (Kerlin & Britz, 1994).
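
As a concrete illustration of disaggregation, the short sketch below compares an overall mean with subgroup means. The data, column names, and groupings are invented for demonstration and do not come from any study cited here.

```python
# Disaggregating assessment results instead of reporting only an overall
# ("mean experience") score. All data and column names are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "score":         [78, 85, 91, 67, 88, 72, 95, 81],
    "first_gen":     [True, False, False, True, False, True, False, True],
    "course_format": ["online", "in-person", "online", "in-person",
                      "online", "online", "in-person", "in-person"],
})

# The overall mean can mask differences between subgroups...
print("Overall mean:", results["score"].mean())

# ...which disaggregation makes visible. Reporting cell counts alongside
# means guards against over-interpreting very small subgroups.
print(results.groupby(["first_gen", "course_format"])["score"]
             .agg(["mean", "count"]))
```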

Examples

In this section, we examine four examples that offer concrete illustrations of the core principles of assessment from an ED perspective in practice. These examples also illustrate how the general principles might be adapted to a specific institutional context.

Example 1: Innovative Program Assessment at the University of Iowa

Faculty and students in Gender, Women’s, and Sexuality Studies (GWSS) share a strong sense of community and commitment to the program, but in response to a campus-wide initiative to encourage program-level assessment, a number of faculty expressed concerns. Many in the department were skeptical about the value of standard program-wide assessment measures because students pursue a variety of different emphases in their coursework, and opportunities to collect uniform data from students on diverse pathways through an interdisciplinary major would be difficult to identify. Others were concerned that easy-to-measure learning outcomes would be prioritized over outcomes that are more difficult to measure (e.g., civic engagement) or that assessment would be forced into a one-size-fits-all mold. Assessment, viewed as an externally imposed mandate to measure outcomes, looked more like an obstacle to program quality and student learning than a way to advance them.

Bridging and diversity

It would be difficult to characterize GWSS faculty concerns as an aversion to change or a lack of faculty “buy-in” to assessment. Rather, faculty considered a range of options for collecting assessment data, but they were not convinced that any single measure could adequately account for the diverse learning experiences and voices of their students. The university’s goal in its assessment initiative was to ensure that departments were engaging in systematic assessment of programs on an annual basis. While there was no expectation that departments take a standardized, strictly quantitative approach, there were also few opportunities for departments to see other options demonstrated. Consultation became the bridge for beginning to address these concerns.

Consultations with the campus assessment director (Jacobson) provided opportunities to explore a range of models for program-level assessment, including specific examples of approaches taken by other departments on campus that were similarly structured or that shared similar goals and commitments. Approaching assessment from an ED perspective in this example created opportunities to build bridges by allowing faculty in the department to voice their concerns, by demonstrating a need for the institution to communicate intentions more clearly, and by creating a forum for the department to consider a diverse array of practices that they might be able to adapt for their purposes.

Collective ownership

Consultation also created opportunities to explore ways in which faculty were already actively engaged in examining the quality of their program. They observed that students’ diverse, multidisciplinary pathways through the curriculum converged on a synthetic senior seminar, which included final poster presentations of students’ community-based research projects. The projects might represent a range of disciplinary frameworks and modes of inquiry, but faculty agreed that prior coursework and program experiences should prepare students to be successful in these final projects. The consultant helped faculty see that limitations observed in the presentations could credibly inform discussions about earlier curricular experiences.

Action-oriented focus

Following these consultations, faculty developed a plan for assessing program-level learning and using the findings to improve the curriculum. A primary agenda item at the final spring faculty meeting, held the day after the poster event, is now a discussion of student poster presentations as an indicator of program quality. This information is used to identify ways in which next year’s courses could better support student learning and better prepare students for their senior projects. Department chair Rachel Williams writes, “We have used our outcomes assessment discussion to change our curriculum based on feedback from faculty and students. The discussion also sparks great debates about pedagogy and content. These are spirited, and our colleagues learn and grow a great deal in the process. We actively discuss and help each other to be better teachers.”

An ED approach to assessment in this example provided a bridge for addressing department perceptions, clarifying institutional expectations, and identifying relevant practices from other departments. This approach also recognized the diversity among faculty perspectives and student pathways through the program and gave the department collective ownership of assessment practices. Rather than seeing assessment solely as compliance with an external mandate, faculty shared in devising a strategy that honored the unique emphases and structures of their program. Approaching assessment as a form of ED led to an action-oriented focus on improving the program for future students and, along the way, created new opportunities for faculty to learn from one another’s teaching practices. In this example, assessment has become an important ED activity for the department.

Example 2: Holistic Assessment Frameworks at the University of Wisconsin–Madison

The Wisconsin School of Business at the University of Wisconsin–Madison approaches assessment as a holistic and developmental process, aligned with the school’s mission and brand, “Together Forward.” Efforts to rethink the design, delivery, and assessment of the undergraduate business program were prompted by employer and alumni assessment data, as well as by the priorities of a new dean.

Bridging and collective ownership

The process began when the school community came together to embrace a common language for talking about a holistic student learning experience framed across five dimensions of learning, known as KDBIN™. This framework provides traditional curriculum mapping information about what students know and are able to do, but it also goes beyond by exploring other key questions: How do students learn about their being? How do they inspire themselves and others? How do they develop, interact, and relate with others in their network? These dimensions have proven to be a bridging framework, providing a shared language for exploring common themes across the school and building collective ownership while transcending each department’s curricular areas.

To build capacity and collective ownership for applying the framework, a small team of instructional designers and faculty development experts used “backward design” training workshops and individual consultations to engage all departments (chairs, faculty, and staff) in developing KDBIN learning outcomes for courses in every major and program. There is now a set of learning outcomes for all required courses and an easily accessible infrastructure that faculty and staff can reference to make informed, evidence-based curricular decisions. This foundation now shifts assessment to multiple levels in cross-disciplinary ways that were not previously feasible.

Diversity

The KDBIN framework and subsequent assessment efforts have evolved from the input and perspectives of a diverse array of faculty, staff, and internal and external stakeholders. A tangible example of inclusive assessment practice comes from the School’s undergraduate curriculum committee, which includes cross-disciplinary faculty, staff, and students. The committee developed a set of program-level learning outcomes that apply to all undergraduate students and can be used to inform discussions of overall alignment at the departmental level. One specific diversity-related learning outcome is the intention that all students will be able to “communicate effectively across diverse social and professional settings.” The inclusive process used to develop this learning outcome reflects our overall approach to assessing student learning about diversity and inclusion. That is, diversity issues are not relegated solely to the Diversity Affairs Office. Instead, specific plans for working toward the diversity and inclusion learning outcomes are being considered by the General Business course instructors, the cross-disciplinary undergraduate curriculum committee, admissions and precollege programs, academic and prebusiness advising, international programs, career services, and student life. This systematic process of inclusion in our assessment approach uses language that resonates with our business culture, framing its importance as a “business case for diversity” (Kaplan & Donovan, 2013) rather than through other framings, such as social justice or compliance. Simply put, we strive for inclusive processes as a core tenet of our daily business functions.

Action-oriented focus

Faculty identified Business Analytics as an opportunity to add coherence to the undergraduate curriculum and reinforce integrative learning experiences. A school-wide collaboration mapped more than 100 courses to assess the integrative nature of the curriculum across 13 Business Analytics content areas. This comprehensive undergraduate curriculum map shows faculty, students, and advisers where, how, and to what depth students learn, reinforce, apply, and build upon concepts from the introductory course sequence all the way through to senior-year capstone experiences. There is now a baseline for assessing the need for, and impact of, future curricular changes based on a more comprehensive picture of the undergraduate curriculum.

The four elements of the ED approach have been fundamental to the success of the efforts described above. A student-centered focus guided assessment efforts away from emphasizing teaching content and toward inspiring learning. Bridging and collective ownership built commitment and allowed for a broad inclusion of diverse perspectives, giving the efforts the depth and momentum needed to move from idea to action.

Example 3: Assessment of International Experiences at the University of Michigan

In 2009, the University of Michigan’s Stamps School of Art and Design implemented a new international experience graduation requirement for undergraduates, requiring them to complete one of four types of “study away” (Sobania & Braskamp, 2009) experiences. The School was interested in understanding whether the requirement was meeting its key goals and, because it would apply to all students, whether student experiences differed across key subgroups.

Bridging and collective ownership

As a first step, a consultant (Wright) from the university’s teaching center met with the School’s International Committee. The key objectives of initial meetings were to understand what the School hoped to learn from the assessment and to better understand the School’s goals for student learning through the requirement. Using this information, the consultant wrote a four-year assessment plan, which the Committee endorsed. The plan involved gathering input from a number of key stakeholders at various levels in the School:

  • Year 1. Focus groups with faculty and staff, as well as student surveys, to gather feedback on whether there should be a requirement and whether the initially defined goals were the appropriate aims.

  • Years 1–4. Collection and analysis of data (Registrar data, Global Perspectives Inventory survey [Braskamp & Engberg, 2011], and analysis of artists’ statements) to understand student outcomes, comparing requirement and nonrequirement cohorts, as well as variations in student experiences within these groups.

  • Year 3. Focus groups with students to gather narratives and to better understand possible barriers to participation or growth through the experiences.

Diversity

Disciplinary and student diversity were two important lenses to apply to this project. First, the assessment findings needed to resonate with members of a creative discipline, and as Haugnes et al. (2012) note, careful translation is needed between the worlds of assessment and faculty artists. The use of artists’ statements as a key artifact in this project is one outcome of applying this lens.

Second, because the international experience would apply broadly, it was important to disaggregate findings by several categories—including student demographics (e.g., race and gender) and student experience (e.g., prior GPA, academic focus, and prior study abroad experience)—to ensure that the curricular requirement was meeting the needs of all students. Furthermore, it was important to separate findings by location of study abroad experience because other research suggests that cultural dissimilarity matters in shaping international experience outcomes, with greater “stretches” prompting more student growth (Jon & Fry, 2009; Vande Berg, Connor-Linton, & Paige, 2009).

Action-oriented focus

The results of the assessment were generally affirmative of the requirement and its positive impact on a diverse range of students. However, the analysis of artists’ statements was particularly illuminating because initial measurements showed very little growth. This lack of movement led the School to change its approach to prompting reflection about the experience: offering more structure, making it more visually oriented, and making the reflections public to guide other students’ choices of study abroad locations (current reflections are available online at http://stamps.umich.edu/international/blogs). Additionally, the School created a required sophomore-level course to help prepare students for their upcoming international experience. These “reflective bookends” are a critically important piece of experiential learning, and their presence will help maximize the impact of the experience moving forward (Engle & Engle, 2004; Stebleton, Soria, & Cherney, 2013).

In sum, an ED approach was central to this four-year, school-wide assessment project. Consultation and feedback from various stakeholders on the key goals for the requirement embodied bridging and collective ownership. The lenses of disciplinary and student diversity helped define the evidence that would speak to faculty, staff, and administrators in the School and suggested the disaggregation of findings to check that the international experience requirement was meeting the needs of all students. Finally, an action-oriented focus allowed the findings to be applied to enhance the academic scaffolding that maximizes students’ learning through their international experience.

Example 4: Preparing Graduate Students for the Professoriate at Duke University

This example takes a different angle, addressing graduate student professional development around the core principles of assessment through an ED approach. Graduate students need instruction in how to conduct appropriate assessments in their own future classrooms and departments, and faculty developers can facilitate this instruction by modeling assessment in programs that aim to prepare future faculty (Hutchings, 2010). Duke University’s Certificate in College Teaching (CCT) is one program that exemplifies this endeavor.

Bridging and collective ownership

The graduate students enrolled in the CCT come from across campus, and therefore classroom conversations about assessment are multidisciplinary. Additionally, students often bring classroom conversations about the value and utility of assessment back to their advisors and colleagues in their home departments, furthering the networking function of educational developers’ assessment work. Furthermore, assignments are rooted in graduate students’ own “burning questions” about teaching and learning, which shape their work on products such as syllabi, needs assessments, curriculum maps, and course outcomes.

Action-oriented focus

As noted above, the act of assessment can itself generate student learning, and this principle is central to the program. Faculty members teaching in the CCT Program transparently model assessments and reference the underlying theories in practice, in real time. As instructors demonstrate assessment of their own teaching and the classroom and course changes that result from the assessment findings, students see examples of both how their feedback is used and how to make changes in teaching or classroom policies based on student feedback.

Diversity and inclusion

Based on assessment findings, the CCT Program recently made changes addressing multiple dimensions of diversity: diversity of students, inclusive course design, and diversity of institutional types. First, a new course was added that explicitly emphasizes assessment as a way to build and maintain inclusive courses and programs. Furthermore, through case studies, classroom discussions, common readings, and other active learning sessions, faculty in the CCT expose students to, and model, diverse assessment strategies. Students who are preparing to teach a wide array of students in diverse institution types thus have opportunities to weigh the strengths and weaknesses of various assessment methodologies from multiple perspectives.

By exposing graduate students to assessment from an ED perspective while they are still preparing to join the professoriate, these future members of the academy participate in and see the benefits of bridging, student-centered learning, collective ownership, and diversity and inclusion before they commence their first faculty appointment. Because new faculty often enter their roles with little understanding of assessment, programming for graduate students is an investment in a future faculty far more knowledgeable about and comfortable with the tools of ED.

Conclusion

What does it mean to do assessment from an “educational development perspective”? As the examples above demonstrate, assessment for ED is context-dependent, varying by audience and institutional context. As with other ED offerings, real-world assessment problems resist one-size-fits-all solutions. However, we suggest that there are consistent anchoring principles to an ED perspective in assessment—bridging, student-centered learning, collective ownership, and diversity—that should be applied in a context-sensitive manner.

This lens has utility for two reasons. First, some of the staunchest critics of the assessment movement have challenged it because they see it as centrally driven, externally imposed, and nonresponsive to the genuine concerns and questions of faculty and students. When assessment is characterized by an ED perspective, it is hard to make charges like these stick because they are contrary to the central practices we demonstrate as educational developers. In this manner, assessment is a powerful tool for improving the quality of higher education.

For example, a bridging perspective emphasizes the “connective tissue” or “hub” role that spaces like CTLs or provost office units often have, bridging departmental boundaries and institutional layers to help departments identify allies, models, and best practices among peers. The prioritization of collective ownership helps mitigate documented challenges of faculty and administrator involvement in assessment processes (Ndoye & Parker, 2010; Welsh & Metcalf, 2003a, 2003b) by demonstrating that decisions about how to carry out assessment ultimately belong to departments. The challenge of using evidence of student learning to improve teaching has been described as relatively intractable (Blaich & Wise, 2011; Handelsman et al., 2004). However, an action-oriented approach leverages assessment activity to foster faculty conversations about curricular change or to contribute to student learning directly. Finally, as higher education becomes more diverse (National Center for Education Statistics, 2013), assessment from an ED perspective challenges educators to ensure that assessment practices are attuned to the voices of all learners and represent the values and concerns of faculty, disciplinary, and institutional stakeholders.

Second, this framework stakes out an approach that redefines assessment in terms of its contributions to faculty, departmental, and institutional development. We therefore anticipate that this framework will help guide the field, given the increasing emphasis on assessment in colleges and universities (Hart Research Associates, 2016) and in the ED field (Beach et al., 2016), and the concomitant rise in the number of assessment positions located in ED centers.

One limitation of this study is that our examples, although multi-institutional, are derived from experiences at large research universities. As assessment through an ED perspective grows, we anticipate future research that examines how these principles might transfer to other institutional contexts or how new principles might apply. Furthermore, we anticipate that future research might further define and elaborate on each of these principles in the vein of competency models developed for other ED professional activities (Dawson, Britnell, & Hitchcock, 2009).

Acknowledgments

The authors thank Ron Cramer, Learning Design Consultant, DoIT Academic Technology, University of Wisconsin–Madison; Hugh Crumley, Assistant Dean of The Graduate School, Duke University; Suzanne Dove, Assistant Dean for Academic Innovations, University of Wisconsin–Madison; Matt Kaplan, Executive Director of the Center for Research on Learning and Teaching, University of Michigan; and Rachel Williams, Chair of Gender, Women’s, and Sexuality Studies, University of Iowa.

References

  • Amundsen, C., & Wilson, M. (2012). Are we asking the right questions? A conceptual review of the educational development literature in higher education. Review of Educational Research, 82(1), 90–126.
  • Angelo, T. A. (1995). Reassessing and defining assessment. AAHE Bulletin, 48(3), 149.
  • Banta, T. W., & Blaich, C. (2011). Closing the assessment loop. Change: The Magazine of Higher Learning, 43(1), 22–27.
  • Barber, J. P., King, P. M., & Baxter Magolda, M. B. (2013). Long strides on the journey to self authorship: Substantial developmental shifts in college students’ meaning making. Journal of Higher Education, 84(6), 866–896.
  • Beach, A. L., Sorcinelli, M. D., Austin, A. E., & Rivard, J. K. (2016). Faculty development in the age of evidence: Current practices, future imperatives. Sterling, VA: Stylus.
  • Blaich, C. F., & Wise, K. S. (2011). From gathering to using assessment results: Lessons from the Wabash National Study (NILOA Occasional Paper No. 8). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
  • Braskamp, L. A., & Engberg, M. E. (2011). How colleges can influence the development of a global perspective. Liberal Education, Summer/Fall, 34–39.
  • Brinko, K. T. (2012). The interactions of teaching improvement. In K. T. Brinko (Ed.), Practically speaking: A sourcebook for instructional consultants (2nd ed., pp. 3–7). Stillwater, OK: New Forums Press.
  • Brookfield, S. (1993). Understanding consulting as an adult education process. In L. Zachary & S. Vernon (Eds.), The adult educator as consultant. New Directions for Adult and Continuing Education (Vol. 58, pp. 5–13).
  • Chambliss, D. F., & Takacs, C. G. (2014). How college works. Cambridge, MA: Harvard University Press.
  • Condon, W., Iverson, E. R., Manduca, C. A., Rutz, C., & Willett, G. (2016). Faculty development and student learning: Assessing the connections. Bloomington, IN: Indiana University Press.
  • Cook, C. E., Wright, M. C., & O’Neal, C. (2007). Action research for instructional improvement: Using data to enhance student learning at your own institution. In D. R. Robertson & L. B. Nilson (Eds.), To improve the academy (Vol. 25, pp. 123–138). San Francisco, CA: Jossey-Bass.
  • Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty. San Francisco, CA: Jossey-Bass.
  • Dawson, D., Britnell, J., & Hitchcock, A. (2009). Developing competency models of faculty developers: Using world café to foster dialogue. In L. B. Nilson & J. E. Miller (Eds.), To improve the academy (Vol. 28, pp. 3–24). San Francisco, CA: Jossey-Bass.
  • Eddy, S. L., & Hogan, K. A. (2014). Getting under the hood: How and for whom does increasing course structure work? CBE Life Sciences Education, 13, 453–468.
  • Engle, L., & Engle, J. (2004). Assessing language acquisition and intercultural sensitivity development in relation to study abroad program design. Frontiers: The Interdisciplinary Journal of Study Abroad, 10, 219–236. Retrieved from http://www.frontiersjournal.com/issues/vol10/index.htm
  • Ewell, P. T. (2008). Assessment and accountability in America today: Background and context. In V. M. H. Borden & G. R. Pike (Eds.), New directions for institutional research (Vol. S1, pp. 7–18). New York: Wiley.
  • Ewell, P. T. (2009). Assessment, accountability and improvement: Revisiting the tension. Occasional Paper No. 1. Champaign, IL: National Institute for Learning Outcomes Assessment.
  • Felten, P., Kalish, A., Pingree, A., & Plank, K. (2007). Toward a scholarship of teaching and learning in educational development. In D. Robertson & L. Nilson (Eds.), To improve the academy: Resources for faculty, instructional and organizational development (Vol. 25, pp. 93–108). San Francisco, CA: Jossey-Bass.
  • Fowler, R. (2015). Talking teams: Increased equity in participation in online compared to face-to-face team discussions. ASEE Computers in Education Journal, 6(1), 21–44.
  • Fuller, M. B., Skidmore, S. T., Bustamante, R. M., & Holzweiss, P. C. (2016). Empirically exploring higher education cultures of assessment. The Review of Higher Education, 39(3), 395–429.
  • Handelsman, J., Ebert-May, D., Beichner, R., Bruns, P., Chang, A., DeHaan, R., … Wood, W. (2004). Scientific teaching. Science, 304(5670), 521–522.
  • Hart Research Associates. (2016). Trends in learning outcomes assessment: Key findings from a survey among administrators at AAC&U member institutions. Washington, DC: AAC&U and Hart Research Associates. Retrieved from https://www.aacu.org/sites/default/files/files/LEAP/2015_Survey_Report3.pdf
  • Haugnes, N., Holmgren, H., & Springborg, M. (2012). What educational developers need to know about faculty artists in the academy. In J. E. Groccia & L. Cruz (Eds.), To improve the academy: Resources for faculty, instructional and organizational development (Vol. 31, pp. 55–68). San Francisco, CA: Jossey-Bass.
  • Huber, M. T., & Morreale, S. P. (2002). Situating the scholarship of teaching and learning: A cross-disciplinary conversation. In M. T. Huber & S. P. Morreale (Eds.), Disciplinary styles in the scholarship of teaching and learning: Exploring common ground (pp. 1–24). Washington, DC: AAHE and The Carnegie Foundation for the Advancement of Teaching.
  • Hutchings, P. (2010). Opening doors to faculty involvement in assessment. Occasional Paper No. 4. Champaign, IL: National Institute for Learning Outcomes Assessment.
  • Jacobson, W. (2015). Sharing power and privilege through the scholarly practice of assessment. In S. Watt (Ed.), Designing transformative multicultural initiatives: Theoretical foundations, practical applications and facilitator considerations (pp. 89–102). Sterling, VA: Stylus.
  • Jon, J., & Fry, G. W. (2009, November 7). The long term impact of undergraduate study abroad experience: Implications for higher education. Paper presented at the ASHE Annual Conference, Vancouver, Canada.
  • Jonson, J. L., Guetterman, T., & Thompson, R. J. (2014). An integrated model of influence: Use of assessment data in higher education. Research & Practice in Assessment, 9, 18–30.
  • Kaplan, M., & Donovan, M. (2013). The inclusion dividend: Why investing in diversity and inclusion pays off. Brookline, MA: Bibliomotion.
  • Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968.
  • Kerlin, S. P., & Britz, P. B. (1994). Assessment and diversity: Outcome and climate measurements. In T. Bers & M. L. Mittler (Eds.), New directions for community colleges (Vol. 88, pp. 53–60).
  • Lewin, K. (1997). Resolving social conflicts. Washington, DC: American Psychological Association (Original work published 1948).
  • Lieberman, D. (2010). Collaboration and leadership between upper level administrators and faculty developers. In C. Schroeder (Ed.), Coming in from the margins: Faculty development’s emerging organizational development role in institutional change (pp. 60–76). Sterling, VA: Stylus.
  • Little, D. (2015, November 5). President’s address made at the Annual POD Network Conference, San Francisco, CA.
  • Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.
  • Mayhew, M. J., Pascarella, E. T., Trolian, T., & Selznick, B. (2015). Measurements matter: Taking the DIT-2 multiple times and college students’ moral reasoning development. Research in Higher Education, 56, 378–396.
  • Mentkowski, M., & Loacker, G. (2002). Enacting a collaborative scholarship of assessment. In T. W. Banta (Ed.), Building a scholarship of assessment (pp. 82–99). San Francisco, CA: Jossey-Bass.
  • Michelson, E. (1997). Multicultural approaches to portfolio development. In A. Rose & M. A. Leahy (Eds.), New directions for adult and continuing education (Vol. 75, pp. 41–53).
  • Miller, R., & Leskes, A. (2005). Levels of assessment: From the student to the institution. Washington, DC: Association of American Colleges and Universities.
  • National Center for Education Statistics. (2013). Fast facts. Retrieved from http://nces.ed.gov/fastfacts/display.asp?id=98
  • Ndoye, A., & Parker, M. A. (2010). Creating and sustaining a culture of assessment. Planning for Higher Education, 38(2), 28–39.
  • Nyquist, J. D., & Wulff, D. H. (2001). Consultation using a research perspective. In K. Lewis & J. Povlacs (Eds.), Face to face: A sourcebook of individual consultation techniques for faculty/instructional developers (2nd ed., pp. 45–62). Stillwater, OK: New Forums Press.
  • Pike, G. R. (2013). NSSE benchmarks and institutional outcomes: A note on the importance of considering the intended uses of a measure in validity studies. Research in Higher Education, 54, 149–170.
  • POD Network. (2013). Strategic plan. Retrieved from http://podnetwork.org/about-us/mission/
  • POD Network. (2016). What is educational development? Retrieved from http://podnetwork.org/about-us/what-is-educational-development/
  • Porter, S. R. (2013). Self-reported learning gains: A theory and test of college student survey response. Research in Higher Education, 54, 201–226.
  • Schroeder, C. M. (2010). Faculty developers as institutional developers: The missing prong of organizational development. In C. M. Schroeder (Ed.), Coming in from the margins: Faculty development’s emerging organizational development role in institutional change (pp. 17–46). Sterling, VA: Stylus.
  • Scobey, D. (2009, March 19). Meanings and metrics. Inside Higher Ed. Retrieved from https://www.insidehighered.com/views/2009/03/19/scobey
  • Skinner, M. F., & Prager, E. K. (2015). Strategic partnerships: Leveraging the center for teaching and learning to garner support for assessment of student learning. Assessment Update, 27(3), 4–13.
  • Sobania, N., & Braskamp, L. A. (2009). Study abroad or study away: It’s not merely semantics. Peer Review, 11(4), 23.
  • Sorcinelli, M. D., Austin, A. E., Eddy, P. L., & Beach, A. L. (2005). Creating the future of faculty development: Learning from the past, understanding the present. San Francisco, CA: Jossey-Bass.
  • Stanny, C., Gonzalez, M., & McGowan, B. (2015). Assessing the culture of teaching and learning through a literature review. Assessment & Evaluation in Higher Education, 40(7), 898–913.
  • Stebleton, M. J., Soria, K. M., & Cherney, B. T. (2013). The high impact of education abroad: College students’ engagement in international experiences and the development of intercultural competencies. Frontiers: The Interdisciplinary Journal of Study Abroad, 22. Retrieved from http://www.frontiersjournal.com/frontiersxxiiwinter2012spring2013.htm
  • Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.
  • Tagg, J. (2012). Why does the faculty resist change? Change: The Magazine of Higher Learning. Retrieved from http://www.changemag.org/Archives/Back%20Issues/2012/January-February%202012/facultychange_full.html
  • Thurlow, M. L., Johnstone, C. J., & Ketterlin-Geller, L. R. (2013). Universal design of assessment. In S. E. Burgstahler & R. C. Cory (Eds.), Universal design in higher education: From principles to practice (pp. 163–176). Cambridge, MA: Harvard Education Press.
  • Uchiyama, K. P., & Radin, J. L. (2009). Curriculum mapping in higher education: A vehicle for collaboration. Innovative Higher Education, 33, 271–280.
  • Vande Berg, M., Connor-Linton, J., & Paige, R. M. (2009). The Georgetown Consortium Project: Interventions for student learning abroad. Frontiers: The Interdisciplinary Journal of Study Abroad, 18. Retrieved from http://www.frontiersjournal.com/issuesvolumexviiifall2009.htm
  • Welsh, J. F., & Metcalf, J. (2003a). Faculty and administrative support for institutional effectiveness activities: A bridge across the chasm? Journal of Higher Education, 74, 445–468.
  • Welsh, J. F., & Metcalf, J. (2003b). Cultivating faculty support for institutional effectiveness activities: Benchmarking best practices. Assessment & Evaluation in Higher Education, 28(1), 33–45.
  • Winkelmes, M. (2013). Transparency in teaching: Faculty share data and improve students’ learning. Liberal Education, 99(2). Retrieved from https://www.aacu.org/publications-research/periodicals/transparency-teaching-faculty-share-data-and-improve-students
  • Wright, M. C., Finelli, C. J., Meizlish, D., & Bergom, I. (2011). Facilitating the scholarship of teaching and learning at a research university. Change: The Magazine of Higher Learning, 43(2), 50–56.