Colleges and universities have a commitment to improve the student experience, increase persistence, and provide paths to degree completion. Course redesign, focused on student success, is a promising strategy for realizing that commitment. This article examines some of the particulars when course redesign is explicitly linked to student success. These particulars include the types of redesign outcomes, why courses should be the locus of student success initiatives, identifying which courses to redesign, and the characteristics and scope of impact of redesigned courses. The article concludes with suggestions for next steps for student success course redesign.

Keywords: gateway courses, course redesign, student success, educational development

To further persistence and degree completion goals, higher education is replete with initiatives focused on increasing retention. The stakes are high, and many states have implemented performance-based funding measures. These measures financially penalize public universities that cannot demonstrate increased persistence, such as retention rates, or completion, such as certifications or graduation rates (Dougherty et al., 2016).

To strengthen the likelihood of persistence and completion performance increases, high-impact practices (HIPs) for teaching and learning (Kuh, 2008a) have been identified. These practices include initiatives such as first-year seminars, common intellectual experiences, e-portfolios, and internships (Kuh, 2008a). What makes these practices high impact is a shared set of common traits that forms the foundation of their influence. Those traits include setting high expectations, providing frequent and timely feedback, and creating opportunities for structured reflection and integration (Kuh, 2008a).

Why, then, if we know which practices are promising, and the traits those practices should include, do universities still struggle to help students successfully complete their degrees? The American Academy of Arts and Sciences (2017) has argued that the challenge in higher education has shifted from one of quantity—increasing student enrollment—to one of quality—providing an educational experience stocked with the attributes students need to succeed. Similarly, the HIP literature includes an important, but often overlooked, caveat in its recommendations: that HIPs must be “well done” in order to demonstrate educational benefits. In their recent response to a critique of HIPs (see Valbrun, 2018), Kuh and Kinzie (2018) reinforced the idea that HIPs do not yield high-impact results if implementation is incomplete or inadequate. Too often, institutions hastily increase their suite of initiatives, never ensuring that each practice actually embeds the foundational traits that influence student success. Kuh et al. (2007) stated that “simply offering such programs and practices does not guarantee that they will have the intended effects on student success” (p. 116).

Also problematic is that many initiatives implemented under the HIP umbrella are not foundational to teaching and learning. Programs such as study abroad, common reads, and residential learning communities can be implemented on the periphery of the classroom and/or with little faculty collaboration. Alexander and Gardner (2009) stressed this, noting that initiatives are often implemented “on the margins” (p. 18), therefore limiting their ability to impact student success. These “on the margins” initiatives are often implemented from a deficit paradigm. This model holds that there are inherent student characteristics (i.e., deficits) that negatively interact with college experiences, resulting in academic failure (Pistilli et al., 2012; Zhao, 2016). Thus, initiatives to “program out” (Prime, 1982) the perceived weaknesses inherent in transfer, first-generation, or underrepresented minority students are implemented outside the classroom, with little to no faculty involvement.

However, there is evidence that student characteristics do not need to be “programmed out.” Martin et al.’s (2017) study demonstrated that pre-college background and preparation explained only part of students’ academic achievement. In fact, their results indicated that altering student perceptions of the classroom could have a larger effect on success than student characteristic variables. Similarly, as Kinzie (2012) noted, it is the processes by which students engage, including in the classroom, that result in student success, not their entry characteristics.

To avoid the limitations of implementing programs that are “on the margins” or represent a deficit paradigm, course redesign has recently emerged as a key student success strategy. A course redesign typically involves faculty participants engaged in an institute or workshop-like environment for the purpose of designing or redesigning their course using evidence-based, student-centered ideas (Palmer et al., 2016). Through course redesign, a university can fulfill its obligation to provide a “well done,” “quality” academic experience. Because course redesign is foundational to teaching and learning, a redesigned course is more likely to embed the HIP characteristics associated with positive impacts on student success.

Course redesign for student success also supports campus mission statements. Nearly ubiquitous in such statements is an articulation of the college’s duty to increase opportunity and access to higher education. Those statements often mention ideas such as “access to higher education,” “enrich lives,” “create opportunities,” “accessible quality education,” “improve the well-being,” and “preparing students” for leadership, engagement, discovery, global society, service, and/or meaningful lives and careers. These broad mission statements translate into institutional goals that can be realized through a course redesign initiative focused on student success outcomes. Figure 1 represents this relationship flow.

Figure 1. Course Redesign Relationship Flow

Using course design institutes (CDIs) as a model, this article strives to provide a beginning framework for the considerations needed when course redesign is pointedly used as a student success initiative. In this context, several operational definitions guide the discussion:

  • Student success is defined as course completion with a grade of an “A,” “B,” or “C.”
  • Course redesign will be broadly defined as any type of workshop-style endeavor focused on the design relationships between curriculum, instruction, and learning.
  • Student success course redesign (SSCR) is defined as a CDI in which the outcomes of the redesign are focused on student success.

The discussion of the considerations specific to an SSCR will begin with the types of redesign outcomes. The article will then deconstruct course redesign initiatives into the following particulars: courses as the locus of a student success initiative, identifying courses for an SSCR, redesign characteristics, and the scope of impact from SSCR courses. The implications of the student success focus, relative to each of the particulars, will be presented. Figure 2 provides a model of a course redesign for student success.

Figure 2. Overview of Student Success Focused Course Redesign

Student Success Outcomes for Course Redesign

The first particular of an SSCR is that the outcomes for the redesign may be different from those of a CDI. Specifically, an SSCR attempts to impact three broad types of outcomes: institutional, student, and cultural. Institutional outcomes typically focus on campus-wide accountability such as persistence, completion, or cost reduction. This kind of redesign might address “completion bottlenecks,” courses that impede students’ ability to progress to graduation (Delamater, 2018; Education Advisory Board, 2018; Koch et al., 2016). An SSCR might also strive to address “access bottlenecks” (Education Advisory Board, 2018), occasions when course capacity is insufficient for students to meet program requirements. An example of an SSCR focused on institutional persistence outcomes would be Gateways 2 Completion (G2C), a 3-year self-study plus consultant model that is directed toward completion bottlenecks (John N. Gardner Institute for Excellence in Undergraduate Education, 2015).

Another SSCR model focused on institutional outcomes is the National Center for Academic Transformation (NCAT) model (NCAT, n.d.), which is now managed through the University of Central Florida (Lieberman, 2019). The NCAT model focuses on the cost savings realized when students pass a course the first time and can subsequently persist at the university. Savings are reaped from reducing the costs associated with maintaining course capacity to accommodate repeaters as well as by decreasing the recruitment and replacement burden associated with student attrition. As noted by George Mahaffey (cited in Field, 2018), “It’s substantially cheaper to keep the students you have than go find new ones.” The NCAT model (NCAT, n.d.) also directly addresses reducing course delivery costs through strategic redesign. This model holds that cost reduction and academic success are synergistic institutional outcomes and focuses on redesign strategies that accomplish both outcomes simultaneously.

Student outcomes for SSCRs typically focus on learning, motivation, values, and beliefs. This is due to the strong evidence of a correlation between student learning in introductory courses and college completion (Delamater, 2018; Koch et al., 2016). Research by Complete College America (2012) indicates that remedial coursework has failed to prepare students for subsequent non-remedial coursework. Their findings suggest that it is more effective to focus on student learning and success in the courses that contribute to degree completion. Others note that a focus on student outcomes makes sense because our knowledge about both reaching today’s student and effective teaching has recently increased (Association of College and University Educators, 2018; Turner, 2009).

Thus, an SSCR seeks to change the quality of course delivery (American Academy of Arts & Sciences, 2017) by directly influencing student engagement through pedagogical redesign. Helping faculty craft activities that are “educationally purposeful” (Kuh et al., 2008c) using the more recent literature on effective pedagogies (e.g., Ambrose et al., 2010; Association of College and University Educators, 2018; Barkley, 2010; Brown et al., 2014; Lang, 2016; McGuire, 2015; Miller, 2016; Willingham, 2009; Winkelmes et al., 2019) forms the basis of an SSCR focused on student outcomes. These types of redesign projects tend to focus on individual components of a course (e.g., assessments, homework, lecture) and can be assessed by course-level grades or within course measures such as test scores, satisfaction, or skill development (John N. Gardner Institute for Excellence in Undergraduate Education, 2015).

An SSCR focused on student outcomes may also address changing student motivation and beliefs about a specific course or discipline. Brookins and Swafford (2017) discussed how course redesign can help to ease the stigma that a course or discipline may acquire when its reputation becomes associated with failure, difficulty, or irrelevance. Thus, redesign projects that help create disciplinary relevance may be able to help woo students into the major or at least foster motivation by helping non-majors see the value of the information (Nomme & Birol, 2014).

Redesign outcomes related to culture focus on deepening the value of and student access to higher education within the larger campus community. Enhancing institutional understanding about the in-class and out-of-class course teaching, support, policies, and assessment/evaluation practices for introductory courses is one way to change institutional cultural norms (John N. Gardner Institute for Excellence in Undergraduate Education, 2015). Realizing a cultural outcome in course redesign may also be achieved by including scholarship of teaching and learning (SoTL) projects in the course redesign model. For example, after completion of the Association of College and University Educators (ACUE) course in Effective Teaching Practices at Northern Arizona University, the ACUE faculty graduates created interdisciplinary collaborations to assess implementation of the pedagogical practices. Adding this type of research component legitimates and brings value to redesign outcomes by motivating faculty through evidence-based practices and opportunities to expand their scholarship effort and output.

Implications of Student Success Outcomes for Course Redesign

An SSCR seeks to address the “devastating” (Koch, 2017b) and “under-prioritized” (Delamater, 2018) barriers to student progress that affect multitudes of students, particularly the least privileged, first-generation, and historically underrepresented students (Koch, 2017b). Therefore, it is important for the institution to be clear about its priorities before developing an SSCR initiative. For example, if the institutional priority is to reduce DFW (D grade, fail, withdraw) rates in introductory courses, a general call for faculty volunteers may not connect the initiative with the high DFW courses.

The desired outcomes also determine which stakeholders should be involved. For example, a redesign for institutional persistence and completion might include a VP for academic affairs and the office of institutional research. However, a course redesign to shift student motivation needs only to involve the faculty who are trying to foster increased appreciation and engagement in their discipline.

Finally, it is important to declare and communicate consistently and frequently the desired outcomes for the SSCR. Mismatches between the campus’s priorities and the faculty’s concerns can easily occur. For example, a campus might be implementing an intensive SSCR with big expectations for increased graduation rates. But the faculty’s engagement in that program may only result in the adoption of higgledy-piggledy changes to the syllabus or one-off active learning strategies. In this scenario, the redesign would not fully impact core beliefs and practices related to learning.

Courses as the Locus of Change

A somewhat unique particular related to course redesign for student success is that the course itself is considered the locus of the change. This is different than other types of student success initiatives that might address student characteristics (e.g., food insecurity) or academic weaknesses (e.g., tutoring). Thus, an SSCR shifts the focus from the periphery to effective pedagogies and high-impact pedagogical traits that are actualized in and around the core classroom experience. High-impact traits are those that require “considerable time and effort, facilitate learning outside of the classroom, require meaningful interactions with faculty and students, encourage collaboration with diverse others and provide frequent and substantive feedback” (Kuh, 2008b). Felton (2018) describes these traits as the “heart” of HIPs, as they characterize the teaching and learning pedagogies that foster the transfer of learning. These traits, when enacted in an undergraduate course experience, can matter the most (Koch, 2017a).

An SSCR, in which the course is the locus of change, is also different than other types of CDIs because course development, rather than participant outcomes, is the focus. Palmer et al. (2016) and Chism et al. (2012) described CDIs in terms of participant outcomes, such as growth of knowledge of pedagogy and evidence-based practices as well as developing participants’ sense of community and personal growth. To be sure, these are appropriate goals, and ones that might implicitly increase student success. But unlike an SSCR, a CDI stops short of outcomes that require evidence of increases in student success.

In addition, by centering the conversation on course redesign, the hurdle of blaming faculty for bad teaching can be circumvented. Rather, a course redesign can operate more like a musical master class, with the shared ethic that teaching, like a performance, can always be improved.

Middendorf (2000) noted that faculty have to be ready to “accept change,” and most models of course redesign either assume this readiness, by virtue of faculty volunteerism, or the model fosters readiness by focusing on the “why.” For example, the G2C model (John N. Gardner Institute for Excellence in Undergraduate Education, 2015) fosters this readiness by exposing faculty to their own DFW data, by section, delineating the differences in student success among different underserved groups. In this way, student success is seen as a social justice issue, and faculty, valuing that “why,” engage in the work of course redesign. Similarly, Turner (2009), in describing the Next Generation Course Redesign Project, described a “collective guilty conscience” related to poor student outcomes that is leveraged to develop faculty motivation for involvement in course redesign.

Willingham (2019), in a blog post about “What should funders fund?,” advised that initiatives designed to impact student success should “stick as close to the classrooms as you can.” This makes sense when one considers that the classroom is the most direct influence on student success (Hearn, 2006). Because faculty control the curriculum, pedagogy, and assessments, they have the greatest ability to embed high-impact traits into course experiences. Systematizing those practices across all sections of a course further maximizes the potential of the course, as the locus of change, to be fruitful.

Implications of Courses as the Locus of Change

Within a course, there are a plethora of different opportunities for change. A short list of change points includes the curriculum, pedagogy, in- and out-of-class experiences, course policies, sequencing, and co-curricular resources. This complexity provides ample opportunity for changes that do not really alter the course experience. For example, marketing the Learning Assistance Center on the syllabus is a great practice, but expecting that statement to affect the university’s retention rate is far-fetched. Furthermore, a syllabus statement does little to operationalize high-impact traits if the students are directed to resources outside of the course rather than encouraged to interact with the faculty.

Identifying Courses for Student Success Redesign

Identifying which courses to target for student success redesign is another particular that needs to be considered. Courses selected for SSCR share some common features: they are typically foundational, high risk as recognized by high DFWI (D grade, fail, withdraw, incomplete) rates, and/or high enrollment (Koch & Rodier, 2014, as cited in Koch, 2017a). They are often referred to as “survey” or “introductory” courses (Brookins & Swafford, 2017) and may carry the derogatory label of being a “bottleneck” (Education Advisory Board, 2018), as failure in the course prevents students from making progress to degree completion. The gentler label “gateway” (John N. Gardner Institute for Excellence in Undergraduate Education, 2015) is also used to connote courses for which failure closes a metaphorical gate on future academic progress. The term gateway will be used here as it acknowledges both the barrier to completion and the aspiration of the pathway.

While a CDI typically impacts only individual faculty or single course sections, an SSCR is marked by a commitment to redesigning all sections of the course. This holistic redesign addresses issues for courses that are offered in multiple sections and/or taught by different faculty, as academic outcomes can vary dramatically between sections of high-enrollment courses. Indeed, Education Advisory Board (2018) data demonstrates completion rates varying as much as 24% between sections of the same course. These variances among sections of the same course create competition for particular sections, burdening advising or privileging those students who had earlier enrollment dates or more flexible schedules (Delamater, 2018). Moreover, holistic redesign helps to maintain integrity of the learning outcomes across all sections. As Delamater (2018) and Turner (2009) expressed, student success should not be a random outcome based on a chance encounter with those faculty willing to tinker with pedagogical change.

Implications of Identifying Courses for Redesign

When identifying courses for student success redesign, selection criteria need to be determined. Those criteria should refer to at least one of the three redesign outcomes: institutional, student, or cultural. Institutional criteria might include selecting courses with high DFWI rates or negative correlations with time to graduation. Student criteria might include courses associated with poor professional exam scores or courses that are not serving their role as prerequisites to advanced coursework. Cultural criteria might include introductory courses with “weed-out” reputations.

The rationale for pre-determining course selection criteria is that in the absence of such criteria, the usual suspects will volunteer. The volunteer base for educational development is a warm and engaged group, but they may not be the faculty who actually teach the gateway courses. Including a few volunteers in the mix as enthusiastic consumers of the redesign program might be a great strategy. However, if most of the participation is voluntary, redesign resources are not likely to be focused on the courses and faculty that matter the most. Individual volunteers can only impact the sections that they teach. Teams that include all faculty involved in all sections of a course-line have the opportunity to achieve greater impacts on student success and systemic culture change on campus.

Program admissions criteria may also need to be included in the list of considerations for selection of the SSCR courses. In some cases, a gateway course might just be a nicer way of describing a weed-out course. Weed-out courses are intentionally structured as difficult and serve as prerequisites to other mandatory courses in a degree program. A classic example of this type of course would be Anatomy & Physiology for nursing majors. Weed-out courses may put very necessary controls on matriculation into professional degree programs that have selective admissions (e.g., nursing) or limited capacity.

Aspects of Course Redesign

Course redesign programs aim to help instructors create a rich, active, and supportive classroom environment, grounded in evidence-based practices (Palmer et al., 2016). Similar to traditional CDIs, length, leadership, and curriculum are some of the particular aspects that must be considered.

Regarding length, both shorter and longer models could be used. The shorter, multiday CDI model, offered as intensive workshops to faculty across cultural and disciplinary contexts (Johnson et al., 2012), could be used as a length framework for an SSCR. There are also examples of lengthier models suitable for an SSCR. For example, the ACUE’s year-long course on effective teaching practices (Association of College and University Educators, 2018) or the self-study online community of the Global Skills for College Completion project described in Taking College Teaching Seriously: Pedagogy Matters! (Mellow et al., 2015) could be adapted for an SSCR. G2C’s 3-year, self-study model was specifically designed to impact student success outcomes (John N. Gardner Institute for Excellence in Undergraduate Education, 2015).

The leadership of either type of course redesign program could come from local campus expertise, often from the faculty community or center for teaching and learning (CTL). The shorter, campus-led CDI is usually held between semesters and does not last for more than a week. For example, both Turner (2009) and Wheeler and Bach (2018) have well-developed institutionally based CDIs. These models may also require continued faculty participation, but that follow-up tends to be brief and focused on reflection and assessment of the redesigned pedagogies.

Leadership for course redesign may also come from a third party and could vary from the inclusion of a consultant, a formal consultancy process such as G2C or NCAT, or a subscription to a course such as ACUE. Consultants either commit to a one-shot, in-person approach, spanning from days to a week, or may guide the process both in-person and remotely for longer periods of time.

The curriculum of a course redesign program may derive from the expertise of the consultant, the attraction of a promising idea, or the desired outcomes for the redesign. Often, a book such as Lang’s Small Teaching (2016), McGuire’s Teach Students How to Learn (2015), or Harrington and Thomas’s Designing a Motivational Syllabus (2018) creates an impetus for redesign. Because these books address straightforward pedagogical changes that are generally consistent with student success, their ideas and authors are seen as possible resources for the redesign.

Implications of Aspects of Course Redesign

The effect of a course redesign program on student success is very dependent on the program’s length, leadership, and curriculum. For example, week-long programs for faculty volunteers tend to only impact the faculty who participate and can create inconsistency across sections of the same course. It is also logistically challenging to assemble all faculty related to a single course-line at the same time, when that time is between regular semesters.

Thus, longer programs, in which participants can engage as faculty teams, have the advantage of making more systemic and enduring changes across a course, impacting the culture related to how course pedagogy affects student success. A faculty team, as opposed to individual faculty participants, can be empowered to act as a change agent within a department, college, or campus. An extreme example of this team effect is the American Historical Association’s national, discipline-specific course redesign model. This program, the Tuning Project (2012–2015), assembled history faculty from more than 60 different colleges and universities from across the United States (Brookins & Swafford, 2017). On a smaller scale, leaders of the Doyle Program (Walsh et al., 2013) describe their model as “sustained interdisciplinary faculty cooperation that could be utilized to address many different sorts of classroom challenges.” However, faculty commitment to longer programs can be harder to garner.

The course redesign curriculum also significantly influences whether student success outcomes are affected. When the outcome is used as the polestar for the redesign program, the redesign curriculum and strategies are more likely to make a significant enough impact on course achievement outcomes. But if a book or speaker is the center, particularly when only generally related to student success, the redesign efforts may not be focused enough to facilitate change. For example, any one of Lang’s (2016) small teaching techniques (e.g., asking students what they already know or providing examples of failure) or Harrington and Thomas’s (2018) motivational syllabus additions (e.g., assignment rationales) by themselves are not likely to change a gateway course’s ability to reduce its DFWI rates. That said, for this new type of redesign, it is unclear how much change is necessary to alter course outcomes.

The Scope of Impact from Course Redesign

For the scope of impact of a course redesign project to influence student success, the level and type of faculty participation matter. The level of faculty participation could merely involve individual faculty or could be organized to influence more uniform changes (e.g., DFWI, disciplinary challenges, and course-level concerns) across a course-line. Individualistic course redesign is characterized by a lone faculty member who reworks only how they will teach their particular sections of a course-line. These changes are not broad in scope, as there is no requirement for coordination or collaboration with other faculty teaching the same course. In contrast, uniform course redesign has a larger scope of impact because all sections of a course-line and all relevant faculty participate. In this situation, it is understood that everyone will move forward with the new design.

The scope of impact is also dependent on whether faculty participation is volunteered versus mandated. Voluntary course redesign occurs when faculty recognize the need for pedagogical change and initiate such change. That effort could be solitary or could include collaborating with a more established redesign program. Mandated course redesign occurs when academic leadership requires the change. In that situation, a group of faculty finds themselves in the spotlight and subsequently engaged in course redesign. Figure 3 provides some examples of each of these types of redesign strategies relative to their scope of impact.

Figure 3. Course Redesign Implementation Strategies & Scope of Impact

Because of inherent difficulties with uniform and mandated models, Alexander and Gardner (2009) advised that course redesign programs must hold local knowledge and judgment dear. This creates a process that has the power to facilitate cultural shifts based on long-lasting beliefs in “personal accountability for change and improvement” (p. 18). As Brookins and Swafford (2017) reported regarding their Tuning course redesign project, faculty did implement individual changes, but meaningful cultural shifts across larger groups of colleagues or departments varied. Collaboration among the faculty, the department, and the institution must therefore be a cornerstone of an SSCR. This collaboration increases the likelihood of meaningful cultural shifts in the understanding of the critical correlation between student success and pedagogy (McGowan et al., 2017).

Implications of the Scope of Impact

To ensure the largest scope of impact, a redesign initiative for student success should include all sections and all faculty related to a course-line. Caution should be used when allowing faculty to volunteer, so that educational development resources are not directed toward faculty or courses that are not actually problematic. For example, in the ACUE model, individual faculty could volunteer, or course-line faculty could assemble as a team. The scope of impact for an individual faculty volunteer would be restricted to only their own teaching, while the scope of impact for a team could branch out to all sections of the course-line as well as other courses within the discipline.

Caution is also warranted when courses and faculty teams are identified administratively and participation is mandated. This situation can cause faculty resistance, which can result in marginally impactful or shallow-effort strategies. For example, faculty who are resistant to course redesign could change things like course prerequisites, add/drop policies, and learning outcomes. While potentially important, those types of changes swirl around the margins of the actual teaching and learning experiences. Similarly, pedagogical strategies, such as assessing “early and often,” can be implemented with minimal or shallow results. Early and often assessment is more effective when used as feedback from a sequence of practice opportunities (Lang, 2016; Middendorf & Shopkow, 2018). Simply increasing homework levels in the first few weeks of the course (the “heck yeah, I can test them more!” approach) without requiring metacognitive reflection on homework feedback is a shallow and ineffective implementation of this strategy.

Conclusion

A course redesign focused on student success offers tremendous promise for increasing access to higher education, improving persistence and degree completion rates, changing the campus culture about student success, and shifting students’ perceptions of different disciplines. However, with Kuh’s (2008a) caution regarding the quality of the implementation of HIPs in mind, course redesign for student success must be “done right.” Done wrong, it can become yet another program that gets checked off each year.

This article presented four particulars that should be considered for an SSCR “done right.” These particulars demonstrate that a course could be the locus of change if the student success outcomes are known; if redesign curriculum, length, and leadership are carefully crafted to support those outcomes; and if faculty participate as part of a course-line team to impact all sections of the course.

Next steps for exploring course redesign as a student success initiative should expand consideration to additional particulars. For example, the promising practice of including students as partners in curriculum design (Cook-Sather et al., 2014; Healey et al., 2016) would enable reality checks on how proposed changes would be interpreted by the actual student audience. Other particulars not considered here that deserve exploration might include a deeper dive into content sequencing (e.g., the depth versus breadth balance in introductory courses), higher-order cognitive abilities (e.g., the juxtaposition between having foundational prior knowledge and using it during higher-order thinking activities), or the inclusion of basic learning strategies and skills (e.g., note-taking, reading, test-taking).

As this promising approach to student success emerges, assessment should also be considered. Helpful cues can be taken from Palmer et al.’s (2016) argument that a CDI can be a high-impact practice. In their model, high impact was operationally defined as the degree to which the faculty participants were influenced, as well as whether the post-CDI course syllabus provided evidence of change. Thus, Palmer et al.’s (2016) assessment used backward design to determine whether the intended goals were realized.

The need for a distinction between a CDI and an SSCR should also be critically examined. Is student success not the implicit goal of any course redesign? Notably, Palmer et al.’s (2016) articulated goals for CDIs do not include any student outcomes. Student outcomes are, however, implied by goals such as the requirement that participants “scaffold and space learning activities appropriately to support and maximize learning” (Palmer et al., 2016, p. 2). Yet that type of goal can be assessed without looking at student data; for example, one could see evidence of scaffolding in a lesson plan. Scaffolding should foster better student outcomes, but is scaffolding alone enough to help a student pass the class? Without directly measuring student outcome data, we cannot know.

Redesign programs for student success must take great care that they, too, are not implemented “on the margins” but instead deeply embed the high-impact traits that are actualized in and around the core classroom experience. When HIP traits are used and assessed, the gateway course experience has a chance to shift from a bottleneck to a “catapult” (Venn et al., 2019) into higher education and all of the opportunities it affords.

Acknowledgments

This research was supported by the generosity of the John N. Gardner Institute for Excellence in Undergraduate Education.

About the Authors

Rebecca Campbell, PhD, is a Professor of Educational Psychology at Northern Arizona University and President’s Distinguished Teaching Fellow. She teaches courses on the theory of teaching and learning in the College of Education. Generally, her work focuses on how pedagogy and academic policies facilitate student success in higher education. Her interests include looking at best practices in first-year seminar design, academic probation interventions, common reading programs, and gateway courses.

Benjamin Buck Blankenship is a PhD student in the College of Education’s Curriculum and Instruction program at Northern Arizona University. He is also a full-time lecturer in First Year Experience, a retention program, working with various first-year student groups. His research interests include retention, first-year students, at-risk students, and mentoring interventions.

References

  • Alexander, J. S., & Gardner, J. N. (2009). Beyond retention: A comprehensive approach to the first college year. About Campus, 14(2), 18–26. https://doi.org/10.1002/abc.285
  • Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. Jossey-Bass.
  • American Academy of Arts and Sciences. (2017). The future of undergraduate education: The future of America. https://www.amacad.org/sites/default/files/publication/downloads/Future-of-Undergraduate-Education.pdf
  • Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). Jossey-Bass.
  • Association of College and University Educators. (2018). Effective teaching practices. https://acue.org/?acue_courses=effective-teaching-practices
  • Barkley, E. F. (2010). Student engagement techniques: A handbook for college faculty. Jossey-Bass.
  • Brookins, J., & Swafford, E. (2017). Why gateway-course improvement should matter to academic discipline associations and what they can do to address the issues. New Directions for Higher Education, 180, 75–85.
  • Brown, P. C., Roediger, H. L., III, & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Harvard University Press.
  • Chism, N. V. N., Holley, M., & Harris, C. J. (2012). Researching the impact of educational development: Basis for informed practice. In J. Groccia & L. Cruz (Eds.), To improve the academy: Resources for faculty, instructional, and organizational development (Vol. 31, pp. 129–145). Jossey-Bass/Anker.
  • Complete College America. (2012). Remediation: Higher education’s bridge to nowhere.
  • Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty. Jossey-Bass.
  • Delamater, A. (2018, July 24). It matters who is teaching your 101 classes—and how: 3 ways to reduce course and instructor variation. EAB. https://www.eab.com/blogs/institutional-analytics-blog/07/course-section-variation
  • Dougherty, K., Jones, S., Lahr, H., Natow, R., Pheatt, L., & Reddy, V. (2016). Performance funding for higher education. Johns Hopkins University Press.
  • Education Advisory Board. (2018). Navigate the bottleneck courses in your institution. https://www.eab.com/technology/academic-performance-solutions/resources/infographics/bottlenecks
  • Felten, P. (2018). Assessing the heart of high-impact practices: Transfer of learning [Conference session]. 2018 Assessment Institute, Indianapolis, IN, United States.
  • Field, K. (2018). A third of your freshmen disappear: How can you keep them? The Chronicle of Higher Education, 35, 8. https://www.chronicle.com/article/a-third-of-your-freshmen-disappear-how-can-you-keep-them/
  • Harrington, C., & Thomas, M. (2018). Designing a motivational syllabus: Creating a learning path for student engagement. Stylus Publishing.
  • Healey, M., Flint, A., & Harrington, K. (2016). Students as partners: Reflections on a conceptual model, followed by student response by Lianne van Dam. Teaching and Learning Inquiry, 4(2), 8–20. https://doi.org/10.20343/TEACHLEARNINQU.4.2.3
  • Hearn, J. C. (2006). Student success: What research suggests for policy and practice. The National Postsecondary Education Cooperative. https://nces.ed.gov/npec/pdf/synth_Hearn.pdf
  • John N. Gardner Institute for Excellence in Undergraduate Education. (2015). Gateways to Completion: Overview, evidence of strength of components & summary of outcomes to date. https://s3.amazonaws.com/jngi_pub/hlc16/Overview+of+G2C+Evidence+of+the+Strength+of+the+G2C+Components+&+G2C+Outcomes+to+Date.pdf
  • Johnson, T., Linder, K., Nelms, G., & Palmer, M. (2012, October). Exploring the range of multi-day course design institutes [Conference session]. The 37th Annual POD Conference, Seattle, WA, United States.
  • Kinzie, J. (2012). Introduction: A new view of student success. In L. A. Schreiner, M. C. Louis, & D. D. Nelson (Eds.), Thriving in transitions: A research-based approach to college student success. Stylus Publishing.
  • Koch, A. K. (2017a). It’s about the gateway courses: Defining and contextualizing the issue. New Directions for Higher Education, 2017(180), 11–17. https://doi.org/10.1002/he.20257
  • Koch, A. K. (2017b). Many thousands failed: A wakeup call to history educators. Perspectives on History, 55, 18–19. https://www.historians.org/publications-and-directories/perspectives-on-history/may-2017/many-thousands-failed-a-wakeup-call-to-history-educators
  • Koch, A. K., Rife, M. C., & Hanson, M. (2016). Killer course correction: Using self-studies to transform gateway courses [Paper presentation]. Higher Learning Commission Annual Conference, Chicago, IL, United States.
  • Koch, A. K., & Rodier, R. (2014). Gateways to completion guidebook. John N. Gardner Institute for Excellence in Undergraduate Education.
  • Kuh, G. D. (2008a). High-impact educational practices: What they are, who has access to them, and why they matter. The Association of American Colleges & Universities.
  • Kuh, G. D. (2008b). Why integration and engagement are essential to effective educational practice in the twenty-first century. Peer Review, 10(4), 27–29. http://www.aacu.org/publications-research/periodicals/why-integration-and-engagement-are-essential-effective-educational
  • Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008c). Unmasking the effects of student engagement on first-year college grades and persistence. The Journal of Higher Education, 79(5), 540–563.
  • Kuh, G. D., & Kinzie, J. (2018, May 1). What really makes a “high-impact” practice high impact? Inside Higher Ed. https://www.insidehighered.com/views/2018/05/01/kuh-and-kinzie-respond-essay-questioning-high-impact-practices-opinion
  • Kuh, G. D., Kinzie, J., Buckley, J. A., Bridges, B. K., & Hayek, J. C. (2007). Piecing together the student success puzzle: Research, propositions, and recommendations. ASHE Higher Education Report, 32(5), 1–182.
  • Lang, J. M. (2016). Small teaching: Everyday lessons from the science of learning. Jossey-Bass.
  • Lieberman, M. (2019, February 20). Central Florida absorbs course redesign organization. Inside Higher Ed. https://www.insidehighered.com/digital-learning/insights/2019/02/20/university-central-florida-absorbs-resources-course-redesign
  • Martin, N. D., Spenner, K. I., & Mustillo, S. A. (2017). A Test of leading explanations for the college racial-ethnic achievement gap: Evidence from a longitudinal case study. Research in Higher Education, 58(6), 617–645. https://doi.org/10.1007/s11162-016-9439-6
  • McGowan, S., Felten, P., Caulkins, J., & Artze-Vega, I. (2017). Fostering evidence-informed teaching in crucial classes: Faculty development in gateway courses. New Directions for Higher Education, 2017(180), 53–62. https://doi.org/10.1002/he.20261
  • McGuire, S. Y. (2015). Teach students how to learn: Strategies you can incorporate into any course to improve student metacognition, study skills, and motivation. Stylus Publishing.
  • Mellow, G. O., Woolis, D. D., Klages-Bombich, M., & Restler, S. (2015). Taking college teaching seriously: Pedagogy matters! Stylus Publishing.
  • Middendorf, J. K. (2000). Finding key faculty to influence change. To Improve the Academy, 18(1), 83–93. https://doi.org/10.1002/j.2334-4822.2000.tb00364.x
  • Middendorf, J. K., & Shopkow, L. (2018). Overcoming student learning bottlenecks. Stylus Publishing.
  • Miller, M. D. (2016). Minds online: Teaching effectively with technology. Harvard University Press.
  • National Center for Academic Transformation. (n.d.). A brief history of the National Center for Academic Transformation. http://www.thencat.org/NCATHistory.html
  • Nomme, K., & Birol, G. (2014). Course redesign: An evidence-based approach. The Canadian Journal for the Scholarship of Teaching and Learning, 5(1). https://doi.org/10.5206/cjsotl-rcacea.2014.1.2
  • Palmer, M. S., Streifer, A. C., & Williams-Duncan, S. (2016). Systematic assessment of a high-impact course design institute. To Improve the Academy, 35(2), 339–361. https://doi.org/10.1002/tia2.20041
  • Pistilli, M. D., Arnold, K. E., & Bethune, M. (2012, July 17). Signals: Using academic analytics to promote student success search. EDUCAUSE Review Online. https://er.educause.edu/articles/2012/7/signals-using-academic-analytics-to-promote-student-success
  • Prime, D. G. (1982). Last word: A missing element in the retention discussion. Black Issues in Higher Education, 18(21), 6.
  • Reynolds, H. L., & Kearns, K. D. (2017). A planning tool for incorporating backward design, active learning, and authentic assessment in the college classroom. College Teaching, 65(1), 17–27. https://doi.org/10.1080/87567555.2016.1222575
  • Turner, P. M. (2009). Next generation: Course redesign. Change: The Magazine of Higher Learning, 41(6), 10–16. https://doi.org/10.1080/00091380903297642
  • Valbrun, M. (2018, April 25). Maybe not so “high impact”? Inside Higher Ed. https://www.insidehighered.com/news/2018/04/25/study-questions-whether-high-impact-practices-yield-higher-graduation-rates
  • Venn, M., Koch, A., & Denley, T. (2019, December). Deepening learning through gateway course transformation [Conference session]. 2019 Annual Meeting of the Southern Association of Colleges and Schools Commission on Colleges, Houston, TX, United States.
  • Walsh, M. L., Lewis, J. S., & Rakestraw, J. (2013). Faculty collaboration to effectively engage diversity: A collaborative course redesign model. Peer Review, 15(1), 21–24.
  • Wheeler, L., & Bach, D. (2018). Making assessment matter: Linking interventions, instructional practices, and academic achievement [Paper presentation]. 2018 POD Network Conference, Portland, OR, United States.
  • Willingham, D. T. (2009). Why don’t students like school? A cognitive scientist answers questions about how the mind works and what it means for the classroom. Jossey-Bass.
  • Willingham, D. (2019, January 14). What should funders fund? Daniel Willingham—Science and Education. http://www.danielwillingham.com/daniel-willingham-science-and-education-blog/what-should-funders-fund
  • Winkelmes, M., Boye, A., & Tapp, S. (2019). Transparent design in higher education teaching and leadership: A guide to implementing the transparency framework institution-wide to improve learning and retention. Stylus Publishing.
  • Zhao, Y. (2016). From deficiency to strength: Shifting the mindset about education inequality. Journal of Social Issues, 72(4), 720–739. https://doi.org/10.1111/josi.12191