Abstract

By combining recommendations for effective assignment design with principles of transparency and the expectancy-value theory of achievement motivation, we developed a rubric for assessing the quality and guiding the design of assignment descriptions. This rubric defines criteria characteristic of well-designed assignments; breaks the criteria down into concrete, measurable components; and suggests what evidence for each component might look like. While the full rubric is designed for major, signature assignments, it can accommodate a diverse range of assignment types. It can also provide summative, quantitative information to educational developers for research and formative, qualitative feedback to instructors for gauging the quality of their assignments.

Keywords: assignment, higher education, learning-focused, motivation, rubric, transparency

Drawing on a significant body of literature, Boud (1988) argues that “assessment methods… probably have a greater influence on how and what students learn than any other single factor” (p. 35). This influence, which may have either positive or negative consequences for student engagement and learning, depends on the nature of the assessment and how students interpret its tasks and context (Boud, 1995). For example, many traditional forms of assessment are auditive in nature (Wiggins, 1998). Their purpose is to judge or evaluate learning and to provide a means for instructors to issue a score or a grade. They ask students to look back and demonstrate their ability to recall or remember. To complete such assessments, students typically adopt less effective learning strategies—often labeled as “surface” learning approaches (Marton & Säljö, 1976). Students minimize their study or work time, often resorting to cramming and memorization. This “learning” process is unreflective and may feel like a chore assigned solely for the purpose of passing a test. In contrast, other forms of assessment are educative (Wiggins, 1998). Their purpose is to inform the learning process and to improve student performance. They are authentic and complex. They ask students to look forward and to demonstrate their ability to synthesize, predict, analyze, and evaluate. To succeed, students must adopt “deeper” approaches to learning: connecting ideas, thinking critically about content, striving for understanding, and reflecting on their own learning. The learning process, in this scenario, is intrinsically valued and rewarded.

The benefits of adopting more educative, learning-focused forms of assessment are clear, and a number of excellent resources already exist to help educators develop meaningful assignments that foster these deep approaches to learning (e.g., Bean, 2011; Boye, n.d.; Nilson, 2010; Walvoord & Anderson, 2010; Wiggins, 1998). Some of the recommendations include aligning the purposes of assignments with learning objectives; ensuring authentic performance; scaffolding complexity; developing and sharing standards and criteria; providing immediate, discriminating, and forward-looking feedback; and giving students opportunities to self-reflect and use feedback to improve future performance.

One consistent theme running through these recommendations is the characteristic of transparency. In the context of teaching and learning in higher education, transparency is the act of making explicit to students the underlying—and often hidden—features of the learning environment (Winkelmes, 2013). Instructors who adopt a transparency framework clearly and consistently articulate to their students what they are learning, why they might want to learn those things, and how they might best navigate learning experiences to ensure success. The world of rhetoric, composition, and writing instruction has long been concerned with transparency in writing assignments, though it is usually referred to as “clarity.” Research by Anderson, Anson, Gonyea, and Paine (2009) concluded that “the use of writing to promote deep learning depends less on the amount of writing assigned in a course than on the design of the writing assignments themselves” (Bean, 2011, p. 97). Well-designed assignments, Anderson et al. (2009) found, include “clear explanations of writing expectations,” such as the purpose and grading criteria. Bean (2011) and other popular writing instruction resources (e.g., Gottschalk & Hjortshoj, 2004) also suggest that instructors clarify an assignment’s purpose, audience, format, and task.

Working with a range of assignment types, including writing-focused examples, Winkelmes (2013) and her colleagues argue that transparency is connected to explicit articulations of an assignment’s purpose, tasks, and criteria. Using this definition, they measured the impact of transparent assignment descriptions on several predictors of student success (Winkelmes et al., 2016). In a multi-institutional study involving 35 instructors and 1,800 students, they found that students who perceive a greater degree of transparency in the purposes, tasks, and criteria of their assignments report significant gains in academic confidence, sense of belonging, and mastery of the skills that employers value most when hiring (e.g., the ability to apply learning to new problems and situations). The value of these gains is reinforced by other studies that have connected academic confidence and sense of belonging to greater persistence, retention, and higher grades (Aronson, Fried, & Good, 2002; Paunesku et al., 2015; Walton & Cohen, 2011).

Researchers have observed similar benefits when the basic principles of transparency are applied to other teaching documents. For example, Palmer, Wheeler, and Aneece (2016) showed that when instructors create more transparent, learning-focused syllabi (Palmer, Bach, & Streifer, 2014)—those characterized by clearly stated learning goals and objectives, robust assessment and activity descriptions, detailed course schedules, and a focus on student success—students have more positive perceptions of the document, the course, and the instructor. Specifically, students viewed a learning-focused syllabus as a useful, organizing document; the associated course as an interesting, relevant, and rigorous learning experience; and the instructor as a caring and supportive individual integral to their learning process.

Is there a theoretical underpinning that could explain the benefits observed for transparent assignments, syllabi, and other artifacts of instruction? One likely candidate is motivation—specifically, the expectancy-value theory of achievement motivation (Wigfield & Eccles, 2000). This theory posits that an individual’s choices, persistence, and performance are a function of the value they place on an activity and their beliefs about how well they will do on that activity. Students might derive value from the importance or meaningfulness of the activity, their personal interest in or enjoyment of it, or its usefulness for their future plans. Students’ beliefs about how well they might perform depend on their past experiences, self-concept of ability, drive for competency, the match between their skills and those required by related activities, and confidence, as well as on the support, encouragement, and feedback the instructor offers.

Comparing both components of the theory—value and expectancy—to Winkelmes et al.’s (2016) articulation of transparent assignments, the value component appears to map directly onto what she and her research team call “purpose.” The relationship between expectancy and what Winkelmes et al. label as “task” and “criteria” is less direct, but can be inferred. When instructors clearly lay out their expectations for their assignments (i.e., articulate the task) and describe in detail how the assignment will be assessed (i.e., define the criteria), it is reasonable to assume that they directly influence students’ beliefs about their potential success on the assignment. The fact that Winkelmes and her colleagues see positive gains in academic confidence and sense of belonging when students are presented with transparent assignments supports this inference, as confidence and belonging are also important markers of self-efficacy (Bandura, 1997; Covington, 1992).

If the underlying theoretical framework supporting the observed benefits of transparency is motivation, then not all assignment descriptions articulating the purpose, task, and criteria will be equal. The purpose(s) of some assignments are more authentic, practical, relevant, and valuable to students; some tasks are more clearly defined, better articulated, sequenced, scaffolded, and otherwise supported by the instructor; some criteria are better aligned with purposes, are more discriminating, have clearer standards, and provide more effective formative feedback. In other words, transparent assignments necessarily attend to purpose, task, and criteria, while learning-focused assignments also attend to the underlying motivational features that are linked to purpose, task, and criteria.

By combining the recommendations for effective assignment design with principles of transparency and the expectancy-value theory of achievement motivation, we have developed a rubric to guide the design and assessment of learning-focused assignments. This rubric defines broad criteria characteristic of well-designed assignments; breaks these criteria down into a set of concrete, measurable components; and suggests what evidence for each component might look like in an assignment description. In the following sections, we describe our design process, the rubric itself, and important details about validity, uses, scoring, inter-rater reliability, pre-/post-analysis, and other considerations.

The Design Process

To develop the rubric, we followed a multistage, iterative process. We drew from the available literature on assignment design, built upon Winkelmes et al.’s transparency model and important research findings, and considered the expectancy-value theory of achievement motivation to create a first draft. We modeled the specific structure and scoring system of the assignment rubric on the valid and reliable learning-focused syllabus rubric of Palmer et al. (2014).

Following best practices in rubric design (e.g., Brookhart, 2013; Stevens & Levi, 2013), we then refined our rubric by involving students themselves. We conducted a series of Institutional Review Board-approved student focus groups at two large, public, research-focused institutions to help us better understand what features of assignment descriptions matter most to students. Participants annotated a diverse set of assignment descriptions, discussed their reactions, and completed a brief survey. Using the resulting data, we adjusted the initial criteria, components, and level of importance for certain components of the rubric. For example, the rubric’s emphasis on clear definitions, in particular of genres and standards, reflects student sensitivity to the fact that any two instructors can and often do mean different things by terminology that may seem obvious or self-explanatory, such as “research paper,” “well written,” or “appropriate.”

A revised draft of the rubric was then shared during an interactive session at the 2016 POD Network Conference (Palmer, Gravett, & LaFleur, 2016); participant feedback during this session provided the basis for further refinements. For example, we had initially labeled the criterion for general learning-focused practices (e.g., positive tone, well organized) “Accessibility.” Yet, we were reminded, this term is usually tied to inclusive teaching practices and universal design principles, especially in relation to disability. As a result, we relabeled the original criterion “Additional Learning-Focused Qualities,” and we also added a component specifically focused on inclusive teaching practices.

Finally, we followed Walvoord and Anderson’s (2010) advice to base rubrics on past performances. We applied our rubric to approximately 20 assignment descriptions that varied in their focus on learning. The samples came from three sources: the Transparency in Teaching and Learning in Higher Education project (Winkelmes, 2014), the National Institute for Learning Outcomes Assessment’s DQP Assignment Library (2014), and the University of Virginia’s week-long Course Design Institute (2015). This final, crucial step allowed us to determine the rubric’s validity constructs, define a process for ensuring inter-rater reliability, and finalize the language of the components and the scoring system.

The Rubric

The assignment rubric, shown in Table 1, was designed to help quantitatively and qualitatively assess the descriptions of major, or “signature,” assignments in higher education. It accounts for nuances in assignments while also maintaining relevance to courses in a diverse range of disciplines, academic levels, and institutions.

Table 1. Learning-Focused Assignment Rubric Describing Main Criteria, Specific Components, and Suggestions for Where to Find Evidence for Each Component.
Criterion | What the component looks like in the written document | Ideas for where to look and examples of what to look for (not all need to be present)
Purpose: The assignment description clearly states what knowledge or skills students will gain and what practice they will get.
  • 1. Measurable student learning objectives for the assignment are articulated.***
  • Learning objectives may be embedded in an introductory statement of purpose, in a description of the assignment, or in their own easily identifiable section.

  • Objectives are written using specific, measurable action words (e.g., compare, evaluate).

  • Learning objectives focus on what the students will need to do, not the assignment, course, or instructor.

  • Ideally, the assignment learning objectives should align with the course learning objectives, but this is difficult to know without looking at the syllabus.

  • 2. The assignment is authentic, practically useful, and/or relevant to students’ lives beyond college.***
  • The value of the assignment is usually found in the introductory statement or description of the assignment.

  • Authentic assignments place students in real or realistic scenarios in which they perform work similar to that of experts or professionals in the discipline/field.

  • Students might be asked explicitly to inhabit a role or context beyond a student in a course.

  • The assignment makes a connection between the activities or practical, transferable skills that it involves and those that students will use now or after college.

  • 3. The relevance of the assignment in the context of the course is clearly articulated.*
  • A statement of relevance to course material (e.g., “As we have discussed in class…”) is usually found in the introductory statement or description of the assignment.

  • This component may be difficult to assess since the relevance may be stated in the description of the assignment on the syllabus.

  • 4. Learning objectives are appropriately pitched to the course level, class size, position of the assignment within the course, and the characteristics of the students taking the class.*
  • This component can be difficult to assess for anyone except the instructor or someone with extensive knowledge of the course, discipline, curriculum, and institutional context. When used for research purposes, it may be necessary to exclude this component. In this case, the scoring system must be adjusted.

Task(s): It is clear what the students will do and how they will do it.
  • 5. The task is aligned with the purpose.***
  • The task selected is well suited to fulfill the purpose of the assignment.

  • 6. The type(s) or genre(s) of the assignment is clear and defined.***
  • The type (e.g., essay, digital media project, infographic) usually appears in the name or title of the assignment, but it is sometimes indicated in a separate section.

  • The assignment describes or defines the genre for students, rather than assuming that they will know what, for example, a “research paper” means in that course.

  • The assignment may contain multiple types or genres, but these must be clearly defined and contribute to the overall purpose.

  • 7. The sequence of the assignment seems logical and well paced and the major steps within that sequence are described.***
  • Steps may be delineated using numbers, bullet points, checklists, or transitional words (e.g., first, second, next, then, etc.).

  • How to approach each step is clear.

  • The presence of multiple due dates may indicate the assignment has been broken into a logical sequence with different steps.

  • The sequence seems well paced, with not too many tasks occurring or due all at once.

  • It is noted which parts of the process students will learn more about later.

  • 8. Formatting requirements or restrictions, the weight or worth of the assignment, and/or any important due dates or deadlines are specified.**
  • These details usually appear in their own separately labeled sections.

  • Instructors may use special formatting (e.g., bold, underline, italics) to emphasize important details of the assignment.

  • While the weight or worth of the assignment is often articulated in the syllabus, it is good practice to reiterate it on the assignment description.

  • 9. Tips for successfully completing the task, beyond the assessment criteria, are provided.*
  • These tips may appear as a list or a table.

  • Tips might include, for example, comments from past students, recommended resources, or common mistakes to avoid.

  • This may be difficult to assess because the tips may appear in supplementary material, as part of an in-class discussion, or on the syllabus.

Criteria/assessment: The criteria describe what excellence looks like and allow students to effectively self-evaluate.
  • 10. The criteria by which the assignment will be assessed are indicated.***
  • These criteria may appear in the form of a checklist, rubric, or textual descriptions.

  • 11. The criteria specify characteristics that represent high quality work.***
  • The criteria may be presented holistically (where only the highest level of performance is articulated) or analytically (where multiple levels of performance are articulated).

  • The language describing the criteria is clearly defined, easily understood, and framed in a positive way.

  • 12. The assessment criteria are aligned with the assignment’s purpose and task(s).***
  • The criteria should be clearly derived from and supportive of the purposes and the task(s). For example, if part of the purpose of the assignment is for students to demonstrate their ability to closely read a text, then the skills associated with close reading need to be represented in the assignment’s assessment standards.

  • 13. There are opportunities to practice and to receive formative feedback, according to the criteria, before final submission.**
  • Opportunities for feedback may be indicated by separate steps and important dates.

  • Formative feedback can be provided by the instructor, as well as through peer feedback or critical self-reflection.

  • 14. The assignment refers students to multiple annotated examples of work that fulfill the criteria.*
  • Asking students to discover such examples may be explicitly included as part of the assignment.

  • There may be examples included on or attached to the assignment.

  • The examples should be annotated, in writing or verbally, in or out of class.

  • The availability and/or quality of the examples may be difficult to assess as these can appear as supplementary materials or part of in-class discussions.

Additional learning-focused qualities: The document is written with learners in mind, helping to organize, engage, and challenge them.
  • 15. The tone of the assignment is positive, respectful, inviting, and directly addresses the student as a competent, engaged learner.***
  • The positive, respectful, inviting tone is conveyed throughout the document.

  • Personal pronouns (e.g., you, we, us) are used, rather than “the students” or “they.”

  • 16. The assignment is well organized and easy to navigate.**
  • The assignment is readable and the organization is clear and seemingly logical.

  • The presentation of the assignment elicits no major questions or confusions.

  • Layout, formatting, and organization emphasize the most important aspects of the assignment, rather than focusing students’ attention on more minor logistical details (e.g., page length, margins).

  • 17. The assignment is designed to be inclusive of and accessible to all students.**
  • The assignment description is presented to students in multiple formats (e.g., hard copy, oral presentation, digital) and is fully accessible to students with disabilities.

  • The assignment is flexible enough to allow students to compose or communicate the final product in a variety of modalities (e.g., print, oral presentation, multimedia).

  • Students are encouraged to create work that is accessible to other students (e.g., electronic work is screen-readable or video projects have accompanying transcripts or closed captioning).

  • The assignment avoids unnecessarily asking students to imagine, assume, or speak from stereotypical or stigmatizing roles.

  • For group assignments, the instructor makes clear the value of diverse teams and ensures their formation.

  • 18. The assignment communicates high expectations and projects confidence that students can meet those high expectations through hard work.*
  • The purpose, task, and criteria all indicate a high level of academic rigor (e.g., a purpose that promotes higher-order thinking, a task that mimics the types of work expert professionals perform, etc.).

  • The assignment communicates the belief that each student can succeed.

  • 19. The assignment is engaging.*
  • The assignment is likely to pique students’ interest because it seems interesting, different, intriguing, provocative, fun, and/or creative.

Note. Essential components are indicated with ***, important with **, and less important with *.

The rubric focuses on four criteria characteristic of learning-focused assignment descriptions: (a) purpose, (b) task(s), (c) criteria/assessment, and (d) additional learning-focused qualities. These criteria do not necessarily map onto any specific section of an assignment description; instead, users of the rubric are directed to search for evidence across the document. This allows an assignment description to be assessed without having to rely on a prescribed or templated format.

We break down each criterion of the rubric into multiple components. The four components in the purpose section describe the ways in which the assignment description articulates what knowledge or skills students will gain and what practice they will get. The five components in the task(s) section describe the ways in which the assignment description articulates the steps required to complete the assignment and how students might best approach them. The five components in the criteria/assessment section describe the ways in which the assignment description articulates what excellent student work looks like and how it will be assessed. Finally, the five components in the additional learning-focused qualities section describe the ways in which the assignment description attends to organization, motivation, inclusivity, and other learning-focused principles.

Lastly, we designate each component as essential, important, or less important. While all components contribute to more learning-focused assignment descriptions, this coding scheme emphasizes the elevated importance of some components, helping to differentiate between assignments and also focusing instructors’ improvement efforts on the components that make the biggest difference. It also helps users decide which components to consider when adapting the rubric for their own specific needs.

Considerations

Validity

The rubric was designed to assess the quality of assignment descriptions in higher education. We define quality in terms of the description’s focus on learning. Though we use Winkelmes et al.’s basic framework for describing transparency—purpose, task, criteria—we further define these characteristics and emphasize the importance of additional learning-focused qualities, such as organization, motivation, and inclusivity.

This full rubric is best applied to major, or “signature,” assignments that are substantive in scope and scale. They might be higher-stakes, scaffolded, project-based, multistage, and/or capstone assignments, such as end-of-semester research papers, final oral presentations, or digital media projects. The rubric can also be applied to shorter, in-class, or formative assignments by scoring only subsets of components. For example, the assignment description for a short, non-graded, in-class assignment where students complete a worksheet should include clear articulations of the purpose and task, but not necessarily any other components. Alternatively, for a minor summative assignment that caps off a week-long course unit and contributes to students’ grades, all the essential components should be present with high quality. The exact subset will depend on the assignment, but in many cases, it will minimally include the essential components for purpose, task(s), and additional learning-focused qualities. Table 2 describes a few of the most common assignment types and lists the components most important to consider.

Table 2. Recommended Rubric Component Subsets Based on Assignment Type
Component | Quick, formative, in-class assignment | In-depth, formative, in-class assignment | Minor summative assignment | Major summative assignment
1*** | x | x | x | x
2*** |  | x | x | x
3* |  |  |  | x
4* |  |  |  | x
5*** | x | x | x | x
6*** |  |  | x | x
7*** |  |  | x | x
8** |  |  |  | x
9* |  |  |  | x
10*** |  | x | x | x
11*** |  |  | x | x
12*** |  |  | x | x
13** |  |  |  | x
14* |  |  |  | x
15*** | x | x | x | x
16** |  |  |  | x
17** |  |  |  | x
18* |  |  |  | x
19* |  |  |  | x
Note. Essential components are indicated with ***, important with **, and less important with *.

Uses

We designed the assignment rubric for two primary purposes: as a formative/educative tool and as a research tool. As a formative tool, the rubric may be useful to both instructors and educational developers. Instructors can score their own assignments to see where on the continuum—Unacceptable to Exemplary—their assignment descriptions fall and use the rubric to revise existing assignments or develop new ones. Instructors may even find it useful to share the rubric with their students, as a way to increase students’ awareness about the important components of an assignment and to hone students’ metacognitive abilities. Educational developers might use this rubric to provide formative feedback to instructors on their assignment descriptions during consultations, to train center for teaching and learning (CTL) staff on how to give feedback, or to incorporate it into workshops or other programs. The rubric might even be productively shared with students in the context of CTL student–faculty partnerships for developing course content or simply to be more transparent about the assignment design process.

Likewise, scholars could pursue various research projects using the rubric. For instance, researchers might study students’ perceptions of two different assignments at opposite ends of the spectrum; perceptions of the instructor, the course, and other elements of the learning environment could also be studied. Another obvious avenue for research would be an extensive analysis of a large sample of assignments; while we tested our rubric on dozens of assignment descriptions, more could be done. Finally, the rubric could be used as a pre-/post-assessment tool for educational development initiatives, such as workshops, faculty learning communities, institutes, or other opportunities, wherein instructors are focused specifically on improving the learning-centeredness of their assignments.

Scoring

Each of the 19 components on the rubric is designated as essential (components #1, 2, 5, 6, 7, 10, 11, 12, and 15), important (components #8, 13, 16, and 17), or less important (components #3, 4, 9, 14, 18, and 19) and is scored on the strength of supporting evidence. Strong evidence indicates that many (but not necessarily all) of the characteristics of the component are present and match the criteria closely. Moderate evidence indicates that a few of the characteristics of the component are present and/or only partly match the criteria. Low evidence indicates that very few, if any, of the characteristics of the component are present, and/or that those present do not match the criteria.

The actual scoring mechanism used depends on the purpose. For formative purposes, noting the presence or absence of components, along with indications of quantity, is likely sufficient. With this information, an instructor would be able to easily identify opportunities to increase the learning focus of their assignment description. For research purposes, where quantitative data is valuable, the following scoring mechanism ensures that only high-quality assignment descriptions, which attend to at least the essential and important components, score favorably:

To generate a score for an assignment, each essential component is awarded three points; each important component, two points; and each less important component, one point, regardless of the strength of evidence. After all components are scored, each evidence column is summed and scaled by the appropriate factor: the strong-evidence subtotal is multiplied by 2, the moderate-evidence subtotal by 1, and the low-evidence subtotal by 0. This multidirectional weighting scheme, also used in the Palmer et al. (2014) syllabus rubric, ensures that the final score reflects both the presence and the quality of essential components. An assignment cannot score high if, for example, it does not include meaningful student learning objectives (component #1). It can score high, however, if it exhibits strong evidence for most of the essential and important components but lacks evidence for less important ones, such as tips for successfully completing the task (component #9).
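To make the weighting concrete, here is a minimal Python sketch of the calculation just described. The component groupings and multipliers are taken directly from this section; the function and variable names are our own illustrative choices.

```python
# A sketch of the research scoring mechanism described above; names are
# illustrative, not part of the rubric itself.

ESSENTIAL = {1, 2, 5, 6, 7, 10, 11, 12, 15}    # 3 points each
IMPORTANT = {8, 13, 16, 17}                    # 2 points each
LESS_IMPORTANT = {3, 4, 9, 14, 18, 19}         # 1 point each

EVIDENCE_MULTIPLIER = {"strong": 2, "moderate": 1, "low": 0}


def component_points(component: int) -> int:
    """Importance weight for a rubric component (1-19)."""
    if component in ESSENTIAL:
        return 3
    if component in IMPORTANT:
        return 2
    if component in LESS_IMPORTANT:
        return 1
    raise ValueError(f"unknown component: {component}")


def score_assignment(ratings: dict[int, str]) -> int:
    """Total score from per-component evidence ratings, where `ratings`
    maps a component number (1-19) to "strong", "moderate", or "low"."""
    return sum(component_points(c) * EVIDENCE_MULTIPLIER[r]
               for c, r in ratings.items())


# Strong evidence on every component yields the maximum score of 82.
assert score_assignment({c: "strong" for c in range(1, 20)}) == 82
```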

When using the full rubric, the maximum score possible for an assignment description is 82. Exemplary assignment descriptions typically exhibit strong evidence for all essential and important components and fall in the range 70–82. Accomplished assignment descriptions typically exhibit strong evidence for at least all essential components, though not necessarily the important components, and fall in the range 54–69. Emerging assignment descriptions typically exhibit at least moderate evidence for all essential and important components and fall in the range 35–53. Unacceptable assignment descriptions typically lack evidence for most of the essential and important components and fall in the range 0–34.
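Continuing the sketch above, these score bands reduce to a simple threshold lookup (the function name is hypothetical):

```python
def performance_band(score: int) -> str:
    """Map a full-rubric total (0-82) to the bands described above."""
    if score >= 70:
        return "Exemplary"
    if score >= 54:
        return "Accomplished"
    if score >= 35:
        return "Emerging"
    return "Unacceptable"
```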

If one of the component subsets shown in Table 2—or another variant—is used for research purposes, the quantitative scoring system will need to be adjusted accordingly.
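One way to make that adjustment, continuing the sketch above: the adjusted maximum is simply twice the summed importance weights of the included components (the helper below is our own).

```python
def max_possible_score(components: set[int]) -> int:
    """Maximum score for a component subset: strong evidence on every one."""
    return 2 * sum(component_points(c) for c in components)


# Example: the minor summative subset in Table 2 is the nine essential
# components, so its maximum is 2 * (9 * 3) = 54 points.
assert max_possible_score({1, 2, 5, 6, 7, 10, 11, 12, 15}) == 54
```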

Inter-Rater Reliability

When using the rubric for research purposes, we recommend the following standard process to ensure inter-rater reliability (Palmer, Streifer, & Williams-Duncan, 2016):

  • Each assignment description should be initially scored against the rubric independently by at least two raters.

  • Component-level and overall scores should be compared between raters. Any component defined as essential in the rubric with a rater difference greater than 0, and any other component with a rater difference greater than 1, should be rescored by the researchers.

  • Rescoring should be done collaboratively, without any knowledge of the original scores, until consensus is reached through conversation.

This process should produce differences in the total scores between raters of less than or equal to 8 points (less than 10% of the total score possible). The total score for each assignment description is then taken to be the average of the raters’ total scores.
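A minimal sketch of this process, assuming each rater’s component-level scores are stored in a dict keyed by component number and reusing the ESSENTIAL set from the scoring sketch above:

```python
def components_to_rescore(rater_a: dict[int, int],
                          rater_b: dict[int, int]) -> list[int]:
    """Components whose rater difference exceeds the allowed threshold:
    0 for essential components, 1 for all others."""
    flagged = []
    for component, score_a in rater_a.items():
        allowed = 0 if component in ESSENTIAL else 1
        if abs(score_a - rater_b[component]) > allowed:
            flagged.append(component)
    return flagged


def totals_acceptable(total_a: float, total_b: float) -> bool:
    """End-state check: rater totals within 8 points (under 10% of 82)."""
    return abs(total_a - total_b) <= 8


def final_score(total_a: float, total_b: float) -> float:
    """The final score is the average of the two raters' totals."""
    return (total_a + total_b) / 2
```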

Pre-/Post-Analysis

If a researcher wishes to analyze assignment description data for pre–post pairs, we recommend calculating normalized gains (⟨g⟩) for each pair as described by Hake (1998): ⟨g⟩ = (post total score − pre total score) / (82 − pre total score), where 82 is the maximum score possible. This number takes into account the gain possible between pre and post scores for each instructor. (Note. If the full rubric is not used, the maximum total score in this equation would need to be adjusted accordingly.)

We define the region of low gain to be less than 0.3, moderate gain from 0.3 up to 0.7, and high gain of 0.7 or greater. The overall normalized gain (⟨⟨g⟩⟩) should be calculated by averaging the normalized gains for all pairs analyzed. This calculation allows one to predict the gain in assignment description score that an average instructor could expect to achieve after redesigning an assignment, regardless of their starting point.
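In code, assuming the full rubric (maximum score 82) and our own illustrative names:

```python
MAX_SCORE = 82  # full-rubric maximum; adjust if a component subset is used


def normalized_gain(pre: float, post: float) -> float:
    """Hake-style normalized gain <g> for one pre/post pair."""
    return (post - pre) / (MAX_SCORE - pre)


def overall_normalized_gain(pairs: list[tuple[float, float]]) -> float:
    """Average normalized gain <<g>> across all pre/post pairs."""
    return sum(normalized_gain(pre, post) for pre, post in pairs) / len(pairs)


# Example: improving from 40 to 68 gives <g> = 28 / 42, roughly 0.67,
# which falls in the moderate-gain band.
assert abs(normalized_gain(40, 68) - 28 / 42) < 1e-9
```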

Final Thoughts

Although the rubric was iteratively designed, drawing upon research literature, other rubrics, real assignments, and student perspectives, some important considerations about its usage remain. First, we recognize that there are elements in the rubric that may be difficult to discern from looking at an assignment description alone. Instructors may present or clarify parts of an assignment verbally in class, on the syllabus, or in other supplementary materials that raters will not be able to access. (In fact, we recommend this practice, so that students are receiving important information from multiple sources and so that all of this information is mutually reinforcing.) Not every element from the rubric may appear on a single assignment description; such an assignment could still be considered Exemplary given the broader context. Users of the rubric must rely on their best judgment to discover available evidence.

Relatedly, the column of the rubric titled “Ideas for where to look and examples of what to look for” does not provide an exhaustive list of every piece of evidence potentially related to the criterion. Rather, it provides a set of common examples to guide instructors, researchers, and/or educational developers. We recognize that many other forms of evidence are possible and, indeed, likely. For example, the evidence one might look for to support the accessibility and inclusivity features of an assignment (component #17) is likely to vary greatly depending on the type and genre of assignment.

Finally, when printed, the rubric is several pages long, well over the typical length of even a signature assignment description. This length may leave some with the impression that we are unnecessarily complicating assignment design, when the point is to be clearer and simpler. Fully capturing the scope of what we believe it means for an assignment to be learning-focused, however, required great detail. We also believe the length of our rubric conveys the seriousness with which we feel instructors and educational developers should approach assignment design. Assignments are an important—maybe the most important—part, as Boud suggests, of course design, assessment, motivation, and learning-focused classrooms. Crafting a quality assignment is no easy task. The length of our rubric, and its accompanying level of detail, underscores this significance.

Acknowledgments

We kindly thank Mary Ann Winkelmes for her many insights during the early stages of this project and for her help arranging some of the student focus groups from afar.

References

  • Anderson, P., Anson, C., Gonyea, B., & Paine, C. (2009). Using results from the consortium for the study of writing in college. Retrieved from http://nsse.indiana.edu/webinars/TuesdayswithNSSE/2009_09_22_UsingResultsCSWC/Webinar%20Handout%20from%20WPA%202009.pdf
  • Aronson, J., Fried, C., & Good, C. (2002). Reducing the effects of stereotype threat on African American college students by shaping theories of intelligence. Journal of Experimental Social Psychology, 38, 113–125.
  • Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: W. H. Freeman.
  • Bean, J. (2011). Engaging ideas: The professor’s guide to integrating writing, critical thinking, and active learning in the classroom (2nd ed.). San Francisco: Jossey-Bass.
  • Boud, D. (1988). Moving toward autonomy. In D. Boud (Ed.), Developing student autonomy in learning (2nd ed., pp. 17–39). London, England: Taylor & Francis.
  • Boud, D. (1995). Assessment and learning: Contradictory or complementary? In P. Knight (Ed.), Assessment for learning in higher education (pp. 35–48). London, England: Kogan Page.
  • Boye, A. (n.d.). How do I create meaningful and effective assignments? Retrieved from https://www.depts.ttu.edu/tlpdc/Resources/Teaching_resources/TLPDC_teaching_resources/CreatingEffectiveAssignments.php
  • Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. Alexandria, VA: ASCD.
  • Covington, M. V. (1992). Making the grade: A self-worth perspective on motivation and school reform. New York, NY: Cambridge University Press.
  • Gottschalk, K., & Hjortshoj, K. (2004). Elements of teaching writing: A resource for instructors in all disciplines. Boston: Bedford/St. Martin’s.
  • Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66, 64–74.
  • Marton, F., & Säljö, R. (1976). On qualitative differences in learning: I—Outcome and process. British Journal of Educational Psychology, 46, 4–11.
  • National Institute for Learning Outcomes Assessment. (2014). DQP assignment library. Retrieved from http://www.assignmentlibrary.org
  • Nilson, L. B. (2010). Teaching at its best: A research-based resource for college instructors (3rd ed.). San Francisco: Jossey-Bass.
  • Palmer, M. S., Bach, D. J., & Streifer, A. C. (2014). Measuring the promise: A learning-focused syllabus rubric. To Improve the Academy, 33(1), 14–36.
  • Palmer, M. S., Gravett, E., & LaFleur, J. (2016, November). Measuring the transparency of assignment descriptions. Interactive session presented at the national conference for the Professional and Organizational Development Network in Higher Education, Louisville, KY.
  • Palmer, M. S., Streifer, A. C., & Williams-Duncan, S. (2016). Systematic assessment of a high-impact course design institute. To Improve the Academy, 35(2), 339–361.
  • Palmer, M. S., Wheeler, L. B., & Aneece, I. (2016). Does the document matter? The evolving role of syllabi in higher education. Change: The Magazine of Higher Learning, 48(4), 36–47.
  • Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mindset interventions are a scalable treatment for academic underachievement. Psychological Science, 26(6), 784–793.
  • Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics (2nd ed.). Sterling, VA: Stylus.
  • University of Virginia. (2015). Course design institute. Retrieved from http://cte.virginia.edu/programs/course-design-institute
  • Walton, G. M., & Cohen, G. L. (2011). A brief social belonging intervention improves academic and health outcomes among minority students. Science, 331, 1447–1451.
  • Walvoord, B. E., & Anderson, V. J. (2010). Effective grading: A tool for learning and assessment in college (2nd ed.). San Francisco: Jossey-Bass.
  • Wigfield, A., & Eccles, J. S. (2000). Expectancy–value theory of achievement motivation. Contemporary Educational Psychology, 25, 68–81.
  • Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.
  • Winkelmes, M. (2013). Transparency in learning and teaching: Faculty and students benefit directly from a shared focus on learning and teaching processes. NEA Higher Education Advocate, 30(1), 6–9.
  • Winkelmes, M. (2014). Transparency in learning and teaching in higher education. Retrieved from https://www.unlv.edu/provost/teachingandlearning
  • Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18(Winter/Spring). https://www.aacu.org/peerreview/2016/winter-spring/Winkelmes