
Michigan Journal of Community Service Learning Spring 2002, pp. 5-14

Cognitive Outcomes of Service-Learning: Reviewing the Past and Glimpsing the Future

Pamela Steinke and Stacey Buresh

Central College

This article critically reviews the research literature on cognitive outcomes of service-learning over the past decade, with an emphasis on how convincing the results are to faculty. Self-report measures produce the most consistent positive findings, yet they are among the least persuasive measures to faculty. The use of problem-solving protocols shows promise in measuring both student knowledge and the complexity of student thinking but needs further development. Recent work in the learning sciences provides direction for future outcome research and suggests how service-learning will help to transform education.

At the start of the 1990s it seemed as though this decade might mark the “cognitive revolution” for research on service-learning. In March 1991, Wingspread conference participants discussed the research agenda for service-learning in the 1990s as they sought to identify critical research questions (Giles, Honnet, & Migliore, 1991). Among the research questions generated about the effects of service-learning on the individual student were, “What is the effect of service-learning on students as learners?” and “What knowledge do students gain as a result of service-learning?” (p. 6). Indeed, Wingspread participants expressed hope that research on student learning outcomes could help to transform education, concluding that “At the center of current educational reform is attention to student outcomes—the knowledge and skills we want students to have as a result of their education” (p. 15).

Yet as the decade came to a close, a crisis of confidence about cognitive outcomes seemed to be brewing, and a major sticking point was the quality of the outcome data. Although there has been great progress in the last decade toward answering questions related to cognitive outcomes, several researchers in the field have noted the lack of progress in providing convincing evidence for skeptical faculty and have argued that without this evidence, service-learning’s future is in jeopardy (Eyler & Giles, 1999; Osborne, Hammerich, & Hensley, 1998; Troppe, 1995). Eyler (2000) expresses researchers’ own evaluation of the current state of research on cognitive outcomes: “Intellectual outcomes—knowledge, cognitive development, problem-solving skills, and transfer of learning—are at the heart of the school and college mission and yet we know relatively little about how they are affected by service-learning” (p. 11). Moore’s (1999) comments on experiential education in general capture the problem with service-learning practitioners’ reliance on belief rather than data:

When it works, experiential education is a fabulous, exciting pedagogy with the power to transform individuals and institutions. But I think we need to take the risk of saying out loud that it does not always work. Our posture of true belief looks like Dorothy’s faith that the Wizard of Oz could supply the Scarecrow’s brain, the Tin Man’s heart, and the Lion’s courage; it obscures our problems and distracts us from doing something about them. (p. 23)

It is in this spirit of excitement and reflection that Howard, Gelmon, and Giles (2000) assert, “The time is right for a renewed call for service-learning research” (p. 5).

Given that faculty’s main course objectives are student learning objectives, the more service-learning is shown to enhance traditional classroom learning, the more service-learning will be viewed as legitimate among educators (Troppe, 1995). Therefore, the outcome measures of service-learning that faculty will find most convincing will assess the cognitive or intellectual outcomes of their students specific to course content. As stated by Eyler and Giles (1999),

Although faculty might agree that community service contributes to students’ personal and social development and that it makes them better citizens, many are dubious about its value in the academic program, where the most important goal is learning subject matter. (pp. 57-58)


Similarly, Osborne, Hammerich, and Hensley (1998) note that many faculty perceive research on non-academic outcomes as lacking relevance, asserting that “most faculty will be persuaded to incorporate service-learning into their courses to the extent that service-learning can be shown to impact the learning of course content [emphasis in original]” (p. 6). Unless service-learning’s effect on student learning is well documented, more and more faculty and administrators will become critical of service-learning’s role in higher education (Cohen & Kinsey, 1994).

Documenting the relationship between student learning and service-learning, however, has not been an easy task for researchers. The difficulties lie in finding a valid way to define service-learning’s cognitive outcomes and, once defined, in developing a convincing way to measure them (Eyler, 2000). In the past, researchers have used various measures, including self-report measures of learning, course evaluation measures, general measures of critical thinking, and general measures of creativity; more recently, researchers have coded open-ended responses related to course content, including problem-solving protocols (Eyler & Giles, 1999).

Self-Report Measures of Learning

These measures are the most widely used and have produced the most positive findings on service-learning’s academic outcomes. Several studies have found that students in service-learning courses report greater learning benefits from their service-learning experiences than non-service-learning students report from alternative, traditional assignments (Berson, 1997; Markus, Howard, & King, 1993; Steinke et al., 2000).

In one of the most frequently cited studies demonstrating service-learning’s positive effects on classroom learning, Markus, Howard, and King distributed a questionnaire before and after participation in a political science course. Two of eight discussion sections were designated as service-learning (experimental) groups. Control discussion sections did not incorporate service-learning; instead, students were required to write a longer term paper based on library research. Service-learning students reported a greater ability to apply course principles to new situations than did control students.

Berson (1997) studied the relationship between service-learning participation and urban community college students’ academic success. Sixteen course sections (286 students, 7 faculty) representing five different humanities courses were studied. Half of the sections participated in service-learning (20 meaningful hours of community service with reflection activities in addition to the traditional course material) and half did not (traditional course material only). Students who participated in service-learning self-reported higher ratings of learning than did control students.

Eyler and Giles (1999) conducted a national survey of over 1,500 students (including 1,100 service-learning students) from 20 colleges and universities, following the survey with extensive interviews with a subset of the students. Among the most important cognitive outcomes identified by service-learning students were a deeper understanding of course material, a better understanding of the complex problems people face, and the ability to apply course material to real world problems.

Steinke et al. (2000) researched five different outcomes from service-learning and non-service-learning courses taught at 12 private colleges. One cognitive outcome measure was a composite self-report scale consisting of eight items, including items similar to those used by Eyler and Giles (1999). At pre-test, service-learning and non-service-learning students’ perceived outcomes from previous courses did not differ. At post-test, however, service-learning students’ perceived outcomes from the current course were higher than the pre-test ratings of both groups and the post-test ratings of non-service-learning students.

While self-report measures do provide researchers with important information and can be valid measures of students’ beliefs (Shields & Steinke, 1992), they do not provide the most convincing evidence for faculty who want independent, objective confirmation of student learning that goes beyond what students simply believe about their learning. It is possible that students’ enjoyment of the service-learning experience produces an overall “halo effect” such that they rate everything about the class more positively. Indeed, Berson (1997) found that students who participated in service-learning also reported higher satisfaction with the instructor, the grading system, the reading assignments, and the course than did students who did not participate in service-learning. If present, this effect can be reduced by asking about specific cognitive outcomes in a question format that is different from that of attitudinal questions (Steinke et al., 2000).

Course Evaluation Measures of Learning

Some researchers using final course grades to measure student learning have found that service-learning students achieve higher outcomes than comparable non-service-learning students (Berson, 1997; Markus et al., 1993). However, other studies have failed to replicate these results (Kendrick, 1996; Miller, 1994).

In the Markus et al. (1993) study, service-learning students also received higher course grades than did control students. Berson’s (1997) study used course grade, course attendance, and course completion to measure academic success. Berson found no significant differences between the service-learning and control groups on withdrawal rates or course completion rates but did find that, in aggregate, service-learning students earned higher course grades than did control students.

However, not all results on service-learning’s impact on student learning as assessed by course grades have been positive. Miller (1994) examined two undergraduate courses, social and developmental psychology, with a service-learning option for each course. Contrary to the researcher’s predictions, course grades were not significantly different between the two groups. Kendrick (1996) compared service-learning and control sections in an introductory sociology course. Students in the service-learning section completed 20 hours of field work in community social service agencies, whereas control students read articles from the New York Times designed to help them apply course concepts to real world occurrences. Course grades did not differ between service-learning and non-service-learning students.

Course grades do go beyond student beliefs; however, they can be problematic in other respects. One problem with using grades, noted by Eyler and Giles (1999), is that the course grades of service-learning and non-service-learning students, even if the students are from the same class, will be based on different assignments (e.g., a paper and presentation on service-learning experiences versus a longer term paper). Another problem with using course grades is that they will necessarily reflect students’ motivation to perform well in the class. In a comparison between experiential service-learning projects and non-experiential projects, Cohen and Kinsey (1994) found that students engaged in experiential projects self-reported higher motivation and greater perceived effectiveness of service-learning as a learning tool than did students engaged in non-experiential projects. While an increase in student motivation is a worthwhile goal and will necessarily contribute to cognitive outcomes, the claim that participation in service-learning increases student motivation is not the same as the claim that involvement in service-learning enhances student learning due to its unique pedagogical elements.

General Measures of Critical Thinking

Past research on service-learning outcomes focusing on basic thinking processes, such as problem solving, open-mindedness, and critical thinking, has reported gains in these skills when students were given opportunities to discuss their experiences with others involved in similar community efforts (Conrad & Hedin, 1991). More recently, in a longitudinal study, Astin, Vogelgesang, Ikeda, and Yee (2000) used a national sample of over 20,000 undergraduates to compare students involved in service-learning with students involved only in community service. These researchers found that service-learning students reported more growth in writing and critical thinking skills over their college career than did the community service students. However, the same researchers found no significant effects of either service-learning or volunteer community service on standardized test scores used for admission to graduate and professional schools (e.g., GRE, LSAT, MCAT), which could also be considered measures of general academic skills, including critical thinking.

The main problem with any general measure of critical thinking, however, is that while these measures do inform educators about how well students are meeting overall goals in undergraduate education, they do not provide strong support for individual teaching faculty who want to know whether service-learning improves understanding of specific course content. As evidence of the utility of a more course-specific approach to measuring these skills, Eyler and Giles (1999) examined critical thinking about course-related issues and found that higher quality service-learning experiences were related to better critical thinking skills.

General Measures of Creativity

In addition to critical thinking skills, creativity may be enhanced by service-learning experiences, which often require students to apply knowledge to novel situations in settings that have few resources. Osborne, Hammerich, and Hensley (1998) studied 92 undergraduate students enrolled in a required communication course within the School of Pharmacy at Butler University, Indiana. Two of four sections of the course were randomly assigned as service-learning sections, while the other two incorporated a traditional research project. These researchers found that at post-test, service-learning students had higher scores than non-service-learning students on the Remote Associates Test (RAT), a standard measure of creativity. While more research on creativity is needed, additional indirect evidence for the relationship between service-learning and creativity comes from Steinke, Fitch, Johnson, and Waldstein (in press), who found that among community members working with service-learning students, judgments of student creativity were positively related to opinions about the project’s success. General measures of creativity, however, suffer from the same problem as general measures of critical thinking; they are not specific to course content. Future efforts in this area should focus on measures of creativity specific to course content.

Coding of Open-Ended Measures of Course Content

While these measures are specific to course content, they have yielded inconsistent results. Some research relying on objective, open-ended measures specific to course content has produced positive results. Although Kendrick (1996) did not find overall differences between service-learning and non-service-learning students in course grades, he did find that service-learning students demonstrated higher achievement on essay exams (but not multiple-choice exams) and a greater ability to apply course concepts than did traditional students. Kendrick concluded that perhaps “service-learning promotes quality of thought, even though it may not improve knowledge content” (p. 79). Similarly, Strage (2000) compared the test performance of students who took a Child Development course when service-learning was included with that of students who took the course before service-learning was included, and found that service-learning students scored higher on essay exam questions (but not on multiple-choice questions) than did non-service-learning students.

Other research using objective, open-ended measures specific to course content has not produced positive results. In addition to using a number of scales and inventories, including the RAT discussed above, Osborne et al. (1998) collected students’ written work and analyzed it for complexity of communication, integration of practical examples into communications, sensitivity of communications, and awareness of diversity. One of the only measures not showing a more favorable change for service-learning than for non-service-learning students was the complexity of communication as analyzed from students’ written work. Steinke and Buresh (1999), using a general knowledge protocol, found that service-learning students generated no more knowledge when asked about an applied conceptual issue than did non-service-learning students in the same class. Yet faculty will be looking for evidence that demonstrates student learning at face value in specific coursework, such as the ability to match the instructor’s knowledge or to produce complex written work.

One measure that seems to hold the most promise for providing a good measure of students’ ability to apply course content to novel situations is the problem-solving protocol. For example, Eyler and Giles (1999) found that a problem-solving protocol worked well in understanding cognitive outcomes for their national study. In a problem-solving protocol, participants are given an applied problem that is directly relevant to their course and, through a series of questions, are asked to generate causes, solutions, and personal strategies for these problems. Problem-solving protocols have the flexibility to be used in interview format (Eyler & Giles), open-ended survey format (Schmiede, 1995; Steinke et al., 2000), or focus group format (Schmiede). Responses to the protocols can be coded in various ways, including locus of problem and solution, causal and solution complexity, knowledge application, complexity of personal strategies, and critical thinking (Eyler & Giles), as well as amount of knowledge generated, number of causal connections, and amount of knowledge matching the instructor’s knowledge (Steinke et al.). In addition, a problem-solving protocol can be integrated into classroom reflection practices (Schmiede), which is consistent with Eyler’s (2000) recommendation that cognitive outcome measures be incorporated into classroom activities.
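To make these coding dimensions concrete, the following is a minimal sketch, in Python, of how one response to a problem-solving protocol might be represented and coded. All names, scales, and counting heuristics here are illustrative assumptions for this sketch; in the studies cited above, such judgments are made by trained raters, not computed automatically.

```python
from dataclasses import dataclass, field

# Hypothetical record of one student's response to a problem-solving
# protocol: the student generates causes, solutions, and personal
# strategies for an applied problem relevant to the course.
@dataclass
class ProtocolResponse:
    student_id: str
    causes: list[str] = field(default_factory=list)
    solutions: list[str] = field(default_factory=list)
    strategies: list[str] = field(default_factory=list)

# Hypothetical coded result; dimension names paraphrase those described
# above (Eyler & Giles; Steinke et al.), but the scales are assumptions.
@dataclass
class CodedResponse:
    locus_of_problem: str      # e.g., "individual" vs. "systemic"
    causal_complexity: int     # stand-in for a rater-assigned rating
    solution_complexity: int   # stand-in for a rater-assigned rating
    knowledge_units: int       # amount of knowledge generated
    instructor_matches: int    # knowledge matching the instructor's

def code_response(resp: ProtocolResponse,
                  instructor_concepts: set[str]) -> CodedResponse:
    """Toy coding pass: simple counts replace trained-rater judgments."""
    units = resp.causes + resp.solutions + resp.strategies
    return CodedResponse(
        locus_of_problem="systemic" if len(resp.causes) > 1 else "individual",
        causal_complexity=len(resp.causes),
        solution_complexity=len(resp.solutions),
        knowledge_units=len(units),
        instructor_matches=sum(1 for u in units if u in instructor_concepts),
    )
```

A real instrument would, of course, rely on human raters and inter-rater reliability checks rather than simple counts and string matching; the sketch is meant only to show the structure of the coding dimensions.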

Interpretations of Review

Reviewing the research on cognitive outcomes suggests that students often report an increase in learning from participation in service-learning, but that objective measures have provided inconclusive support for the claim that service-learning promotes better learning of course material than alternative assignments. One explanation for these inconsistent findings is the lack of quality of many service-learning efforts (Moore, 1999). Moore states, “As much as I believe in the educational potential of experiential learning, I’ve come to believe that too much of what goes by that name suffers from a serious problem in quality” (p. 1). Moore’s reflections on the state of experiential education in general have direct relevance to the state of service-learning more specifically.

A number of studies have found that characteristics related to the quality of the service-learning project predict better cognitive outcomes. Eyler and Giles (1999) found that both quality of placement (e.g., variety and challenge of work) and characteristics of instruction (e.g., frequent writing and discussion about the experience) predicted better cognitive outcomes as assessed by an interview problem-solving protocol. In addition, participation in high-quality service-learning programs predicted a more complex understanding of causes of, and solutions to, problems. Mabry (1998) also found that written reflection, discussion, number of service hours, and amount of contact with beneficiaries predicted better academic outcomes among service-learning students as assessed by self-report. Steinke et al. (in press) found that characteristics of instruction and students’ perceived choice about the service-learning project both predicted better cognitive outcomes as assessed by self-report.

Another interpretation of the inconsistent findings is that the increased learning that students report is not the same as reproducing the instructor’s knowledge, which is often what traditional evaluation instruments measure (Steinke et al., in press). Troppe (1995) attributes evaluation problems to service-learning’s departure from the traditional classroom model, which creates difficulties when trying to evaluate its effectiveness. Whereas traditional classrooms are evaluated based on the knowledge students gain from the teacher as expert, service-learning classrooms must be evaluated based on the cognitive and behavioral gains students make in integrating their knowledge and experience, with the student serving as initiator and the teacher serving as facilitator. Cognitive skill-based outcomes are not easily captured by traditional assessment instruments, which tend to test recall of factual content. Supporting this interpretation, both Kendrick (1996) and Strage (2000) found that service-learning students performed better than non-service-learning students on essay questions (which are more skill-based) but not on multiple-choice questions.

Before better outcome measures can be developed, researchers need to better define relevant cognitive outcomes based on the work of cognitive and learning scientists (Eyler, 2000). Research on differences between experts and novices highlights the importance of skill-based knowledge, such as the ability to think flexibly about problems, to transfer existing knowledge to new situations, and to use metacognitive skills in problem solving (Bransford, Brown, Cocking, Donovan, & Pellegrino, 2000). Similarly, Schank, Berman, and Macpherson (1999) make the case for focusing student learning goals on knowing “how” rather than “what.” These researchers are not suggesting that content be ignored; rather, they contend that content alone is not enough and that students are more likely to learn content when pursuing meaningful, intrinsically interesting learning goals. Similarly, research by Schwartz and Bransford (1998) suggests that comprehending content “told” to students is contingent upon students’ active cognitive preparation. Therefore, better outcome measures for service-learning should assess not only that students have the content, but also that they can use the content intelligently.

Problem-solving protocols currently hold the most promise for assessing both content and the intelligent use of content. As they have been used so far, however, problem-solving protocols still do not fully capture the kind of learning that service-learning faculty and students report. These reports go well beyond simple “halo effects,” as captured in the following student observation:

My analogy is that the class is like a piece of paper, and then being able to do the community service animates that picture. So you have a piece of paper with maybe a cartoon on it, and you can read the cartoon and understand the cartoon but when you do the community service, it animates the cartoon and turns the cartoon piece of paper into an actual movie, and then you can experience the movie and maybe you’re even a part of the movie. So it’s like the class is the piece of paper, and the community service brings it to life and makes sense of why you’re even there. (Eyler & Giles, 1999, p. 57)

Although student insights like this are difficult to translate directly into cognitive outcomes, it is clear that students perceive they are getting more out of service-learning than simply being better able to recite a discipline’s “facts.” Directions for how to improve cognitive outcome measures can be gleaned from recent work on learning in the cognitive sciences. Three cognitive constructs seem particularly relevant to the academic gains students report with service-learning: transformations in the deep structure of their knowledge organization, an increased ability to engage in analogical transfer of their knowledge, and increased metacognition about how their new insights fail to fit their previous expectations.

Future Development of New Outcome Measures Based in the Cognitive Sciences

Knowledge Organization

Clearly the cognitive perspective on learning has much to offer in making improvements to higher education (Bruning, 1994). Cognitive psychology has long emphasized the importance of accessing prior knowledge when learning new knowledge. Prior knowledge has been conceptualized as schemata or knowledge structures that provide a mental framework for organizing knowledge, suggesting that cognitive learning can be assessed in part by how closely knowledge organization and knowledge transfer match that of the instructor as expert.

Expertise requires both a strong knowledge base and an understanding of the structure of that knowledge base. Bransford et al. (2000) review the research on how experts differ from novices, beginning by dispelling the mistaken notion that experts simply have more breadth and depth of knowledge. In fact, research in numerous areas, from physics to teaching, has demonstrated that experts are more able than novices to notice meaningful patterns and to organize knowledge around big concepts. Without meaningful organization, experts remember no more pieces of information (e.g., the positions of pieces on a chess board) than do novices. Novices often do not even see meaningful patterns and relationships that are present, and when asked to organize information, do so based on surface characteristics, such as the type of diagram used to represent physics problems. Even when experts demonstrate no more factual knowledge than novices (i.e., outside their area of expertise), they are still better able to reason about possible interpretations of new facts. Furthermore, experts exhibit more fluent or automatic retrieval of information than novices and know when to retrieve relevant knowledge; expert knowledge is “conditionalized” to include the contexts in which it is most useful (p. 43).

Future cognitive outcome measures will need to be able to evaluate how students organize their knowledge structure representations in order to provide a good objective measure of the “deeper” understanding that service-learning students report (Eyler & Giles, 1999). The problem-solving protocol seems to provide the best promise for explicating knowledge structures; however, the amount of knowledge shared with the instructor provides only a crude measure of the similarity of knowledge structures between expert and novice. A knowledge structure approach used in other areas of cognitive science, such as text comprehension and question answering (Graesser & Clark, 1985), will need to be adapted to the problem-solving protocol to better measure the depth of students’ knowledge structure representations.

Knowledge Transfer

Understanding the structure of the knowledge students are gaining, rather than just surface facts, allows teachers to be better at teaching students how to be flexible problem solvers and how to transfer knowledge to new domains. Simply assessing the existence of knowledge structures, however, will not provide an adequate measurement of transfer. True understanding also involves knowing when to use the knowledge structures appropriately, as is assessed by analogical transfer. As Gardner (1997/1999) writes, “we do not care about the elegance of a mental representation if it cannot be activated when needed” (p. 73). Knowledge transfer is an active, ongoing process requiring both a certain level of knowledge and active learning to complete.

Bransford et al. (2000) emphasize that learning defined as the ability to transfer knowledge is not the same as learning defined as remembering, and that different tasks will encourage these types of learning differently. Elements of learning that promote transfer include the degree of mastery of the original subject, time to process new information, and active monitoring and feedback, including attention to the potential transfer implications of what students are learning. These elements are consistent with the principles of good practice and the elements of quality instruction found to promote better cognitive outcomes in service-learning (Eyler & Giles, 1999; Mabry, 1998; Steinke et al., in press). Bransford et al. also highlight the importance of intrinsic motivation to learn in promoting transfer, suggesting that one predictor of intrinsic motivation is whether the work students are engaged in will contribute to the well-being of others. This suggestion is consistent with Eyler and Giles’ finding that students’ perceptions that their service-learning projects contributed to the community predicted better service-learning outcomes.

The learning goals of service-learning seem particularly relevant to transfer. Service-learning students report that they can apply course material to new situations and real world problems more than do non-service-learning students (Eyler & Giles, 1999; Markus, Howard, & King, 1993). In some sense, however, all learning involves transfer because as students learn, they are always drawing on old information and attempting to transfer it to the new learning context (Bransford et al., 2000). As Bransford et al. write, “the ultimate goal of schooling is to help students transfer what they have learned in school to everyday settings of home, community, and workplace” (p. 73). Bransford et al. assert that analyzing real world, everyday situations or environments (as is the case with service-learning) can have “potential implications for education that are intriguing but need to be thought through and researched carefully” (p. 77). The authors conclude that “the most effective transfer may come from a balance of specific examples and general principles, not from either one alone” (p. 77).

This is in fact the model for course-integrated service-learning, as students are asked to apply general principles they are learning in the course to the specific examples they are encountering through their service-learning projects. Yet cognitive outcome measures have not adequately assessed transfer. The problem of assessing transfer in service-learning students is further complicated by the need to take into account the social and cultural context in which the learning occurs (Bacon, 1999). Therefore, better outcome measures for service-learning should focus not only on representations of content knowledge structures but also on the ability to use content appropriately in novel situations (including cross-cultural contexts), as measured by analogical transfer. Once again, problem-solving protocols have so far gone the farthest in attempting to measure students’ ability to use knowledge to help conceptualize and solve problems presented in novel contexts. Further measures will need to go beyond students’ ability to write about or discuss their cognitive processes relevant to transfer, to assessing student performance in a simulated or real situation.

Metacognition and Expectation Failure

Another characteristic of experts is that they tend to make good use of their metacognitive skills, thinking both about what they know about a certain domain and about what they don’t know and therefore need to find out (Bransford et al., 2000). Cognitive psychologists emphasize the importance of metacognitive skills that involve “(1) individuals’ knowledge about their own thought processes and (2) their ability to use this knowledge to regulate their own cognitive processes” (Bruning, 1994, p. 11). In support of the importance of metacognitive abilities in improving the cognitive learning outcomes of service-learning, several studies have found that the amount of structured reflection incorporated into service-learning courses is related to student learning (e.g., Eyler & Giles, 1999; Mabry, 1998). Yet researchers and practitioners in service-learning “know relatively little about how to structure reflection for maximum effect” (Eyler, 2000, p. 14).

Recent work in cognitive science on Goal-Based Scenarios (GBSs) supports the importance of metacognitive skills to reforming the traditional classroom (Schank, 1994; Schank, Berman, & Macpherson, 1999; Schank & Joseph, 1998) and provides some direction for service-learning researchers and practitioners. Schank et al. (1999) emphasize the importance of GBSs, goal-based simulations designed to help students practice target skills and use relevant content knowledge to pursue a goal. These cognitive scientists argue that the components of GBSs should serve as the basis for instruction. So far the GBS approach has been used to develop both computer- and instructor-led simulations in job training and educational settings; however, its use can be extended to the real-life problems posed by service-learning projects.

The GBS approach is based on the assumption that motivation to learn begins with a goal and that learning occurs as a result of what happens as students attempt to achieve that goal (Schank et al., 1999). Expectation failure is a crucial part of the learning process; mistakes, which are inevitable in any complex task, allow for learning. Therefore, without expectations in the first place, students will not learn. In order to explain an expectation failure, students will search for an adequate explanation. This search may involve transfer of learning as students retrieve old cases to assist in solving the new problem. By focusing students’ learning on the explanations or principles that account for expectation failure, the GBS approach teaches students metacognitive skills.

The GBS approach fits well with the basic assumptions of service-learning as a pedagogy. Schank and Joseph (1998) recommend that educators take prior expectations into account and choose skill-based learning objectives. An expectation mismatch will lead to surprise and then to curiosity in learning, which allows new knowledge structures to be created. It is important to note that, according to these authors, “a student may need support to understand when a surprise has happened” (p. 53). This is often the function of faculty responses to students’ written work and of faculty-led discussions in service-learning courses. According to this view of learning, providing single answers to students’ questions will in effect dampen students’ curiosity, which will in the long run lead to less flexible mental structures. Because all learning involves doing, educators need to think about what they are teaching students to do when they set up learning environments. These authors suggest that in the ideal learning environment, “goals students genuinely believe in and care about provide a focus for learning skills and knowledge we want them to have” and should be connected with real life (p. 54). Reflection and feedback are important components of the educational process.

The GBS approach is also consistent with recent efforts to find more skill-based approaches to assessing cognitive outcomes. This approach directly contradicts traditional “fact learning” approaches, which most often occur outside of any meaningful context (Schank & Joseph, 1998), but is completely consistent with service-learning. The world is changing so quickly, and the amount of “facts” increasing so rapidly, that it is no longer even possible to expect every educated person to have the same base of “facts.” The authors suggest this change necessitates larger reform of the educational system. As follows from this revised understanding of learning and education, standardized tests are “completely incompatible with the kind of intelligent instruction” (p. 63) advocated by these authors.

The GBS approach also provides direction for service-learning practitioners looking for ways to improve service-learning practice. Seven components are crucial to setting up a successful GBS (Schank et al., 1999). The first component is clearly specifying the learning goals, both process and content, as identified by the instructor. The second component is an interesting goal or mission that will incorporate the learning goals; in a service-learning course, this would be the goal of the service-learning project. The third component is the cover story that creates the need for the mission; in a service-learning course, this would be the background knowledge required to understand the need for providing service. The fourth component is the student’s role in the scenario or service-learning project. The fifth component consists of the scenario operations, or activities the student performs in order to meet the mission goal or service-learning project goal. The sixth component consists of the resources or information the student needs to achieve the mission goal. The seventh component is the feedback provided to the student when the student experiences expectation failure.
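To illustrate the structure of these seven components, the following is a minimal sketch in Python that maps them onto a hypothetical service-learning course. The field names are this review’s paraphrases of the components, and the example course and its details are invented for illustration; they are not drawn from Schank et al. (1999) or from any study cited here.

```python
from dataclasses import dataclass

# Illustrative mapping of the seven GBS components described above onto
# a hypothetical service-learning course. Field names paraphrase the
# components; the example values are assumptions made for this sketch.
@dataclass
class GoalBasedScenario:
    learning_goals: list[str]  # 1. process and content goals set by the instructor
    mission: str               # 2. interesting goal (the service-learning project goal)
    cover_story: str           # 3. background creating the need for the mission
    role: str                  # 4. the student's role in the scenario/project
    operations: list[str]      # 5. activities performed to meet the mission goal
    resources: list[str]       # 6. information needed to achieve the goal
    feedback: str              # 7. response when the student experiences expectation failure

# Hypothetical example: an introductory sociology course with a
# community food-bank project (invented for illustration).
example = GoalBasedScenario(
    learning_goals=["apply concepts of social stratification",
                    "practice structured reflection"],
    mission="Help the local food bank assess unmet community need",
    cover_story="Agency intake data suggest demand is outpacing supply",
    role="Student volunteer analyst working with agency staff",
    operations=["conduct intake interviews", "summarize patterns",
                "relate findings to course concepts in a reflection journal"],
    resources=["course readings", "agency records", "instructor guidance"],
    feedback="Instructor comments on journals when expectations are violated",
)
```

Laying the components out this way simply makes explicit the checklist a practitioner might work through when designing a project; it implies nothing about how the scenario should be delivered or assessed.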

Finally, the GBS approach to learning is consistent with other cognitive approaches to learning. Schwartz and Bransford (1998) argue that giving students active ways to contrast cases provides them with an opportunity to develop the kind of differentiated knowledge needed for deeper understanding and puts them in a learning state in which they can be successfully taught content through “telling.” Contrasting cases is similar to expectation failure, and “telling” is similar to providing resources and feedback; contrasting cases necessarily involves a contrast between the expected case and the actual case.

Clearly, the GBS approach suggests specific ways that service-learning can be enhanced, and it provides some direction for reform in cognitive outcome measures specifically and in education more generally. The greater challenge presented by integrating GBSs into service-learning pedagogy is for researchers working on cognitive outcomes to develop assessment measures relevant to student responses to expectation failure. Specifically, measures will need to be constructed to assess students’ level of metacognitive awareness of expectation failure and the appropriateness or flexibility of students’ responses to expectation failure within the desired learning domain.

From Measuring Outcomes to Reforming Education

The need to improve and reform cognitive outcome measures goes beyond issues of methodology, or of documenting students’ ability to better reproduce facts, to issues of what service-learning is about and what practitioners and students claim that it does. Kinsley (1997) notes that service-learning has “evolved as a vehicle to strengthen students’ learning, to reconnect them with their communities, to counter the imbalance in our current society between learning and living, and to repair the broken connections between learning and community” (p. 1). As researchers and practitioners continue to explore how best to operationalize these goals, their search will necessarily lead them to ask, and attempt to answer, some of the basic questions about learning that cognitive scientists ask. The search for answers suggests that reforms are needed in educational goals, which in turn necessitate reforms in how cognitive outcomes are assessed. Conrad and Hedin (1991) assert that one basic rationale for incorporating service into an educational curriculum at any level is to reform education. Their perspective is based on the assumption that service-learning furthers student development by helping students to “come up with more satisfying and complex ways to understand and act on their world” (p. 745). These claims regarding service-learning’s goals are consistent with recent claims from the learning sciences about the goals of education more generally. The task still at hand is to translate these goals into better cognitive outcome measures that really do get at what service-learning claims to be about. Perhaps then the Wingspread participants’ original hopes will be fulfilled, and research on the learning outcomes of service-learning will transform education.

Concluding Comments

Reviewing past research on service-learning’s cognitive outcomes reveals a tendency for researchers to rely on the easiest and most consistently positive method of assessing cognitive outcomes: self-report. While self-report measures do provide some insight into students’ beliefs about their learning and therefore provide direction for future measures, by themselves they are among the least convincing measures to faculty considering incorporating service-learning into their courses.


Other measures, including course outcomes and general measures of creativity and critical thinking, have been used less frequently and with more inconsistent results. As newer measures, such as problem-solving protocols, are developed, researchers are being forced to confront the bigger issue of explicating the learning goals of service-learning in particular and of education in general. With the help of recent work in the cognitive sciences, progress in this area will improve cognitive outcome measures and lead to a greater understanding of how service-learning can help to transform education.

References

Astin, A.W., Vogelgesang, L.J., Ikeda, E.K., & Yee, J.A. (2000). How service learning affects students. Los Angeles: University of California, Higher Education Research Institute.

Bacon, N. (1999). The trouble with transfer: Lessons from a study of community service writing. Michigan Journal of Community Service Learning, 6, 53-62.

Berson, J.S. (1997). A study of the effects of a service-learning experience on student success at an urban community college. Unpublished doctoral dissertation, Florida International University, Miami.

Bransford, J.D., Brown, A.L., Cocking, R.R., Donovan, M.S., & Pellegrino, J.W. (Eds.). (2000). How people learn: Brain, mind, experience, and school (Expanded ed.). Washington, DC: National Academy Press.

Bruning, R.H. (1994). The college classroom from the perspective of cognitive psychology. In K.W. Prichard & R.M. Sawyer (Eds.), Handbook of college teaching (pp. 3-22). Westport, CT: Greenwood Press.

Cohen, J., & Kinsey, D. (1994). ‘Doing good’ and scholarship: A service-learning study. Journalism Educator, 48(4), 4-14.

Conrad, D., & Hedin, D. (1991). School-based community service: What we know from research and theory. Phi Delta Kappan, 72, 743-749.

Eyler, J. (2000). What do we most need to know about the impact of service-learning on student learning? Michigan Journal of Community Service Learning, (Special Issue), 11-17.

Eyler, J., & Giles, D.E. (1999). Where’s the learning in service-learning? San Francisco: Jossey-Bass.

Gardner, H. E. (1999). Multiple approaches to understanding. In C.M. Reigeluth (Ed.), Instructional-design theories and models (Vol. 2, pp. 69-89). Mahwah, N.J.: Lawrence Erlbaum. (Original work published 1997)

Giles, D., Honnet, E.P., & Migliore, S. (Eds.). (1991). Research agenda for combining service and learning in the 1990s. Raleigh, NC: National Society for Internships and Experiential Education.

Graesser, A. C., & Clark, L. F. (1985). Structures and procedures of implicit knowledge. Norwood, N.J.: Ablex.

Howard, J.P.F., Gelmon, S.B., & Giles, D.E. (2000). From yesterday to tomorrow: Strategic directions for service-learning research. Michigan Journal of Community Service Learning, (Special Issue), 5-10.

Kendrick, J. (1996). Outcomes of service-learning in an introduction to sociology course. Michigan Journal of Community Service Learning, 3, 72-81.

Kinsley, C. (1997, October). Service learning: A process to connect living and learning. Bulletin, 1-7.

Mabry, J.B. (1998). Pedagogical variations in service-learning and student outcomes: How time, contact, and reflection matter. Michigan Journal of Community Service Learning, 5, 32-47.

Markus, G., Howard, J., & King, D. (1993). Integrating community service and classroom instruction enhances learning: Results from an experiment. Educational Evaluation and Policy Analysis, 15(4), 410-419.

Miller, J. (1994). Linking traditional and service-learning courses: Outcome evaluations utilizing two pedagogically distinct models. Michigan Journal of Community Service Learning, 1, 29-36.

Moore, D.T. (1999). Behind the wizard’s curtain: A challenge to the true believer. NSEE Quarterly, 25(1), 1, 23-27.

Osborne, R.E., Hammerich, S., & Hensley, C. (1998). Student effects of service-learning: Tracking change across a semester. Michigan Journal of Community Service Learning, 5, 5-13.

Schank, R.C. (1994). Goal-based scenarios. In R.C. Schank & E. Langer (Eds.), Beliefs, reasoning, and decision making: Psycho-logic in honor of Bob Abelson (pp. 1-32). Hillsdale, N.J.: Lawrence Erlbaum.

Schank, R.C., Berman, T.R., & Macpherson, K.A. (1999). Learning by doing. In C.M. Reigeluth (Ed.), Instructional-design theories and models (Vol. 2, pp. 161-181). Mahwah, N.J.: Lawrence Erlbaum.

Schank, R.C., & Joseph, D.M. (1998). Intelligent schooling. In R.J. Sternberg & W.M. Williams (Eds.), Intelligence, instruction, and assessment (pp. 43-65). Mahwah, N.J.: Lawrence Erlbaum.

Schmiede, A. (1995). Using focus groups in service-learning: Implications for practice and research. Michigan Journal of Community Service Learning, 2, 63-71.

Schwartz, D.L., & Bransford, J.D. (1998). A time for telling. Cognition and Instruction, 16(4), 475-522.

Shields, S., & Steinke, P. (1992). Self-report as a research method: Innovations from limitations. Unpublished manuscript.

Steinke, P., & Buresh, S. (1999, June). Using a knowledge structure approach to assess cognitive outcomes of service-learning. Poster session presented at the annual meeting of the American Psychological Society, Denver, CO.


Steinke, P., Fitch, P., Johnson, C., & Waldstein, F. (in press). An interdisciplinary study of service-learning outcomes. In A. Furco & S.H. Billig (Eds.), Advances in service-learning research (Vol. 2).

Strage, A.A. (2000). Service-learning: Enhancing student outcomes in a college-level lecture course. Michigan Journal of Community Service Learning, 7, 5-13.

Troppe, M. (1995). Foreword. In M. Troppe (Ed.), Connecting cognition and action: Evaluation of student performance in service learning courses. Providence, RI: ECS/Campus Compact.

Authors

PAMELA STEINKE is an Associate Professor of Psychology at Central College. She is actively involved in using service-learning in her courses, doing research on service-learning’s cognitive outcomes, and promoting service-learning on her campus and across Iowa.

STACEY BURESH is a 1999 graduate of Central College, where she did her Senior Honors project on service-learning outcomes among youth shelter residents. She is currently taking graduate classes at Drake University.