
    21. Evidence-Based Approaches to Public Participation in Design Decisions


    People appreciate being asked for their input on the design of environments in which they live, work, and play. Inviting people to participate in the design process also provides a critical opportunity for them to contribute to something meaningful and use their skills.[1] However, in order to evaluate design options and provide useful feedback, participants need to be able to visualize the setting and understand the alternatives.
    Using the Reasonable Person Model (RPM) as a guiding framework, three studies were conducted to assess the participants’ perspective on the effectiveness of various participatory design approaches. Findings show that presentation format, drawing type, and amount of information matter a great deal for understandability. The studies provide support for using the photo questionnaire as a tool for gathering people’s input. Based on these findings, recommendations are provided on ways to create a supportive environment for participation. This research empowers designers by helping them see that they can make a difference in creating an effective participation process using RPM as a guide.

    Public participation in the context of design and planning projects is promoted for its many potential benefits: better-suited designs, greater project support and cooperation, enhanced sense of community, and increased likelihood of use and long-term stewardship (Grese, Chapter 19). Yet it is often disdained by the public and professionals alike. Members of the public are frustrated at being presented with information that is difficult to understand, and participants express concerns that they were not heard and that their input did not seem valued. The professionals, by contrast, may walk away from the process with little information about what the public prefers and a sense that citizens cannot give useful feedback. Such impasses may be avoidable.

    Designers seek to create supportive environments, or physical spaces that meet the users’ needs. One can also think about supportive environments in terms of the process by which designers involve the public in the design of these places. This chapter offers concrete suggestions for ways to create a supportive environment for people to participate in the design process. While such suggestions have been offered in the past, the ones included here differ in being evidence-based. They are drawn from three studies that specifically evaluated participants’ reactions to the participation process. The studies are based on the Reasonable Person Model (RPM) and at the same time offer an assessment of RPM as a framework. Each of the three studies measures laypeople’s understanding of design options, engagement in the process, and sense of meaningful participation. These measures were derived from the main components of RPM: model building, being effective, and meaningful action. A design process that meets these needs is more conducive to participation and people’s satisfaction in the experience.

    To provide the context for the recommendations, each of the three studies is discussed briefly in the initial section, including research methods used, measures used to assess the core domains, and general findings. These aspects, along with additional details provided in the appendices, are important when considering the generalizability of the results in terms of the scale and scope of design situations. (Full accounts of the studies and analyses can be found in Phalen, 2011.) However, the main focus of the chapter is on the recommendations that stem from the studies. These concrete approaches for facilitating a supportive environment for participation are organized around the key components of RPM—model building, being effective, and meaningful action.

    The Three Studies and RPM

    The three studies are similar in their focus on small-scale nature settings, such as parks, nature trails, and outdoor seating areas. They also have in common that the approaches used to test effectiveness were based on survey items assessing understandability, engagement, and participation. This section defines the variables and describes their relationship to the main components of RPM: model building, being effective, and meaningful action. Appendix A provides examples of the items.

    Measuring Effectiveness


    Understandability addresses the participants’ ability to make sense of the visual graphics and the kinds of places they depict. A person with a good understanding of the proposed nature setting would be able to envision it from multiple perspectives, imagine movement through the setting, and have a sense of what it would be like to be there. In the first two studies, understandability also encompassed the knowledge gained regarding the range of design possibilities.

    Design drawings aim to help participants form mental models of the proposed settings so they can recognize features of the design, make predictions about what might happen there, and evaluate the design option. Nonetheless, the final outcome of a design that participants thought they understood can be surprising, even frustrating and costly. This research, however, assessed the participants’ perceived understanding and did not test how their understanding compared to the actual setting once built or to the designer’s intentions.


    Engagement refers to the extent to which the participatory design approach or drawings held the participants’ attention. In the first two studies, engagement also included the ability to explore design possibilities and whether the material was relevant to their interests and concerns.

    Engagement in this context relates to RPM’s “being effective” component, including sustaining attention and building a sense of competence. A drawing or presentation that holds the participants’ attention is less taxing on their directed attention capacity, leaving more resources for considering alternatives and evaluating design options. Drawings that depict nature settings may have the added benefit of providing a source of fascination for attention restoration. As Duvall (Chapter 20) points out, engagement has been linked not only to increased attentional focus but also to feelings of effectiveness and competence.

    As previously mentioned, exploration is an important aspect of “building mental models.” Drawings can afford participants the visual imagery needed to explore the setting in their mind. However, exploration of design alternatives can take on a more active form than just viewing design drawings. For example, Grese (Chapter 19) reports success in inviting participants to build or manipulate crude three-dimensional models, directly revise or change the designer’s drawings, and create collages with magazine images. Duvall (Chapter 20) identifies research suggesting that engagement is more likely when activities are personally relevant and involve fascinating processes.


    Participation refers to the participants’ perception of the ease in providing their input and their appreciation of being asked. Participation is more likely perceived as meaningful when participants feel that their concerns were heard, their input was needed, and their involvement made a difference. In the third study, participation was measured in terms of how confident the participants would be if given the opportunity to discuss the design with the landscape architect based on their comfort with the drawing.

    In Basu and Kaplan (Chapter 1), participation is described as both the antithesis and the antidote to helplessness. Participation in design provides people with the opportunity to contribute to a meaningful purpose. Listening, respect, fairness, and feeling heard are also important aspects of RPM’s meaningful action.

    People are more likely to participate when they have a sense of competence. They can be more effective in providing their input when they feel that they have the knowledge and skills needed to do what is asked of them.

    Study Descriptions

    The three studies differed in many respects. Table 21.1 provides an overview of some of these differences. Each of the studies is discussed in more detail, and some highlights of the results are provided below.

    Study 1: Design Session including Three Designs and Photo Questionnaire

    In Study 1, as would be true in participatory situations conducted early in the design process, employees were presented with three alternative design solutions in the context of an outdoor setting adjacent to their workplace. Participants were asked to record their reactions to each of the three designs as well as to complete a photo questionnaire that included photographs of the kinds of nature settings that were feasible at the workplace site. In addition, the participants rated a number of items measuring the effectiveness of the design presentations and the photo questionnaire as methods for gathering their input. Figure 21.1 provides examples of the slide formats and visual graphics used in the design presentations.

    TABLE 21.1 Overview of the three studies

    Study | n | Context | Participants | Participation Approach Evaluated
    Study 1 – Design Session | 28 | Proposed nature trails at medical campus | Employees who attended a design session at the medical center | Design sessions (3 proposed designs) and photo-questionnaire (PQ)
    Study 2 – Photo-questionnaire | 154 | Proposed nature trails at medical campus | Employees, visitors, and patients of the medical center | Photo-questionnaire (16 scenes depicting nature settings)
    Study 3 – Drawing Types | 495* | Multiple small-scale nature projects | Network of parishioners of a Catholic church, friends, and family recruited via email and Facebook | Four types of traditional landscape design drawings
    *n = 404 laypeople and 91 experts.

    Participants generally found the three design presentations to be understandable and engaging and to promote meaningful participation (Table 21.2). The photo questionnaire was as effective as the three design presentations on these measures as well. No differences were found among the presentations for engagement or participation. However, the designs differed in understandability, with a significant difference between presentations A and C (Table 21.2).

    A comparison of the design presentations reveals the important role that presentation format, organization, and graphics play in understandability. The most understandable presentation (C) was also the one with the fewest slides, each containing only three or four short bullet points (Figure 21.1, third row). The least understandable presentation (A), on the other hand, included a great deal of information on each slide (Figure 21.1, first row). The small font size and style of the text on some of the slides made it difficult to read. Also, in the most understandable presentation, less emphasis was placed on drawings with a plan view.

    Does understandability make a difference? For the design with the lowest understandability rating (A), participants’ sense that they were meaningfully participating in the process was relatively unrelated to their sense that they understood the design. In other words, for a presentation that was generally well understood (4.02 out of 5), a small amount of difficulty understanding the presentation did not seem to affect the participants’ sense of participation. In the other two design presentations, the ratings of understandability and participation were significantly correlated, as RPM would predict.

    Figure 21.1. Example slides and graphics from the design presentations.

    Perhaps more pertinent to the question, however, is the finding with respect to the relationship between understanding and participants’ preference for a design. Participants also rated their preference for, or how much they liked, the design presented. Presentation C received the highest preference rating, at 4.4 out of 5. Presentation A received significantly lower preference ratings (3.63) than Presentation C.

    TABLE 21.2 Mean ratings for effectiveness variables

    Presentation | n | Features | Understandability M (SD) | Engagement M (SD) | Participation M (SD)
    A | 26 | 17 slides, substantial amount of text per slide, small font, strong focus on site plan | 4.02 a (0.78) | 4.23 (0.64) | 4.50 (0.56)
    B | 24 | 23 slides, 3–4 bullet points per slide, same format for some slides, strong focus on site plan | 4.07 (0.66) | 4.29 (0.51) | 4.34 (0.62)
    PQ | 22 | 16 photographs depicting nature trail options | 4.37 (0.58) | 4.45 (0.55) | —*
    C | 27 | 9 slides, 3–4 short bullet points per slide, consistent formatting across slides | 4.48 a (0.50) | 4.48 (0.45) | 4.54 (0.52)
    Rated on a 5-point scale from 1 – “not at all” to 5 – “very well.”
    Comparisons based on estimated marginal means. No significant differences found at p<.01, except the pair marked with the superscript a.
    *The photo-questionnaire (PQ) was not included in this analysis due to poor internal consistency of the items used to measure participation. However, an analysis of one participation item, “ability to provide input,” revealed no significant differences among the PQ and the three presentations.

    Editors’ Comment: This also serves as empirical evidence for the link between clarity and positive affect.

    A regression analysis was conducted to test whether the effectiveness of the presentation influenced participants’ preference for the design (see Appendix B). Understandability of the presentation played a strong role in how much participants liked the designs in Presentations A and C, accounting for 64% of the variation in preference for the least understood Presentation A and 26% of the variation in preference for Presentation C. The more difficulty participants had understanding the presentation, the less they reported liking the proposed design. This relationship could be explained by the clarity mechanism (Ivancich, Chapter 5), which describes the strong association between confusion and displeasure and its effect on people’s choices and behavior.
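    The shape of such an analysis can be sketched with a simple ordinary-least-squares regression of preference on understandability. The ratings below are invented for illustration only; they are not data from Study 1, and the study itself used a fuller regression model (see Appendix B).

```python
# Illustrative sketch: regressing design preference on understandability.
# The ratings are hypothetical, not data from the study.

def simple_regression(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    ss_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    ss_xx = sum((xi - mean_x) ** 2 for xi in x)
    ss_yy = sum((yi - mean_y) ** 2 for yi in y)
    b = ss_xy / ss_xx                          # slope
    a = mean_y - b * mean_x                    # intercept
    r_squared = ss_xy ** 2 / (ss_xx * ss_yy)   # share of variance in y explained by x
    return a, b, r_squared

# Hypothetical 5-point ratings from eight participants
understandability = [2.0, 2.5, 3.0, 3.5, 4.0, 4.0, 4.5, 5.0]
preference        = [2.5, 2.0, 3.0, 3.5, 3.5, 4.5, 4.0, 5.0]

a, b, r2 = simple_regression(understandability, preference)
print(f"slope={b:.2f}, R^2={r2:.2f}")  # positive slope: better understanding, higher preference
```

    The R² value plays the role of the 64% and 26% figures reported above: the proportion of variation in preference accounted for by understandability.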

    Study 2: Photo Questionnaire

    The same photo questionnaire that was included in Study 1 was used in this study. The participants were employees, patients, and visitors of the medical facility where Study 1 was conducted. They rated sixteen full-color photographs representing a variety of options for the nature trails, including paths of various widths and materials, man-made bridges, seating arrangements, and scenic views (Figure 21.2). They also assessed the photo questionnaire as a method for gathering their input using the same evaluation ratings described in Study 1.

    Participants found the photo questionnaire to be an effective tool for providing their input on the design options. The task of rating the photographs was engaging and easy to do, and they had no trouble visualizing the settings depicted in the photographs. The photo questionnaire particularly excelled in promoting meaningful participation. The items related to the participants’ sense of participation had the highest mean ratings of all items. Participants appreciated being asked for their input and found it easy to provide.

    The photo questionnaire was slightly less effective in providing a sense of the bigger picture of design possibilities. Participants’ ratings for the ability to explore different design possibilities and their awareness of the range of choices were slightly lower, at 3.8 and 3.9 on a 5-point scale (1 being “not at all” and 5 being “very well”). Also, the photo questionnaire received its lowest rating (3.76) for relevance to the participants’ concerns.

    Figure 21.2. Examples of photographs used in the photo questionnaire.

    Study 3: Four Types of Design Drawings

    The third study focused on four types of design drawings that are characteristically used to provide the public with imagery about a proposed design. The study used a systematic approach to test the effectiveness of plans, sections, perspective drawings, and photorealistic drawings (Figure 21.3). For each of these four types, participants were asked to consider between four and six examples—presented in random order. They rated the drawings in terms of how understandable, engaging, and abstract they were. They also rated how confident they would be discussing the design with the landscape architect based on their comfort with the drawing.

    Figure 21.3. Examples of the four types of drawings included in the study.

    Sources: Photorealistic and perspective drawings provided by SmithGroupJJR, Ann Arbor, MI; section drawing provided by Conservation Design Forum, Elmhurst, IL; and plan drawing provided by Jenna Arlene Jones.

    Study 3 included 404 individuals, the layperson group, who indicated that they had no design-related experience. This group found photorealistic and perspective drawings to be more understandable and engaging and to promote greater confidence in discussing the design than plans and sections (Table 21.3). However, there were several instances where plans and sections performed quite well. One plan drawing in particular (Figure 21.4, Plan S) was rated highly among all drawings and well above the rest of the plans for all measures of effectiveness. This was also true for participants with design experience (n=91).

    What set these more understandable plans and sections apart from the rest? The amount of information in these drawings is not overwhelming, and the level of detail and size of the features are appropriate for the given scale. The drawings are simple and neat. The paths and various sections of the park can easily be distinguished from each other. Writing and labels are easy to read. Also, the colors of the trees are green, as one might expect.

    TABLE 21.3 Laypeople’s mean ratings by drawing type (n=404)

    Drawing type | Understandability M (SD) | Engagement M (SD) | Confidence M (SD)
    Mean ratings are based on a 5-point scale (from 1 – “not at all” to 5 – “very much”).
    Mean differences between types are significant at p<.001 for all pairs within a variable except pairs sharing the same numeric superscript.

    A comparison of the most and least understandable plan is provided in Figure 21.4. The most understandable plan used simple circles of a consistent size and color to represent the trees, organized neatly into rows and columns. The least understandable plan was highly detailed, with very small features, and was described as being too messy or busy.

    The study also compared responses of participants with design experience to those with little to no experience. The expert group found plans and sections to be more understandable and more engaging than the layperson group found them to be. For photorealistic drawings, however, the layperson group gave higher understandability ratings than the expert group did. Comments provided by some experts indicated confusion about what was being proposed and what was existing in the photorealistic drawings. The two groups rated perspective drawings similarly.

    Figure 21.4. Most (left) and least (right) understandable plan drawing.

    Source: Plan S and Plan R provided by Conservation Design Forum, Elmhurst, IL.

    Testing the Reasonable Person Model

    This research was one of the first attempts to empirically test RPM and its predictions. The research methods used in the design studies may be useful to practitioners and researchers alike. Assessing citizens’ understanding of the material, their sense of engagement, and their satisfaction with the participation format can provide insights into the responses obtained with respect to the designs.

    To test RPM, the study required development of adequate measures of each of the key domains. (Appendix A provides a list of the items used in Studies 1 and 2.) Since this was the first time these measures were used, statistical tests were performed to determine the fit or coherence of the groups of items. The results indicated that the measures for understandability and engagement performed quite well. On the other hand, the items used to measure participation were less reliable. Specifics of the statistical procedures used in studies are provided in Appendix B. All three studies used the same linear mixed-model procedure to analyze comparisons.
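    A standard way to check the coherence of a group of survey items is an internal-consistency statistic such as Cronbach’s alpha. The chapter does not name the exact statistic used, so the sketch below is illustrative, with invented ratings.

```python
# Cronbach's alpha for a set of survey items: a standard internal-consistency
# check. Illustrative only; the chapter does not specify the statistic used.

def cronbach_alpha(item_scores):
    """item_scores: list of items, each a list of one rating per respondent."""
    k = len(item_scores)             # number of items
    n = len(item_scores[0])          # number of respondents

    def variance(xs):                # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Three hypothetical 5-point items answered by six respondents
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 2))  # → 0.87; values near 1 indicate coherent items
```

    A low alpha, as apparently occurred for the participation items, signals that the items are not measuring a single underlying construct and should not be averaged into one scale.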

    The studies permitted examination of the interrelationships among the RPM domains. Two of the three studies tested these relationships. RPM states that the domains are highly interrelated, and the analyses support this while also showing that some of the relationships are stronger than others. The strength of these relationships also tends to vary based on the situation.

    As already mentioned, ratings of understandability (Study 1) were strongly related to participation for the two designs that participants found strong on understandability but did not reach significance for design “A.” The relationship between understandability and engagement, by contrast, was strong across all presentations. In other words, participants who were able to visualize the design options and understand the visual graphics also found the presentation engaging. Their ability to explore different design possibilities was linked to understandability.

    Participation and engagement were also significantly related (p<.01) for all design presentations, although the strength of this relationship varied across the designs. The results provide support for RPM’s prediction that participants who are engaged in the process are more inclined to provide their input and feel heard. Also, the opportunity to provide input, and the form this participation takes, may be engaging in and of itself.

    The same analyses for Study 3 showed that in the context of evaluating the effectiveness of drawings, ratings of understandability, engagement, and confidence are strongly related across all drawing types, with correlation coefficients ranging from 0.64 to 0.80. While supporting RPM’s assumption of the interconnectedness of the domains, the results also offer some useful insights about nuanced differences. The relationship between understandability and confidence discussing the design was strongest for all drawing types. In other words, being able to make sense of the drawing was highly related to feeling confident in being able to discuss the drawing with the designer.
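    The coefficients reported here are Pearson correlations, which can be computed as follows. The ratings are invented for illustration, not data from Study 3.

```python
# Pearson correlation between two sets of ratings: the statistic behind the
# 0.64-0.80 coefficients reported for Study 3. Ratings here are invented.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratings of the same drawings on two of the measures
understandability = [3, 4, 5, 2, 4, 5, 3, 4]
confidence        = [3, 4, 4, 2, 5, 5, 3, 4]
print(round(pearson_r(understandability, confidence), 2))  # → 0.87
```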

    The relationship between understandability and engagement for all four drawing types was weaker relative to the other relationships but was still fairly strong. The results indicate that drawings that capture the audience’s attention can fall short on understandability. In addition, plans and sections that are relatively effective in communicating design ideas can be considered quite boring. In other words, engagement and understandability did not always go hand in hand for drawings. Additional studies that explore this relationship could be useful in determining situations in which this might be the case.

    Evidence-Based Recommendations

    A main goal of this research is to provide insight into ways in which designers can create a people-friendly participation process. RPM can be used as a conceptual framework to guide designers’ decisions in facilitating the participation process. This section provides recommendations on how RPM can be applied to participation in design based on the studies’ findings. It discusses the findings in terms of the three components of RPM—building mental models, being effective, and taking meaningful action. Each component is considered from the perspectives of both the designer and the participant.

    Building Mental Models

    Professional training provides the designer with many tools for visualization and experience with using them. Citizens, however, lack this experience and may not find the material that is presented by the designer easy to understand. The ability to understand the material is a fundamental requirement for building mental models of feasible alternative solutions for a project. Furthermore, understandability can affect people’s preference for a design. Results from each of the studies provide insights into ways designers could enhance the layperson’s ability to understand the visual material.


    When trees are pink and the sky is divided into large blue and white squares, people have more difficulty interpreting the material because it runs contrary to their intuition. When labels are hard to read or too numerous, people can get frustrated or impatient because they cannot make sense of what is presented. However, when drawings are relatively simple and clear, it is easier for people to explore the space and grasp what is intended.

    As Study 3 showed, plans and sections are harder for laypeople to understand than perspective and photorealistic drawings. However, plans and sections can communicate design ideas quite well and even better than perspective and photorealistic drawings if they meet certain criteria. The ones that were found to be understandable were simple, neat, legible, and coherent. Features in the drawings were easily identifiable, and paths were clearly marked. Writing and labels were easy to read.

    The recommendation may seem simplistic, yet these principles are frequently violated, impeding the task of model building. Build on what people already know and expect (such as green rather than pink trees). Even when just imagining being there, clear paths, landmarks, and signage enhance our ability to find our way through a setting. Coherent drawings aid in understanding the spatial relationships among the different areas of the setting, thereby strengthening the mental model.


    Trusting our own intuitions when we are experts is dangerous. As described by S. Kaplan (Chapter 3), experts see things differently as a result of their mental models becoming more compact over time. Since this process is gradual, experts often do not recognize how their perceptions have changed. As a result, a designer may not be the best judge of which drawings are most useful and simple enough for the layperson to understand. Checking the designer’s intuitions therefore requires obtaining feedback from nondesigners.

    The comparisons between the laypeople group and those with some design expertise in Study 3 provided evidence of some of the differences in how drawings are perceived. While experts were more facile in interpreting plans and sections, they were not more adept than laypeople when interpreting perspective and photorealistic drawings. Their expertise led them to see the effectiveness of these drawings differently than laypeople did. Designers are trained to use drawings as a means of designing a setting and communicating how it will be built or implemented. With regard to photorealistic drawings, comments from the expert group in the study indicated some difficulty determining aspects of the setting that were being proposed versus existing features. Their confusion seemed to stem from an implementation or design-build perspective. Laypeople, on the other hand, found the realistic-looking images easy to understand for their purpose of imagining how they would experience the setting.

    This finding is supported by research revealing significant differences in the way laypeople and experts frame or approach design problems. In a study by Van Herzele (2004), laypeople judged the design of a proposed park based on how well it fit within the community. Designers, on the other hand, tended to work from the “inside out,” thinking about the park as the central focus and then addressing the challenge of fitting it into the surrounding context. Laypeople identified potential uses of the park based on their experiences in the community, whereas designers envisioned potential uses based on how their design could be used. Also, laypeople identified potential management issues and misuses of the setting early in the planning process, while designers were more focused on developing an overall plan or a big picture for the park in this early stage of planning.

    Experts and laypeople appear to have different goals or purposes for design drawings, which may be reflected in their assessment of the drawings’ effectiveness and the inferences they draw from them. Knowing how experts’ and laypeople’s perceptions differ is a powerful way to mitigate some of the problems that can arise from expertise. This information is useful not only when choosing which drawings to use but also when interpreting participants’ reactions and feedback.


    Photographs are infrequently used in presenting design possibilities. Yet the public has extensive experience with photographs, and they can be very effective in providing a variety of examples for a common theme. Designers and researchers have reported success in using the photo questionnaire to acquire useful information about the participants’ perceptions in a variety of contexts, including landscape design (R. Kaplan, 1977, 1993; S. Kaplan & Kaplan, 1989), land-use planning (Ryan, 2002, 2006), the design of outdoor spaces at a hospital (Carpman & Grant, 1993), and scenic assessments (Kearney et al., 2008). S. Kaplan & Kaplan (1989) and R. Kaplan (1977) provide useful information on how to carry out a photo questionnaire. A critical step is choosing photographs that adequately represent a sample of the environment being tested. In addition to informing participants about design possibilities, a variety of carefully chosen examples will allow the researcher to discover how the participants see the environment and identify differences among groups of participants.

    Study 2 evaluated the photo questionnaire from the participants’ perspective. It showed that the photo questionnaire is engaging and easy to use. It also showed that the photographs in this study were somewhat less effective in providing a sense of the bigger picture of design possibilities. The finding suggests that a variety of approaches can achieve what any one of them cannot.


    All of us are readily attracted to a highly articulated three-dimensional model. These, however, are not a cost-effective way of gaining citizen feedback early in the design process. When it comes to drawings, by contrast, the studies show that understandability can affect preference. It is harder to like something that one finds confusing. In Study 1 there was a substantial difference in how much participants liked the designs when analyzed in terms of their ratings of the designs’ understandability. The more difficulty participants had understanding the design presentation, the less they liked the design option presented. This suggests that designers and planners might find it useful to measure the public’s understanding of what is presented to help them interpret why some options are more preferred than others. This is especially important, as expertise may cloud designers’ perceptions of participants’ understanding.

    Being Effective

    People derive satisfaction from sharing their knowledge and using their skills effectively. This motivation is adaptive, since it leads people to seek situations in which they feel competent and avoid those in which they do not. Designers rarely have training in facilitating a participation process. Lacking such skills can make the process daunting, frustrating, and ineffective when participants do not seem engaged or fail to provide input that the designer finds valuable. Worse yet, if participants are vocal and hostile, the results can be discomforting for all involved. The research presented throughout this chapter can contribute to designers’ confidence in their ability to facilitate a participatory process that promotes information sharing, thereby fostering a supportive environment for participation. Grese (Chapter 19) offers additional insights on how to include community participation in the training of landscape architects.


    Given the limited capacity of attention described by Sullivan (Chapter 4), it is important to use methods that engage participants. Results from the first study indicate that visual graphics and presentation style matter, particularly for understanding design presentations. The most understandable presentation was the one with the fewest slides, each containing only three or four short bullet points. The least understandable presentation, on the other hand, packed a great deal of information onto each slide; the small font size and style of the text on some slides made them difficult to read.

    From a cognitive psychology perspective, it is not surprising that the amount of information and how it is presented play a role in people’s ability to build mental models of the design options and visualize design alternatives. As discussed by R. Kaplan (Chapter 2), presenting a great deal of information, especially in an incoherent manner or without an overarching structure, can overwhelm participants and make it easy to miss important points. Recognizing the limited capacity (Basu & Kaplan, Chapter 1; S. Kaplan, Chapter 3) of people’s attention by organizing the information into three or four main points and using consistent formatting can enhance understandability and people’s sense of competence.

    Meaningful Action

    People care deeply about the environments in which they live, work, and play. Public participation in design offers an opportunity for community members to take part in creating an environment that meets their community’s needs. Participants are more likely to feel that they can have an impact when given the opportunity to participate early on and feel that their concerns have been heard and their input is valued. One option for inviting feedback is the photo questionnaire. Results from the first two studies provide valuable feedback on the use of the photo questionnaire as a way to foster meaningful action.


    The first two studies found that participants appreciated being asked for their input. The photo questionnaire performed particularly well on measures of meaningful participation: the items related to the participants’ sense of participation had the highest mean ratings of all items. The photo questionnaire also reached a larger and more representative group of potential users than the design sessions, thereby providing an opportunity for more people to engage in meaningful action.


    Editors’ Comment: Once again, an example of how important it is to take affect into account when designing supportive environments.

    A weakness of the photo questionnaire used in the design studies was that the relevance of the photographs to the participants’ concerns was rated the lowest of all evaluation items. An important component of meaningful participation is the feeling that one’s concerns have been heard; thus, designers need to make a concerted effort to seek and demonstrate their understanding of the participants’ needs. Participants may not easily make the connection between rating photographs and revealing their concerns and preferences; thus, more traditional means by which participants can directly share their concerns may be important for fostering feelings of being heard. The second study indicated that a space in the survey for participants to add comments may not be enough. Very few participants (3%) provided written comments about their concerns and preferences.

    There are a number of options available to enhance participants’ feelings that their concerns have been heard. The photo questionnaire need not be limited to preference ratings of pictures. Additional questions could take the form of ratings of particular concerns or semistructured, open-ended questions. In previous studies, participants’ responses to these types of questions revealed useful information when compared to their preference ratings of pictures of design concepts (R. Kaplan, 1977; Kearney et al., 2008). Also, the photo questionnaire could be combined with other more traditional approaches for people to share their needs and concerns, such as interviews or a series of focus groups or design discussions. Finally, designers could follow up with participants and provide feedback summarizing the key concerns that emerged from the participation process.

    Designers gain satisfaction from knowing that they can make a difference not only in the design of places but also in the design of the participation process. Like participants, they too want to feel appreciated and needed. This research demonstrates that designers can make a difference in the participation process by choosing drawings and approaches that meet the participants’ needs.


    People develop strong attachments to the environments in which they live, work, and play (see Petrich, Chapter 13). When changes to these environments are proposed, the opportunity to provide input makes a substantial difference in people’s reactions and cooperation. People seek opportunities to expand their understanding, use their skills, and feel needed. When these needs are not met, feelings of helplessness, anger, and frustration readily emerge. Participation in design provides opportunities for people to feel that they make a difference. When done well, it leads to a meaningful exchange of information between designers and participants and has significant impact on people’s satisfaction and quality of life. It also can lead to more opportunities for meaningful action through an increased sense of ownership and participation in the long-term stewardship of the place.

    This chapter demonstrates how RPM can be applied to improve participation in design. Many of the recommendations may seem obvious once stated, yet there are all too many examples of participation efforts that did not fare well. Misunderstandings or lack of engagement have led to undesirable outcomes or missed opportunities for utilizing one’s skills or participating in meaningful activities. Having a framework such as RPM to guide decisions and actions can lead to more supportive environments in terms of both the final design (by identifying and meeting users’ needs) and the design process (by helping people to participate and designers to feel that their efforts inform decisions).

    Findings from three empirical studies provide evidence that the designers’ choice of design graphics and the approaches used to gather feedback matter a great deal. The results are striking regarding the effectiveness of plan drawings and how simplicity, coherence, and consistency with common perceptions can greatly improve people’s understanding of these drawings. The results also uncover differences in how designers and laypeople see things. Two of the studies provide evidence in support of using the photo questionnaire to foster meaningful participation in design. This research equips designers with critical knowledge for making a difference in creating an effective participation process.

    At the same time, the impact and value of this research extend beyond the designer. Participants, educators, and students can benefit from this evaluation of approaches to participation in design. In addition, researchers interested in applying and testing RPM are likely to find the variety of research methods used and measures of RPM useful. Research on RPM would benefit from exploration of other ways to operationalize or measure the components of the model.

    The great potential of RPM lies in its portability and wide applicability. While the studies presented in this chapter concern the design of small-scale nature settings, many of the recommendations apply in a variety of contexts, such as architecture, planning, workplace remodeling, green building, ecological restoration, and community gardening. Nor are public presentations the only situations in which feedback is requested; any such context can benefit from consideration of the participants’ core needs for building models, being effective, and taking meaningful action. The results can be remarkable: greater cooperation, an engaged community with the motivation and understanding needed to contribute meaningful input, user-inspired designs for places that matter, and a sense of ownership that fuels a long-term desire to protect these special places.

    Appendix A

    TABLE 21.A Items Used to Measure Dependent Variables (by Study)

    Study 1 (Design Sessions with photo-questionnaire) & Study 2 (Photo-questionnaire)

    Understanding. Please rate how well each statement describes how you feel.
    • The visual media / photographs were effective.
    • The visual media / photographs were overwhelming.*
    • I have a greater awareness of the range of choices for nearby nature settings.
    • The photo-questionnaire incorporated diverse settings.* (Study 2 only)
    Please rate how easy it was for you to perform the following tasks.
    • Visualize alternative nature settings
    • Imagine movement through the space
    • Feel you could find your way
    • Feel what it would be like to be in the space
    • Think of the space from multiple perspectives

    Engagement. Please rate how well each statement describes how you feel.
    • I was actively engaged.
    • I found the presentation / photo-questionnaire interesting.
    • Info presented / the material was relevant to my concerns.
    • The presentation / photo-questionnaire held my attention.
    • I was able to explore different possibilities.

    Participation. Please rate how well each statement describes how you feel.
    • The presenters were attentive to comments. / The photo-questionnaire captured my comments. (Study 1 only)
    • I appreciated being asked for my input.
    • I found the session / photo-questionnaire frustrating.*
    Please rate how easy it was for you to perform the following task.
    • Provide your input [Study 1: . . . during the design session]

    Study 3 (Four drawing types)
    • Understandable: It is easy to make sense of what I am seeing and what kind of place it is.
    • Engaging: The drawing is interesting to look at; holds my attention.
    • Confidence: Based on this drawing, I would feel confident discussing the design with the landscape architect.
    • Frustrating: The drawing makes me feel aggravated or confused.

    * These items were dropped based on results of the analysis of internal consistency (see Appendix B).

    Appendix B: Statistical Procedures Used in the Studies

    Studies 1, 2 and 3: Comparisons

    In all three studies, comparisons were analyzed using a linear mixed-model procedure to account for the repeated measure design. The repeated covariance type used in the analysis was compound symmetry. Bonferroni adjustments were made for multiple comparisons of the estimated means.
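    The Bonferroni step can be stated concretely. The chapter does not name the statistical software used, so the following is purely an illustration: a minimal Python sketch of adjusting a set of pairwise-comparison p-values.

```python
def bonferroni_adjust(p_values):
    """Bonferroni adjustment for multiple comparisons: multiply each
    raw p-value by the number of tests performed, capping at 1.0.
    This controls the familywise error rate across all comparisons."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values from three pairwise comparisons of
# estimated means (e.g., presentations A vs. B, A vs. C, B vs. C)
raw = [0.01, 0.02, 0.40]
adjusted = bonferroni_adjust(raw)
```

    The adjustment trades power for protection: with three comparisons, each raw p-value is tripled before being judged against the usual 0.05 threshold.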

    Studies 1 and 2: Relationships between Dependent Variables

    Relationships between the dependent variables—understandability, engagement, and participation—were analyzed using bivariate correlation coefficients.
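    The bivariate (Pearson) correlations themselves are straightforward to compute. A small Python sketch, with invented ratings purely for illustration:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    score vectors (e.g., per-participant mean ratings on two scales)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant mean ratings for two dependent variables
understanding = [4.0, 3.5, 5.0, 2.5, 4.5]
engagement = [4.2, 3.0, 4.8, 2.0, 4.0]
r = pearson_r(understanding, engagement)  # strong positive association
```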

    Study 1: Regression Analysis

    In Study 1, a regression analysis was conducted to test whether the effectiveness of the presentation influenced participants’ preference for the design. The analysis tests whether understandability, engagement, and participation predict preference for the design presentation. The “Backwards” method was used to test a model with all three predictor variables, followed by subsequent models eliminating the least influential predictor. The strength of this method lies in its use of all information available to determine the least influential predictor in each model, which is then removed in the subsequent model.
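    To make the backward method's logic concrete, here is a self-contained Python sketch. It is not the studies' actual code: the predictor names and data are hypothetical, and it scores "least influential" by the R² lost when a predictor is removed, a simplification of the significance-based criteria statistical packages typically apply.

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination
    with partial pivoting (used here for the normal equations)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on the columns
    of X, with an intercept added automatically."""
    n = len(y)
    Z = [[1.0] + row for row in X]
    p = len(Z[0])
    XtX = [[sum(z[a] * z[b] for z in Z) for b in range(p)] for a in range(p)]
    Xty = [sum(Z[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    ybar = sum(y) / n
    ss_res = sum((y[i] - sum(Z[i][a] * beta[a] for a in range(p))) ** 2
                 for i in range(n))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def backward_eliminate(X, y, names, min_loss=0.01):
    """Backward elimination: repeatedly drop the predictor whose
    removal costs the least R^2, until removing any remaining
    predictor would cost at least min_loss."""
    keep = list(range(len(names)))
    while len(keep) > 1:
        full = r_squared([[row[j] for j in keep] for row in X], y)
        loss, worst = min(
            (full - r_squared([[row[k] for k in keep if k != j] for row in X], y), j)
            for j in keep)
        if loss >= min_loss:
            break
        keep.remove(worst)
    return [names[j] for j in keep]

# Hypothetical data: preference depends on the first two predictors;
# the third carries no additional information and should be dropped.
X = [[3, 4, 1], [4, 3, 5], [5, 5, 2], [2, 2, 4],
     [5, 4, 3], [3, 5, 2], [4, 3, 5], [2, 3, 1]]
y = [x1 + 0.5 * x2 for x1, x2, _ in X]
kept = backward_eliminate(X, y, ["understanding", "engagement", "participation"])
```

    The strength noted above carries over: at each step the full remaining model is used to decide which predictor matters least, so early removals are informed by all available predictors.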

    Studies 1 and 2: Operationalizing RPM Domains—Items Used to Measure RPM

    In the first two studies, groups of survey items were developed to measure the components of RPM as they related to the goals of this research. Appendix A provides a list of these items. Since this was the first time these measures were used, statistical tests were performed to determine the fit or coherence of the groups of items. Factor analyses were performed using the Principal Components method with Varimax Rotation. Factors were extracted based on eigenvalues greater than one. Also, Cronbach’s alpha coefficients were calculated in both studies. A coefficient of 0.70 or higher is often used as an indication of sufficient internal consistency (de Vaus, 2002; Nunnally, 1978).
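    Cronbach's alpha itself is easy to compute from the raw item scores. A minimal Python sketch (the ratings below are invented for illustration, not data from the studies):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale: `items` is a list of columns,
    one per survey item, each holding the same respondents' scores
    in the same order. Alpha rises as the items covary more strongly."""
    k = len(items)
    n = len(items[0])
    totals = [sum(col[i] for col in items) for i in range(n)]
    sum_item_var = sum(pvariance(col) for col in items)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Three hypothetical understandability items, five respondents
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 3],
    [3, 5, 4, 5, 4],
]
alpha = cronbach_alpha(items)  # comfortably above the 0.70 benchmark
```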

    In Study 1, the effectiveness variables show a moderate to high internal consistency for all three presentations, as shown in Table 21.B1. The single exception is for participation in the case of presentation “A” where the alpha coefficient is just below 0.70.

    TABLE 21.B1 Effectiveness Measures Used in Design Session (Study 1)
    Cronbach’s Alpha by Presentation

    Understandability
    • The visual media were effective.
    • I have a greater awareness of the range of choices for nearby nature settings.
    • [Ease of performing the following tasks:]
    • Visualize alternative nature settings
    • Imagine movement through the space
    • Feel you could find your way
    • Feel what it would be like to be in the space
    • Think of the space from multiple perspectives

    Engagement
    • I was actively engaged.
    • I found the presentation interesting.
    • Info presented was relevant to my concerns.
    • I was able to explore different possibilities.
    • The presentation held my attention.

    Participation
    • The presenters were attentive to comments.
    • I appreciated being asked for my input.
    • Ease of providing your input

    In Study 2, the alpha coefficients for understandability (0.87) and engagement (0.86) indicated a high internal consistency for the photo questionnaire, as shown in Table 21.B2. Thus, average means across items were calculated for these two dependent variables and were used to compare groups of participants. Based on the factor analysis, the other dependent variable, participation, included only two items and did not meet the standard, with an alpha coefficient of 0.62. Thus, the two items for participation were analyzed independently.

    TABLE 21.B2 Photo-questionnaire (Study 2), Mean Ratings by Effectiveness Variable (n = 154) and Cronbach’s Alpha Coefficients

    Understandability: Cronbach’s Alpha = 0.87
    The photographs were effective. (M = 4.35, SD = 0.81)
    I have a greater awareness of the range of choices for nearby nature settings. (M = 3.89, SD = 1.05)
    [Ease of performing the following tasks:]
    Visualize alternative nature settings (M = 4.39, SD = 0.82)
    Imagine movement through the space (M = 4.29, SD = 0.96)
    Feel you could find your way (M = 4.22, SD = 0.96)
    Feel what it would be like to be in the space (M = 4.27, SD = 0.89)
    Think of the space from multiple perspectives (M = 4.07, SD = 0.98)

    Engagement: Cronbach’s Alpha = 0.86
    I was actively engaged. (M = 4.29, SD = 0.92)
    I found the photo-questionnaire interesting. (M = 4.28, SD = 0.88)
    The material is relevant to my concerns. (M = 3.76, SD = 1.11)
    I was able to explore different possibilities. (M = 3.81, SD = 0.98)
    The photo-questionnaire held my attention. (M = 4.31, SD = 0.86)

    Participation: Cronbach’s Alpha = 0.62
    I appreciate being asked for my input. (M = 4.57, SD = 0.78)
    Ease of providing your input (M = 4.53, SD = 0.73)

    Dropped items
    The photo-questionnaire incorporated diverse settings.a (M = 4.43, SD = 2.38)
    The photographs were overwhelming.a (M = 1.49, SD = 1.01)
    I found the photo-questionnaire frustrating.b (M = 1.36, SD = 0.87)

    All items were rated on a 5-point scale from 1 (“not at all”) to 5 (“very well/easy”).
    a These two items were hypothesized as measures of understandability for a total of nine items. Cronbach’s alpha for the original nine items was 0.72.
    b This item was hypothesized as a measure of participation for a total of three items. Cronbach’s alpha for the original three items was 0.47.


    1. This study is part of a larger research effort: Kimberly Bosworth Phalen, Evaluating approaches to participation in design: The participants’ perspective, PhD dissertation, University of Michigan, 2011.


    References

    • Carpman, J. R., & Grant, M. A. (1993). Design that cares: Planning health facilities for patients and visitors. Schenectady, NY: American Hospital Publishing.
    • de Vaus, D. A. (2002). Surveys in social research (5th ed.). London: Routledge.
    • Kaplan, R. (1977). Preference and everyday nature: Method and application. In D. Stokols (Ed.), Perspectives on environment and behavior: Theory, research, and applications (pp. 235–250). New York: Plenum.
    • Kaplan, R. (1993). Physical models in decision making for design: Theoretical and methodological issues. In R. W. Marans & D. Stokols (Eds.), Environmental simulation: Research and policy issues (pp. 61–86). New York: Plenum.
    • Kaplan, S., & Kaplan, R. (1989). The visual environment: Public participation in design and planning. Journal of Social Issues, 45(1), 59–86.
    • Kearney, A. R., Bradley, G. A., Petrich, C. H., Kaplan, R., Kaplan, S., & Simpson-Colebank, D. (2008). Public perception as support for scenic quality regulation in a nationally treasured landscape. Landscape and Urban Planning, 87, 117–128.
    • Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
    • Phalen, K. B. (2011). Evaluating approaches to participation in design: The participants’ perspective. Doctoral dissertation, University of Michigan, Ann Arbor.
    • Ryan, R. L. (2002). Preserving rural character in New England: Local residents’ perceptions of alternative residential development. Landscape and Urban Planning, 61(1), 19–35.
    • Ryan, R. L. (2006). Comparing the attitudes of local residents, planners, and developers about preserving rural character in New England. Landscape and Urban Planning, 75(1–2), 5–22.
    • Van Herzele, A. (2004). Local knowledge in action: Valuing nonprofessional reasoning in the planning process. Journal of Planning Education and Research, 24(2), 197–212. doi:10.1177/0739456x04267723


    This research was funded by the U.S. Forest Service, Northern Research Station, and the University of Michigan Rackham Graduate School. I would like to thank Rachel Kaplan and Robert Grese for their guidance in carrying out the studies. I am forever grateful to Rachel Kaplan and Stephen Kaplan for their perspective and our many stimulating conversations.