How Much Research is Enough?
This work is licensed under a Creative Commons Attribution 3.0 License. Please contact firstname.lastname@example.org to use this work in a way not covered by the license.
In an earlier issue of Weave, Emily Mitchell and Brandon West (2016) described a number of low-barrier solutions for gathering UX insights when making design decisions. While this is useful, what do you do when it’s not enough for stakeholder buy-in? Or, conversely, what if you are expected to conduct a rigorous research study, yet don’t believe rigor is necessary to make design decisions?
Quick-and-dirty methods are often regarded with skepticism. On the other hand, more intensive research is time-consuming, and not always necessary when the goal is selecting a label for a drop-down menu.
Erika Hall’s insightful book Just Enough Research (2013) has helped me work through some of these tensions, and could be valuable to fellow librarians. In this article, I summarize key points from Hall’s book that I found useful in the context of user experience in libraries. If you’re looking for something practical, as implied in this article’s title, I’ve also supplied a few scenarios with suggested research techniques. You’ll notice that my scenarios are strictly online or interface focused, as that’s the only user experience work I’ve been involved with. If you’re looking for the application of research techniques to physical spaces or administrative structures, you won’t find it here.
Just Enough Research
In her book, Hall distinguishes between “pure research,” “applied research,” and “design research.” Pure research is “carried out to create new human knowledge, whether to uncover new facts or fundamental principles” (Hall, 2013, p. 12). Applying this definition to user experience in libraries, studies of cutting-edge interfaces for library catalogs, or the groundwork for an institution’s first digital collection, could fit into this category. Even more broadly, ethnographic research that seeks to understand information-seeking behavior in the academic context would be pure research.
Hall separates pure research from applied research. Applied research “borrows ideas and techniques from pure research to serve a specific real-world goal” (p. 12). An example of applied research is a designer building a library interface who uses the pure research of the Nielsen Norman Group as a starting point. The Nielsen Norman Group has supplied us with well-known usability principles such as when writing for the web, less is more (Nielsen, 2011). Recently, through their evidence-based research, they found that commonly used hamburger menus are not as usable as was initially thought (Pernice & Budiu, 2016). Applying general usability principles and design conventions is undoubtedly useful. Conducting pure research within our libraries to learn the uniqueness of their contexts is an ideal worth striving for, but we need applied research to make change in libraries.
Hall’s design research is a unique form of applied research, specifically for making design decisions. “Design research is for gathering useful insights” (Hall, 2013, p. 13). An example could be taking the pure research done on the first prototype of a discovery layer and applying it to some of our design decisions. While useful and easily accessible, it’s a less rigorous form of research than studying a specific context in which other research cannot be applied. A common argument in opposition to design research in libraries is “but we already know all of this.” As Hall helpfully explains, a good counter-argument is, “Unless this knowledge comes from recent inquiry specific to your current goals, a fresh look will be helpful. Familiarity breeds assumptions and blind spots” (p. 26).
For me, the value in distinguishing between pure research, applied research, and design research is that it brings validity and credibility to UX research within librarianship, which is often viewed skeptically and not always understood. My hope is that by acknowledging the limited role of design research and distinguishing it from pure research, we can increase buy-in. In order to maintain support from colleagues who are unaccustomed to user experience research, clarifying the role and scope of our proposed research is essential.
Anthropologists Donna Lanclos and Andrew Asher (2016) argue that “constructing long-term views of student behavior, gained via ethnography, is good and necessary practice for effective, engaged, and innovative libraries, and indeed education generally.” This is an ideal worth striving for, and I agree that pure ethnography should be done to inform high-level administrative decisions. We need advocates to push for more ethnography to support the future of libraries in a quickly shifting landscape. But Andrew Priestner (2017) responded, “the goal of user experience work, as I see it, is not a purity of methods but a balancing of these methods with a practical effectiveness of outcomes.”
So if we want to ensure we’re representing our users’ needs, we need to ensure a level of rigor in our quick-and-dirty methods. Design research may be less rigorous than the alternative types, but some rigor is still important to reduce bias. As Hall (2013, pp. 33–35) outlines, there are a number of biases to watch for. Sampling bias, interviewer bias, and social desirability bias are the most difficult to avoid in user experience research within libraries.
Sampling Bias
Most librarians will be familiar with sampling bias from their graduate coursework. According to Wikipedia, it is “a bias in which a sample is collected in such a way that some members of the intended population are less likely to be included than others” (Sampling bias, 2017). Hall says that “sampling bias is almost unavoidable in quick and dirty qualitative research” (2013, p. 34). But as she points out, this “can always be countered by being mindful in the general conclusions you draw” (p. 34). If we’re designing a LibGuide for undergraduate students and only have graduate students available for a few quick usability tests, acknowledging the limitations of those tests helps counter the bias. However, reducing sampling bias as much as possible will yield more valuable results.
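One low-effort way to reduce sampling bias is to recruit participants by random selection from the full user population rather than testing whoever happens to be nearby (a convenience sample). The sketch below illustrates the idea; the participant pool and naming scheme are invented for this example, and in practice the pool might come from a course roster or library-account list, with consent.

```python
import random

# Hypothetical pool of potential participants (names are invented).
pool = (
    [f"undergrad_{i}" for i in range(300)]
    + [f"grad_{i}" for i in range(60)]
)

random.seed(42)  # fixed seed so this sketch is reproducible

# Draw a simple random sample instead of a convenience sample.
participants = random.sample(pool, k=9)

# Note who is represented, so limitations can be reported honestly.
n_grad = sum(1 for p in participants if p.startswith("grad"))
print(f"{len(participants)} invited, {n_grad} graduate students")
```

Even a small random sample like this makes it easier to state (and bound) the limitations of the study when reporting results.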
Interviewer Bias
Hall describes interviewer bias as “inserting one’s own opinion into an interview” (Hall, 2013, p. 34). This can be a particularly tricky bias to overcome. An example of interviewer bias would be asking during an interview, “How do you use the library catalog?”, which implies that the user does in fact use the catalog, or even knows what that word means.
Social Desirability Bias
Hall points out that “it can be hard to admit to an interviewer that you don’t know what certain terminology is” (p. 35). Students, faculty members, or other users may not admit that they don’t know what a library catalog or a subject heading is. This is called social desirability bias. One technique for reducing this bias is to “emphasize the need for honesty and promise confidentiality” (Hall, 2013, p. 35).
What Should You Do?
Amanda Etches and Aaron Schmidt summarized many different research methods applied to specific library-related situations in their book Useful, Usable, Desirable (2014). They describe twelve research methods with key strengths and example use cases:
- Surveys: Ask questions about attitudes and opinions, not behavior. Example usage: Ensuring users are receiving assistance on the website when and where they need it.
- Focus groups: Aim for unique opinions and inclusion. Example usage: Ensuring website searches of online collections are relevant to member needs.
- User interviews: Good for gathering info about attitude and behavior. Example usage: Ensuring library web services solve problems.
- Contextual inquiry: Observing users completing a task. Example usage: Ensuring the website supports diverse behaviors.
- Journey mapping: Identify specific touch points that cause friction. Example usage: Ensuring first-time visitors can easily locate all parts of the website.
- Usability testing: Testing online environments to see what works and what doesn’t. Example usage: Ensuring users can easily accomplish critical tasks.
- Cultural probes: Provide a way to collect data over an extended period of time. Example usage: Ensuring services are consistent across the organization.
- Card sorts: Help determine a site architecture and navigation. Example usage: Ensuring the website is easy to navigate.
- A/B testing: Comparing two versions of a design. Example usage: Developing a homepage with two different search boxes.
- Personas: Ensure everything you do is designed in a user-centered way. Example usage: Ensuring marketing materials are relevant to user needs.
- Five-second tests: To determine what elements on a web page stand out the most. Example usage: Ensuring the homepage clearly expresses what people can do on your site.
- Content audit: Helps you take stock of what’s on your website and allows you to do some useful assessment. Example usage: Ensuring that web content is engaging.
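To make one of these methods concrete, card-sort results are often analyzed by counting how frequently participants place two labels in the same group; pairs that co-occur often are good candidates for the same navigation section. The labels and responses below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Each participant's sort: groups of website labels placed together.
# These example sorts are invented.
sorts = [
    [{"Databases", "Journals"}, {"Hours", "Contact Us"}],
    [{"Databases", "Journals", "Catalog"}, {"Hours", "Contact Us"}],
    [{"Databases", "Catalog"}, {"Journals"}, {"Hours", "Contact Us"}],
]

# Count how often each pair of labels lands in the same group.
pair_counts = Counter()
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for pair, n in pair_counts.most_common():
    print(pair, f"{n}/{len(sorts)} participants")
```

Here “Hours” and “Contact Us” co-occur for every participant, suggesting they belong together, while “Journals” and “Catalog” are grouped less consistently and may need follow-up testing.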
Combining Etches and Schmidt’s research methods with Hall’s perspective on bias, below I’ve described four common library UX scenarios where you could apply these techniques. These recommendations are based partially on my own experience using some of these research techniques (i.e. usability tests, user interviews, journey mapping, surveys, and card sorts) in my role as virtual services librarian. The scenarios themselves don’t come from personal experience, but are instead situations I imagine to be quite common in academic library contexts. My assessments of each technique and recommendations are subjective.
Scenario 1. Adjusting the label on a new library discovery system search box
You have a short timeline to complete this task, approximately one month. You’re not solely a UX librarian, you have multiple other responsibilities, such as working on the reference desk and teaching classes. But you want to ensure your users have a positive experience searching the library’s resources. A working group has been pulled together to refine the usability of the new discovery system and the group has decided to call it Discovery Search.
Research question: Will the search results created when executing a search of the discovery system match the user expectations for a Discovery Search results list? If not, will this create frustration for users?
| Technique | Resource and time intensive | Bias | Credibility buy-in |
| --- | --- | --- | --- |
| Usability tests | Somewhat | Sampling bias, social desirability bias | Medium |
| A/B testing | Somewhat; depends on sample size and tools used | Sampling bias | High, with a large sample |
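If the two label variants are compared on a countable outcome, such as how many visitors who saw each label went on to use the search box, a two-proportion z-test is one common way to check whether the observed difference could be chance. The sketch below is a minimal, self-contained version; the counts are invented for illustration and a statistics library would normally be used instead.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Standard normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: search-box use under label A vs. label B.
z, p = two_proportion_z(42, 120, 61, 118)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the label difference is unlikely to be noise; with small samples, the sampling-bias caveats above still apply.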
Scenario 2. Preliminary exploratory research prior to implementing a new digital collection
Your team is in the early stages of establishing a new digital rare book collection. You’d like to learn more about the context in which faculty, researchers, and the public will use this tool, and how it will integrate into their day-to-day activities. You have good support for doing this research and a long timeline in which to meet your goal.
| Technique | Resource and time intensive | Bias | Credibility buy-in |
| --- | --- | --- | --- |
| Contextual inquiry | Yes | Sampling bias | Medium |
| User interviews | Yes | Social desirability bias, interviewer bias, sampling bias | Medium |
| Focus groups | Yes | Social desirability bias, interviewer bias, sampling bias | Medium |
Scenario 3. Determining the most critical tasks on a large university library website homepage
Your library’s website has built up a large collection of links, images, and content that make the page very busy and potentially overwhelming. Stakeholders are continually requesting new content be added to the homepage. You’d like to identify the website’s top five critical tasks to use as a boundary for what should be included on the homepage.
Research goal: To determine the top five critical tasks across all user groups when using your library’s website.
| Technique | Resource and time intensive | Bias | Credibility buy-in |
| --- | --- | --- | --- |
| Survey | Somewhat; depends on rigor of methodology | Sampling bias | High; depends on rigor of analysis and sample size |
| Cultural probes | Yes | Social desirability bias, interviewer bias | High |
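If the survey asks each respondent to pick their most important tasks from a fixed list, the analysis can be as simple as a frequency count across responses. A sketch with invented responses and task names:

```python
from collections import Counter

# Invented survey responses: each respondent picked up to three tasks.
responses = [
    ["Search for articles", "Renew a book", "Book a study room"],
    ["Search for articles", "Check library hours"],
    ["Renew a book", "Search for articles", "Ask a librarian"],
    ["Book a study room", "Check library hours", "Search for articles"],
]

# Tally how many respondents selected each task.
counts = Counter(task for resp in responses for task in resp)
top_five = counts.most_common(5)
for task, n in top_five:
    print(f"{n}/{len(responses)} respondents: {task}")
```

The resulting top-five list then serves as the boundary for homepage content: anything that doesn’t support a critical task is a candidate for removal.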
Scenario 4. Evaluating LibGuides for first-year undergraduate students
You’re a librarian who has created a suite of information literacy LibGuides and tutorials targeting first-year students, but traffic was low throughout the previous year. Your goal is to increase traffic in the upcoming school year.
Research question: Are the LibGuides adequately represented in the library website’s information architecture, including their labelling and placement within the website?
| Technique | Resource and time intensive | Bias | Credibility buy-in |
| --- | --- | --- | --- |
| Five-second test | No | Sampling bias | Low |
| Journey mapping | Yes | Sampling bias, social desirability bias | Medium |
When deciding what type of UX research to conduct, it’s important to consider not only the time and resource investment of the methods, but also the potential for bias and your colleagues’ and stakeholders’ understanding of the methodologies used. When proposing design research methods or presenting results, communicating the different intentions behind design research versus pure research, and accounting for any potential biases, should increase buy-in and your chances of a successful user experience project.
- Etches, A., & Schmidt, A. (2014). Useful, usable, desirable: Applying user experience design to your library. Chicago: American Library Association.
- Hall, E. (2013). Just enough research. New York: A Book Apart.
- Lanclos, D., & Asher, A. (2016). “Ethnographish”: The state of the ethnography in libraries. Weave: Journal of Library User Experience, 1(5). doi: http://dx.doi.org/10.3998/weave.12535642.0001.503
- Mitchell, E., & West, B. (2016). DIY usability: Low-barrier solutions for the busy librarian. Weave: Journal of Library User Experience, 1(5). doi: http://dx.doi.org/10.3998/weave.12535642.0001.504
- Nielsen, J. (2011). Top 10 mistakes in web design. Nielsen Norman Group. Retrieved from https://www.nngroup.com/articles/top-10-mistakes-web-design/
- Pernice, K., & Budiu, R. (2016). Hamburger menus and hidden navigation hurt UX metrics. Nielsen Norman Group. Retrieved from https://www.nngroup.com/articles/hamburger-menus/
- Priestner, A. (2017). What’s in a name? Does it really matter if we call it UX, ethnography, or service design? Weave: Journal of Library User Experience, 1(6). doi: http://dx.doi.org/10.3998/weave.12535642.0001.603
- Sampling bias. (2017). In Wikipedia, The Free Encyclopedia. Retrieved August 1, 2017, from https://en.wikipedia.org/wiki/Sampling_bias