Although every library would benefit from running usability studies, not every library has a dedicated staff available to conduct those studies. Anecdotally, librarians seem to feel incapable of undertaking usability studies for reasons including time, budget, and expertise. We all have other job duties and tight budgets. Moreover, how many of us have ever actually received any kind of training or education on conducting usability studies? For all their importance, they’re not exactly standard coursework for a degree in library science.

We all may be jealous of the libraries that have dedicated usability/user experience librarians, but that doesn’t mean the rest of us can’t conduct successful, worthwhile usability testing that leads to website improvements. There are plenty of quick usability tests that can be run with just a little time and even less expertise. These studies probably won’t get you the in-depth, fine-grained results that are possible with more involved studies, but they will help you to identify your website’s biggest problems. They’ll also point you in a user-centered direction as you fix those problems.

As an added bonus, the data you gather could have applications beyond the website. Understanding which aspects of using the library give people trouble improves our ability to assist patrons at the desk and is useful to know when preparing instruction sessions. It can also inform library promotion and outreach as you learn more about your patrons’ mental models of the library and its services. If you start by taking a couple minutes to figure out how the results of a usability test will be relevant to your coworkers, it will be much easier to ask those coworkers to collaborate with you on conducting the test(s). After all, the only thing better than an easy and productive usability test is an easy and productive usability test done with help!

If You Have a Couple Hours

Can you scrape together five or six non-consecutive hours to gather data to improve the website? Better still, can you spare those five or six hours twice a year?

If so, consider doing some guerrilla usability testing. Just by walking up to people, asking for three minutes or less of their time, and then posing a few questions, you can gather quite a bit of useful data. For tasks as quick and easy as those involved in guerrilla testing, you probably don’t need to offer any incentives at all. That said, if you want to sweeten the pot for your participants, you can invest in a bag of individually wrapped chocolates or offer some other tiny reward.

Consider these options:

1. Surveys

If you’re curious what your patrons like, dislike, or believe about your website, a survey can help. Bear in mind that a survey won’t tell you where patrons actually struggle on your website. You’ll need to observe website use to find that out, since patrons don’t necessarily realize when they’re not using our sites as we intend and might occasionally exaggerate or downplay their struggles and misunderstandings. Still, it can be extremely useful to know what your patrons believe are the strengths and weaknesses of the site. This can also be a fantastic opportunity to ask questions like what they last used your website for. The results may surprise you.

Some things to keep in mind when running a survey:

Keep Your Survey Short

The shorter your survey is, the less time it takes to get someone to complete it—and the less time it will take for you to analyze the results. Let’s be honest: asking two or three questions may well get you all the data you have time to work with. Plus, it’s a lot easier to recruit patrons to participate if your survey is very short.

If you really can’t decide on only two questions to ask, why not come up with a few different but equally short surveys? You can run them simultaneously, or spread them out over the course of a couple months if that works better for your schedule.

Running Your Survey

Unless you’re able to offer a small incentive or a chance to win something for survey completion, you will likely get more results from running your survey in person. Walk up to people who are sitting down or at least don’t appear to be in a hurry to get somewhere else and ask them if they have three minutes to spare to help make the library website better. (Make sure your survey really does only take three minutes to complete!) Better still, get a student worker or an intern to run the survey, or see if your service desks can hand out surveys to everyone they talk to. You can post your survey online instead of or in addition to this, but don’t be too surprised if you have a very low return rate for the digital version.

At Penfield Library, we like to take our surveys to the campus food court and ask all the groups who are eating lunch to fill them out. During a moderately busy lunch hour, a single librarian usually averages about 20 responses to a two-question, open-ended survey. Happily, 20 responses to a question tend to be enough for us to see the major trends in people’s answers.

Write Good Questions

Entire books have been published about the art of writing good survey/questionnaire questions. For the purposes of librarians with 20,000 other things that need to get done today, here are some basics to keep in mind:

  • Don’t ask a question unless the answer will get you information you need, information that will directly inform your decision making.
  • Avoid leading or biased questions. For example, “Do you agree that the library offers quality reference services?” is a bad question to ask because it’s both leading and closed ended. Consider asking something more like, “How would you describe your experience getting help with your research?”
  • If your questions have multiple-choice answers, be sure the choices make sense and allow everyone to answer honestly. It can be helpful to pilot test your questions with colleagues and students before running the survey.
  • Keep your language simple and avoid jargon.

Try out some questions on a small number of patrons. If you’re not getting the answers that will help you make progress, change your questions and try again. If you want quick question-writing tips, A Simple Guide to Asking Effective Questions is a useful read.

2. First-Click Testing

First-click testing offers a way to gain insight about an interface in order to make design decisions based on data rather than opinion or anecdotal evidence. The concept is simple: show a patron a library web page and give them an imaginary task to complete. Then ask them to show you where they would click to get started with their task.

For example, you might ask a college student where they would find journal articles related to sociology for their Social Work 101 class. After the student makes their first click, the test is complete. Talk about simple!

The premise of first-click testing stems from research indicating that users are much more likely to succeed at a task if they are able to select the correct link or pathway to begin with. According to usability expert Bob Bailey (2013), a user will have an 86 percent chance of completing a website-related task if their first click sends them on the correct pathway. A person’s success rate drops to 46 percent if they click on the incorrect path. While first-click testing is not a cure for the myriad issues that can plague websites, it does provide insight to help you make better web design decisions.

Set an Objective for the Test

Before you begin testing, set an objective to test and decide which page you want users to start from. Do you want to know if patrons can find your databases on the library homepage? Do you want to know how they discover your ebook collection? Do you want to know if the wording you used on your journals page connects with patrons who are trying to check your holdings? Pick one or two things to test with each patron you talk to.

Running the First-Click Test

The easiest way to collect first-click data is to do it in person and the easiest way to do it in person is on paper. Print off a picture of the library’s homepage (or whichever page you want to test) and have the patrons circle where they would click. Completing these tests takes very little time; patrons are usually willing to participate if you make it clear that you will take less than three minutes of their time.

If you’re able to put slightly more effort into your setup, you can run your first-click test digitally with software like Chalkmark. Chalkmark’s free plan lets you test three tasks with as many users as you like. The benefit to running your test this way is that your results will be recorded as a heatmap of where users have clicked—very useful for showing off your results to other librarians or administrators! Another bonus is that you can link patrons to the test from your website, though you’ll probably still get more responses by taking a tablet or mobile device and roving your campus or community.
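If you tally your paper results yourself, even a few lines of code can take the place of hand counting. The sketch below (the click data is hypothetical, just for illustration) counts which region of the page each patron circled for a single task and reports the level of agreement:

```python
from collections import Counter

# Hypothetical first-click results for one task: the region of the
# printed homepage each patron circled.
clicks = ["Databases", "Search box", "Databases", "Research Guides",
          "Databases", "Databases", "Search box", "Databases"]

counts = Counter(clicks)
total = len(clicks)

# Report each region with its share of first clicks, most common first.
for region, n in counts.most_common():
    print(f"{region}: {n}/{total} ({n / total:.0%})")
```

If a majority of first clicks land on the pathway you intended, that is encouraging given the success rates Bailey (2013) reports; a scattered tally suggests the page is not guiding patrons where you expect.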

If You Can Spare a Day or Two for Usability

The world will not end if you never go beyond surveys and first-click testing. If you can make time for more in-depth testing, though, you will see the payoff in richer, more powerful data. It really is worth the effort if you can manage it—improvements to your website will come more quickly.

Recruiting Participants

Because the tasks in this section tend to be longer and more involved, you’ll have to decide whether you’ll have more success asking patrons to participate on the spot or by scheduling patrons to come in at set times. You may choose to do larger or smaller tasks with your participants depending on what makes sense for your recruitment efforts.

If you opt to schedule your tests ahead of time, be prepared to market the session, deal with patrons who don’t show up on time (or at all), and to be generally flexible. Aside from hanging up posters or asking instruction librarians to help with recruitment, it’s also helpful to offer an incentive for patrons to participate in longer usability tests. Tests can range from ten to thirty-plus minutes, depending on the patron and what you’re asking them to do. The incentives don’t have to be extravagant. Can you buy participants a coffee? Waive a fine? Give them a small gift certificate? Anything you’re able to offer will make recruitment easier.

To streamline the scheduling process for your patrons, consider using an online scheduling service that lets them set up their own appointments. This kind of flexibility seems to increase the percentage of patrons who will actually show up to their scheduled appointments. You’ll still want to be sure you have some other work on hand to keep you busy in case of no-shows, though.

If possible, test patrons who represent your target audience for the task. For example, imagine that you want to find out how undergraduate students go about selecting subject databases to find articles. If you were to recruit the graduate students you see studying in the library every day, you may not learn as much as you’d hoped; graduate students may have more experience using library resources than your typical undergraduate does. The sophomore who pops in once a week might be a more fruitful participant for this study.

1. Card Sorting

Do you already know what content you’re going to put on a page, or the labels in a navigational scheme, but you’re not sure how to organize it so users can find things? Don’t just alphabetize that list of links! Card sorting will get you a much more user-friendly answer by showing you what groupings of content and labels make sense in the minds of your users. For an “open card sort,” all you have to do is:

  1. Write down each of the link labels/pieces of content on an index card.
  2. Shuffle the deck.
  3. Ask patrons to sort the cards into any categories that make sense to them.
  4. Ask patrons to name those categories.

Another variation of card sorting is a “closed sort,” where patrons sort cards into categories you define for them at the beginning of the test. This makes it easier to analyze your data afterward (+1 for being easier!), but it also limits patrons to your categories, with all the baggage and assumptions that go along with them (-1 for being less user-centric).

Regardless of whether your sort is open or closed, if you’re running a small enough test, there are free tools that will let you do your card sorting online. OptimalSort, for example, lets you sort up to thirty cards with up to ten participants for free (you can pay to remove these restrictions). Online card sorts have two big benefits: you don’t have to schedule meetings with patrons to conduct the sorts, and the software automatically generates graphs that take care of much of the grunt work involved in turning raw data into meaningful information. Both will save you a lot of time. The main drawback is that unless you have a budget to spend on software, you’re limited to thirty cards and ten participants.

Running Your Card Sort

Some people will sort the cards very quickly and decisively. Others will agonize over every decision, and you may need to reassure them that there are no wrong answers and everything will be fine even if they make a “rushed” decision. These personality differences mean that running a card sort with thirty to forty cards could take anywhere from ten to forty minutes. If you’re doing the sort in person, make sure your schedule allows for this.

Some participants might struggle to understand link labels when they see them out of context like this, no matter how clear the language is. To be certain that everyone understands what each card represents, try writing an explanation/description on the back of each card rather than explaining the labels verbally (Brucker, 2010, p. 51). That way, everyone gets the same explanation, which matters because the explanation could affect how someone sorts the card.

Our experience is that you probably want ten to fifteen participants, at least for an open card sort. For a closed card sort you may start to see trends before then, since you will be dealing only with variations in the content of each category, rather than variations in the categories themselves. Finally, because a card sort requires a substantial time investment from participants, it’s nice to offer a small reward for participation if you have the budget.

Analyzing Your Results

Analyzing the results of a card sort can be tricky. Be prepared to spend time on this analysis, especially if you used physical cards instead of software capable of generating graphs and reports for you. Google Fusion Tables can create many different types of graphs for free once you’ve cleaned up your data in spreadsheet form. Our personal favorite for visualizing card sort data is a network graph, which isn’t an option in Excel; that is why we use Fusion Tables (fig. 1).
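The agreement data behind a network graph boils down to counting how often each pair of cards was placed in the same group. A minimal sketch, assuming you’ve typed each participant’s sort in as a list of groups (the card labels and data below are hypothetical):

```python
from collections import Counter
from itertools import combinations

# Hypothetical results from three open card sorts: each participant's
# sort is a list of groups, and each group is a list of card labels.
sorts = [
    [["Databases", "Journals"], ["Hours", "Directions"]],
    [["Databases", "Journals", "Hours"], ["Directions"]],
    [["Databases", "Journals"], ["Hours"], ["Directions"]],
]

# Count how many participants placed each pair of cards together.
pair_counts = Counter()
for sort in sorts:
    for group in sort:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Keep only pairs grouped together by at least half of participants;
# these are the connections worth drawing in a network graph.
threshold = len(sorts) / 2
for pair, n in pair_counts.most_common():
    if n >= threshold:
        print(pair, n)
```

Each surviving pair is an edge for your graph; the count is how strongly your participants agree that those two items belong together.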

Figure 1. Network graph showing connections between items on SUNY Oswego’s Penfield Library homepage that at least half of card sorters agreed on.

2. Think-Aloud Testing

When it comes to usability testing, the patron’s actual thoughts and decision making are the most difficult data to gather. While surveys, first clicks, and card sorting can be good for gleaning feelings, perceptions, and interactions, none of these methods fully capture patrons’ actual thought processes when performing a task on a website. Since telepathy or crawling inside their brains is not an option (yet!), try the next best thing: a think-aloud test.

Think-aloud testing is easy to run and offers data that can be persuasive even to skeptical colleagues. Essentially, the patron says aloud what they are thinking as they navigate through a series of website tasks. During this test, the librarian keeps quiet except to remind the patron to articulate their thoughts if they go silent. It can be painful to watch a patron struggle through a task, but helping them defeats the very purpose of this test.

While easy to conduct, there is some advance legwork to the think-aloud test that may take a few hours. Since this testing can produce such meaningful yields, it’s worth the initial investment of time: three to five think-aloud tests with your patrons should turn up plenty of fixes you want to make on your library website.

Designing the Tasks

When you’re coming up with tasks for your patrons, stay focused on gathering data you can act on. Avoid asking anyone to do more than five or six key tasks; that way the patrons won’t get tired or frustrated. The tasks don’t need to fall under a single theme but they should be related to actual patron needs and help to answer your questions about the website. Ask other librarians for task ideas; that way you’ll better understand their concerns about the website and you might even get them interested in usability testing.

Setting Up the Think Aloud

The think-aloud test can be done either in person or online. Before you start testing, do some pilot testing and make sure you’re comfortable guiding participants through the test. By its nature the think-aloud test is an unnatural social situation, so building some quick rapport with the patron will make it easier for both of you.

For an in-person think aloud, try to conduct the test in a quiet, semi-private space. Keep in mind that not every patron will be comfortable sitting in a fully private space with a complete stranger. You will also want to make sure the computer you use is fully functional to minimize the patron’s distractions.

For online think-aloud tests, you can use conferencing software with a screen sharing ability. Google Hangouts is a great, free option. Whatever you choose, make sure you pilot it ahead of time so you can anticipate or preempt technology issues like outdated plugins, etc.

Recording the Data

Before the think-aloud test, decide how you’ll record the data. It’s best practice to let the patron know that you are recording their responses, regardless of how you record the test.

Some of the options for recording a think-aloud test include:

  • Take careful, handwritten notes. This is the cheapest way to record the session, but you may miss essential details, especially if the patron works quickly through the tasks.
  • Record the think aloud using video or screencasting software. This is an easy way to ensure you capture the entire think aloud. Just make sure to use a microphone to capture the audio. You don’t have to use a high-end product to capture the think aloud but be aware of the time limits on some free options. You don’t want to disrupt the patron’s train of thought by fiddling with the software.
  • For online think-aloud tests, keep in mind that conferencing software usually has the ability to save the session.

If you’re still worried about whether your think-aloud sessions will run smoothly, check out a copy of Steve Krug’s Rocket Surgery Made Easy. Krug provides step-by-step instructions for the whole process from planning to analyzing, including scripts you can follow.

Making Sense of Your Results

Regardless of the test you use to collect usability data, you will need to interpret your findings. If you’re a one-person usability team with lots of other responsibilities—or even if you’ve successfully roped in a few of your colleagues—you’re probably not going to have as much time as you’d like to spend wrangling usability data. Never fear! You don’t have to apply advanced statistical methods to your data to learn useful information. The biggest thing you want to do with your data is simple: look for trends.

Analyze Your Results

For survey results, read through all the answers by question rather than by participant. That is, read every participant’s answer to the first question, and then every participant’s answer to the second question, etc. Are there common themes? Type those themes into a Word document as headings, and copy/paste the relevant answers beneath them. Read through them again now that they’re organized, and there’s a strong chance you’ll see a direction you want to take in improving your website.
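If you’d rather start in a spreadsheet or script than a Word document, a rough first pass at the same theme grouping can be automated with keyword matching. The themes, keywords, and answers below are hypothetical; this only surfaces candidates for the close reading described above, it doesn’t replace it:

```python
# Hypothetical open-ended survey answers.
answers = [
    "I couldn't find the databases link",
    "The hours are hard to find",
    "Searching for articles was confusing",
    "I just wanted to check the library hours",
]

# Hypothetical themes, each with keywords you noticed while reading.
themes = {
    "Finding databases/articles": ["database", "article", "search"],
    "Hours": ["hours"],
}

# Group each answer under every theme whose keywords it mentions.
grouped = {theme: [] for theme in themes}
for answer in answers:
    for theme, keywords in themes.items():
        if any(kw in answer.lower() for kw in keywords):
            grouped[theme].append(answer)

for theme, matched in grouped.items():
    print(theme, len(matched))
```

Answers that match no theme are worth reading individually; they may point to a theme you hadn’t thought of.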

You can treat most of the other usability tests we mentioned in this article the same way. Look for the patterns in your first-click tests or your think-aloud sessions. For card sorting, if you don’t have time to fight with spreadsheets of data, even just a look at which cards appear frequently with which other cards can be useful. While a fuller analysis can provide more insight, the bottom line is that if your data provides you with a user-centered direction to move in, you’ve achieved something worthwhile.

Share Your Results

Once you have patterns to report—most users fail to figure out how to pay their fines online, or they’re failing to distinguish between journals and databases, or whatever the stumbling blocks are—draw your coworkers into a discussion of what comes next. You never know when someone who works in a different capacity might be able to point you toward a solution for seemingly intractable usability problems.

Similarly, you never know when you’ll be able to reassure your coworkers about something that worries them. It’s a lot harder for someone to raise a stink about the wording of a link on the homepage if you bring back data showing that actual patrons understand that wording and use it successfully. By sharing what you’ve learned from usability testing, it’s often possible to tone down opinion-based arguments over the website.

Fix What You Can

Above all else, don’t worry if you’re not able to fix every problem users have right away. Fix what you can, and keep track of the remaining issues in case the opportunity to correct them arises later on. Part of the beauty of quick, low-cost usability testing is that you can find ways to fit it into your schedule and budget on a recurring basis. Even if you can only squeeze this in once a year, think long term. Eventually, small fixes can add up to big change.


References

  • Bailey, B. (2013). Firstclick usability testing. Retrieved from
  • Brucker, J. (2010). Playing with a bad deck: The caveats of card sorting as a web site redesign tool. Journal of Hospital Librarianship, 10, 41–53.