Easier and wider access to information is often touted as a primary benefit of digital libraries. Claims such as "the world's information at your fingertips" and "desktop access to entire library collections" are commonplace. This paper examines the nature of access to information resources and the relationship of access to use, allowing us to consider how these alluring visions of easy information availability might be achieved. Measuring and interpreting access and use data within a digital library is complex, however, and the lack of standard metrics across systems makes it especially difficult to develop explanatory frameworks related to digital-library use.

The paper draws on work associated with the NSF/DARPA/NASA Digital Libraries Initiative (DLI) project at the University of Illinois, which ran from 1994 to 1998 (see Schatz et al. 1996; Schatz et al. 1999, in press; and the project homepage at http://dli.grainger.uiuc.edu/idli/idli.htm). DeLIver, a testbed collection of journal articles mounted on the Web, contains the full text of recent articles from over 50 scientific and technical journals, primarily in the disciplines of engineering, computer science, and physics. While the DLI project has officially ended, the testbed remains available to campus users. The University of Illinois DLI Social Science Team has performed various kinds of user studies to improve our testbed design and document use, and to develop an understanding of faculty and student work and communication practices in the changing information infrastructure. (Visit the Social Science Team Homepage, http://forseti.grainger.uiuc.edu/dlisoc/socsci_site/index.html, to access internal reports related to some of our studies, as well as a list of our publications. See Neumann and Bishop (forthcoming) for further discussion of methods and findings related to DeLIver use.) Throughout the DLI project, we have tried to integrate our research on user needs and behavior with the work of the Testbed Development Team, the group charged with designing and implementing DeLIver.

This paper describes design and evaluation activities that accompanied the implementation of DeLIver across the University of Illinois campus. It concentrates on one particular barrier to use: the difficulties potential users have in logging in to a digital library. To gain access to DeLIver, potential users are required to submit their university network identification number (NetID) for authentication. Users must then complete a registration questionnaire that collects basic identification, demographic, and occupational data and informs them of how their data will be used.

Authentication and registration procedures presented an enormous barrier to use, and one that was largely unanticipated. While generally this barrier has not received much attention in published reports of digital library user studies, its importance as a determinant of use should not be underestimated. Our experiences confirm the judgment expressed by a Project ELVYN researcher reporting on user needs: "It is critical that barriers to use are minimal: if there is too much in the way of logging on, passwords, etc., people will not be bothered" (Rowland, McKnight, and Meadows 1995, 89).

Accessibility (along with perceived accessibility) of an information resource is widely accepted as a primary determinant of the extent of its use (Baker and Lancaster 1991, 27-38; Lancaster 1995b). Poole (1985) did a meta-analysis of empirical work on the relationship of accessibility to use. He found that the propositions on information use most commonly validated in empirical studies were that information-channel use is a function of channel accessibility, perceived cost, and user awareness (119-122). Accessibility of information resources is usually assumed to depend on a range of cognitive, social, and physical factors, such as whether a person is aware of a resource, has the knowledge and skills needed to access it, and has the resource close at hand. Culnan (1985) explored dimensions of accessibility and, in analyzing people's perceptions of physical access to information resources in different venues, equated the ability to enter a library with the ability to gain access to a computer terminal and find the system "up." In addition to the availability of a working system, however, people need special knowledge about how to use computer-based information systems. Borgman (1996) concludes that online public-access information-retrieval systems are hard for people to use because they require a combination of basic computing skills and knowledge of how to formulate and execute searches.

Reports of the usage of digital libraries comprised of electronic-journal collections confirm the range and importance of access factors that affect system use. (See Table 1, below, for information about selected electronic journal R & D projects and some of the usage studies associated with them.)

"The variety of ways a system can be 'down' seems only to increase with time"

In baseline studies associated with the SuperJournal project — which provides Web access to about 50 journals in several disciplines in the sciences and social sciences — Baldwin (1998) reports that researchers in academia cited immediate, guaranteed, and convenient access to electronic journals as among the most critical success factors for an online journal system. Similarly, faculty and students who participated in needs-assessment focus groups for our DLI project emphasized the importance of convenient, comfortable, easy, and inviting access to online-journal collections (Bishop et al. 1996). The final report on the TULIP project, in which Elsevier and university libraries explored the delivery of online journals in materials science (Borghuis et al. 1996), associated system use with several access factors. Poor technical infrastructure was deemed the cause of low use at one university, while high use at another was attributed to the fact that its TULIP implementation was a "logical extension" of an existing online-library service that had previously provided access to bibliographic data only (50). Integration with an existing service facilitates access in several ways: people are more likely to become aware of the new service and more likely to be familiar with at least some of the procedures associated with its use. Entlich et al. (1996, 107) found that applicants to the CORE system (a pilot project offering electronic access to 20 journals published by the American Chemical Society) failed to become active users because they lacked convenient access to computers with the features needed to use the system productively. McKnight (1997, 1) reports that the Birmingham and Loughborough Electronic Network Development (BLEND) electronic-journal project found that the frequency with which people used the system depended on their distance from the nearest terminal. Rowland, Bell, and Falconer (1998) note, in their draft paper on factors affecting electronic-journal acceptance, that requiring users to download and store software for viewing the full text of articles can limit access to and use of electronic journals.

Table 1
Electronic Journal R & D Projects
Chemistry Online Retrieval Experiment (CORE). Timeframe: 1990-1995. Collection: 10 years of 20 journals published by the American Chemical Society. Information sources: Entlich et al. 1996; Stewart 1996.

The University Licensing Program (TULIP). Timeframe: 1991-1995. Collection: 43 Elsevier and Pergamon materials science and engineering journals. Information sources: Borghuis et al. 1996.

Red Sage. Timeframe: 1992-1996. Collection: 19 publishers and 71 journals in medicine, radiology, and molecular biology. Information sources: Arnold, Badger, and Lucier 1997.

University of Illinois DLI project (DeLIver). Timeframe: 1994-1998. Collection: over 50 journals in engineering, physics, and computer science. Information sources: Schatz et al. 1996; Schatz et al. 1999, in press; Neumann and Bishop forthcoming; DLI Web site.

Project MUSE. Timeframe: 1994- . Collection: over 40 journals in the humanities, social sciences, and mathematics published by Johns Hopkins University Press. Information sources: Neal 1997; Project MUSE Web site.

SuperJournal. Timeframe: 1990-1995. Collection: some 50 journals in communication and cultural studies, molecular genetics and proteins, political science, and polymer physics. Information sources: Baldwin 1998; SuperJournal Web site.

Journal Storage (JSTOR). Timeframe: 1995- . Collection: archival issues of about 60 journals primarily in the humanities and social sciences. Information sources: Garlock, Landis, and Piontek 1997; Guthrie 1997; JSTOR Web site.

Accessibility

Accessibility is an issue that permeates various stages of system use, from logging in to acquiring a desired document to read. This paper focuses on accessibility in the initial contact users have with a system: logging in and registering. Here, convenience and ease of use are especially important factors. If users experience significant barriers in the form of registration and log-in procedures that are too time-consuming or difficult to complete, they may abandon their attempts to use a system. Attempts to complete registration and authentication procedures can be frustrated by lack of needed expertise or tools, inadequate instructions, or breakdowns in network connections. The variety of ways a system can be "down" seems only to increase with time (Bishop 1995, 449-451). Some electronic-journal systems — typically those that are available at no cost and are not established as research or pilot projects intended to test the system itself — do not demand authentication and the collection of personal data from users. System managers who wish either to limit or to study use, however, will have to consider the trade-offs between accessibility and usage presented by various approaches. Some techniques may be less intrusive than others, and in some cases encouraging use may take precedence over the need to either gather information about users or keep unauthorized users out.

Erecting Barriers to Access

Moving from needs assessment and usability testing to the deployment of DeLIver, the Social Science and Testbed Development Teams were faced with a new set of problems in encouraging, charting, and understanding access and use in real work settings. How should we gauge success? How many users are enough? How do we explain the degree of use DeLIver receives?

The trial version of DeLIver was tested in the Summer of 1997, with October 1997 as the official "roll-out" date for users across our campus. To access DeLIver, prospective users must first enter their University of Illinois network identification number in an online "NetID form." That allows us to offer the publishers of material in the testbed reasonable assurance that access is restricted to campus affiliates. In addition, prospective users must complete a registration form that provides us with the basic data on who is using DeLIver.

We faced a period of initial panic when we consulted our Web-log data on user attempts to access DeLIver during the first several weeks of our full public roll-out across campus. For the period from November 1-14, 1997, we learned that:

  • Of 1,540 attempted accesses, 1,276 (83%) were abandoned at the NetID form
  • Of the 186 people who entered their NetID, 91 (49%) stopped at the registration form

Our first task, then, was to address the barriers to use presented by these two forms and figure out how to gain maximum use in the future. The Social Science and Testbed Teams met to discuss options and their implications. Four basic alternatives were proposed:

  1. Simplify the functionality of the search interface by removing all advanced-search features, so that those people who did enter their network identification numbers and proceed through the registration process would be more likely to actually use the system. The basic idea here was to make up for usage lost in the access stage by increasing usage at the search stage.

  2. Remove our authentication and registration processes completely, offering, in effect, a "free sample" of DeLIver during the last few weeks of the semester, when several project members anticipated heavy demand for the system.

  3. Streamline the authentication and registration processes. We began thinking about how to reduce the user's burden by shortening and clarifying these two forms.

  4. Step up publicity around anticipated hubs of use. We reasoned that if we could market the system more aggressively to those people most likely to need it, we could attract a greater number of users who would be willing to complete our authentication and registration procedures and become active system users. The mechanisms we thought would be most successful were visiting classes in appropriate disciplines and affixing stickers to the paper copies of journals announcing that Web versions of their articles were available.

We decided to try the third and fourth alternatives before pursuing more drastic measures. The different alternatives demonstrated that project participants had different answers to the seemingly simple question of why we would want to reduce barriers to access and use. Those varying responses seemed to reflect the odd placement of our DLI testbed as a hybrid research/demonstration/production system. The hybrid nature of DeLIver is similar to the character of many other digital-library and electronic-publishing ventures. Like the Red Sage, TULIP, SuperJournal, CORE, JSTOR, and MUSE projects, DeLIver includes in its aims the three sometimes-conflicting desires:

  • to contribute to technical and social research about electronic journals
  • to demonstrate the feasibility of electronic publishing systems to publishers, libraries, or individual readers
  • to implement production systems in real settings

In our project, those who viewed DeLIver as a production system wanted to "make the system better" by making it easier to access and use. Those who viewed it as a demonstration system were more concerned that we generate high use in order to show that the system worked and was perceived by users as valuable. And those who viewed our testbed as a research system emphasized the goal of gaining more users so that we could learn more about digital libraries and their use. Of course, it was really a matter of recognizing the need to pursue all of these goals and then negotiating among them in the selection of specific strategies for eliminating barriers to access and use.

"The 'just surfing' phenomenon raises the important question of how to measure and interpret attempted and actual uses in digital libraries"

In order to select from among the possible alternatives, the Social Science and Testbed Development Teams pooled their expertise and experiences to identify the most-likely explanations for both the limited number of attempted accesses and the high percentage of people who were bailing out when confronted with our NetID and registration forms. A number of possibilities were proposed, based primarily on results from our usability tests and needs assessment user interviews. (Reports of those results are collected on the DLI Social Science Team Homepage at http://forseti.grainger.uiuc.edu/dlisoc/socsci_site/internal-reports.html.)

We attributed low usage and abandoned access to the following eight reasons:

  1. Many potential users do not know about the system.

  2. People who come across the system on the Web but are not University of Illinois affiliates bail out when asked for their University NetID.

  3. People experience technical problems with access. Some Web browsers do not support the "cookies" (a mechanism for storing information about an individual user's interaction with a system) required for submitting DeLIver access forms.

  4. Some people think that registration forms on the Web signal a fee-based service, so they abandon their attempted uses as soon as they see a registration form.

  5. Privacy: People are concerned about the increasing amount of information about them that is being stored electronically by various systems, so they refuse to fill out a registration form.

  6. People do not understand what their NetID is, so they cannot complete our NetID form.

  7. The length of the registration form: People do not mind completing a registration form, but only if it doesn't take too long.

  8. Lack of real need: People happen upon DeLIver when surfing the Web; they have no strong need to use the system, so they do not bother to get through roadblocks like our authentication and registration processes.

The effect of awareness on access and, hence, use has been noted in a number of studies of both print and digital collections. Baker and Lancaster (1991, 103) provide evidence of the increased use of books due to promotion through displays, book lists, etc. They also note the importance of promotion and perceived accessibility in determining the extent of use of database search services (276-277). Lack of awareness is further cited as a primary reason for low use of online full-text collections (Rowland, McKnight, and Meadows 1995). Conversely, high use of the TULIP service at one university was attributed to heavy promotion of the system to its intended users. The TULIP final report, in fact, places significant emphasis on both the importance and difficulty of promotion for full-text electronic-journal collections (Borghuis et al. 1996).

Lack of real need — what can be called the "just surfing" phenomenon — is suggested by a study of online museum-gallery visitors. In a user survey, the most commonly reported response to how the site was discovered was, "Just happened upon it while using the Web" (Bishop and Squier, 1995). Though most digital libraries developed today include Web access, this issue has not received significant attention in the literature. Nonetheless, the "just surfing" phenomenon raises the important question of how to measure and interpret attempted and actual uses in digital libraries.

Measures of Access and Use

The traditional measures of library use (Baker and Lancaster 1991; Lancaster 1993; Van House et al. 1987) that seem most relevant to digital-library use include calculating the extent of use of the library, its collections, and its search systems, as outlined below:

  • Library Use: the number of library patrons as a percent of the target audience; the number of library visits — or visits for a particular purpose — in a given time frame

  • Materials Use: the number of times a particular item has circulated or been used on site

  • Materials Access (or "fill rate"): the percent of items sought — by title, subject, author searches, or browsing — that were actually obtained

  • Library-System Use: percent of patrons who use the system; number of searches performed (often by type of search, e.g., author, title, subject term, or keyword); number of "hits" or bibliographic citations returned by the system; percent of items identified through searches that were actually obtained.

Can such metrics be employed to gauge use and effectiveness of digital libraries? Lancaster (1995a) postulates that even though the digital library is conceptually different from a traditional library, "user objectives and evaluation criteria will not change in substance." One group currently exploring the development of an appropriate set of metrics to evaluate and compare the effectiveness of digital libraries is the D-Lib Working Group on Digital Library Metrics. (The group's charter and several position papers are posted on their Website at http://www.dlib.org/metrics/public/index.html.)

We can propose digital-library equivalents for traditional access and use measures. In the "library use" category, the relative number of library users is easily transferred to the digital realm as the number of registered system users. The "number of library visits" might be equivalent to the number of hits logged on a digital library's home page. Materials access and use rates may be equated with the number of documents viewed, or perhaps printed, in a digital library. Metrics for system use seem equally appropriate to online public-access catalogs, abstracting and indexing services, or online-journal search systems. The Red Sage final report (Arnold, Badger, and Lucier 1997) presents data in several of these access and use categories, including the number of articles and pages accessed, print jobs, searches performed, and sessions. Transaction logs for DeLIver tally more specific data on user actions, such as the type of search performed and the types of results screens viewed. (See Borgman, Hirsh, and Hiller (1996) for a description and discussion of online-monitoring procedures for collecting such measures of system use.)
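To make these equivalences concrete, the sketch below tallies a few of the proposed measures from a simplified transaction log. The record format, field names, and sample figures are illustrative assumptions, not the actual DeLIver log schema or data.

```python
from collections import Counter

# Hypothetical, simplified log records; real transaction logs capture more
# detail (e.g., the types of results screens a user viewed).
log = [
    {"user": "u01", "action": "home_page_hit"},
    {"user": "u01", "action": "search", "search_type": "keyword"},
    {"user": "u01", "action": "view_document"},
    {"user": "u02", "action": "home_page_hit"},
    {"user": "u02", "action": "search", "search_type": "author"},
    {"user": "u02", "action": "print_document"},
]

registered_users = 750    # illustrative count of registered system users
target_audience = 2320    # illustrative size of the intended audience

# Digital-library analogues of the traditional measures discussed above.
visits = sum(1 for r in log if r["action"] == "home_page_hit")
documents_used = sum(1 for r in log if r["action"] in ("view_document", "print_document"))
searches_by_type = Counter(r["search_type"] for r in log if r["action"] == "search")

print(f"Registered users as % of target audience: {registered_users / target_audience:.0%}")
print(f"'Library visits' (home-page hits): {visits}")
print(f"Documents viewed or printed: {documents_used}")
print(f"Searches by type: {dict(searches_by_type)}")
```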

"We cannot call a digital-library or electronic-publishing system a success if we cannot measure and interpret its use"

Interpreting usage data is difficult, whether looking at traditional or digital libraries. It is often hard to guarantee the validity of a particular use measure — to be assured, in other words, that you are really measuring what you think you are measuring. It is also difficult to take usage statistics and derive explanations from them or to arrive at conclusions of the value or impact of a service based on those usage statistics alone. For that reason, we also employ other means of studying use and users — like observation and interviews — in order to develop a picture of the context of use, gauge impact, etc. One particular problem we considered in evaluating DeLIver was the difficulty of gauging purposeful use in our testbed. Reference librarians in academic libraries often tally patrons' questions and can easily sort out questions that do not represent use of library materials, such as "Where is the drinking fountain?" But in interpreting access data for DeLIver, we have no way of ascertaining whether an attempted access is really associated with an intended use. In some cases, the person who comes across the DeLIver Website may just be surfing the Web and stopping at any site that looks mildly interesting. If that person abandons her access attempt, to what extent should that be considered a thwarted use?

Because of the various purposes associated with DeLIver as either a research, demonstration, or production system, we had another level of complexity to deal with. People logging in to test the system for research purposes or to demonstrate the system to others should not be counted as "real users" if we are measuring against a typical library's use. Those users should be tagged in the transaction logs so that the analysis is not affected by them. But if the goal is to explore the hybrid nature of digital library access and use, research and demonstration uses should be logged and analyzed. And the case of a new user "playing" with the system to get a sense of its contents and how to use it is harder to categorize and account for when measuring system use. Trial-and-error learning may be thought of as an important category of use, and indeed Neumann and Ignacio (1998, in press) devoted their paper solely to a discussion of trial-and-error learning in DeLIver. If, as Borgman (1996) suggests, learning to use online-library systems requires a combination of basic computing skills and knowledge of how to formulate and execute searches, users' attempts to develop and try out these skills are worthy of observation and study.

We cannot call a digital library or electronic-publishing system a success if we cannot measure and interpret its use. Even if we record the number of users and uses of DeLIver, what do the numbers mean? How much use of their material should occur before our publishers can deem their electronic-publishing ventures a success? Selecting the basis for calculating percent of use according to the number of people in the intended audience may be difficult in both theory and practice. And interpreting the resulting measure of degree of penetration is even thornier.

Who Was Our Audience?

In approaching this issue, we considered what we already knew about DeLIver users from our earlier studies. We also drew on our accumulated knowledge about library use among scientists and engineers generally. First, we concluded from focus groups and interviews that graduate students and faculty members were the most appropriate users for DeLIver's content. Thus we reasoned that one measure of our success in attracting use would be to look at the number of graduate students and faculty members registered to use DeLIver as a percent of the total number of graduate students and faculty in disciplines relevant to the material in our collection — engineering, physics, and computer science. In our case, that meant that we were looking at a prospective user population of about 1,900 graduate students and 420 faculty members. Registration data gathered as of August 1998 reveal that about 35% of graduate students and 21% of faculty members in relevant disciplines registered to use DeLIver.

Other digital-library projects have utilized various metrics to gauge audience penetration and degree of use. The TULIP final report develops and presents measures of degree of penetration for each institution offering the system (Borghuis et al. 1996). "Degree of penetration" ranged from about 1% to over 50% and was calculated as the number of repeat users (faculty or graduate students) divided by the number of potential users at a particular site (41). A "repeat user" is defined as a user (faculty or graduate student) who used TULIP in at least two months within a six-month period (41). About 47% of all people to whom CORE accounts were distributed accessed the system and performed a "loggable action" at least once (Entlich et al. 1996, 106). In a report on SuperJournal use, Baldwin (1998, 2) reports that repeat usage was about 30%. Red Sage reported that, throughout the course of the project, about 20-25% of all registered users were "active" (Arnold, Badger, and Lucier 1997, Figure 4). In a survey distributed to all registered users of DeLIver in Spring 1998 (N=950; response rate = 25%), we found that about 19% of respondents reported having used DeLIver only once, while 81% indicated that they were repeat users (65% reported having used DeLIver 2-9 times; 16% reported ten or more uses).
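Because projects divide by different denominators, the same underlying counts can yield very different percentages. The sketch below contrasts the TULIP-style penetration measure (repeat users over all potential users) with repeat use as a share of registered users; the function names and sample counts are illustrative assumptions, not figures reported by any of the projects above.

```python
def penetration_rate(repeat_users: int, potential_users: int) -> float:
    """TULIP-style degree of penetration: repeat users divided by all
    potential users (faculty plus graduate students) at a site."""
    return repeat_users / potential_users

def repeat_share(repeat_users: int, all_users: int) -> float:
    """Repeat users as a share of everyone who actually used the system,
    the basis that some other projects report instead."""
    return repeat_users / all_users

# Illustrative counts only: a site with 2,320 potential users,
# 750 registered users, and 230 repeat users.
potential, registered, repeaters = 2320, 750, 230

print(f"Penetration (repeat / potential users): {penetration_rate(repeaters, potential):.1%}")
print(f"Repeat share (repeat / registered users): {repeat_share(repeaters, registered):.1%}")
```

The same 230 repeat users yield roughly 10% by the first definition and roughly 31% by the second, which is one reason the figures in the preceding paragraph cannot be compared directly.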

"We considered what we knew about the reading habits of our potential users"

Because these projects use different definitions and different methods of gathering and calculating measures of use, it is hard to compare and interpret their usage statistics. The TULIP project, for example, compares the number of repeat users to the entire population of potential users; other projects report the number of repeat users compared to all users. Some basic conclusions can be reached, however:

  • Electronic-journal systems are not used by the majority of people in their target audiences, at least not within the first year or so of a system's implementation.

  • Electronic-journal systems are used more by students than by faculty. Along with DeLIver, both CORE (Entlich et al. 1996, 107) and TULIP (Borghuis et al. 1996, 73) reported greater use by students than by faculty.

In considering how to measure and interpret use of DeLIver, we also thought carefully about library and journal use among those in our potential pool of users. Research on the information-seeking habits of scientists and engineers provides some context for data on the use of DeLIver. Pinelli et al. (1997, Part B, 543-544) report, for example, that aerospace-engineering faculty used the library an average of seventeen times in six months. They also used journals an average of twenty-one times in six months. Griffiths and King (1993, 113) report that 62% of the journals that scientists and engineers read are personal subscriptions, and 24% are from the library. In terms of physical access, visiting a digital library may be more like consulting a personal copy than a library copy, for which one has to leave the office and visit the library.

We also considered what we knew about the reading habits and patterns of our potential users. First, how many of those graduate students — our heaviest users — regularly read the journals in DeLIver? Of those who do, how many of them already have personal or office subscriptions to the paper issues, making access through DeLIver from their desks less of an imperative? For those who would benefit most from online access at their desks, with what frequency and at what times of year do they normally search for or browse through journals? Many graduate students who participated in our interviews, for example, noted that they only consulted journals at the beginning or end of a project, or at a few key junctures in the semester. Several of our system designers thought that DeLIver usage would be extremely high in the final weeks of the Fall semester and were especially disappointed by the low number of attempted accesses during that time. However, the librarians on our project and Social Science Team members who had interviewed faculty and students countered that during that period people were either studying for exams or completing projects for which they had already gathered literature.

The calculation of the number of attempted accesses for DeLIver, then, should be viewed within the broader context of the size of our anticipated audience and knowledge about the nature and patterns of use of similar services. In attempting to put our statistics on attempted access and use into an appropriate context, additional comparative data would be helpful. It would be valuable to compare the number of graduate students who tried to search DeLIver to the number of graduate students who sought and used the same paper journals within similar timeframes. It would also help to compare the number of students who registered to use DeLIver to the number of students who have used our library's online catalog and other computer-based information systems. While we do not currently have those specific data, our user survey provides us with some basis for comparison. About 21% of respondents said they used computer-based library systems twice a semester or less. About 88% said they used the Web every day. About half consult science and technology journals, and visit the library, once or twice a month or less. Finally, we noted that only 280 people had registered for the custom-client version of DeLIver that ran on designated terminals in the library from August 1996 to October 1997, whereas over 1,400 people registered to use DeLIver between October 1997 and August 1998. So, despite DeLIver's access and use barriers, we have seen a substantial increase in the use of our system since its migration to the Web.

"While the numbers are not definitive, the trends they reflect are real"

Another important consideration in interpreting levels of attempted use is the sort of trajectory that is typical for the adoption of any new information system. Based on our earlier interactions with potential users, we reasoned that it takes more than just a few weeks, or even a few months, to become aware of a new library system and to learn enough about it to be comfortable with its use. In addition, librarians whom we interviewed commented that use would probably be limited because of potential users' lack of commitment to any system portrayed as an "experimental" service. They said that neither they nor students and professors would be willing to invest the amount of time needed to become familiar with a new system if they felt that the system would be removed when the project supporting it ended. In general, the learning curve for DeLIver seems steep, especially given the relatively infrequent use an individual would be likely to make of it because of the limited extent of its collection.

Based on our interviews and observations, we also concluded that any new system, no matter how carefully designed to meet user needs, disrupts established work and information-seeking practices, so the transition takes time. Baldwin (1998, 2) notes that SuperJournal usage is building slowly. The number of searches performed in Red Sage rose from 6,000 in 1994 to over 13,000 in 1996 (Arnold, Badger, and Lucier 1997, Table 1). Here, again, comparison and interpretation of such reports is difficult because, for instance, the speed with which new documents and users are added to a system affects the rate of use.

Addressing Access Barriers

Once we identified key barriers to the accessibility of DeLIver, and reasonable approaches to removing them, we set about stepping up our publicity and revising our access procedures. Presented below in Table 2 is a rough summary of the changes we made and the resulting statistics for subsequent user access attempts. The time periods presented in the table are not uniform, and the actions taken did not necessarily occur at the beginning of their respective time periods. The figures were originally generated for informal, internal use only. They allowed University of Illinois DLI project members to get a basic sense of access activity over time. They do not provide definitive data on the effects of specific actions. But while the numbers are not definitive, the trends they reflect are real. Our changes made a difference in use of the system.

The first change that DeLIver designers implemented, in December 1997, was to revise the text of the NetID form to provide basic information about the nature and contents of DeLIver, and to emphasize that system use was free. In earlier versions of the interface, potential users were faced with a NetID login screen that demanded their identification number but told them nothing about the system itself. To encourage potential users to complete the login procedure, designers therefore added a few sentences intended to demonstrate why it would be worthwhile to continue. The new text noted that the system contained recent articles from over 50 science and technology journals and that its use was free. In addition, the word "DeLIver" was changed to a link to our "About DeLIver" Web page, so that people could easily get further information about the system.

Designers also added an explanation of the term "NetID" and a picture of the NetID as it appears on a campus identification card (see Figure 1, below). These changes to the NetID form appear to have facilitated user access, since the percentage of people bailing out at the NetID form subsequently dropped by more than 25 percentage points. The percentage of people who abandoned their access attempts at the registration page also fell by about 16 percentage points, perhaps because reassurance at the authentication stage carried over to the registration stage.

Figure 1. The revised NetID form, with an explanation of the term "NetID" and a picture of the NetID as it appears on a campus identification card.

No major changes in our online-access procedures were made until the end of January 1998, although promotion was increased by pasting DeLIver announcements on the paper journals in the Engineering Library that were also available on the Web. During that period, successful negotiation of the NetID form increased by another 13 percentage points. We did not track access attempts by user, so it is possible that users made a number of attempts to complete the online forms over the course of several weeks and were finally successful after a few tries. Or perhaps users were more motivated to complete the NetID authentication process because accessing the material was more important to their work at the beginning of the semester. However, it may have been the stickers on the journals that led to an increase in successful completion of the login procedure.

From the last week of January through February 18, we introduced further revisions to our authentication procedures. New users were told explicitly that they had to register. If they entered a NetID and it was rejected, they were asked to either re-enter their NetID (in case they had mistyped it the first time) or complete a registration form. From the basic access statistics we generated, it does not appear that this change had any effect on users' success in completing the NetID authentication process, although, once again, the percentage of people who subsequently bailed out at the registration form decreased.

The final major revision was to shorten the registration form that all DeLIver users were required to complete (see Figure 2, below). We changed it both cosmetically and substantively. We improved the design of our form so that it appeared shorter. We also removed questions that we decided were not essential for our user studies, such as those asking about users' current hardware and software, and their use of various computer services. (Many of those questions were transferred to the user survey we distributed in Spring 1998.)

Figure 2. The shortened registration form.

Shortening the registration form did not increase the proportion of people who completed it. That surprised us, since users had complained earlier about the length of the form. Perhaps even greater cuts are needed before any significant effect will be seen.

Table 2, below, summarizes the changes we made in DeLIver access procedures and their effects. The table also shows differences in the number of attempted accesses at different times of the year, which suggests the value of ascertaining the natural rhythms of library and journal use in order to interpret digital-library use statistics.

Table 2
Summary of Actions Taken to Facilitate Access, and Access Statistics
November 1 - 14, 1997. Action taken: none (baseline). Of 1,540 attempted accesses, 1,276 (83%) stopped at the NetID form; of 186 people who entered a NetID, 91 (49%) stopped at the registration form.

November 15 - 23, 1997. Action taken: none (baseline). Of 1,158 attempted accesses, 956 (83%) stopped at the NetID form; of 146 people who entered a NetID, 68 (47%) stopped at the registration form.

December 9 - 19, 1997. Action taken: NetID form revised. Of 462 attempted accesses, 259 (56%) stopped at the NetID form; of 113 people who entered a NetID, 35 (31%) stopped at the registration form.

January 1 - 23, 1998. Action taken: notices of Web availability attached to print journals. Of 560 attempted accesses, 240 (43%) stopped at the NetID form; of 182 people who entered a NetID, 49 (27%) stopped at the registration form.

January 23 - February 18, 1998. Action taken: authentication process clarified. Of 1,238 attempted accesses, 500 (40%) stopped at the NetID form; of 368 people who entered a NetID, 62 (17%) stopped at the registration form.

February 18 - April 9, 1998. Action taken: registration form shortened. Of 1,978 attempted accesses, 750 (38%) stopped at the NetID form; of 571 people who entered a NetID, 162 (28%) stopped at the registration form.

The data presented in the table suggest that some actions — such as providing basic information about a digital library's characteristics (i.e., its cost and contents) and clarifying access instructions — are more important than others in facilitating access. It appears that our efforts to improve access procedures paid off: The rate of successful access attempts rose from 17% to 62%; the rate of successful registration completions rose from 51% to 72%. Even with these gains, however, it is obvious that registration and log-in procedures are still major bottlenecks for users of DeLIver and, presumably, other systems.
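As a quick check on those figures, the fragment below recomputes the two success rates directly from the baseline and final rows of Table 2, using only the counts shown in the table.

```python
# Counts taken from Table 2 (baseline: November 1-14, 1997;
# final period: February 18 - April 9, 1998).
periods = {
    "baseline": {"attempts": 1540, "stopped_at_netid": 1276,
                 "entered_netid": 186, "stopped_at_registration": 91},
    "final":    {"attempts": 1978, "stopped_at_netid": 750,
                 "entered_netid": 571, "stopped_at_registration": 162},
}

for name, p in periods.items():
    netid_success = 1 - p["stopped_at_netid"] / p["attempts"]
    registration_success = 1 - p["stopped_at_registration"] / p["entered_netid"]
    print(f"{name}: passed NetID form {netid_success:.0%}, "
          f"completed registration {registration_success:.0%}")

# Output:
# baseline: passed NetID form 17%, completed registration 51%
# final: passed NetID form 62%, completed registration 72%
```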

Conclusions

This paper has presented a description and discussion of how we identified, interpreted, and attempted to remedy access barriers associated with DeLIver, our digital library testbed. The issues we confronted are likely to arise in the implementation stage of any electronic-library or electronic-publishing system that incorporates user authentication and registration processes. We believe that we gained important insights about the "hybrid" nature of information services that incorporate, virtually simultaneously, goals and characteristics of research, demonstration, and production systems. The hybrid nature of such systems increases the complexity of measuring and interpreting use. We found that insights gained from various user studies, such as our needs assessment interviews, usability tests, and transaction logs, helped us both to identify barriers to access and to develop reasonable expectations related to use. Likewise, we learned the importance of pooling the expertise of system designers, users, librarians, and social scientists in order to identify and address access barriers and interpret usage data.

For DeLIver, we set up the access system to allow us to learn more about our users, but in so doing unwittingly created barriers to use. One ramification of the hybrid nature of digital-library implementations is that the convenience of faculty members and students in using what — from their point of view — is a production system is compromised by the needs of those for whom the digital library is primarily a research or demonstration system, that is, the system's designers and evaluators. Hill et al. (1997), citing Weedman (1998), contribute to understanding this issue in their discussion of the user's return on investment for participating in user studies, including the completion of registration forms. They argue that people are not altruistic, but are motivated by the potential return on their investment of time and effort in using an information system. Often users are asked to contribute more to the development of a system than they get out of its use, at least in the early stages of implementation.

In studying DeLIver access attempts, we must conclude that seemingly "trivial" barriers — like basic awareness and authentication and registration requirements — may prevent a substantial portion of the target audience from ever using the digital libraries designed for them. This paper suggests that, as was emphasized in the final report on the TULIP project, both subjective (e.g., initial expectations of convenience) and objective (e.g., difficulties in accessing the system) access factors greatly influence use. It is often difficult to isolate the effects of individual factors in interpreting access and use data (Borghuis et al. 1996). In the case of DeLIver, we found that subjective factors like system awareness and knowledge that the system is free and has worthwhile contents apparently made a substantial contribution to reducing abandoned-access attempts. On the other hand, objective factors such as the clarifying and shortening of access procedures appeared to have little effect on users' success in completing such procedures.

Difficulties in understanding access and use suggest the need to integrate evaluation measures at different levels (e.g., system engineering and user outcomes) and across genres (e.g., online public access catalogs, digital libraries, abstracting and indexing databases) of systems (Saracevic, 1995). The consideration of use metrics typically employed in libraries and online systems, however, points to the inherent difficulties of applying and interpreting a single measure such as "number of library visits" across online and off-line systems, as well as across different system genres.

The continued development of appropriate metrics associated with digital-library access and use will aid efforts to document and understand these phenomena. Deployment of standard measures across similar digital libraries would allow the collection of more uniform data. The wide variation in findings related to digital-library access and use (such as degree of audience penetration) suggests that both collecting uniform data and pursuing studies that can help interpret those data are critical.

Acknowledgement

The research represented in this paper was sponsored by the NSF/DARPA/NASA Digital Libraries Initiative under contract number NSF 93-141 DLI. The paper draws on the work of project members affiliated with the University of Illinois Social Science Team (S. Leigh Star, Laura Neumann, Emily Ignacio, Robert Sandusky, Cece Merkel, and Eric Larson), as well as work of project members engaged in the construction and promotion of our testbed (Bruce Schatz, William Mischo, Tim Cole, Thomas Habing, Susan Harum, and Donal O'Connor). The author is also grateful for assistance from Pamela Scales, who contributed to the identification and review of published literature on electronic journal use.



Ann Peterson Bishop is a faculty member in the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign (http://alexia.lis.uiuc.edu/). She received an M.L.S. from Syracuse University, where she also earned a Ph.D. in Information Transfer. Bishop is currently serving as Principal Investigator for a study of community information exchange and computer use in low-income neighborhoods (http://www.prairienet.org/cni), funded by the U.S. Department of Commerce and the W. K. Kellogg Foundation. Recent publications include: "Social Informatics for Digital Library Use and Infrastructure" (With Susan Leigh Star. Appears in Williams, M. E., ed. Annual Review of Information Science and Technology, vol. 31. Medford, NJ: Information Today, 1996, pp. 301-401.); and "Digital Libraries and Knowledge Disaggregation: The Use of Journal Article Components" (Appears in Digital Libraries '98: The Third ACM Conference on Digital Libraries. New York: ACM, 1998, pp. 29-39.) She is currently editing, with Nancy Van House and Barbara Buttenfield, a monograph devoted to human-centered design and analysis of digital libraries. You may contact her by e-mail at [email protected].

References

Arnold, James Q., Robert C. Badger, and Richard E. Lucier. February 1997. Red Sage Final Report. http://www.springer-ny.com/press/redsage/ (15 September 1998).

Baker, Sharon L., and F. Wilfrid Lancaster. 1991. The Measurement and Evaluation of Library Services. 2d ed. Arlington, VA: Information Resources Press.

Baldwin, Christine. 1998. "SuperJournal Update." Ariadne, issue 14 (March). http://www.ariadne.ac.uk/issue14/superjournal/ (15 September 1998).

Bishop, Ann P. 1995. "Scholarly Journals on the Net: A Reader's Assessment," Library Trends 43, no. 4: 544-570.

Bishop, Ann P., and Joseph Squier. 1995. "Artists on the Internet." In INET '95 Conference Proceedings, vol. II. Reston, VA: Internet Society, 1009-1018.

Bishop, Ann P., Susan Leigh Star, Laura Neumann, Emily Ignacio, Robert J. Sandusky, and Bruce Schatz. 1996. "Building a University Digital Library: Understanding Implications for Academic Institutions and Their Constituencies." In Higher Education and the NII: From Vision to Reality. Proceedings of the Monterey Conference, Sept. 26-29, 1995. Washington, DC: EDUCOM, 45-53.

Borghuis, Marthyn, Hans Brinckman, Albert Fischer, Karen Hunter, Eleonore van der Loo, Rob ter Mors, Paul Mostert, and Jaco Zijlstra. 1996. TULIP: Final Report. New York: Elsevier Science. [formerly http://www.elsevier.nl:80/homepage/about/resproj/trmenu.htm] (15 September 1998)

Borgman, Christine L. 1996. "Why Are Online Catalogs Still Hard to Use?" Journal of the American Society for Information Science 47, no. 7: 493-503. [doi: 10.1002/(SICI)1097-4571(199607)47:7<493::AID-ASI3>3.0.CO;2-P]

Borgman, Christine L., Sandra G. Hirsh, and John Hiller. 1996. "Rethinking Online Monitoring Methods for Information Retrieval Systems: From Search Product to Search Process." Journal of the American Society for Information Science 47, no. 7: 568-583. [doi: 10.1002/(SICI)1097-4571(199607)47:7<568::AID-ASI8>3.0.CO;2-S]

Culnan, Mary J. 1985. "The Dimensions of Perceived Accessibility to Information: Implications for the Delivery of Information Systems and Services." Journal of the American Society for Information Science 36, no. 5: 302-08. [doi: 10.1002/asi.4630360504]

Entlich, Richard, Lorrin Garson, Michael Lesk, Lorraine Normore, Jan Olsen, and Stuart Weibel. 1996. "Testing a Digital Library: User Response to the CORE Project." Library Hi Tech 14, no. 4: 99-118. [doi: 10.1108/eb048044]

Garlock, Kristen L., William E. Landis, and Sherry Piontek. 1997. "Redefining Access to Scholarly Journals: A Progress Report on JSTOR." Serials Review 23, no. 1: 1-8. [doi: 10.1016/S0098-7913(97)90002-2]

Griffiths, Jose-Marie, and Donald W. King. 1993. Special Libraries: Increasing the Information Edge. Washington, DC: Special Libraries Association.

Guthrie, Kevin M. 1997. "JSTOR: From Project to Independent Organization." D-Lib Magazine (July/August) [doi: cnri.dlib/july97-guthrie]

Hill, Linda L., Ron Dolin, James Frew, Randall B. Kemp, Mary Larsgaard, Daniel R. Montello, Mary-Anna Rae, and Jason Simpson. 1997. "User Evaluation: Summary of the Methodologies and Results for the Alexandria Digital Library, University of California at Santa Barbara." In Proceedings of the 60th Annual Meeting of the American Society for Information Science, Nov. 1-6, 1997, Washington, DC. Medford, NJ: Information Today, 225-243.

Lancaster, F. Wilfrid. 1993. If You Want to Evaluate Your Library... 2d ed. Champaign, IL: University of Illinois, Graduate School of Library and Information Science.

_______. 1995a. "Are Evaluation Criteria Applied to 'Traditional' Libraries Equally Applicable to Digital Libraries?" In How We Do User-Centered Design and Evaluation of Digital Libraries: A Methodological Forum, 37th Allerton Institute, Oct. 29-31, 1995, Monticello, IL. [formerly http://edfu.lis.uiuc.edu/allerton/95/s1/lancaster.html; link removed August 2001 because it was no longer active] (15 September 1998)

_______. 1995b. "Needs, Demands and Motivations in the Use of Sources of Information," Journal of Information, Communication and Library Science 1, no. 3: 3-19.

McKnight, Cliff. 1997. "Electronic Journals: What Do Users Think of Them?" [formerly http://www.dl.ulis.ac.jp/ISDL97/proceedings/mcknight.html] (15 September 1998).

Neal, James G. 1997. "Models of Analysis and Data Drawn from the Project Muse Experience at Johns Hopkins University." http://www.arl.org/scomm/scat/neal.html

Neumann, Laura J., and Ann P. Bishop. Forthcoming. "From Usability to Use: Measuring Success of Testbeds in the Real World." In The 1998 Clinic on Library Applications of Data Processing: Collected Papers. Champaign, IL: University of Illinois, Graduate School of Library and Information Science. (June 15, 1998 draft available at http://forseti.grainger.uiuc.edu/dlisoc/socsci_site/dpc-paper-98.html) (15 September 1998).

Neumann, Laura J., and Emily Ignacio. 1998. "Trial and Error as a Learning Strategy in System Use." In Proceedings of the Annual Meeting of the American Society for Information Science. Medford, NJ: Information Today, in press.

Pinelli, Thomas E., Rebecca O. Barclay, John M. Kennedy, and Ann P. Bishop. 1997. Knowledge Diffusion in the Aerospace Industry: Managing Knowledge for Competitive Advantage. Part A and B. Greenwich, CT: Ablex.

Poole, Herbert. 1985. Theories of the Middle Range. Norwood, NJ: Ablex.

Rowland, Fytton, Ian Bell, and Catherine Falconer. 1998. "Human and Economic Factors Affecting the Acceptance of Electronic Journals by Readers." Draft. [formerly http://www.sfu.ca/scom/rowland-paper.html] (15 September 1998)

Rowland, Fytton, Cliff McKnight, and Arthur Jack Meadows, eds. 1995. Project ELVYN: An Experiment in Electronic Journal Delivery: Facts, Figures, and Findings. London: Bowker-Saur.

Saracevic, Tefko. 1995. "Evaluation of Evaluation in Information Retrieval." In Proceedings of the Association for Computing Machinery Special Interest Group on Information Retrieval (ACM/SIGIR) 18th Annual International Conference on Research and Development in Information Retrieval, July 9-13, Seattle, WA. New York: ACM, 138-146.

Schatz, Bruce R., William H. Mischo, Timothy W. Cole, Joseph B. Hardin, Ann P. Bishop, and Hsinchun Chen. 1996. "Federating Diverse Collections of Scientific Literature." Computer 29, no. 5: 28-36. [doi: 10.1109/2.493454]

Schatz, Bruce R., William H. Mischo, Timothy W. Cole, Ann P. Bishop, Susan Harum, Eric Johnson, and Laura J. Neumann. 1999, in press. "Federated Search of Scientific Literature: A Retrospective on the Illinois Digital Library Project." Computer.

Van House, Nancy A., Mary Jo Lynch, Charles R. McClure, Douglas L. Zweizig, and Eleanor Jo Rodger. 1987. Output Measures for Public Libraries: A Manual of Standardized Procedures. 2d ed. Chicago: American Library Association.

Weedman, Judith. 1998. "The Structure of Incentive: Design and Client Roles in Application-Oriented Research." Science, Technology, and Human Values 23, no. 3: 315-345. [doi: 10.1177/016224399802300303]