
    5.3 PEAK design

    Unlike TULIP, where Elsevier took a leading role in the design and oversight of the experiment, PEAK was a University of Michigan experiment in which Elsevier was a participant. Wendy Pradt Lougee, from the university library, was the PEAK project director, and Jeffrey K. MacKie-Mason, a professor in the School of Information and the Department of Economics, led the economic design team, which included economics graduate students, librarians and technologists. It fell to Jeff's team to do much of the experimental design and, later, the analysis of the results. Wendy took on the thankless task of recruiting other institutional participants and managing all of the day-to-day processes. They were ably assisted by others within the School of Information.

    PEAK was similar to TULIP in having a central data provider and a number of participating libraries. The University of Michigan was the host, offering web access to all Elsevier titles from its site. The participating institutions totaled twelve in all, ranging from a small, highly specialized academic institution to corporations and large research universities. The goal of the experiment was to understand more about how electronic information is valued and to investigate new pricing options.

    The participating libraries were assigned by Michigan to one of three groups (Red, Green and Blue). Three pricing schemes for content access were being tested, and each library group had some (but not full) choice among them.

    In addition to the content fees (which came to Elsevier), Michigan charged a "participation fee" to offset some of its costs, ranging from $1,000 to $17,000 per year. The fee was set differentially based on the relative size (e.g., student population) of the institution.

    What were the pricing choices?

    1. Per article purchase — The charge was $7 per article. After this type of purchase, the article was available without additional charge to the requesting individual for the duration of the experiment.

    2. Generalized subscription — $548 for a bundle of 120 articles ($4.57 per article). Bundles had to be purchased at the beginning of the year and the cost was not refundable if fewer articles were actually used. Articles in excess of the number purchased as bundles were available at the $7 per article price. Articles accessed under this option were available to the entire user community at no additional charge for the duration of the experiment.

    3. Traditional subscription — $4 per issue (based on the annual number of issues) if the title was subscribed to in paper; $4 per issue plus 10% of the print price if the title was not subscribed to in paper; the full print price if the print subscription was cancelled during the experiment. Those purchasing a traditional subscription had unlimited use of the title for that year.

    In addition to the paid years (1998 and 1999), there were back years (1996-1997) available for free.
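
    To make the relative economics of the three paid options concrete, the following is a minimal sketch, written here in Python (the experiment itself involved no such code), of an institution's annual cost under each option. The prices come from the list above; the function names and the usage figures in the example at the bottom (500 article requests, four bundles, 12 issues, a $2,000 print price) are illustrative assumptions, not PEAK data.

        # Prices taken from the PEAK option list above.
        PER_ARTICLE_PRICE = 7.00    # option 1: per-article purchase
        BUNDLE_PRICE = 548.00       # option 2: one prepaid bundle
        BUNDLE_SIZE = 120           # articles covered by a bundle (~$4.57 each)
        PER_ISSUE_PRICE = 4.00      # option 3: charge per issue
        NO_PAPER_SURCHARGE = 0.10   # option 3: 10% of print price if not held in paper

        def per_article_cost(articles_requested):
            # Option 1: pay $7 for every article requested.
            return articles_requested * PER_ARTICLE_PRICE

        def generalized_subscription_cost(articles_requested, bundles_bought):
            # Option 2: non-refundable bundles bought up front; requests beyond
            # the prepaid allotment fall back to the $7 per-article price.
            covered = bundles_bought * BUNDLE_SIZE
            overflow = max(0, articles_requested - covered)
            return bundles_bought * BUNDLE_PRICE + overflow * PER_ARTICLE_PRICE

        def traditional_subscription_cost(issues_per_year, print_price, held_in_paper):
            # Option 3: $4 per issue, plus 10% of the print price when the title
            # was not also subscribed to in paper; unlimited use either way.
            cost = issues_per_year * PER_ISSUE_PRICE
            if not held_in_paper:
                cost += NO_PAPER_SURCHARGE * print_price
            return cost

        # Illustrative comparison for a hypothetical institution expecting
        # about 500 article requests in a year.
        print(per_article_cost(500))                  # 3500.0
        print(generalized_subscription_cost(500, 4))  # 4 * 548 + 20 * 7 = 2332.0
        print(traditional_subscription_cost(12, 2000.0, held_in_paper=False))  # 48 + 200 = 248.0

    As the arithmetic suggests, a generalized subscription bundle pays for itself against per-article purchase once roughly 79 of its 120 articles are actually used (548 / 7 ≈ 78.3).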

    Elsevier participated in the pricing in the sense that we had discussions with our Michigan counterparts on pricing levels and in the end agreed to the final prices. There was some give and take on what the prices should be and how they should be measured (e.g., using issues as the unit of measure for the traditional subscriptions was a compromise introduced to reflect, at least in part, the varying sizes of the journals). We had hesitations about the low levels of the prices, feeling them to be unrealistic given real costs and the usage levels likely to develop. But in the end we were persuaded by the economic and experimental design arguments of Jeffrey MacKie-Mason.

    Once the prices were set, the Red group could choose from all three pricing alternatives, the Green group from options 1 and 2, and the Blue group from options 1 and 3. In making their choices, some institutions decided to take all three, some only the per-article transactions, and some only the generalized subscription. As the experiment ran more than one year, there was an opportunity to recalibrate at the end of 1998 based on what had been learned to date and to make new decisions for 1999.

    The process of agreeing to and setting up the experiment and then actually getting underway took much longer than any of the participants expected. We had all hoped for an experiment of at least two years (1997-1998), and we started our discussions no later than late 1995 or early 1996. The experiment actually went live in 1998 and ended in August 1999. It is hard now to reconstruct what delayed it. Perhaps most of the initial long delay was a result of Elsevier's hesitation on pricing issues (more on this below), although the experimental design also took time at Michigan. The later difficulties were more in the implementation process. Signing up institutions was difficult. Many institutions that were approached were unsure about the price in general and wanted, for example, a lower participation fee; hence the ultimate range of fees negotiated. They were also concerned about participating in an experiment and felt there could be some confusion or difficulty in explaining it to their users. Once an institution signed, start-up at each location also took time. In addition, there was a need, not always immediately recognized, for marketing and promotion of PEAK availability on campus.