Abstract

Hierarchical systems of indexing and classification go against the grain of the culture that has built up around the Internet, but is there a better way of indexing and classifying documents? Faceted indexing and classification, based on clusters of objects and attributes, was originally designed to accommodate the growth of knowledge and has enjoyed renewed interest as a tool to improve relevance and recall of electronic documents from across diverse platforms. Applications so far, however, have been limited. The methods of indexing scientific and technical information are still determined by user expectation and practices of established indexing and abstracting services in traditional disciplines. To serve readers in academic and research environments fully, publishers of e-journals will probably have to continue to submit issues to third-party indexing and abstracting services. No new paradigm is yet in sight.

Introduction

Since the days of the Library of Alexandria, librarians have envisioned a library encompassing all recorded knowledge and have pursued this elusive goal through the centuries with the arrival and departure of an eccentric assortment of ideologues, technologies, and funding agencies. Their quest forms an interesting and often cautionary tale, usually little known except within the library profession—that is, until today. The ancient idea of a universal library has now returned, not so much in libraries proper as on the vast, seemingly borderless playing field of the global networks, where a wired generation has committed itself to making a new Alexandria Library, not only of the Internet’s 100-billion-and-counting Web pages, but also of an almost equally numerous array of digitized books, periodicals, sound, and video recordings. However, the promise that “the technology of search will transform isolated books into the universal library of all human knowledge” [Kelly 2006, 71] still remains only a promise.

Keyword searching using search engines such as Google lacks precision, typically drawing in tens or hundreds of thousands of marginally relevant documents. Links to cited papers, now a staple of the journal literature that appears online, are a great convenience, but still restrict the reader or researcher to a limited set of documents—those known and cited by the authors of the papers that are themselves published in digital formats. Comprehensive and systematic indexes such as Medline and Compendex, usually continuations of traditional print indexes (Index Medicus and Engineering Index), probably continue to provide the most comprehensive and equitable access to periodical literature, but they seem to have lost ground to more random forms of access. In their survey of Internet use, Bjork and Turk report that people use “references in other publications,” general or specific Web search engines, and hyperlinks to find “items worth reading,” and that they are least likely to use “traditional bibliographic databases” or “browsing in libraries.” They conclude that Internet users have a “strong preference for just-in-time search in the readily available (references in other publications) or free (Web) resources.”

While that is probably true in most cases, researchers involved in the formal processes of obtaining grants, performing research, and publishing results forgo systematic literature searches in established indexes and abstracts for the more readily available forms of desktop access only at their own peril. Writing in the Journal of the American Medical Association, Satya-Murti cites the search of Medline as “an irreducible step” in medical research; other researchers frequently caution against the use of search engines alone. An article on healthcare information on the Internet, also published in JAMA, rated access to the published literature through search engines as “poor and inconsistent.” [Berland et al. 2001]

Indexing and Classification: Hierarchical and Faceted

Can alternatives to thesaurus-based third-party indexes, such as Medline or BIOSIS, ever become focused enough to circumvent indexing and abstracting services and to make the universal book or library a reality? One alternative is the use of systematically harvested metadata to link like documents from across the Web. Another is the use of folksonomies—user-defined tags—that may automatically be mapped to formal classification or indexing terms. Will publishers eventually be able to rely on such indexing and linking that would automatically both open search to a wider audience and a wider set of documents and, at the same time, increase the relevancy of retrieved documents, so that, as Kelly writes, their “new works are born digital, and they will flow into the universal library as you might add words to a long story”? [Kelly 2006, 71]

A Place for Everything

The hierarchical and distinctly authoritarian nature of the indexing based on controlled vocabularies established in the age of print and still favored by librarians seems increasingly to go against the grain of the egalitarian ethos of the Internet and its new and unprecedented way of sharing information and connecting people. The products of remote, powerful authorities such as the National Library of Medicine or the Library of Congress, hierarchical classification and indexing systems with their thesauri of branching broader and narrower terms, are explicitly designed to put things in their place. In the culture that has built up around the Internet, such systems appear limiting and reactionary.

They have appeared so in the past as well, before the Internet. In a 1996 study of library classification, Weinberg writes,

Rigid hierarchical classification schemes cannot keep up with scientific advances. Sections of the widely used schemes—notably Dewey—are restructured periodically, but there are always protests from the library community when the revisions necessitate reclassification of large parts of a collection. Jacob [1994] has argued for flexible categorization, providing evidence from the field of psychology that different people classify things in different ways. And yet, the systems replete with options—such as Bliss—have not been widely adopted, perhaps for the economic reason cited above: a professional librarian has to select an option and document it locally. It is easier to have a paraprofessional copy a centrally supplied classification number without thinking whether it serves local need. [Weinberg 1996, 6]

Hierarchical classification and indexing based on models such as the binomial nomenclature of Linnaean taxonomy are thus generally easier to apply and better understood by the public. Alternatives, running from S. R. Ranganathan’s system of faceted classification to today’s semantic web of interlocking ontologies, systems in which objects are grouped by varying, shared attributes rather than unvarying essences—better represented graphically as clusters of objects and attributes than as branches of a tree stemming from a single trunk—continue to provoke discussion but have never really gotten off the ground, either in print or in electronic publishing. There is, moreover, a long history of their never having gotten off the ground.

Classification and Its Discontents

As Weinberg implies, employing one classification schedule or indexing vocabulary or another is more often a matter of utility or convenience for the classifying agency and its constituency than of philosophical principle. Many of the more comprehensive classification schemes currently in use are in fact not strictly hierarchical, but hybrids, polyhierarchies in which specific subject terms are entered more than once in varying relationships to broader and narrower terms in multiple hierarchies, thus allowing some of the wiggle room offered by more flexible, but less utilitarian, classification schemes. (The NLM’s MeSH vocabulary is an example of a polyhierarchy. In MeSH, “Multiple Sclerosis” is listed once under “Immune System Diseases” and twice under “Nervous System Diseases”: once under “Autoimmune Diseases of the Nervous System” and again under “Demyelinating Diseases.”[1])
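In programming terms, a polyhierarchy is a directed acyclic graph rather than a tree: a term may have more than one broader term, and a search that follows any one of them reaches the same documents. The minimal Python sketch below uses the term names from the MeSH example; the parent links are a simplified illustration, not the actual MeSH trees.

```python
# A polyhierarchy sketched as a directed acyclic graph: each term may have
# several broader terms. Term names follow the MeSH example in the text;
# the links are illustrative, not the actual MeSH tree structure.
BROADER = {
    "Multiple Sclerosis": [
        "Autoimmune Diseases of the Nervous System",
        "Demyelinating Diseases",
    ],
    "Autoimmune Diseases of the Nervous System": [
        "Immune System Diseases",
        "Nervous System Diseases",
    ],
    "Demyelinating Diseases": ["Nervous System Diseases"],
    "Immune System Diseases": [],
    "Nervous System Diseases": [],
}

def ancestors(term):
    """Collect every broader term reachable from a term, along all paths."""
    found = set()
    stack = list(BROADER.get(term, []))
    while stack:
        parent = stack.pop()
        if parent not in found:
            found.add(parent)
            stack.extend(BROADER.get(parent, []))
    return found

print(sorted(ancestors("Multiple Sclerosis")))
# ['Autoimmune Diseases of the Nervous System', 'Demyelinating Diseases',
#  'Immune System Diseases', 'Nervous System Diseases']
```

Because “Multiple Sclerosis” has more than one parent, a search expanded from either “Immune System Diseases” or “Nervous System Diseases” retrieves the same papers; that is the wiggle room a strict tree cannot offer.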

Since the bibliographic applications of classification are primarily intended to make documents retrievable, they are not required to make absolute statements on knowledge and its objects. But the fact also remains that when, for the convenience of users, subject and index headings follow more widely accepted conventions for describing, organizing, and applying knowledge in a given field, they reinforce a status quo understanding of the content they describe. The longer and more widely they are used, the more classification systems take on a life of their own and the greater the likelihood of misrepresentation and inconvenience. Bowker and Star’s Sorting Things Out: Classification and Its Consequences provides an absorbing survey of formal classification schemes, their frequent failures in getting an adequate hold on reality, and subsequent implications for setting social policy (for a truly tragic tale, see their chapter on racial classification in apartheid South Africa).

Dynamic, evolving natural systems, in particular, pose difficulties for the kind of labeling that results from indexing and classification. As Bowker and Star point out,

The structural aspects of classification are themselves a technical specialty in information science, biology, and statistics, among other places. Information scientists design thesauri for information retrieval valuing parsimony and accuracy of terms, and the overall stability of the system over long periods of time. For biologists, the choice of structure reflects how one sees species and the evolutionary process. For transformed cladists and numerical taxonomists, no useful statement about the past can be read out of their classifications; for evolutionary taxonomists that is the very basis of their system. These beliefs are reflected in radically different classification styles and practices, for example, whether or not to include the fossil record in the classification system; fossils being a problem since they perpetually threaten to create another level of taxa and so cause an expensive and painstaking reordering of the whole system. [Bowker and Star 1999, 58–59]

Similar dilemmas occur in relation to diseases and their relationships to host organisms. Clinical medicine, epidemiology, and information science have overlapping but not identical interests in classifying and indexing diseases and syndromes that may be of varying etiologies and presentations. While “pneumonia” may be of significance in keeping mortality statistics, it may be less significant in clinical medicine’s concern with diagnoses that indicate immediate causes (streptococcal infection, etc.) and underlying causes (aging, heart failure, poor nutrition, etc.). Writing on Parkinson’s Disease, O’Suilleabhain argues that disease classification in neurology, now based on clinical manifestations such as tremor (Parkinson’s disease, for example), might soon be better based on genetic origins as genetic etiologies, or disease mechanisms, become better understood. The classification of neurological disease, then, may change with the advancing science of genetics, and bibliographic classification would eventually have to follow suit.

While the reclassification of neurological disorders connecting diagnoses with specific genes is possible, it is not a simple task within a hierarchical classification system, and one that is likely, as Weinberg points out, to meet some resistance from those maintaining databases with millions of records. For such emerging or neglected connections brought to light by scientific research, S. R. Ranganathan proposed a system of faceted classification that discarded traditional hierarchies for an organization of knowledge based on a non-hierarchical grouping of objects by common attributes or facets (linking an attribute such as “rootedness” to an object such as “oak”). Ranganathan thought that viewing objects in terms of varying, shared attributes rather than unvarying essences would better accommodate the growth of knowledge because it would be more adaptable. Knowledge managers and information architects today are taking up Ranganathan’s facets in their efforts to exploit the power of their search technologies. They also hark back to the father of modern information science, the Belgian documentalist Paul Otlet, who proposed a similar system of indexing and classification based on facets.

Otlet’s indexing system might, for example, take documents describing the physical properties of lead, normally classified in metallurgy, and link them to documents in architecture, engineering, medicine, and toxicology, nominally unrelated disciplines. In the early 1900s he devised an elaborate indexing system to retrieve related references from among a range of separate subject indexes—a mechanical precursor to the World Wide Web.
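The cross-disciplinary linking that Ranganathan’s facets and Otlet’s index promise is easy to state in modern terms: describe each document by values in several independent facets and let retrieval intersect them. The Python sketch below follows Otlet’s lead example; the records and facet names are invented for illustration, not drawn from any actual scheme.

```python
# A minimal sketch of faceted retrieval: each document carries values in
# several independent facets instead of occupying one slot in a hierarchy.
# Records and facet names are invented for illustration.
documents = [
    {"title": "Physical properties of lead",
     "facets": {"material": "lead", "discipline": "metallurgy"}},
    {"title": "Lead flashing in roof construction",
     "facets": {"material": "lead", "discipline": "architecture"}},
    {"title": "Lead exposure in children",
     "facets": {"material": "lead", "discipline": "toxicology"}},
    {"title": "Root systems of the oak",
     "facets": {"object": "oak", "attribute": "rootedness"}},
]

def facet_search(**criteria):
    """Return documents whose facets match every facet=value criterion."""
    return [d for d in documents
            if all(d["facets"].get(f) == v for f, v in criteria.items())]

# Everything touching lead, across nominally unrelated disciplines:
for doc in facet_search(material="lead"):
    print(doc["title"])
```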

Paul Otlet envisioned a new kind of scholar’s workstation: a moving desk shaped like a wheel, powered by a network of hinged spokes beneath a series of moving surfaces. The machine would let users search, read and write their way through a vast mechanical database stored on millions of 3x5 index cards.

This new research environment would do more than just let users retrieve documents; it would also let them annotate the relationships between one another, “the connections each [document] has with all other [documents], forming from them what might be called the Universal Book.” Otlet imagined a day when users would access the database from great distances by means of an “electric telescope” connected through a telephone line, retrieving a facsimile image to be projected remotely on a flat screen. In Otlet’s time, this notion of networked documents was still so novel that no one had a word to describe these relationships, until he invented one: “links.” Otlet envisioned the whole endeavor as a great “réseau”—web—of human knowledge. [Wright 2003]

Otlet and his quaintly grandiose indexing scheme were all but forgotten, except by scholars, until the World Wide Web provided the near perfect technology for his dreamed-of web of knowledge.

The Web as it exists today, barely emerged from its frontier days of organized chaos, continues to defy information science. It has grown so vast so quickly that the idea of systematically indexing it seems to have divided librarians and information scientists into believers and skeptics, leaving wide open the question of how documents might be more meaningfully linked or grouped for retrieval. Advocates of the automatic indexing and retrieval of Web pages based on interlocking, faceted lattices or ontologies face the dilemma already encountered in information science: everything in the universe stands in some kind of relationship to everything else (relatively larger, smaller, earlier or later in time, at nearer or farther distances, etc.). Thus, while every Web page may automatically be related in one way or another to every other, few such casual relationships add much to knowledge.

Rayward, Otlet’s biographer and the foremost English-language scholar of his work, finds Otlet’s arguments on the interrelatedness of fields of knowledge to be more visionary than practical, although they have in part inspired the most ambitious hyperlinking schemes of many of today’s information-age gurus: “Often he presents little more than long lists of desiderata for achieving the reorganisation of the world and access to knowledge along the lines he thought desirable. Nothing is too grand or general or difficult to appear on these lists. Committed to a thick encyclopedism which has an almost imperceptible pulse of argument, his major books present little or no momentum of thesis, evidence, and argument. If they make a case for something it is at such a general level that today we might well see the exercise as banal or pointless” [Rayward 1997]. Rayward’s comment anticipates our current experience with the informal, user-generated tagging of folksonomies. Even in consumer applications and social networking, classification based on non-hierarchical many-to-many relationships tends to break down, creating too many incidental, invalid, and redundant categories to form even the semblance of a coherent system:

The advantage of folksonomies isn’t that they’re better than controlled vocabularies, it’s that they’re better than nothing, because controlled vocabularies are not extensible to the majority of cases where tagging is needed. Building, maintaining, and enforcing a controlled vocabulary is, relative to folksonomies, enormously expensive, both in the development time, and in the cost to the user, especially the amateur user, in using the system. Furthermore, users pollute controlled vocabularies, either because they misapply the words, or stretch them to uses the designers never imagined, or because the designers say “Oh, let’s throw in an ‘Other’ category, as a fail-safe” which then balloons so far out of control that most of what gets filed gets filed in the junk drawer. Usenet blew up in exactly this fashion, where the 7 top-level controlled categories were extended to include an 8th, the “alt.” hierarchy, which exploded and came to dwarf the entire, sanctioned corpus of groups. [Shirky 2005]

Practical and Logical Dilemmas

The completely automated discovery of knowledge, except perhaps in highly controlled experiments, still seems a long way off, if in the offing at all. Indexers and classifiers working manually are rarely at leisure to probe hitherto undiscovered relationships among the documents they handle: they are limited by the structure of the vocabularies they use. But they do use judgment in assigning headings, and judgment introduces bias. Human factors may remain more prevalent in classification and indexing than is generally acknowledged. As Bowker and Star observe, classifiers tend to be either “lumpers” or “splitters.” [Bowker and Star 1999, 159] Some (the lumpers) prefer to gather objects together under general headings, others (the splitters) to differentiate them by placing them under separate, more specific headings. Since no two things are completely alike, however, even splitters must generalize to some extent. By its nature, classification entails generalization and, since the terms that we use to describe the world and its objects are never perfect, also a degree of falsification. The optimal granularity of a given vocabulary—the fineness of the distinctions it makes—will vary with the mindset of a user community. Public health officials faced with marshalling resources to combat such scourges as AIDS or tuberculosis may prefer lumping cases together. Clinicians treating individual cases prefer splitting. O’Suilleabhain argues, “The denser the vocabulary, the richer the exchange. The tradeoff is the clutter introduced: excessive terminology makes assimilation more difficult, at which time communication errors rise. Advantages outweigh disadvantages when a disease categorization is valid and useful. Does the term benign tremulous parkinsonism merit a place, if not in the disease-classification manuals then at least in the minds of neurologists? That some patients can persist with severe tremor but otherwise have a limited deterioration in parkinsonism is a clinically useful nugget.” [O’Suilleabhain 2006, 321] Neurologists and other medical specialists may be happier, therefore, with more granularity than public health officials. As Sir John Wilson noted of blindness, “People do not really go blind by the million. They go blind individually, each in his own predicament.” [Ferris 2004, 451] Or, as William Blake put it more bluntly in his Annotations to Reynolds, “To Generalize is to be an Idiot. To Particularize is the Alone Distinction of Merit.”

Librarians and indexers applying subject terms to documents spanning public health, medical specialties, biochemistry, and genetics are usually obliged to retreat to a middle ground, avoiding the ephemeral or little-used terminology that will clutter their thesauri, while still making the distinctions recognized and understood by their constituencies. The overall stability and relative ease of application of the established classification systems and indexing vocabularies, especially when maintained by such authorities as the National Library of Medicine, appeal to them. For databases of hundreds of millions of records extending back to the advent of electronic abstracts in the 1960s, stability, maintainability, and economy of indexing systems remain prime factors [Bowker and Star 1999]. So does document selection. With the global explosion of publication, especially electronic publication, Otlet’s idea of a central library or agency to gather, organize, and make accessible the published output of the entire world no longer seems possible. Catalogs and indexes such as the Library of Congress catalog or Medline, once prized as nearly comprehensive, are now better defined by the type of selectivity they practice. Yet, with the world’s vast output of information daily becoming even vaster, editorial selectivity may be seen as adding value to indexing.

Professional librarians need to be able to create meaningful sets of documents. Their strategy is to select relevant databases, to cast nets widely using assigned descriptors, and then to refine searches, often using specific keywords (“Cyclooxygenase Inhibitors/DE AND Vioxx” for a set of papers whose principal subject is the class of drugs called Cyclooxygenase Inhibitors and which specifically mention Vioxx). Such a capability is nearly impossible except within such professionally indexed databases as PubMed, which analyze nine different synonyms to index and gather together articles on Cyclooxygenase Inhibitors. Using PubMed indexes, one can easily retrieve a set of clinical trials involving Cox-2 Inhibitors (“Cyclooxygenase 2 Inhibitors/DE AND PUBLICATION TYPE=Clinical Trial”). That search recently yielded 117 papers, all clinical trials, and took less than a minute. Try to get such results on Google or at Highwire Press’s searchable aggregation of over 5 million articles from leading medical and science journals; neither database supports searching on assigned subject headings or publication type.
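For readers who want to reproduce such a search programmatically, PubMed exposes its indexes through NCBI’s public E-utilities interface. The Python sketch below assumes the standard esearch endpoint and PubMed’s published field tags; the count it returns today will naturally differ from the figure quoted above.

```python
# A sketch of the descriptor-based search described above, run against
# PubMed through NCBI's public E-utilities (esearch) endpoint. The tags
# [MeSH Major Topic] and [Publication Type] are standard PubMed syntax.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = ('"Cyclooxygenase 2 Inhibitors"[MeSH Major Topic]'
         ' AND Clinical Trial[Publication Type]')

url = ESEARCH + "?" + urllib.parse.urlencode(
    {"db": "pubmed", "term": query, "retmode": "json", "retmax": 20})

with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print(result["count"], "indexed clinical trials")  # total matching papers
print(result["idlist"])                            # first 20 PubMed IDs
```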

The models of trust on which scientific and scholarly publishing depend remain solidly vertical, entailing peer review and professional indexing, both in print and on the Web. The idea, often implicit in grander indexing and classifying schemes such as Otlet’s—that “science is on a long term campaign to bring all knowledge in the world into one vast, interconnected, foot-noted, peer-reviewed web of facts” [Kelly 2006, 71]—is not necessarily an idea that is entirely welcome. Most writers cite Borges’ elusive Library of Babel, but the idea of all knowledge organized into an integrated whole, represented by Otlet’s universal book, is also the assumption of both the rationalism of 18th-century encyclopedias and the idealism of 19th-century metaphysics. It is an assumption that has been soundly and repeatedly criticized up through our century.

Bertrand Russell, among others, attacked Hegel’s belief that “absolute reality forms one single harmonious system.” He argued that “we cannot prove that the universe as a whole forms a single harmonious system such as Hegel believes that it forms. . . . Thus we are left to the piecemeal investigation of the world and are unable to know the character of those parts of the universe that are remote from our experience.” [Russell 1912, 106] Russell and his followers attempted to exorcise the often-unexamined belief that sciences and social sciences necessarily illuminated and advanced each other, progressing in concert toward the revelation of a great truth. They also took aim at the opposing assumption that the theories and explanations of the sciences and other branches of knowledge competed with each other for primacy. They said that neither physics, biology, nor economics, nor any other given field, no matter how immutable its laws and how vast its data, could be assumed as necessarily taking precedence over any other field or as being sufficient in itself. As Gilbert Ryle put it, “Expertness in one field was assumed to carry with it the techniques of handling problems in the other. This instance shows not only how theorists of one kind may unwittingly commit themselves to propositions belonging to quite another province of thinking, but also how difficult it is for them, even after inter-theory litigation has begun, to realize just where the ‘No Trespassing’ notices should have been posted.” Ryle concluded that “the kind of thinking which advances biology is not the kind of thinking which settles the claims and counter-claims between biology and physics.” [Ryle 1954, 7]

In practice, indexers and catalogers are more of the mind of Russell and Ryle than of Hegel and Kelly. They work best within a pre-defined domain and are not much concerned with a final synthesis of all knowledge. Despite the Internet and the supposed growth of knowledge, the fields of science and technology are still largely dependent on peer-reviewed literatures gathered and indexed along traditional lines by interested professional societies, government agencies, and commercial indexing and abstracting services. More innovative forms of indexing have actually found their application in the formerly more-constant humanities. Both the Getty Art and Architecture Thesaurus and the Modern Language Association Bibliography employ faceted thesauri. Getty’s Art and Architecture Thesaurus is a polyhierarchy based on seven facets—associated concepts, physical attributes, styles and periods, agents, activities, materials, and objects. It allows works in various genres and art forms, which otherwise might occupy isolated niches within enumerated hierarchies, to be grouped by such facets as ideology of the creator (Communist, Buddhist, etc.) or materials (clay, marble, paper, etc.), making it possible, for example, to query a database on objects in clay created by Marxists—the sort of connections that interested Ranganathan and Otlet. But such querying is possible only if an indexer has manually indexed an artist as “Marxist.” The discovery of such facets as political affiliation is still far from the capabilities of existing automated indexing systems and, due to the limits of the machine processing of natural language, may always be. Few, if any, such faceted applications seem to be in place for indexing the more extensive and highly funded literature of science and medicine, where the established practices governing grants, publication, and literature searching may result in greater conservatism. Academic literature may in fact even become more authoritarian in the face of the great volume of open, unrefereed publication. With the Internet making it possible for just about anyone to be published and indexed in Google, as Eli Noam writes, “In the validation of information, the university will become more important than ever. With the explosive growth in the production of knowledge, society requires credible gatekeepers of information and has entrusted that function to universities and its resident experts, not to information networks.” [Noam 1995]

Conclusion

To stay in the ballpark in which librarians and research scientists play and to maximize the impact of their content, publishers of e-journals are still advised to emulate the form and practices of established print journals. This generally means well-managed peer review, structured abstracts, publication on a regular schedule, conscientious submission of a version of record to established third-party indexing and abstracting services for thesaurus-based indexing, and some form of trusted, publicly available archiving.

The publisher of Science Citation Index, whose enhanced, hyperlinked Web-based product is called the Web of Science, does not view e-journals differently from print journals among the titles its editors collect and index:

Thomson ISI is an indexing service that selects, indexes, and ranks journals. In his presentation, Testa said the selection of journals in electronic format is not very different from the selection and indexing of print journals. In both cases, selection is based on basic publishing standards, such as peer review and timeliness, editorial content, international diversity, and citation analysis. But indexing electronic journals has required some adjustments. For example, it is essential that journals publish on time. . . . [Silberg 2003]

E-publishers who aim at academic readerships but do not submit their journals to indexing and abstracting services might consider adding appropriate controlled-vocabulary descriptors as metadata to the content they post. Librarians are increasingly active in identifying, classifying, and providing access to “grey literature”—conference papers, technical reports, Web sites—of interest to their clienteles, typically using or adapting established classification and indexing vocabularies to find and link to relevant resources. Since much of the mapping is done automatically, it helps to use language that machines understand, as Hagedorn and her co-authors describe in their 2007 article on metadata.
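One modest way to do this is to expose descriptors in a page’s HTML head using the Dublin Core meta-tag convention, which many harvesters and library crawlers recognize. The Python sketch below is illustrative only: the sample descriptors are assumptions, and a publisher would assign terms from whatever controlled vocabulary suits its content.

```python
# A sketch of exposing controlled-vocabulary descriptors as Dublin Core
# meta tags in a posted article's HTML head. The descriptors shown are
# illustrative; a publisher would assign terms from its chosen vocabulary.
from html import escape

def dc_subject_tags(descriptors, scheme="MeSH"):
    """Render one DC.subject meta tag per assigned descriptor."""
    return "\n".join(
        f'<meta name="DC.subject" scheme="{scheme}" content="{escape(t)}" />'
        for t in descriptors
    )

print(dc_subject_tags([
    "Multiple Sclerosis",
    "Demyelinating Diseases",
    "Magnetic Resonance Imaging",
]))
# Emits, e.g.:
# <meta name="DC.subject" scheme="MeSH" content="Multiple Sclerosis" />
```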

As for the universal library, it still eludes us. No matter how much is learned in the quest, the goal, like Mr. Casaubon’s search for the key to all mythologies in Middlemarch, may not only prove practically unattainable, but perhaps, finally, logically impossible.



Bruce McGregor, a graduate of the University of Chicago Library School, has worked in publishing as indexer and editor for over 15 years, most recently as indexer for JAMA/Archives, the journals of the American Medical Association. He is currently a consultant. He previously published in the Journal of the Medical Library Association. He may be reached at bmcgregor@onshore.net.


Note

    1. National Center for Biotechnology Information (National Library of Medicine and National Institutes of Health), MeSH (Medical Subject Headings) home page, http://www.ncbi.nlm.nih.gov/sites/entrez?db=mesh

    References

    Berland, Gretchen K., Marc N. Elliott, Leo S. Morales, Jeffrey I. Algazy, Richard L. Kravitz, Michael S. Broder, David E. Kanouse, Jorge A. Muñoz, Juan-Antonio Puyol, Marielena Lara, Katherine E. Watkins, Hannah Yang, and Elizabeth A. McGlynn. “Health information on the Internet: accessibility, quality, and readability in English and Spanish.” JAMA 285, no. 20 (May 23–30, 2001): 2612–21. [doi: 10.1001/jama.285.20.2612]

    Bjork, Bo-Christer, and Ziga Turk. “How Scientists Retrieve Publications: An Empirical Study of How the Internet is Overtaking Paper Media.” The Journal of Electronic Publishing 6, no. 2 (Dec 2000). [doi: 10.3998/3336451.0006.202]

    Bowker, Geoffrey C., and Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. Cambridge, Mass.: MIT Press, 1999.

    Ferris, Frederick L., and James M. Tielsch. “Blindness and Visual Impairment: A Public Health Issue for the Future as Well as Today.” Archives of Ophthalmology 122 (April 2004): 451–52. [doi: 10.1001/archopht.122.4.451]

    Hagedorn, Kat, Suzanne Chapman, and David Newman. “Enhancing Search and Browse Using Automated Clustering of Subject Metadata.” D-Lib Magazine 13, no. 7/8 (July/August 2007). [doi: 10.1045/july2007-hagedorn]

    Kelly, Kevin. “Scan This Book!” The New York Times Magazine, May 14, 2006 (Section 6), pp. 42ff.; available at http://www.nytimes.com/2006/05/14/magazine/14publishing.html?_r=1&oref=slogin

    Noam, Eli M. “Electronics and the Dim Future of the University.” Science 270 (October 13, 1995): 247–49. [doi: 10.1126/science.270.5234.247]

    O’Suilleabhain, Padraig E. “Parkinson Disease with Severe Tremor but Otherwise Mild Deterioration.” Archives of Neurology 63, no. 3 (Mar 2006): 321–22. [doi: 10.1001/archneur.63.3.321]

    Ranganathan, S.R. Elements of Library Classification. Bombay: Asia Publishing House, 1962.

    Rayward, W. Boyd. “The Origins of Information Science and the International Institute of Bibliography / International Federation for Information and Documentation (FID).” First published in Journal of the American Society for Information Science 48 (April 1997): 289–300. [doi: 10.1002/(SICI)1097-4571(199704)48:4<289::AID-ASI2>3.0.CO;2-S]

    Russell, Bertrand. The Problems of Philosophy. New York: Dover, 1999 [first published 1912].

    Ryle, G. Dilemmas. London: Cambridge University Press, 1954.

    Satya-Murti, Saty. “New Media Reference Manager.” JAMA 284, no. 12 (Sept. 27, 2000): 1581–82. [doi: 10.1001/jama.284.12.1581-a]

    Shirky, Clay. “Folksonomies + Controlled Vocabularies,” posted on Many 2 Many: a group weblog on social software (Jan. 7, 2005); available at http://many.corante.com/archives/2005/01/07/folksonomies_controlled_vocabularies.php.

    Silberg, Bill, James Testa, George D. Lundberg, Stevan Harnad, and Jennifer Ann Hutt. “E-Journals: Still the Next Wave or Washed Up?” Science Editor 26, no. 5 (Sept.–Oct 2003): 154; available at http://www.councilscienceeditors.org/members/securedDocuments/v26n5p154.pdf

    Weinberg, Bella Hass. “Complexity in Indexing Systems—Abandonment and Failure: Implications for Organizing the Internet,” ASIS 1996 Conference Proceedings; available at http://www.asis.org/annual-96/ElectronicProceedings/weinberg.html.

    Wright, Alex. “Forgotten Forefather: Paul Otlet.” Boxes and Arrows, November 10, 2003; available at http://www.boxesandarrows.com/view/forgotten_forefather_paul_otlet.