Let me begin with a quotation from Jeremy Campbell's book on information theory, Grammatical Man: "To the powerful theories of chemistry and physics must be added a late arrival: a theory of information. Nature must be interpreted as matter, energy, and information."[1] If you interpret nature as matter and energy, you create the industrial society within which we are accustomed to think that we live. If you interpret nature as information, you create a very different society—the information society in which, many authoritative voices tell us, we now live. The change from goods to information goes deep, from physical objects formed by manipulating the earth's crust to electro-chemical events in the human brain; from stuff to what we think about stuff; from, as Epictetus put it in a proverb Laurence Sterne chose as the epigraph for Tristram Shandy, pragmata to dogmata. This is, as Sterne knew, a fundamental jump.

According to economists, practical and theoretical, with whose judgment I would not presume to differ, we have already made this jump. Peter Drucker, for example, has written recently:

The basic economic resource—"the means of production," to use the economist's term—is no longer capital, nor natural resources (the economist's "land"), nor "labor." It is and will be knowledge. The central wealth-creating activities will be neither the allocation of capital to productive uses, nor "labor"—the two poles of nineteenth- and twentieth-century economic theory, whether classical, Marxist, Keynesian, or neo-classical. Value is now created by "productivity" and "innovation," both applications of knowledge to work.[2]

Knowledge, Drucker goes on to argue, used to be a private good, but "Almost overnight it became a public good" (p. 19).

In fact, knowledge is the only meaningful resource today. The traditional "factors of production"—land (i.e., natural resources), labor, and capital—have not disappeared, but they have become secondary. They can be obtained, and obtained easily, provided there is knowledge. And knowledge in this new sense means knowledge as a utility, knowledge as the means to obtain social and economic results. (p. 42)

If this change has occurred, obviously adjustments will be required in several disciplines besides economics. Such adjustments do not seem to be a hot topic in any discipline with which I am acquainted, but economic practitioners are feeling the heat. Let me quote one of these, Walter Wriston, the former Chairman of Citibank.

The world desperately needs a model of economics of information that will schematize its forms and functions. But even without such a model one thing will be clear: When the world's most precious resource is immaterial, the economic doctrines, social structures, and political systems that evolved in a world devoted to the service of matter become rapidly ill suited to cope with the new situation. The rules and customs, skills and talents, necessary to uncover, capture, produce, preserve, and exploit information are now mankind's most important rules, customs, skills, and talents.[3]

Wriston, at least, sees that the changes implied by the movement from goods to information extend far beyond his own bailiwick. The very ways we think about economics will change; how we think about self and society will change; how politics grows out of an economic base will change. Not least, our educational structures and practices will change, and how these practices relate to the world of work will change even more.

What does it mean to say that information is the new capital? What would be the economics of an information economy? Economics, in the classic definition, is the "study of how human beings allocate scarce resources to produce various commodities and how those commodities are distributed for consumption among the people in society." (Columbia Encyclopedia) In an information economy, what's the scarce resource? Information, obviously.

But information doesn't seem to be in short supply. Just the opposite. We're drowning in it. Coping with the "short supply" has been compared with trying to drink out of a fire hose. This profusion, though much amplified by digital electronics, has not been created by it. From 1980 to 1990, the world production of books, good old-fashioned codex books, increased by 45%. The futurists tell us that we face a document explosion as threatening as the population explosion. What then is the new scarcity?

It can only be the human attention needed to make sense of information. This need has, in fact, been acknowledged in the current discussion, but only in a tacit terminological fashion. Everyone discussing the information society hastens to distinguish between raw data and a more eulogistic term—"true information" or "knowledge" or, even sometimes, "wisdom"—which describes the really valuable item. But the refinery that converts "data" into "information" is human attention.

An economics of attention differs from an economics of goods. Herbert A. Simon, one of the few economists to address this issue, argued that excesses of information-flow can be governed by the suitable Artificial Intelligence agents sure to be invented in the future.[4] Surely the change involved runs deeper than this. Both the economics of attention and its economists will be found in very different places from those where we are accustomed to think economics dwells. We must license ourselves, in fact, to look anywhere for this new science.[5]

The earliest, and the most long-lasting, attempt to study the allocation of human attention is the doctrine of classical rhetoric. This body of thinking was formulated in Greece half a millennium and more before Christ. It began informing Western culture at the juncture when Greece was moving from orality to written language, and its initial formulation profited much from the struggle between these two modes of imaginative life. Western culture has never left its oral ingredient behind; indeed, one might argue that until the age of the rotary press, it had never become predominantly literate, based on written text. Rhetorical theory has thus always been able to cover the whole range of Western expression, both oral and written.

Rhetoric argued that, since attention constituted the central social power, the orator who allocated it must be the central figure in the polity, and his training the fundamental training for civic life. This theory emerged from a city-state political democracy, but proved that it could prevail even in the larger and less democratic Rome of Cicero, in whom this view of the orator as central statesman reached its full theoretical—and for Cicero at least, physical—embodiment. This central argument depended on a shortage of attention in the electorate, whether broad-based or narrow. Wisdom, or at least information, seemed to abound; political power lay with the statesman who could, by focusing it, galvanize action.

We might even argue that rhetorical theory included, like modern economics, a micro and a macro version. Microeconomics is that part of the science which has to do with a private household, the part that derives directly from the Greek root of the word "economics." Macroeconomics is, according again to an encyclopedia definition, "the study of the behavior of economic aggregates." I extract from this marvel of tautology the conclusion, confirmed by other standard formulations, that "macro" means economics on the level of the state rather than the private household. Rhetoric, too, has always fallen into two parts, its larger argument concerning how to persuade people in the aggregate, and its smaller the analysis of persuasion on the sentence level, the doctrine of rhetorical figures.

Rhetoric as an economics of attention has endured with surprising consistency (or, as I have argued elsewhere, with a consistent lack of consistency) to the present day. It thrives in its classical form; it has been periodically reembodied in various schools of literary criticism and theory, most notably in modern times the "New Criticism" and after that the body of thinking called "Literary Theory." It continues as an active, as against an academic, doctrine in the omnipresent world of advertising and marketing.

If, then, we find ourselves in an information economy which is really an economy of attention, we already know something about such an economy, and have known for a very long time. This is not to say that human attention spans have not varied in this very long time. Obviously, human attention does not just span twenty-four hours minus the minimum needed for sleep. It can be, and always has been, compressed and expanded by all kinds of algorithms—compression of this sort being, for example, what rhetorical figuration is all about.

Classical economics posits a world of single private selves making rational decisions about what to buy and sell based on individual self-interest. These individual decisions, by a spontaneous self-adjusting "invisible hand" (a phrase that, I've always thought, might bear a little deconstructing) establish a market price which averages all those buy and sell decisions. A rhetorical view of life, and hence an economics of attention, inverts this view both as to self and to society. In the rhetorical world view, homo sapiens sapiens is an actor, possesses a dramatic social self as well as a central interior one, and lives in a society predominantly dramatic. And self-consciously so. Rhetorical training means training in how to play a role, how to read a line, and how to look at the behavior of others as roles and lines and to critique their playing as a regular part of human experience. This dramatic interaction is a much more complex affair than the marketplace's monad doing its buying and selling. It is predominantly "irrational," as we like to say, which means that it is concerned far more with dogmata than with pragmata, with what we think about things, than it is with things themselves. Economists have from time to time suggested that their discipline should try to account for irrational behavior but the idea seems not to have caught on, perhaps because it makes the argument less algebraic.

The dramatic conception of self and society upon which rhetoric's economics of attention builds has informed most of Western literature and art—and perhaps some non-Western art as well. In such an economics the world of art, which is to say the world of "play," infiltrates the world of "work," which is to say the economics of goods. The arts and letters move from the periphery to the center.

Traditional economics brings with it a theory of communication, which is to say a theory of style, taken over from the Newtonian physics whose quantitative certainties economics has tried to reproduce. I have described, not to say caricatured, this theory of communication in my composition textbooks as the Clarity-Brevity-Sincerity or "C-B-S" view of style. It runs this way: "I have a message to give you. I package it in words and send it through the market mail and you receive it, unpackage it and absorb the meaning. No messy feelings involved. No hidden purposes. Sender never has designs on Receiver. Any such designs, any such 'rhetoric,' form no part of right-thinking communication." Such a view emerges not only from the view of language imposed by seventeenth-century science but also from the marketplace itself as conceived in classical economics. That this view omits much of what human communication is all about, as well as the problematic substrate in which it occurs, forms the substance of modern thinking about language. But, as the furor about "deconstruction" proves, it does not form the substance of the popular view of language (or indeed the view of many learned disciplines, economics included). That view remains pure "C-B-S." And, if we really do find ourselves in an economics of attention, it is precisely that popular view which will have to change. In an economics of attention, such a simplification of human communication is much worse than wrong—it is inefficient. It leads directly to gross losses of productivity, gross errors in business judgment.

Rhetorical training has always honed the orator's awareness of how language and gesture mediate every human communication. Such a paideia renders human communication irrevocably self-conscious. You become, when you have internalized it, always aware not only of what you are saying but of how you are saying it, and of how that "how" changes the "what." Modern professional discourse resists this stylistic self-awareness, and nowhere more so than in economics.[6] For to admit an inevitable rhetorical dimension in all human communication admits too a different kind of self, society, and range of human motive. Thus we may predict that the movement from one kind of economics to another will be fiercely resisted on stylistic grounds even as it may be accepted on other grounds which seem more "substantive."

Self-consciousness about expressive media has become an especially pertinent problem in digital electronic expression. Digital information uses the same code to store information about images, sounds, and alpha/numeric notation. It can convert the one into the other in the software, making music from pictures or writing, and pictures from music and so on. (This convertibility, which stands at the heart of the digital revolution, changes absolutely the nature of the arts and their institutional practices, but that is not my present subject.) Digital convertibility makes stylistic self-consciousness an unavoidable issue. Information in the digital world is being expressed increasingly in images—"scientific visualization" is the name usually given to the field. Inquiry which used to be carried on in alpha/numeric notation is now being done visually. In the last two or three years, data sonification has begun to do the same thing for sound. In both these cases, in all such cases, the author must decide how the digital data are to be expressed. There is nothing inevitable about the expression—we cannot resort to one-word-for-one-thing simplifications. Choices about expressive mode must be made and they will alter the "truth" of the thing expressed. This is not a fresh French puzzle from the Comparative Literature department. This is a report from the hard science front, whose soldiers also must wrestle with the demon of expressive self-consciousness. In such a world, the C-B-S theory of expression no longer works. The economics of attention will not allow it in science any more than it does in politics. We require, for an economics of attention, a new theory of communication, a new theory of style. An education in this theory thus becomes a vital ingredient in the world of work. Not an ornament but a vital center.
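(For readers who want the convertibility made concrete, here is a minimal sketch in Python—every name in it is my own invention, not anything from the essay's sources. The same nine bytes are read three ways: as alpha/numeric notation, as grayscale pixel values, and as audio sample amplitudes. Nothing in the data dictates which expression is "the" right one; the author must choose.)

```python
# A minimal sketch of digital convertibility: one sequence of bytes,
# three expressive modes. All variable names are illustrative.
data = bytes("attention", "ascii")  # nine bytes

# 1. As alpha/numeric notation: decode the bytes as text.
as_text = data.decode("ascii")

# 2. As an image: treat each byte as a grayscale pixel value (0-255),
#    arranged here as a 3x3 grid.
as_pixels = [list(data[i:i + 3]) for i in range(0, 9, 3)]

# 3. As sound: map each byte to a sample amplitude in [-1.0, 1.0],
#    the normalization an audio library would expect.
as_samples = [(b - 128) / 128.0 for b in data]

print(as_text)     # the word itself
print(as_pixels)   # a tiny grayscale "picture" of the word
print(as_samples)  # a nine-sample "sound" of the word
```

The point the sketch makes is the essay's own: the grid shape, the normalization, the very decision to render the bytes as picture rather than sound are all expressive choices, and each alters the "truth" of the thing expressed.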

Let me try to visualize my own argument by an analogy from the visual arts. When Marcel Duchamp started picking banal objects out of daily life and declaring them "art," he insisted that he was not choosing them for their intrinsic beauty. There was nothing specially beautiful about a bicycle wheel mounted upside down on a front fork stuck through the top of a high stool. There was, times two, nothing beautiful about a urinal submitted to a major art show. He was clever enough not to dwell on exactly what the point was of doing these "readymades," as he called them, but the point was clear enough. "Art" lay not in a rare object but in the attention bestowed on that object. The real "object" in such a work was the attention not the object. To see that attention, to make us self-conscious about the spectacles through which we are currently looking, required a shock treatment that we call "outrageous" but that was just the opposite—didactic. The outrage all centered on the object—how dare he humiliate "art" by comparing it to a pisspot?—but the object was beside the point. It was the attention that he wanted to, well, to pay attention to. Don't we see here something like the movement from an economics of goods to an economics of attention? The same dematerialization? The same radical self-consciousness? The same shift from stuff to what we think about stuff? And, perhaps, the beginning of a change in patterns of motivation as well?

For Duchamp knew he would cause a tempest in a peepot and he obviously enjoyed this—as he repeatedly insisted with mock-seriousness—unintended side-effect. He was insisting, by orchestrating a furor, that the attention brought to art had lost its motivational balance. It had become too breathless in its adoration, too centered in the perfected object; he suggested a new kind of "seriousness," one mixed with game and play. Was there not a kind of game lurking in the Armory Show, that salon des refusés, a selfish competition for triumph in the selfless worship of pure form? How better to point this out than by submitting the plumbing apparatus in question?

Once we think of attention as a possible subject of art, a number of famous works suggest themselves; to find them, just do a hypertextual search for the word "outrage." Think, for example, of Cage's famous composition, 4'33". Cage walked into the auditorium, sat down at the piano, made himself comfortable there in the usual way, then closed the keyboard cover, gave it a sharp rap with his knuckles, and sat there for 4'33", after which he rose, bowed, and left the room. Outrageous! But not if he wanted to "make music of," that is, render us self-conscious about, the attention flowing from the audience to the performer. He intended, that is, to reverse the usual flow of attention in a concert hall and thus make us aware of the attention itself. Concert halls do have attention-structures, after all. One doesn't always sit breathless with adoration. One notices that the performer is sexy perhaps, or that the tails of the percussionist's dress coat wiggle back and forth in a droll way as he moves from one drum to another, or that the conductor has obviously been watching an old film of Toscanini. But we are ashamed of these "economic" decisions and so pretend they haven't occurred. We want to preserve an unselfconscious, "serious" decorum.

Maybe that is why artists like Andy Warhol who work primarily in attention-structures continue to irk the popular imagination. When he iterated the soup cans, and the Marilyns, he was painting iteration itself, an attention-structure which the digital computer has brought center stage. When he made his non-eventful films, he was, like Cage, calling attention to a temporal attention-structure. He was acutely aware that the new economics of attention changed both self and society. So when he met Judy Garland he recognized her essential genius without a prompter:

To meet a person like Judy whose real was so unreal was a thrilling thing. She could turn everything on and off in a second; she was the greatest actress you could imagine every minute of her life.[7]

She existed, that is, in an economics of attention.

Andy Warhol was forever pointing out that the object has been displaced by the attention devoted to it. "But then," he remarked, of Henry Geldzahler's Metropolitan show, "we weren't just at the art exhibit—we were the art exhibit, we were the art incarnate. . . ." (p. 133) "Fashion wasn't what you wore someplace anymore; it was the whole reason for going. The event itself was optional." (p. 187) A museum devoted to Warhol's work has been opened in Pittsburgh, sparking a debate over whether he is an artist of sufficient caliber to have a whole museum to himself. Such outrage comes, finally, from an industrial economics; that guy hasn't left enough good stuff behind to merit a museum!

Actually, the fuss does hit the target. It probably is misguided to create a Warhol museum, a building with objects in it, for it is very hard to museum-ize an art made up of attention-structures. This difficulty manifests itself every time an artist who works primarily in attention-structures—Robert Irwin, for example—has a museum show. Irwin's work is all about seeing, about slowing the sensation of seeing down into a very different time scale. In this respect, Irwin does for human vision what Robert Wilson's drama does for human gesture—slows it way, way down so we can ponder it. Irwin describes days spent in his studio just looking at canvas, at color, at white, seeing his vision change over time. He finally came to create architectural rooms made of very thin translucent cloth, scrim, which manipulate light so that one comes to see it as volume. You really cannot photograph such work, since it is created by attention, not instantiated in matter. And you cannot successfully exhibit it in a museum, since museum-going happens on too fast a time-scale. His work is about the need for an exquisite self-consciousness about attention. I dwell upon these difficulties because they all recur when you ponder an economics of attention. Attention-structures occur everywhere in contemporary art: Art about Art; Happenings; a fortiori in Conceptual Art. And everywhere the "outrageousness" dissolves into didacticism (sometimes repetitive and pedestrian didacticism) once you see the primary displacement. This displacement is big, perhaps absolute, a fundamental reversal of figure and ground.

Let me now modulate from the artist as attention-economist to an anthropologist playing the same role. M. R. A. Chance has described two basic patterns of primate attention.[8] In an arboreal environment, where an animal cannot see others in the troop, it maintains its social reality by listening to vocalizations and returning them, and then by sharing in mutual display behaviors when the group reunites. This is called "hedonic cohesion." Animals like baboons and macaques, who live largely in open country, preserve group cohesion by gazing toward the center at the alpha-male. Such an attention-structure Chance calls "agonistic cohesion." This centripetal gaze is so deeply satisfying that peripheral males engaged in it will sometimes forget to eat. What seems to have happened, in our present global information system, is that these two attention-structures have been wired into oscillation. They potentiate, create a positive-feedback that keeps multiplying itself. We adore looking at the center, whether toward movie stars, politicians, football stars, or whatever. We love it so much we forget to eat. At the same time, we keep in touch by talking to each other and engaging in fashionable group displays of one kind or the other. The two basic attention-structures in our evolutionary equipment are amplifying each other.

This potentiation seems built into our current economics of attention. Charles and Diana, however o'erparted for it, have become actors in a real tragedy. The people at the center of a primate attention-structure like ours in the homo sapiens family are all actors. The attention makes them so. It really doesn't matter whether they are professional actors, or professional politicians, or professional athletes, or a real prince and princess. It is pointless to object that they are not "real" actors. Centripetal force has made them actors. They inhabit, professionally or not, the world of game and play. Our worship of the center, the terrible satisfaction we feel in gazing at it, will always make its objects into actors. The centripetal gaze renders them so, as Andy Warhol was so quick to see. Our desire to obliterate the difference between public self and private self will always be doomed to failure. An honest man may remain an honest man after he is elected to Congress, but he will not remain a private self. That is structurally impossible. That is not how human attention works. There is, then, an economics of attention built into our primate biogrammar, one that seems to oscillate between a "serious" relationship with one another and a dramatic relationship with a central drama.

Let's look now at yet another kind of attention-economist. "In the nineteenth century," said Johan Huizinga, the Rector of Leiden and the great cultural historian of game and play, "all Europe donned the boiler suit." He meant by this that industrialism had brought with it a different motivational structure, a different kind of "seriousness," than that which he had so acutely dissected in The Waning of the Middle Ages. The play impulse had deserted the world of work, leaving there only sober purpose. In an information economy, the world of work takes off the boiler suit again, returns to the late medieval world Huizinga sought to describe. An economics of attention brings with it, that is to say, a different motivational mix of play and purpose, and, finally, a different kind of seriousness, the kind Huizinga argued was characteristic of late medievalism.

Let me explain this motivational distinction a little further. Imagine three reasons for buying a car. You buy it to get to work: that is Purpose. You buy it to make a statement about your hierarchical relationship to your neighbors: that is Game. You buy it because you like to drive: that is Play. Put Purpose in the middle of a spectrum, Game and Play at the two ends. In an industrial economy, the central motive is Purpose and the other two, though strong, are derivative. The force flows outward from the center. In an attention economy, the flow is reversed. Game and Play become the driving forces, Purpose the derivative one. A new "seriousness" is established. There are, in human life, only two kinds of seriousness, one always the figure, the other the ground. To move from stuff to what we think about stuff reverses this figure and ground.

What about the market in an attention economy? How can we compare it with the market of a goods economy? They seem to lead in opposite directions. The central political polarization of our time has formed on the axis of the goods market. Should we regulate it (the Left) or leave it alone (the Right)? Plan it from the top down (the Left) or let it emerge from the bottom up (the Right)? This polarity reverses in the marketplace of attention. No one, at least in a democracy, argues for top-down attention-planning, that is, for thought control. We all favor the, as we call it, "free market of ideas." All of us who pursue research careers maintain that the invisible hand creates wisdom in the marketplace of attention. Let human curiosity find its own paths. The political polarity reverses here, the Left defending the free market, the First Amendment more vociferously than the Right. Is the "market" only a metaphor in the economics of attention? Ideas, after all, are like computer programs; you can give them away and still have them. Copying is how thinking proceeds. If the market thinking of traditional economics carries over to a market of attention, how does it do so?

The invisible hand, in the marketplace of attention, works through a central mechanism which is very visible indeed: specialization. All of us are products of this mechanism. It is how the world of work has always coped with the shortage of human attention. From this shortage, all the learned disciplines devolve. But is the invisible hand now working in another way? Professional specialization is the ultimate state of linear thinking. We all feel by now, though, that hypertextual patterns of thought are spontaneously emerging, and not only in the digital domain. Might we be witnessing, in this and other places, an alternative to specialization as a means of allocating attention, a different hand in the marketplace?

One more economist of attention. In a recent book, Jane Healy considers the manifestations, popular and clinical, of what has come to be known as ADD or Attention Deficit Disorder.[9] She started with the observation that her students simply could not pay attention to their work in the ways in which students used to. She asked other teachers about this change and found that she was not alone. She then read, in a methodical way, the literature in neuroscience that deals with this matter. The proposition she entertains comes down to this: the modern overload of information, especially the sensory overload of electronic media, and the fragmentation of family life have collaborated to wire the brains of her students in a different way. The basic neural wiring needed for higher conceptual processing simply isn't there. If this is so, it means that a genuine tragedy lurks at the base of the digital revolution, and of the economy of attention which it has created. Digital display may create a deficiency in the vital ingredient, the "capital," of an economy of attention—attention itself. If so, the hypertextual universe of thought may have generated the seeds of its own destruction.

Let me suggest another economist of attention, the copyright lawyer. Copyright law was invented to establish and regulate a market in books. It has worked so well that we have been scarcely aware of dwelling beneath its umbrella. It has worked less well for image properties, both still and moving, but still well enough to do what copyright is supposed to do, enable creative careers by sustaining a market. How does it work when intellectual properties are defined in terms of attention rather than physical artistic substance?

Copyright law organizes itself around the difference between Idea and Expression. An idea is what we might call a generic pattern: boy meets girl; son hates father; the children of feuding families fall in love. An idea is not protectible, only the individual expression or instantiation of that idea. Thus the general body of the human imagination remains in the public domain, but not individual works of art. One expression infringes on another only when there is, to use the term of art, "substantial similarity" between the two.

But in the digital universe the same digital code can create many different expressions. It can express itself as an image, a sound, as alpha/numeric information of some sort. It enshrines a different sort of Idea/Expression relationship from the world of printed expression. Substantial similarity requires some substance, and in a digital universe there isn't any. There is only a code which may generate potential attention-structures. The substance, the stuff of actual expressions, is the derivative entity, not the referential one. Art has, in yet another way, become attention rather than object.

Will the move from substance to attention mean that the law of real property and the law of intellectual property undergo a background/foreground shift like the one we have just explored for human motivation? Right now, our basic ideas of ownership all come from a world of stuff, ground, objects, crops, cars. The law of intellectual property is a special case, one we try to align, as best we may, with the law of stuff. Stuff is referential; attention derivative. Is this ratio reversing? What will happen if it does? Where would one go for guidance in such a matter?

I started this essay with a conventional definition of economics. Let me end it with an unconventional one. Donald McCloskey, the economist whom I noted earlier, defined his discipline in literary terms: he called it "an allegory of self-interest." It tells a tale whose insistent moral is the interest of the individual. Such a definition works a lot better than the conventional one in an economics of attention. For "self-interest" can be "maximized," as an economist would say, in all kinds of irrational ways. It confronts an imaginative calculus with an imaginative terminology. I cite it because I think it represents a terminological revolution that will have to occur before we can learn to think with any real clarity about the new economics I've been trying to sketch out. Terminology is vital here, not secondary.

I've suggested a few places where economists of attention might be found—formal rhetoric, twentieth-century art, behavioral biology, intellectual property law, the professional language of traditional economics. I could have chosen others—aircraft cockpit design, theme park design, digital multimedia design, computational evolutionary biology. But even this preliminary exploration has suggested that some profound changes may emerge from an economics of attention: a different conception of self, of society, of communication, of human motive itself.

Yet we will not be leading disembodied lives any time soon. We'll still be stuff and need stuff and like stuff. We confront, that is, not an irrevocable transformation but a need for both the old and the new economics, one vocabulary for stuff and another for attention. We need a kind of poise which makes us easy in entertaining both worlds and moving from the one to the other. We need a calculus which includes both and transcends both. That calculus seems to me the great task for the humanities in an information society. And if it threatens to make economists of us, we'll just have to accept that as part of a centrality which we have claimed since the Renaissance and which now indeed has come upon us.

NOTES

    1. Jeremy Campbell, Grammatical Man: Information, Entropy, Language, and Life (New York: Simon & Schuster, 1982), 16.

    2. Peter Drucker, Post-Capitalist Society (New York: HarperBusiness, 1993), 8.

    3. Walter Wriston, The Twilight of Sovereignty: How the Information Revolution Is Transforming Our World (New York: Scribner, 1992), 19-20.

    4. Herbert A. Simon, "Designing Organizations for an Information-Rich World," in Computers, Communications, and the Public Interest (Baltimore: Johns Hopkins University Press, 1971).

    5. Lawrence Berger has begun to chart this ground in "Self-Interpretation, Attention, and Language: Implications for Economics of Charles Taylor's Hermeneutics," in Economics and Hermeneutics, edited by Don Lavoie (London and New York: Routledge, 1990).

    6. The classic analysis of this is Donald N. McCloskey, The Rhetoric of Economics (Madison: University of Wisconsin Press, 1985).

    7. Andy Warhol and Pat Hackett, POPism: The Warhol '60s (New York: Harper & Row, 1983), 101.

    8. M. R. A. Chance, "Social Cohesion and the Structure of Attention," in Biosocial Anthropology, edited by Robin Fox (New York: John Wiley, 1975), 93-114.

    9. Jane M. Healy, Endangered Minds (New York: Simon & Schuster, 1990).