
    Part 2: Hyperlinks and the Business of Media


    Preface to Part 2

    Part 2 focuses on the ways media and marketing organizations use linking as they face new challenges in the digital environment. Martin Nisenholtz, senior vice president of digital operations for the New York Times Company, provides an account of the ways the New York Times has been reconfiguring itself to succeed in the new age. This meant recognizing the need to become comfortable with new ways to reach out to readers as well as with opening the paper’s vast archive to search engines.

    As at so many publishing businesses, most of the Times’s online revenue comes from advertising. The three essays that follow reflect the importance of marketers in shaping new media—and the role hyperlinks play in that. Tom Hespos argues forcefully that the advertising industry needs to come to terms with the drastic changes the Internet and the hyperlink have brought about in human communication processes. Hespos states that advertisers have lost the control they previously had in their interactions with customers, now that the Internet not only enables consumers to find competing information but also allows them to connect to the opinions of other consumers. Consumers now talk to each other and also talk back to the advertisers. Hespos sees this as an opportunity rather than a threat and stresses the need for advertisers to take advantage of the unique capabilities of new media.

    Stacey Lynn Schulman continues the topic with thoughts on how the advertising industry should start dealing with an environment where the consumer is increasingly at the center and in control. She says that consumers expect advertisers to know who they are and what they like, and while, in many cases, they are willing to give up privacy for personalization, it is crucial to know where to draw the line between what is acceptable and what is not. She points out that one of the opportunities afforded by the digital environment is the untapped potential of online communities for marketing research. The next logical step for marketers is to create their own online communities that connect the brand to customers in order to win their loyalty.


    Eric Picard, continuing the spirit of the essays by Hespos and Schulman, talks about the challenges advertisers face as a result of digital and network technology. His essay reflects on the changing economics of attention and describes the shift in advertisers’ buying from time slots to “impressions.” Whereas in the past, the advertiser simply had to select a time slot to find the right audience, Picard shows how technological advances have made it possible for the audience to “watch content whenever and wherever they like,” resulting in a fragmentation of the audience. He concludes that with the content world changing to one where the audience is in control, advertising strategies have to adjust and adapt accordingly.

    Marc Smith picks up where the previous four essays leave off and focuses on the innovative possibilities new technology holds for the ways people relate to one another as well as to media and marketers. Structuring his essay around a concept he calls the “hypertie,” Smith predicts how mobile devices with wireless technology that are increasingly aware of themselves and their location will lead to a drastic reconfiguration of our day-to-day interactions. Social ties will become more digitized, visible, and archived, and this in turn will allow us to interact in many ways hitherto impossible. Smith illustrates what is now possible with these new linking technologies. He describes name tags that automatically exchange data with other tags based on the common interests of the wearers. Also in testing are mobile devices fully equipped with a diverse range of sensors, including accelerometers, thermometers, cameras, and Bluetooth radios, to keep track of every movement of the user.

    The ability of Smith’s futuristic gizmos to follow people wherever they go certainly resonates with the goals that Hespos, Schulman, and Picard exhort marketers to pursue—as well as with the reasons they express caution regarding customer privacy. It’s less clear what implications such a linked world will have for the development of the New York Times Company and the gamut of other media firms that are trying to find their audiences across technologies and across time and space. Will highly particularistic knowledge of audiences change the creation of news and entertainment? Will media companies serve not just different ads to people but different editorial content as well? To what extent will people’s preference for customized material change the nature of their conversations with others—and so the shared discourses that may be crucial to a democratic society? The essays in part 2 lead us to examine these questions.


    The Hyperlinked News Organization

    The Way We Were

    In The Making of the President, 1972, Theodore H. White observed: “It is assumed that any telephone call made between nine and noon anywhere in the executive belt between Boston and Washington is made between two parties both of whom have already read the New York Times and are speaking from the same shared body of information.”[1] The news has always been used as a catalyst for shared ideas. Baby boomers remember the era of Walter Cronkite. Sitting around the dinner table, families watched CBS news during the Vietnam War, talking about and often arguing about the war as the “magic of television” brought battlefield images into their dining rooms. The telephone, the postal service, and even the fax machine enhanced these connections. During the 1980s, it was common for people on commuter trains to cut articles out of newspapers and then fax them around to colleagues. Perhaps they would later discuss these articles on the telephone.

    The patterns that defined these predigital interactions were hard to see, the trails left behind ephemeral. The news itself was “packaged” exclusively in analog format—whether in a nightly broadcast, a daily newspaper, or a weekly or monthly magazine. But the idea that these news packages were merely “one-way” delivery devices from top-down journalistic institutions to waiting masses is simplistic. Every day, specific articles were used for discussion fodder. And influential newspapers, particularly the New York Times, would be used by other media outlets to form the basis for their stories, amplifying and extending the journalism far beyond the printed page. This ecosystem created meaningful linkages through the technology of the day.


    The Dawn of Internet News

    It is not surprising, therefore, that the World Wide Web was initially used as a mere extension of existing behavior. News Web sites circa 1996 were mostly simple extensions of the printed product. The copy flowed out of the publishing systems into formatted pages to be made available to a growing number of Internet users. Message boards were established by most of these sites to encourage discussions about the articles. With few exceptions, the sites were published every twenty-four hours (perhaps augmented by wire copy throughout the day), and they were mostly “walled gardens,” or repurposed online versions of the print product.

    Beginning in 1996, the New York Times experimented with a section of its site called CyberTimes (see fig. 1). A young reporter named Lisa Napoli invented a column called Hyperwocky that wove many links throughout the articles she wrote. These were early forms of interactive journalism, nascent ways to begin weaving the broader Web into the fabric of an article. But for the most part, readers weren’t ready; networks were slow, and the real benefits of linking were abstract. The message boards were off in their own “ghettos,” where only the most fervent and involved readers posted messages. There was no integration between articles and the social tools that might bring those articles directly into a conversation.

    Of course, there were no easily usable syndication technologies to allow for multiple sources to exist in a single environment. Portals like Yahoo had developed highly useful aggregation services, but for the most part, the narrowband networks of the day favored shorter articles and breaking news. The wire services, particularly the AP and Reuters, became the cornerstone of the portals’ news services. The absence of syndication standards meant that business development deals and complex technology arrangements were often required to share content.

    All of this led to an assumption on the part of news providers that they were creating electronic versions of their analog products, that the main benefit of the Web was its distribution capability. This was a world without “side doors”—in other words, the typical usage pattern would show readers arriving at the home page (much as they would pick up a paper and look at the front page first or start viewing a broadcast at the “top” of the hour) and then proceeding to read the online newspaper or broadcast. At the most innovative sites, there would be frequent news updates, so that users would return to the home page throughout the day and click through to read the stories. The idea that some readers would come to article pages from somewhere else on the Internet was a function of marketing.

    Fig. 1. Early NYTimes.com home page showing CyberTimes feature

    These were the days of “anchor tenancies” on AOL, where Web sites would pay tens of millions of dollars for favored positions that would deliver users. Only the “voodoo economics” of the dot-com boom, when businesses were valued on “eyeballs” rather than profits, made these agreements workable. News organizations that were part of traditional media companies, including the Times, couldn’t play this game. In 1999, NYTimes.com had only 1.3 million monthly unique visitors on average. We were failing to harness the underlying fabric of the networked world. Our Web sites were created in HTML, but the world was not ready for the true power of hypertext.

    Questions and Answers

    The power of the Internet is its social fabric. By 1998, the Times recognized that it needed to find a bridge between news and community. This was during the first great era of community on the Internet, when sites like GeoCities and Tripod were being sold to portals for billions of dollars. But these community sites had nothing to do with journalism. By and large, they were like (mostly static) online vanity license plates.

    At the Times, we had conceptualized the idea of a “knowledge network” that would combine our journalism with what our users knew; in other words, we would attempt to “unlock” the knowledge inherent in our very literate user base and, where appropriate, to combine that knowledge with our journalism. This sounds vaguely like the description of a Web log, but blogs were still in the future. Instead, we thought of social utility as questions and answers. During this pre-Google era, our users were going to Web sites like AltaVista to find answers, but we thought that “human search” would grow rapidly as the network effects of “people helping people” kicked in. Given our vast archive of content, we planned to find ways for our users to supplement their answers with our journalism. So, for example, if someone in the network asked for advice on great restaurants in Paris, our users could supplement their own answer with a Paris restaurant article that was stored in our archive.

    In order to execute this plan, we acquired a sophisticated knowledge-management firm based in Cambridge, Massachusetts, called Abuzz. Abuzz had built an “adaptive routing” technology that “learned” from the link structure of human behavior in the system. People who would frequently answer questions about wine, for example, would be regarded as “expert” in this area. Behavior was complemented with ratings from other users. In the end, the technology would identify the handful of most knowledgeable users against almost any question from among millions of prospective answerers in the network.
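    The chapter does not detail Abuzz’s actual algorithm, so the sketch below is only a hypothetical reconstruction of the idea it describes: score each user per topic by combining how often they answer with the ratings their answers receive, then route a new question to the top-scoring users. The names, the 1-to-5 rating scale, and the scoring formula are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical "adaptive routing" sketch: answers on a topic build a
# user's expertise score, and peer ratings weight how much each
# answer counts.
class ExpertRouter:
    def __init__(self):
        self.scores = defaultdict(float)  # (user, topic) -> score

    def record_answer(self, user, topic, rating):
        # A rating of 3 is neutral; better-rated answers add more weight.
        self.scores[(user, topic)] += rating / 3.0

    def route(self, topic, k=3):
        # Return the k users whose behavior marks them as most "expert"
        # on this topic, best first.
        candidates = [(u, s) for (u, t), s in self.scores.items() if t == topic]
        candidates.sort(key=lambda pair: pair[1], reverse=True)
        return [u for u, _ in candidates[:k]]

router = ExpertRouter()
router.record_answer("alice", "wine", 5)
router.record_answer("alice", "wine", 4)
router.record_answer("bob", "wine", 2)
router.record_answer("carol", "paris", 5)
print(router.route("wine", k=2))  # ['alice', 'bob']
```

    A real system, as the chapter notes, would blend many more behavioral signals, but the core mechanism is the same: the link structure of who answers what, and how well, predicts expertise.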

    For the first time, a journalistic organization was looking at the Web as a network, rather than as a mere distribution mechanism to deliver its content. Articles or even parts of articles could be used in the context of a conversation—in this case, one that involved answering a stranger’s question on the Internet. The underlying link structure of a user’s behavior would create a hierarchy of expertise or a predictive approach to quality.

    Unfortunately, Abuzz and the Times’s knowledge network fell victim to poor timing. Advertisers at that time were skittish about buying community inventory, even though it was very topic specific. Several years later, Google’s AdSense product would solve this problem, but with the dot-com bust and deep advertising recession of 2001, the Abuzz experiment was terminated.

    The Rise of Aggregation

    While the Times and other newspaper companies were placing their content online, entrepreneurs were building sites that took advantage of both the distribution power of the Web’s open standard and the hyperlinked nature of the medium itself. The most important of these companies, founded by Jerry Yang and David Filo, was Yahoo. Yahoo’s core value contribution, like that of several of its competitors, was to structure the Web into discrete categories, serving not as a creator of content but as a directory pointing users to others’ content. The recognition that the Web was, first and foremost, a platform on which to share documents and even conversations goes back to its invention by Tim Berners-Lee. Berners-Lee, a scientist at CERN (the European Organization for Nuclear Research), built a generalized platform to share and discuss documents. Yahoo and its peer companies extended a part of this vision by creating a framework around which these documents could be organized on a global scale. Soon, the company extended this aggregation notion into news and ultimately became the world’s largest Web news service, without creating any content.

    Yahoo built this early position in news by understanding the essential character of the Web—that its value was as much in the aggregation and sharing of links as in the distribution of content. The explosion of choice and personalization, now harnessed by Yahoo and others, overwhelmed many mainstream news organizations. This soon led to the emergence of a new kind of Internet company—the portal—that offered a broad range of services under a single brand umbrella.

    The Dawn of Web 2.0

    Web 2.0 has become a buzzword without much meaning, but the folks who coined the term—Tim O’Reilly, John Battelle, and others—saw something fundamentally different emerging from the ashes of the dot-com bust. O’Reilly, in particular, has taken pains to create a substantive definition of the idea. His diagram in figure 2 depicts its many components.

    In his post “What Is Web 2.0?” O’Reilly writes, “Google is most certainly the standard bearer for Web 2.0.”[2] But as figure 2 shows, there are many broad aspects to the concept that have gotten lost, as Web 2.0 has been simplified in the popular press into a mere description of the postbust era or reduced to the concept of “social networking.” From the perspective of the hyperlinked society, the most interesting aspect of O’Reilly’s diagram is the user positioning, the idea that users now control their own data. This has many implications and relates to the hyperlinked news organization in profound ways. Notions of user participation, online identity, reputation, and the “granular addressability of content” have all begun to change users’ expectations of what a news service should be. In turn, new forms of content creation and aggregation have exploded, as the gradual de-portalization of the user experience pushes users from the head of the “long tail” out along its tail.

    Fig. 2. A meme map of Web 2.0 developed at a brainstorming session. (From http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html.)

    From One to Three Ways to Experience News

    Only ten years ago, there was essentially one way to experience news: from the editor’s perspective. This was true in newspapers, on television, and on radio. Editorial judgment came exclusively from professionals who had spent their careers in journalistic institutions, such as the Times.

    While some disagree, there is little evidence that the editor’s perspective isn’t both highly valued by readers and necessary in a complex democracy. But this perspective has now been joined by the Web 1.0 idea of aggregation and, more recently, by the Web 2.0 notion embodied by James Surowiecki in his book The Wisdom of Crowds. The idea that readers, by voting en masse on which stories and events are most interesting or important, are now becoming a kind of collective editorial force is embodied in Web 2.0 companies, from giants like Google to more recent entrants like Digg and Memeorandum.

    Fewer than 60 percent of the inbound links to NYTimes.com come from users who type the organization’s URL into a browser or who link from a bookmark. The rest come from the distributed Internet, and a third of those come from Google alone. Google has become a vast content distribution system, its PageRank algorithm using the underlying link structure of the Web to act as a massive editorial filter.

    In the case of Digg, users vote stories up or down, resulting in a highly dynamic, continuously updated stream of news content that readers can then share and comment on. Pages throughout the Web now include Digg tags (among others) that prompt users to submit stories. Digg evolved from the fabric of the Internet and the urge that users have to participate and interact. The fact that 75 percent of Digg users are male perhaps suggests something about its appeal, but it nonetheless is attracting millions of users every month.

    In the case of Memeorandum, the service draws news content—mostly on politics—from around the Internet and associates this content with discussions taking place in the blogosphere. Whereas Digg is a kind of news voting system, Memeorandum is a huge, distributed authority engine, drawing content from across the whole Web. It is a kind of antiwalled garden, as so many Web 2.0 applications seem to be, taking advantage of the openness of the Web and the underlying associations embedded in its link structure.

    Shamu Is Back!

    Traditional news organizations are also tapping into the “wisdom of crowds” by creating new ways for news content to surface on the home page. At the Times, the most popular of these new forms of authority is the “most e-mailed” list.

    In June 2006, the Times published a story by Amy Sutherland entitled “What Shamu Taught Me about a Happy Marriage.” In brief, the story is about a woman’s attempt to change unpleasant aspects of her husband’s behavior using animal training techniques. The story quickly shot to the top of the most e-mailed list and stayed there for weeks. A fun story buried somewhere in the vastness of the Times was now—based solely on the fact that thousands of readers were e-mailing it around—appearing on our home page day after day, attaining an afterlife that would have been impossible just a few short years ago.

    But the story doesn’t end there. On January 10, the Times published a list of most e-mailed articles for the year, and right at the top of the list was the Shamu piece. Yet again, it was catapulted into the most e-mailed list, given a new lease on life based on its popularity six months earlier. The day before the list was published, the article generated 511 page views. Two days after it was published, it generated 94,637 page views. That week, Shamu generated over 600,000 page views—a testimony to how alternate taxonomies and the “wisdom of crowds” can drive enormous interest in even the most obscure news story.

    Now, the Times is syndicating its most e-mailed list to other sites. Blogs can now use it as a “widget” to offer their readers a view into the Times’s most popular content. In this way, our journalism spreads across the Internet and around the world.

    The “granular addressability of content” referred to by O’Reilly in his Web 2.0 post is playing out across the Times Web site. The Times pioneered the use of RSS as a way to allow users to subscribe to just those topic feeds that are of greatest interest. The reader is now using the distributed nature of the Web to assemble a personal news experience, linking to sources from across the Internet.
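    The topic-feed idea can be made concrete with a small sketch. The feed below is a hand-written RSS 2.0 fragment, not an actual Times feed, and the URLs and headlines are invented; parsing out the item titles shows the “granular” pieces a reader assembles into a personal news experience instead of consuming the whole front page.

```python
import xml.etree.ElementTree as ET

# A hand-written RSS 2.0 fragment standing in for a topic feed;
# the items and links are made up for illustration.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Hypothetical Topic Feed: Paris</title>
    <item>
      <title>A Paris Restaurant Revisited</title>
      <link>https://example.com/paris-restaurant</link>
    </item>
    <item>
      <title>Walking the Left Bank</title>
      <link>https://example.com/left-bank</link>
    </item>
  </channel>
</rss>"""

def headlines(feed_xml):
    # Pull each <item>'s <title>: the individually addressable units
    # a reader subscribes to.
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(headlines(FEED))  # ['A Paris Restaurant Revisited', 'Walking the Left Bank']
```

    A feed reader does little more than this at scale: it fetches many such documents from many sources and merges the items into one personal stream.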

    “Hi, I’m Art Buchwald, and I just died.”

    O’Reilly’s diagram notes that blogs are forms of participation, not publishing. The best blogs draw links from around the Web to create a rich stew of commentary, reader participation, and conversation. Blogs are the most successful new medium since the video game, with over 50 million people now publishing.

    Blogs are often put at odds with traditional media. How often do we hear, “When will blogs replace newspapers?” Actually, blogs and news sites are highly complementary. In fact, according to Technorati, the Times was the most blogged source in the world last year. This means that bloggers were using our content as the fodder for conversation. Millions of readers come to the Times through blogs, offering us an immense source of new distribution. This is why we were early pioneers in RSS—because by allowing users from across the Web to remix our content for their own needs, we actually enlarge the audience for the Times by a very large margin. This is the inherent paradox of the hyperlinked news organization—it can get much larger and have more impact through its disintegration. Part of the reason the Times now has the largest newspaper Web site by a significant margin is our early recognition of this phenomenon.

    The phenomenon is illustrated perfectly by happenings following our recent introduction of The Last Word—a series of video obituaries that are created with the subjects while they are still alive. The first of these was on Art Buchwald. He begins the interview by saying, “Hi, I’m Art Buchwald, and I just died.”

    Two days after it launched, a reference to The Last Word could be found on Fred Wilson’s blog. (Fred is a well-known venture capitalist who blogs on a diverse array of topics under the heading “Musings of a VC in New York.”) On his blog, the headline “The Last Word” was accompanied by a description of the Buchwald obit. But Fred didn’t find out about the obit at the Times. In the body of his description, he wrote, “Found this on Fred Graver’s blog.” I had never heard of Graver’s blog, so I went there, and, indeed, Mr. Graver wrote, “The NY Times has done something wonderful,” and he offered a full description of The Last Word and a link to the Buchwald obit.

    But guess what? Graver didn’t seem to find the obit on the Times, either. He pointed to another blog, PaidContent.org, as the source and offered a link to that site. PaidContent.org is a well-known site covering the world of new media. Staci Kramer, a writer for the site, described the Buchwald video this way: “It’s an excellent example of what newspapers can do to translate their print personalities into an online blend of words, video, audio, stills and links.” Did Staci find the video on the Times site? Apparently not, because she pointed to Romenesko, a fellow who’s been writing on newspapers on the Internet for many years—a blogger before blogging.

    The point is that Web content is part of a huge, swirling “conversation” taking place across the Internet twenty-four hours a day, seven days a week, in every corner of the earth. The Art Buchwald obit was enormously enlarged by being a part of that conversation. It was found and linked to from one writer to another. Surely, many people discovered it on the Times site as well, but over time, far more will have found it through the link structure of the Web itself.

    The Iceberg

    As O’Reilly points out, no company better embodies the principles of Web 2.0 than Google. The very nature of its PageRank algorithm is to use the “wisdom of crowds”—the underlying link structure of the Web—as a kind of mathematical voting machine for which documents are of the highest relevancy and quality given a particular search. In his book The Search, John Battelle refers to this underlying database as the “database of intentions,” because the searches that people execute across the planet tell us everything from how “university students in China get their news” to where “suburban moms get their answers about cancer.”
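    The voting-machine metaphor can be illustrated with a few lines of code. This is a simplified textbook-style sketch of the PageRank idea, not Google’s actual implementation; the four-page link graph, the damping factor of 0.85, and the fixed iteration count are assumptions made for the example.

```python
# A minimal PageRank sketch: each page splits its score among the
# pages it links to, plus a small "random jump" share, iterated
# until the scores settle. A link acts as a vote.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # A page with no outlinks spreads its score evenly.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical four-page Web: every other page links to "nytimes.com",
# so the crowd's links rank it highest.
graph = {
    "nytimes.com": ["digg.com"],
    "digg.com": ["nytimes.com"],
    "blog-a": ["nytimes.com", "digg.com"],
    "blog-b": ["nytimes.com", "blog-a"],
}
scores = pagerank(graph)
print(max(scores, key=scores.get))  # nytimes.com
```

    The point of the sketch is the one the chapter makes in prose: no human edits this ranking; the aggregate link structure itself does the voting.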

    In a world where millions of people are searching for news and information every day, it has become critical for news organizations to be found. It is notable that some news organizations around the world are seeking payment for these links. They view the indexing of their headlines and summaries as a violation of copyright. Moreover, they argue, the search firms, by aggregating and structuring the Web, gain all the economic advantage, while the content providers, without whom the search firms would offer no value, are marginalized.

    This is an understandable reaction, but the likely result of this protest will be to further diminish the impact that these news organizations have on their readers. As searching becomes the primary interface on the Web, it is more important than ever for news content to appear in the results. In part, this has motivated the Times to create Times Topics, a new area of the site that exposes all of the Times’s vast archive of content to search.

    In many ways, the Times Web site can be viewed as an iceberg. The exposed content—the daily Times and associated Web features—is the tiny tip of something much larger. Most of the Times content is actually under the surface, buried in the archive. It is a vast storehouse of articles that goes back before the Civil War, to 1851. By dividing this content into tens of thousands of topical categories, from Mao to Madonna, and by exposing all of these topics to search engines, content that is rarely or never used by ordinary readers is revitalized and brought to the surface. Moreover, Times Topics is being developed as an open database, with quality controls built into the editorial process. This means that content will come not just from the Times but from other sources throughout the Web and from the community of users who have interest in the topics.

    The Challenge

    While the increasing disintegration of the packaged news experience brings millions of new users into the hyperlinked news organization, the problem is that many of these users are so ephemeral as to be of no practical economic benefit to the provider. Traffic from sites such as Digg can be of very low quality, as measured by user involvement. SiteLogic has done an analysis of inbound links from different kinds of Web referrals. Their analysis is striking.

    Fig. 3. Time on site and page views

    Figure 3, showing inbound links to a single blog, suggests that while inbound links from all sources translate to relatively “light” reader engagement, those from Digg are of the lowest quality. Digg users spent only 3.6 seconds on this particular site and looked at one page. But even the inbound links from search referrals were of relatively little consequence. The principal challenge that news organizations have in this hyperlinked world is to convert these “flyby” users into more serious readers. As the SiteLogic analysis demonstrates, much of this traffic has almost no economic value to the news provider. As people increasingly turn to Web 2.0 mechanisms to find information and to communicate, news organizations must discover tactics to deepen engagement with these users.

    Fortunately, the strong brand of the Times is a magnet for almost 60 percent of our inbound users. And a deeper analysis of our search referrals suggests that many users get to the Times by searching for it. Of the top fifteen natural inbound search links in December 2006, twelve resulted from some version of the keywords “New York Times.” Two non-Times keywords, “Saddam Hussein” and “James Brown,” referred to public figures who died that month. The remaining keyword was “Wii”—the popular Nintendo gaming console. The overwhelming majority of the links thus came from readers trying to find the New York Times itself by searching for it. Nonetheless, the future is clearly one in which news organizations must embrace the hyperlinked nature of the Internet to find and engage readers from all sources. We can no longer depend fully on our traditional packaged view of the world, if we are to survive and prosper in the hyperlinked society.

    Notes

    1. Theodore H. White, The Making of the President, 1972 (New York: Bantam, 1972), 346.

    2. Tim O’Reilly, “What Is Web 2.0?” http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html?page=1.


    How Hyperlinks Ought to Change the Advertising Business

    The advertising industry is an interesting bird, owing largely to the fact that it was one of the first business sectors to experiment with the Internet and one of the last ones to realize why the Internet is important. As I write this, thousands of advertising industry executives still don’t understand why the Internet and hyperlinking are important. They think they know why, and most of them will spout off a few canned lines about “consumer control” when asked about the Web, but they often don’t fully comprehend the weak explanations coming out of their own mouths.

    To grasp how hyperlinking has changed the advertising business, one must accept two fundamental truths, one of which logically follows from the other:

    1. Hyperlinking has changed the fundamental dynamic of human communication.
    2. This change in dynamic has altered how advertising functions within the context of the communications landscape.

    The first fundamental truth is something that quite a few people, both inside and outside of the advertising business, have trouble swallowing—and with good reason. It’s an incredibly broad statement—the kind an MBA candidate might back up with a thesis paper hundreds of pages long. But it’s certainly true.

    Allow me to illustrate with an example. A few years ago, I decided to make treasure hunting my new hobby. My family had given me a cheap metal detector from Radio Shack for my birthday that year, and I had some limited success with it on the beach near my house. One morning, I managed to uncover a small pile of change and some jewelry and was bitten by the bug. I was convinced that if I upgraded my metal detector and consulted some of my fellow treasure hunters, I might be more successful.


    Think about how I might have addressed this challenge in the pre-Internet days. Gathering information about this niche hobby would have been a real challenge. I probably wouldn’t have mustered up the courage to stop one of those solitary treasure-hunting nerds as he walked down the beach, intently listening for a signal in his headphones. Those people look like they want to be left alone.

    There’s nothing like getting information about a hobby right from the source, and in the pre-Internet era, my information choices might have been limited to finding stores that sold metal detectors and asking the (biased) shopkeeper, scouring the library for (outdated) books on the subject, or trying to find a magazine for hobbyists. None of these options would have given me what I really wanted—both immediate gratification and accurate information. The Internet does.

    To find out more about treasure hunting, I first did a Google search. Google pointed me toward a site called TreasureNet, where I read about the hobby and later interacted with enthusiasts on an online message board. There, I learned which metal detectors were best suited to my needs and where to get the best bang for my buck when I decided to upgrade. I perused a lot of valuable information on the site, but the best information came from my fellow enthusiasts, many of whom were happy to share their recommendations and experiences and to warn me about pitfalls I should avoid.

    I learned all about this hobby using the Internet. The important thing to realize is that I didn’t simply use the Internet to read static articles I could have found in a magazine. Nor did I use it only to price metal detectors like I might in a paper catalog. I got the most out of the Internet when I used it for the reason it’s different from every other medium on the communications landscape—as a facilitator for human communication.

    It’s this concept that is at the core of how hyperlinking has changed the dynamic of human communication. The Internet has allowed us to connect not only with information but with each other. One would think that highly paid executives who purport to be experts in communication would understand that. Ironically, it’s this concept that most advertising executives have trouble understanding. Most don’t “get it” because of the institutional inertia of the advertising business itself.

    Advertisers and advertising agencies have traditionally operated under the erroneous assumption that they control how their products and services are perceived by people. To many advertisers and agencies, messaging to consumers is the solution to nearly every marketing problem. The advertiser and the agency have information to communicate to consumers, and they push this information out through a variety of media—television commercials, ads in magazines and newspapers, billboards, radio commercials, and direct mail, just to name a few. Institutional inertia is a defining characteristic of the advertising business. Entire media empires have been built on this push model. Even David Ogilvy, the patron saint of advertising, owes his success to it. His canonical text Ogilvy on Advertising is one of the most widely read books in the business, and it’s rare to find an advertising industry executive who hasn’t read it.[1] (My dog-eared copy sits in my home office, on a shelf reserved for books that are frequently referenced in my weekly Web marketing column.)

    If you thumb through Ogilvy on Advertising, you’ll find a ton of information about the dynamics of the push model but almost nothing about two-way media and how to construct compelling campaigns within a media world where customers talk back. The back cover of my edition is littered with bullet points about how to make money with direct mail, about how to create advertising that “makes the cash register ring,” and about television commercials that sell. There’s nothing about how to handle a deluge of customer feedback or even about how to respond to an e-mail from a customer who is frustrated about a defective widget. Still, Ogilvy on Advertising is required reading for anyone hoping to make a career in the advertising business, and that tells us that many advertising agencies and their clients remain overly focused on the push model of communication. Meanwhile, the fundamental shift in the dynamic of human communication brought about by hyperlinking favors the conversational approach over the one-way push model. Push has been falling out of favor for more than a decade, and advertising agencies haven’t exactly been quick to adjust.

    One might think that agencies would try their hardest to be the first to figure out the best way to market in the age of the hyperlink. Certainly, if an agency could break away from the pack by showing unparalleled success in online marketing, it would stand to make a great deal of money. Regrettably, such efforts are not common, mostly because agencies think they’ve figured out how to approach interactive marketing, when they truly haven’t. For most agencies, the answer to the interactive question involves completely ignoring the two-way nature of interactive media and attempting to force it into a box filled with a wide variety of one-way media. To many agencies, the Internet is yet another channel by which commercial messages can be disseminated to the masses, and they say damn the whole business of what happens when customers decide to talk back. You can see this systematic reengineering of the Internet in action when you take a look at the variety of models in use for advertising within interactive channels.

    This reengineering of the Internet didn’t start taking place until the commercial explosion of the World Wide Web at the tail end of 1994. The pre-Web Internet was a place where a hyperlink was as likely to connect you with other human beings as to connect you with a piece of information. E-mail discussion lists, Usenet newsgroups, bulletin boards on CompuServe—all provided opportunities for people to connect with one another around shared interests and lifestyles. This represented a shift in how people used communications media: rather than simply consume media, they participated in them. While this model for accelerated human communication still exists today, advertisers tend to emphasize the one-way model and underwrite content development, relegating online conversation to second fiddle.

    After the Web came onto the scene, commercial marketers jumped on the bandwagon in droves, encouraging the growth of the informational aspect of the Internet over the social aspect. Advertising revenues funded content development through a variety of tactical approaches toward advertising, all based on the old push model. The first of the advertising models to emerge was paid hyperlinking. Advertisers paid well-trafficked Web sites to carry what amounted to hyperlinked ad messages in areas where people might see them. Aside from the ability to easily move to the advertiser’s Web page with a single click of the mouse, these ads were no different from classified ads in newspapers. Then came the ad banners. Advertisers learned to take advantage of pictures and animation, but the push model prevailed. Again, aside from providing easy access to the advertiser’s Web site, the banner ad was no different from forms of push media that already existed—in this case, billboards. Then banner ads got crazy. Some had functionality, like store locators, built right into them. Some contained sound. Yet others expanded beyond the space allocated to them on a Web page, increasing the profile of the messaging and, by most accounts, really ticking people off. (Is it any wonder that one of the first companies to bring over-the-page ad formats to the Web was called Eyeblaster?)

    The trend continued with some of the more modern ad formats. Sponsored search results on Google, Yahoo, and MSN are no different from paid hyperlinking, which is itself scarcely distinguishable from classifieds. Audio content on the Internet is peppered with thirty- and fifteen-second spots—direct analogs of radio commercials. Clicking on a video clip on CNN’s home page usually brings up a thirty-second video spot for an advertiser. The ad runs for thirty seconds before it gets to the news clip the user requested in the first place. Yep, it’s a TV commercial.

    Along comes Web 2.0, which was supposed to bring about a new era in human communication. The next stage is ostensibly about connecting directly with the customer through social networking and two-way media, yet advertisers stubbornly cling to the push model. Just look at how many marketers have handled social network initiatives in places like MySpace. The “solution” seems to be all about creating pages for advertising mascots, which agencies then attempt to promote with more push advertising and paid hyperlinks. Advertisers also struggle with presences on YouTube, often opting to place their television commercials there in hopes that young people will see them. Today, if I search YouTube for the keyword “Mitsubishi,” some of the results returned are video files of Mitsubishi commercials. They get a few thousand views, have a decent rating, and garner one or two comments. But if you were to look at some of the fan-generated Mitsubishi pages on YouTube, you would find higher ratings, pages upon pages of comments, and many more views. The fan pages succeed because they act as conversation starters that help Mitsubishi fans congregate around a common interest. As for the straight commercials, they’re all push, and 99 percent of them fail to leverage the two-way nature of the medium that arose from the hyperlink.

    See what I mean? While the rest of us are using the Internet in ways that bring us closer together, the advertising industry is hard at work trying to force the Internet into the box currently inhabited by media like television, magazines, and direct mail. Most advertisers prefer a world in which people absorb their advertising messages and buy a product without talking back to the company that sells it to them.

    This still raises the question: if making the most of Internet advertising involves teaching advertisers how to talk directly with their customers rather than at them, why hasn’t someone done it yet? Remember that institutional inertia I wrote about earlier? It’s a lot more deep-seated than simply being blind to the back channel. The economic models of the advertising business reflect a bias toward the push model as well.

    Currently, an advertising agency stands to make a lot more money on the recommendation and deployment of a twenty-million-dollar television campaign than from a twenty-million-dollar interactive campaign. A television buy of that size might net an agency eight hundred thousand dollars in fee and commission revenue, with much of that adding to a significant bottom-line profit. An interactive messaging buy usually nets an agency significantly less, with many more interactive professionals needed to staff the account. Why does this difference exist? In general, television campaigns take a lot less work to pull off successfully. Three or four people could handle a campaign this size, setting it up to run and then communicating the results back to the client before moving on to help another client with another TV initiative. An interactive campaign of the same size requires more maintenance: as interactive campaigns run, they are continuously optimized by moving media weight from site to site and placement to placement in order to achieve the best results. Interactive campaigning also requires a very complex skill set that draws from both the technology and media worlds. Labor costs more, there’s a lot more labor involved, and TV buys (sometimes) carry commissions that Internet buys do not. Quite simply, interactive messaging is a less profitable business for ad agencies.

    Now, factor in the notion of using the Internet for what it’s good for—direct communication with customers. Agencies might not know how to do that just yet, but they know it will require more people spending more time to service the account, possibly calling the account’s profitability into question. So while the answer to figuring out interactive marketing might be staring advertising agencies in the face, it goes largely ignored, owing in large part to uncertainties in the economic model. That’s the bad news. The good news is that there are two schools of thought. While the traditionalists of the advertising business continue to cling to the push model, new thinkers are challenging that model’s effectiveness and are developing new ways of doing business that could address profitability concerns. I call them conversationalists.

    The conversationalists see how hyperlinking has changed communication, and they believe that the changes brought about by hyperlinking will make their mark not only on the Web but on every medium that will emerge over the course of their lifetimes. They believe the primary function of new media is to connect people meaningfully with one another and not merely to carry one-way commercial messages. They see the lengths to which ordinary people will go just to dodge the flurry of messages heading their way every day (think spam filters, PC-based ad blockers, or simply throwing out the junk mail before opening it). And they think they can fix it.

    Many of these folks, coincidentally, are refugees from advertising agencies large and small, from the biggest of the Madison Avenue behemoths to the small independent boutiques. They see how push advertising becomes less effective year after year, and they believe that advertisers need to make changes in the way they do business to accommodate the expectations of Joe Websurfer. Among those changes is an investment by companies in resources that will allow them to meaningfully participate in the dialogue unfolding on the Internet about their products, services, brands, and product categories. Companies need to free up time for employees who are familiar with their products, services, and policies to participate in that dialogue. Some have dubbed this investment concept a “Conversation Department,” and it’s designed to give people who buy products and services a human being to connect with, rather than an empty advertising message. Once a company with something to sell can contribute meaningfully to a conversation on the Internet, it can deliver on what Internet users expect from it. To do that, conversationalists need to fight decades’ worth of institutional inertia and billions of dollars’ worth of transacted business. Perhaps the only way they can do that is by demonstrating the power of a more direct approach to addressing customer concerns and questions.

    There are millions of conversations taking place right now on the Internet—on blogs, social networks, bulletin boards, and other Internet communities (including virtual worlds like Second Life)—that have something to do with unaddressed needs. All of them owe their very existence to the building block we call the hyperlink. The only substantial thing standing between advertisers and success in addressing these needs is a scalable way to take a personal and human approach to participating in these conversations. Right now, companies like Nielsen, Cymfony, and Technorati are providing new ways for companies to listen to these conversations. They sift through blogs, message boards, and other online forums and apply algorithms to determine relevance to an advertiser’s brand, product, or category, providing advertisers with intelligence on how and where people are talking about them. With such technological solutions to assist with the Herculean task of keeping up to date on what people are saying about a brand, there’s clearly an opportunity for advertisers to find a nonpush way of addressing them.
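The filtering these listening services perform can be sketched in miniature. The snippet below is a hypothetical illustration, not the actual algorithm of Nielsen, Cymfony, or Technorati: it scores each post by weighted counts of brand and category terms (both term lists and the brand name "acme" are invented for the example) and keeps only the posts worth routing to a human responder.

```python
# Hypothetical sketch of brand-relevance filtering for online conversations.
# The term lists and weights are invented for illustration only.

BRAND_TERMS = {"acme"}                           # assumed brand vocabulary
CATEGORY_TERMS = {"widget", "gadget", "repair"}  # assumed category vocabulary

def relevance_score(post: str) -> float:
    """Crude relevance: weighted count of brand and category mentions."""
    words = post.lower().split()
    brand_hits = sum(w in BRAND_TERMS for w in words)
    category_hits = sum(w in CATEGORY_TERMS for w in words)
    return 2.0 * brand_hits + 1.0 * category_hits

def filter_relevant(posts: list[str], threshold: float = 2.0) -> list[str]:
    """Keep only the posts worth routing to a human conversationalist."""
    return [p for p in posts if relevance_score(p) >= threshold]
```

A real system would add spelling variants, phrase matching, and sentiment classification, but the shape of the task, sifting a huge stream down to the conversations a brand should join, is the same.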

    Given the sheer volume of the conversation, it’s much easier for a small company to participate, and many small companies do. For instance, AccuQuote, which provides life insurance through online channels, launched its own blog in 2006 to provide a focal point for conversation about topics related to life insurance. The CEO, vice president of marketing, and other top-level managers contribute to the blog. They also follow up every comment and question personally. The AccuQuote brand is much less well-known than, say, Chevrolet, and their category tends to generate less conversation than the automobile category. So it’s much easier for AccuQuote to keep up with comments and conversation than it would be for Chevy to do the same if they wanted to follow up on every comment posted to the GM Fastlane blog. The conversationalists’ best hope thus might be to demonstrate the power of participation through a number of success stories with smaller companies. If there’s a scalable approach that will allow larger companies to easily participate, the success of small companies will drive the interest of larger ones.

    If we believe that hyperlinking brought about a fundamental change in the way human beings communicate, then we might also come to the conclusion that the changes brought about by hyperlinking have yet to be felt in a significant way within the advertising business. A lot of companies out there are still clinging to push, but when we reach a tipping point, the advertising industry will be in for yet another period of upheaval. This time, it will make the chaos brought about by the commercial explosion of the Web in 1994 look insignificant in comparison.

    In the end, I think advertising has about a dozen years before the conversationalists revolutionize the business as we know it. Admittedly, this isn’t characteristic of the sweeping, immediate changes that disruptive media like the Internet tend to bring about. However, as of this writing, we’ve waited more than a dozen years for the Web advertising business to chip away at institutional inertia to the point where advertisers spend more on the Web than they do on, say, billboard advertising. Simply put, advertising won’t be as eager to kill off its own cash cow as we might expect, even with cold, hard facts staring it in the face. It will take time. Yes, hyperlinking has brought sweeping change to the advertising business, but we haven’t seen anything yet.

    Note

    1. David Ogilvy, Ogilvy on Advertising (New York: Crown, 1983).


    Hyperlinks and Marketing Insight

    It seems that everywhere we turn these days, marketers and advertising professionals are talking about “putting the consumer at the center.” They speak of understanding the consumer’s needs and desires, crafting finely tuned segmentation studies, and using equal parts art and science to accurately pinpoint the right media environments for brand messages. Gone are the days when advertising told consumers what they needed and why (remember simple chronic halitosis?).

    So why have marketers begun to prick up their ears? Although advertising has always focused to some degree on modeling (if not outright manufacturing) consumer behavior, today’s emphasis on the value of consumer preference is less about competitive edge and more about survival. Technology’s advances have given rise to a cacophony of amusements that compete for attention amid increasingly facile tools for avoidance. The result is an ultrasavvy, self-indulgent consumer who moves nimbly between a state of continuous partial attention and complete immersion in highly relevant media experiences. Today, consumer interaction with media (and thus brands) is self-styled, so won’t marketers who capture consumers in their immersive moments win? The answer is yes, but only partly.

    Every effort to understand the consumer’s lifestyle, patterns of consumption, and media habits culminates in a well-crafted creative campaign and a selective media plan that will be both effective and efficient. This is typically where the rationale for consumercentric research ends. The problem is that the effort marketers typically pour into “holistically” understanding the consumer in a “360-degree way” culminates just short of the critical insight we need today to truly connect. Identifying the relevant, engaging media vehicles is only half of the equation. Consumers have come to expect us to know who they are and what they like. Playing on that level is simply the price of entry. When we demonstrate, however, that we understand why they like it, we are welcomed into a relationship. The why is the critical second half, and marketers who embrace and activate this knowledge win.


    Know Me, Know My Desires … Just Don’t Invade My Privacy

    The problem with getting at the why is that the exploration requires extensive, qualitative consumer research at a time when do-not-call lists are gaining traction. Syndicated research is battling dwindling cooperation rates each year, while fragmented consumer segments demand bigger and better respondent samples. And we’re not even sure we’re always getting accurate information. Survey data, in any form, carries some degree of bias. From questionnaire design to focus group “leaders,” bias can be introduced at almost any point in the process. If the industry is to turn itself toward softer, larger-scale qualitative research methods to get at the why, then new research methods need to be explored and supported.

    Additionally, consumers are well aware of marketing efforts to track their behaviors and purchases, and in many cases, they will gladly give up privacy for convenience and personalization. The tricky part is knowing where to draw the line. The debate over personalization versus privacy illustrates the increasingly dichotomous world of marketing efforts to serve and communicate with consumers. In response to an increased demand from consumers for personalized attention, companies are providing greater choice, convenience, and customization in all types of products and services. The trend spans all levels of technological integration and is evident in media (satellite radio and podcasting), in online commerce (frequent shoppers now expect Amazon-style recommendations), and even in the store (Wendy’s allows consumers to choose one of three sides for their value meals). The fact that this high level of personalized service and communication requires that consumers share with marketers richer data about their needs and preferences creates the second diametrical aspect of the consumer-marketer relationship: consumers are increasingly wary of providing too much information, for fear that their privacy will be compromised. The consequences of decreased privacy in today’s world range from an e-mail inbox overstuffed with unsolicited offers for “natural male enhancement” products, at best, to identity theft and a crippled credit rating, at worst.

    Companies that exceed customers’ expectations for personalized service and use appropriate timing and personalization in their marketing communications are richly rewarded. Isn’t that what consumercentric research is all about, after all? With e-mail, Internet, cable, broadcast, and print advertising, the relevance of the content to consumers and the extension of the brand deeply into the experience are the home run we’re looking for. The right combination of marketer-collected data sets and contextual qualitative analysis should yield a complete understanding of the why, but not in mass-sized quantities. Efficiencies lost in a more complex process of creating messaging, planning, and buying media are surely gained in a higher rate of connection with, and therefore conversion of, potential consumers.

    The Hidden Link

    One of the more exciting avenues for research has become the vast, unexplored intersections of online consumer communities. Today, those intersections exist robustly on the Internet in Web logs, discussion groups, and chat rooms. In those spaces that are not password protected and are thus “open to the public,” a wealth of passive, free-form consumer sentiment is waiting to be mined.

    Hyperlinks are the glue of these online communities, forming digital footprints of the way individuals make connections. Through simple choices to include, exclude, or just follow a link in our daily online interactions, we passively telegraph how we see the world, what is important to us, to what degree, and why. This information, on a person-by-person level, can be deconstructed and reassembled into meaningful groupings—or target markets—for advertisers. Smaller than mass audiences, but more efficient than one-to-one connections, these dynamic target markets promise more relevant, meaningful methods to connect consumers and brands in the future.

    For years, the marketing community has depended on expensive, multiyear longitudinal surveys of consumers in which respondents recalled their own behavior. Hyperlinking provides a map of actual behavior that expresses not only what purchases we make but what passions and concerns we have. In many ways, harnessing the power of hyperlinks unlocks the hidden link marketers have been seeking between many disparate sources of information. Media preferences, brand preferences, attitudinal disposition, and consumption habits are still primarily measured in separate studies by separate research vendors. By following and segmenting the patterns of hyperlinking, marketers can now roll these measures into a single-source, behavioral composite of core consumer segments.
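To make the idea of a single-source behavioral composite concrete, here is a minimal sketch. The domain-to-category mapping is invented for illustration and stands in for the proprietary classifications real research vendors use: each user’s clicked links are tallied into a share of attention per interest category.

```python
# Hypothetical sketch: roll a user's clicked links into a behavioral
# composite, i.e., the fraction of link choices per interest category.
# The domain-to-category mapping below is invented for illustration.

from collections import Counter

DOMAIN_CATEGORY = {
    "espn.com": "sports",
    "treasurenet.com": "hobbies",
    "cnn.com": "news",
}

def behavioral_composite(clicked_urls: list[str]) -> dict[str, float]:
    """Fraction of a user's link choices falling in each category."""
    counts = Counter()
    for url in clicked_urls:
        # Pull the domain out of a full URL or a bare "domain/path" string.
        domain = url.split("/")[2] if "://" in url else url.split("/")[0]
        counts[DOMAIN_CATEGORY.get(domain, "other")] += 1
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}
```

Aggregating these per-user composites across a panel is what turns scattered click trails into the segment-level picture the text describes.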


    The Massive Myth Yields to the Finer Slices of Life

    Over the past ten years, the advertising and marketing industry has lamented the degradation of the mass audience amid the rapid advancement of technologies that challenge its ability to reach many consumers at once (proliferation of media channels) or even at all (commercial avoidance technologies, e.g., personal video recorders).[1] Somewhat paradoxically, as advertisers have begun to embrace the value of one-to-some marketing strategies, individuals have become enthralled with the newfound soapboxes that allow them not only to be channels in and of themselves but also to revel in the popularity of their postings (how many people are linking to their blog) and to boast of their number of “friends.” In a world where big business has resolved to celebrate a more intimate connection with its audience, the audience has become enthralled with the potential robustness of its cohort set.

    What lies at the source of this need for notoriety in society today? As technology speeds our ability to connect to the world, it simultaneously disassociates us from the neighbor next door. Everyone is a member of a global village but is woefully disconnected from the local infrastructure that historically defined “community.” We’re able to be intimately involved in events happening thousands of miles away because we can manage the rote aspects of our daily lives—banking, bill payment, shopping—without ever making contact with a real person. The extreme example is the global citizen who’ll step over the neighborhood homeless on the way to the ATM to empty his pockets for tsunami victims. We are at once connected and disconnected.

    Twenty years ago, media scholars like Herbert Schiller pointed out that Main Street had been usurped by the suburban mall as the point for the intersection and exchange of ideas.[2] Almost in parallel, our real-world communities began to unravel, as membership in organizations from PTAs to bowling leagues showed marked declines.[3] Today, the intersection and exchange of ideas is still happening, but it’s not at the mall or at the bowling alley; it’s on the Internet. The emergence and proliferation of Web logs (Blogger), social networks (MySpace, Facebook), and online landscapes (Second Life) have become virtual surrogates for the real-life communities we’ve detached ourselves from.

    For marketers, the crisis of community is important not so much because they seek new halls within which to capture consumer interest but because the concept of community is linked to that of identity. In fact, the two concepts are linked in the virtual space just as they are in the physical space. Erik H. Erikson, who popularized the notion of identity in his writings from the 1960s, called out the need for community in affirmation of the self: “The functioning ego, while guarding individuality, is far from isolated, for a kind of communality links egos in a mutual activation.”[4] Any advertising scholar will tell you that while a product gets into your consideration set for simply fulfilling what it says it does, ultimate selection against its competitive set of products is based in emotional connection—and that requires deeper understanding of a more personal nature. It requires a linkage between the product’s values and the consumer’s personal identity.

    Until recently, the industry has been able to identify the intersection of these values and instill that essence within the creative aspects of their advertising, but when it came down to buying media space, we were right back to looking at consumers in two slices—age and sex. While you might be able to make a case for significant differences between men aged eighteen to twenty-four as a group and women aged fifty and over as a group, you’d be hard pressed to validate that all women aged fifty and over share the same values, passions, and concerns. And therein lies the promise of marketing to the unmassed.

    The Community-Identity Junction

    Today, our individual identities exist within two types of communities—the physical and the virtual. While the expression of the self in the physical world has always been through a combination of personal signals (from what we say, to what we wear, to how we move), the virtual world is characterized by links and references to broader-known elements of the culture. In this way, TV shows, brands, and bloggers alike become markers of individual identity when referenced or hyperlinked.

    In a very simple model (fig. 1), we can imagine a balance of identity production and community participation in today’s society.[5] True to Erikson’s thinking, both aspects exist individually and in concert with a larger community base. The difference today is that the “links” are hard coded as opposed to simply psychological. An individual MySpace page is viewed and reviewed by the self as well as by a larger MySpace community.

    Fig. 1. Mapping the community-identity junction: needs and actions

    Living between identity production and community participation in this model is the mediated self. This is the critical space in which our identities are crafted by the symbols we choose as representations of our true selves. The mediated self exists between identity and community precisely because it acts as a double-sided filter, simultaneously affirming and reflecting our personal values. It is the expression of identity through the use of materials or symbols that are generally more widely known to a larger group—logos, musical styles, favorite TV shows, links to other content, and so on.[6] The mass-produced cultural products of our time are a welcome common ground in a sea of disconnectedness—and for this reason will never truly disappear from society. Absent a true physical interaction, individuals will need these mass-understood symbols to shape, affirm, and reveal themselves within a community in which they want to belong, share, or lead. Hence our virtual (and physical) selves today are “mediated,” as brands and the cultural industries flex their muscle and are either ignored, adopted, or discarded as potential markers of identity.

    Practical Applications

    Hyperlinking insights can be used by marketers to identify appropriate media vehicles for their advertisements and product placements or even in selecting celebrities for product endorsements. Consider, for example, a community of small business entrepreneurs. As a small business advertiser, you may want to reach these potential consumers within a small business context that is specific to your services. Alternatively, you may want to reach out to them in their leisure time, where they may be less guarded and more open. In either case, capturing and categorizing the hyperlinks of an online sample of small business owners could provide both types of insights.

    In one study, a sentiment analysis was conducted across thirty targeted Web sites related to small business, from January to March 2006.[7] The sites were chosen for both quality and depth of conversation, from a set of small business forums, discussion groups, and blogs. Nielsen BuzzMetrics provided the technology to mine the conversations and categorize the sentiment. In figure 2, the percentages in the pie chart represent conversations related to small business topics and are based on the raw number of messages as a percentage of the total messages captured (N = 3000). The chart at right depicts the personal passions of twelve hundred unique small business owners who visited the thirty sites monitored in the pie chart at left. This data was culled from ComScore and provided an opportunity to map the online behaviors of small business owners outside the core focus of business concerns. Each of the columns lists the areas in rank order according to the statistical measure at top. The shading of the boxes allows for a quick visual understanding of where commonalities exist across statistics.

    Beyond simply accessing the insight from preexisting online communities, marketers are beginning to create their own communities in ways that entertain, inform, and provoke interaction. Some of these are very robust (P&G’s Tremor), while others seek to entertain in line with their brand’s values (Coca-Cola’s Chill Factor). Perhaps one of the most enlightened examples is Toyota’s Hybrid Synergy microsite. Rather than ply consumers with online video testimonials and stagnant statistics, the site mined its own database of existing customers and visually represented their motivations for acquiring the hybrid vehicles. The result is a compelling visualization of consumer sentiment representing the various links that brought the new owners to investigate the product, and it will grow organically as more Toyota hybrid owners contribute to the site.[8]

    Fig. 2. Small business owners’ concerns versus personal passions

    Media organizations, by contrast, can use intelligent links to better package the assets within their portfolio of content offerings. Instead of selling the mass of impressions that are delivered through the content pages of MySpace, News Corp could be mining its community for any links to News Corp content that are placeholders for our virtual identities. How does a group of users who feed on a steady diet of American Idol, for example, differ from those who quote and link to Bill O’Reilly? And do either currently subscribe to TV Guide or have a DirecTV system in their homes? Fan cultures that aggregate in social networks like MySpace are gold mines of information not only for connecting the dots between content assets in a media organization’s portfolio but for linking program preferences to product and brand affinities. How many of those self-proclaimed Idol fans are also commenting on Dove’s Real Beauty campaign or the newest Samsung cell phone? In one community database, marketers can simultaneously prove the brand connections with media content for advertiser clients; develop prospect lists for deals involving content integration; mine ideas for the next prime-time drama project; and deconstruct our latest failures—as in the case, depicted in figure 3, of the demise of the CW’s show Runaway.[9] The true benefit is not in the size of the potential audience but in the ways we can better understand the segments that exist within.

    Fig. 3. Deconstructing the demise of the CW’s Runaway. The percentages are based on the number of messages per total messages (N = 84) culled from several hundred entertainment message boards, discussion groups, and blogs after the show’s premiere (September-November 2006).

    Listen, Enable, Engage for Insight (Not Just Impact)

    Online consumer expression—whether through blogs, uploaded video, or embedded links—has created viable prisms through which marketers can move beyond the mass and engage consumers in a dialogue about their brands. In our world of rapid-fire change and immediate gratification, however, self-control is likely to emerge as the differentiator between success and failure. Marketers eager to be first or to ride the crest of the social community du jour without taking the time to listen and learn are at risk of disenfranchising the very consumers they’re trying to woo.

    Consider a case study of American Idol 2.[10] One aim of the study was to explore the drivers of engagement, by which the researchers meant the elements that attracted people to the program. I led a team at Initiative, a media buying and planning firm, that conducted a multitiered quantitative and qualitative analysis of the show’s fans. Critical to the analysis was understanding the why. What was it about the show that fans connected to the most? How did the marketers associated with American Idol 2 successfully or unsuccessfully harness the why to communicate their brand messaging? To answer, we created a special environment called “Shout Back” with the FOX and Fremantle teams on the official Idol fan site. This area allowed us both to query fans on specific questions of interest and to let them rant free-form about the show. We then mined all of the free-form data to get at the most prevalent concepts. We analyzed the comments of more than fifteen thousand fans who discussed the show on the Web. Our goal was to extract the elements that they most frequently mentioned as attracting them to the show.

    Figure 4 identifies the core engaging elements of American Idol 2—as noted by the fans over the course of the final five weeks of the series. In a surprising twist, what we would have considered the most “engaging” proposition—the interaction via a voting mechanism—was not the dominant element. In fact, it was the least engaging element. The personalities of the judges and the bonds established with the contestants proved to be much more powerful connection points with viewers.

    In figure 5, Initiative mapped the major marketers to the core engagement drivers, highlighting the fact that Coca-Cola and Ford accurately tapped into the most resonant elements of the show, while AT&T focused on the least engaging element, the voting. Both Coca-Cola and Ford used the core personalities of the show within their creative. AT&T, on the other hand, used a Legally Blonde-esque actress as the core character in a spot featuring an American Idol voting campaign. Each week, the young blonde character would deliver a feverish, high-pitched appeal to the show’s viewers to vote for their favorite contestant through AT&T SMS text messaging—and the core fans of the show read the “ditziness” of the AT&T character as an affront to their commitment to the show and their “fanhood.” They felt as though AT&T was making fun of their entertainment choice. The AT&T spot became clutter. The proof, of course, is in the data (fig. 6).

    The American Idol 2 case study is but one example that points out the return on investment (ROI) of enabling versus disruptive communication.

    Fig. 4. Attributes of engagement—American Idol 2. (From Initiative/MIT/FOX/Hindsite—April/May 2003 Expression Research.)

    Fig. 5. Leveraging program equities—American Idol 2. (From Initiative/MIT/FOX/Hindsite—April/May 2003 Expression Research.)

    Fig. 6. Doing it right makes a difference—American Idol 2. (From Initiative/MIT/FOX/Hindsite—April/May 2003 Expression Research.)

    Initiative calculated the two marketers’ performances using its proprietary Brand Value evaluation system, which measures the impact of marketing actions on a brand’s core value statements. While both marketers made the same on-air marketing investment around American Idol 2, Initiative’s tools scored Coca-Cola at +64, while AT&T delivered a –16. Doing nothing at all would have generated a score of zero. In other words, doing it wrong was worse than doing nothing at all.

    So how should marketers embrace the new hyperlinked society for maximum benefit to consumers and their own bottom lines? Research shows that they should listen to the new dialogue, enable the immersion into the new technology, and engage consumers for insight (not just impact).

    Listen to the Dialogue. Gone are the days of the marketing monologue. With so much expression happening online, marketers stand only to gain by listening. Collecting sentiment passively arms advertisers with better intelligence to build better products and bring them to market in more relevant ways for consumers.

    Enable the Immersion. The plain truth is that when we sit down in front of the TV set or open up a magazine, we want one of two things—to be informed or to be entertained. What we don’t want is to be advertised to. The technology at our fingertips—digital video recorders, video on demand, and so on—makes irrelevant content a disruption to our engagement with the experience we seek. The marketing challenge today is not only to communicate the brand without disruption but to harness the insight from inside the community culture in a way that actually enables engagement and creates goodwill.

    Engage for Insight (Not Just Impact). Marketers might think that, armed with better intelligence and the next great idea to engage consumers, we need only find the right immersive application in the right environment. This assumption would be wrong. Wise marketers will make their investments in the new consumer dialogue work for them in the future, not just in the moment. Asking consumers to write your next Super Bowl commercial may make you appear to embrace user-generated video, but the real value is gained from deconstructing how, why, and in what contexts those same consumers chose to highlight your brand, its attributes, and the competitive set.

    In Conclusion

    The fabric of real communities in American life is slowly being rebuilt with virtual threads in online communities. Those threads are the building blocks of a new social ecology in which brands can derive critical insight on consumer experience as well as serve as markers of identity in both the real and virtual landscapes. Our desire for connection sets up our media experiences in today’s world as proxies for “community,” providing the depth of experience and interpersonal connections we crave as a result of our fractionalization. Hyperlinks passively reveal how and why online communities form and, importantly, what drives individual engagement. Used effectively, they let marketers actualize a less-is-more strategy, diverging from one-size-fits-all mass tactics and moving toward accurately addressing a range of smaller target groups. Armed with a richer qualitative source of insight, marketers can then more readily move consumers from consideration to purchase.

    Notes

    1. Michael Lewis, “Boom Box,” New York Times, August 13, 2000, late edition, sec. 6.

    2. Herbert Schiller, Culture, Inc.: The Corporate Takeover of Public Expression (New York: Oxford University Press, 1989).

    3. Robert D. Putnam, Bowling Alone: The Collapse and Revival of American Community (New York: Simon and Schuster, 2000), 57, 112.

    4. Erik H. Erikson, Identity, Youth, and Crisis (New York: Norton, 1968), 224.

    5. Stacey Lynn Schulman, “The Community-Identity Junction” (paper presented to the Interpublic Group of Cos., New York, September 15, 2006).

    6. TouchGraph, Live Journal Browser, http://www.touchgraph.com/TG_LJ_Browser.html (accessed February 28, 2007).

    7. The Consumer Experience Practice with ComScore, Nielsen BuzzMetrics, “Small Business Owner Study” (internal corporate presentation, May 2006).

    8. Toyota Hybrid Synergy, Advertising Microsite Home Page, http://www.toyota.com/vehicles/minisite/hsd/?s_van=http://www.toyota.com/hsd&ref= (accessed February 28, 2007).

    9. Stacey Lynn Schulman, “Future Thinking” (paper presented at the Family Friendly Programming Forum Conference, Los Angeles, November 28, 2006).

    10. Stacey Lynn Koerner, “Consumer-Centric Research: Insight from the Inside Out,” Hub (Association of National Advertisers), May–June 2005, 10–13.

    Page  159

    Hyperlinking and Advertising Strategy

    Hyperlinking lets people control their own destiny—lets them drive their way through a media experience. It lets them choose their own path, focus on what interests them, and ultimately consume media at their own pace—on their own terms. This is a radical change from the relatively passive way that people have been consuming television. And it is a pretty big change in the way that people consume written words—the difference between a newspaper and a Web page. On the Web, hyperlinking is as simple as clicking on a piece of text or graphic to visit another page or document. But the act of hyperlinking is more profound than this: it is the act of controlling media consumption and applies just as well to “old-school” behavior like channel surfing with a remote control.

    Most media are funded by advertising, and the industry has always been driven by that economic machine. It is critical to understand the impact that consumer control will have on traditional ad-funded media. Broad estimates by analysts and experts have set an expectation that digital video recorders (DVRs) will decimate the television advertising industry because a large percentage of DVR users skip over advertising—the numbers have ranged from 30 to 80 percent in various studies. And if media companies do not figure out how to adapt to a consumer-controlled world, where hyperlinking is a “native” activity, the analysts are right. When the audience can fast-forward through television ads, advertisers will need new advertising scenarios to provide them with value if they are to justify continued ad spending.

    Television is the best illustration of the major changes about to take place. So let’s spend a few minutes learning about advertising in the “old world”—the world where content was delivered linearly while the audience leaned back and consumed it. In this world, the process of selling and buying advertising was driven by scarcity of advertising opportunity for any time slot.

    In the old world, television was linear. Each time slot was filled with a fixed number of channels that a person could choose from at any given viewing moment. If you were a television executive before cable (let’s say in 1978), there were only the three networks—ABC, CBS, and NBC. Prime time ran from 8:00 p.m. through 11:00 p.m. eastern time, so there were three one-hour or six half-hour time slots available to program television content against for each night of the week. The job of a television programmer back in those days was based on pitting the right offering against a limited set of competitors. Since a consumer could only watch one show at a time and since each household typically only had one television, there would be a clear winner for each block of time on a per-household basis.

    In those days, the job of media buyers at advertising agencies was relatively simple. If they had a new product to launch, they knew that they could get their message in front of the vast majority of the U.S. population with a few simple media buys. With a few phone calls and negotiations, they could put their ads up and reach a huge number of people. For decades, the jobs of television programmers have remained relatively the same—if made more complicated by the growth of available channels from a handful, to dozens, to hundreds competing for the same linear time slots. The programmer looks at what the competition is running in a given time slot and makes a decision about what show to run against that universe of shows running across cable and broadcast channels. Programmers who do a good job of programming content will capture a large audience that is desirable to advertisers.

    Simultaneously, the job of media buyers has become more and more complex as audiences have fragmented across all of the various channels. But the job is still relatively the same—the goal remains to find a show with the right mix of audience to match the profile of the product you are trying to sell, then to get your message in front of that audience. A media buyer’s job is to find the biggest audience matching their campaign goals for the lowest price, and this task is quite complex when the available audience is so highly fragmented across so many television channels during every time slot.

    The currency used in television advertising is the gross rating point, or GRP, and Nielsen ratings supply the data behind it. Every television show is assigned a rating based on the percentage of households in its market that the show reached. So in the most basic terms, if a television show reached 10 percent of the total audience, it would have a rating of 10 GRPs. If an advertisement ran in that show one time, it would get 10 GRPs, and if it ran twice during the same show, it would get 20 GRPs.

    Page  161

    When a media buyer negotiates a price for any given piece of advertising, the buyer is negotiating against GRPs. The seller will guarantee a specific number of GRPs for a television spot, and the buyer fits together all the various ads in their campaign to reach a specific number of GRPs across all shows they’ve purchased ads in. Once the show runs, Nielsen ratings are compared to the guarantee given to the buyer. If the show exceeded the number of GRPs sold to the buyer, the buyer got a good deal. If the show fell below the number of GRPs sold to the buyer, the seller must offer a “make good” of some kind, usually giving away more advertising to cover additional GRPs.
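    The GRP bookkeeping and “make good” rule just described reduce to a few lines of arithmetic. The sketch below is illustrative only: the function names, ratings, and guarantees are invented for the example, not drawn from real Nielsen data.

```python
# Illustrative sketch of GRP accounting and the "make good" rule described
# in the text. All numbers are hypothetical, not real Nielsen figures.

def grps(rating_per_airing: float, airings: int) -> float:
    """A show reaching 10% of households has a rating of 10; an ad that
    runs twice in that show accumulates 20 GRPs."""
    return rating_per_airing * airings

def make_good_needed(guaranteed_grps: float, delivered_grps: float) -> float:
    """If delivery falls short of the guarantee, the seller owes the buyer
    the shortfall in GRPs (usually settled as free additional ad slots)."""
    return max(0.0, guaranteed_grps - delivered_grps)

# An ad runs twice in a show rated 10: 20 GRPs delivered.
delivered = grps(10.0, 2)                   # 20.0
# The seller guaranteed 25 GRPs, so a 5-GRP make good is owed.
owed = make_good_needed(25.0, delivered)    # 5.0
print(delivered, owed)
```

    If the show instead over-delivers, `make_good_needed` returns zero and, as the text notes, the buyer simply got a good deal.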

    This existing landscape of a linear schedule coupled to a formula based on reach (the number of people an ad is exposed to) and frequency (the number of times each person saw the ad) is about to shatter. The “art” of programming a show for a specific time slot is about to become obsolete, for in the hyperlinked world we live in, the audience can watch content whenever and wherever they like. In the very near future, all television will be available on demand. It will be delivered in numerous ways to numerous devices, and the advertising will not be scheduled to a specific time slot for the entire audience of a show.

    Ultimately, the DVR is a bridging technology—it lets the audience forcibly pull content out of the linear schedule and consume it nonlinearly. As long as content continues to be delivered in a linear-only scenario, the DVR will be a popular solution. But eventually, content will simply be made available in a nonlinear, on-demand way that does not require a bridging technology.

    Beyond the DVR, numerous other TV consumption methods are about to blossom. Next-generation cable solutions, such as IPTV, will make almost all content available on demand through a simple set-top box, over a broadband connection. Video delivered to mobile devices over wireless broadband and downloaded to handheld media players will flourish, enabling place shifting as well as time shifting of content.

    Once this huge change in audience behavior has propagated, the way that TV programmers think about their business will shift pretty quickly. They will transition to an approach much like we see today on the Internet. Internet advertising has shown us how to buy and sell media in a nonlinear way. On the Internet, content is consumed according to the whim of the audience, and all advertising is delivered dynamically. That means the decision about which ad to show to the specific audience member viewing a Web page is made at the very moment that the page is viewed. This model is how all advertising will be bought, sold, and delivered in the future.

    Page  162

    Rather than buying a spot and then waiting to find out how many people watch the show an advertisement ran in, a buyer will negotiate a fixed number of advertising “impressions” (one ad delivered to one television set is one impression) for a fixed period of time. In the old world, if an advertiser wanted to reach one million males between the ages of eighteen and thirty-four, the media buyer would place ads into content that young males typically watch and would try to achieve a GRP rating that gave them their reach of one million people. In the new world, the media buyer could buy a fixed number of impressions across a specific date range. The buy could be associated to a specific set of content that appeals to the demographic of the audience they are trying to reach, or they could buy impressions that are targeted by the ad-serving system based on all sorts of data sources, ranging from geography to audience profile data.
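    The impression-based buy described above can be caricatured in a few lines of code. The campaign fields and matching rule below are hypothetical simplifications of what a real ad-serving system does; the point is only that the ad decision happens per impression, at the moment of viewing.

```python
# Toy sketch of dynamic, impression-based ad selection as described in the
# text: the ad decision is made at view time. Fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Campaign:
    name: str
    impressions_left: int   # fixed number of impressions purchased
    target_demo: str        # e.g. "male 18-34"
    target_geo: str         # e.g. "US"

def pick_ad(campaigns, viewer_demo, viewer_geo):
    """At view time, serve the first campaign whose targeting matches this
    viewer and that still has impressions left to deliver."""
    for c in campaigns:
        if (c.impressions_left > 0
                and c.target_demo == viewer_demo
                and c.target_geo == viewer_geo):
            c.impressions_left -= 1   # one ad to one set = one impression
            return c.name
    return None                       # no match: fall back to filler

campaigns = [Campaign("truck-launch", 2, "male 18-34", "US"),
             Campaign("cosmetics", 5, "female 18-34", "US")]
print(pick_ad(campaigns, "male 18-34", "US"))   # truck-launch
```

    A real system would also weigh price, pacing, and richer profile data, but the contrast with buying a fixed time slot in advance is already visible in this skeleton.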

    In many ways, this makes the job of buyers more straightforward—they simply buy volume of ads instead of relying on the notoriously distrusted data coming out of the Nielsen ratings. But what happens to the television network programmer in this new, dynamically delivered, consumer-controlled world? In some ways, the job of a programmer becomes much simpler. There will no longer be an artificial scarcity of audience, since the audience will choose to watch whatever they like at any time of day. There will be no reason to agonize over the competition’s placement of shows on the same time slot, since the concept of a programmed time slot will go away. Ultimately, the best content will always drive the biggest audience, and the available audience for every piece of content will become immensely larger.

    In the old world, where one television show might be running at the fixed time slot of, say, Tuesday night at 8:00, there would be a fixed audience size of people who were actually available on Tuesday at 8:00 to watch TV. That audience was fragmented across all the various channels running content at that exact time. If a very popular show were running opposite the content in question, that content was by nature limited in the number of people who could possibly see it. Suddenly, in this new world, content that is desirable to a large audience has an opportunity to shine. And when the ads are delivered dynamically and sold by volume instead of time slot, the television programmer has an opportunity to sell many more ads for more money than they ever could in the old world. Ultimately, in this world, it becomes about the content—not about strategically running that content at the “right time of day.” Quality content will gather an audience and will therefore gather ad revenue. The old methods of programming and media buying will shift, and this requires both new strategies and new technologies to manage the buying and selling of media.

    TV programmers will need much better access to analytics of the available audience to ensure that their programming decisions are providing quality content that attracts the biggest possible audience. We’ll likely see networks making bigger bets on content investments that align well with each other, either to capture a big chunk of a specific demographic or to spread across the spectrum more evenly. Production houses ultimately will have more power in this world, where distribution is less of a barrier and where successful production firms can self-fund speculative content creation. These production studios may even “win,” in that the costs of distribution could drop away significantly and the lack of legacy investment may enable them to be more nimble.

    We’ll likely see more content tie-ins, with interwoven story lines across multiple shows that will attract the audience to watch content they wouldn’t have seen in the past. In a nonlinear consumption model, this becomes much easier—the audience will simply flow from one show to another, not having to wait a week and hope that they have the time slot free. The content world will change, and the advertising models and strategies will change with it.

    But there is a problem. So far, the technology that has enabled the audience to be in control has also enabled them to skip over advertising. This fact has caused widespread panic among the programmers and media buyers alike. But those of us in the field of advertising strategy are less concerned. We know that advertising technology will need to adapt to compensate for this new world.

    When the ads delivered to any given person are targeted to his or her demographic and behavioral profile and even to preferences that person has intentionally exposed, the advertising will be more relevant and less obtrusive. Existing ad-serving technologies from the online advertising world are now being extended to include these scenarios. Targeting technologies will extend to include a comprehensive profile of an individual’s interests and media consumption habits—in completely anonymous and privacy-appropriate ways—across all media. Tracking what someone is searching for online or which sites they visit will create an anonymous profile of that person’s interests. Those interests can be segmented out and compared to advertiser goals. Then the ads can be delivered to the right person across all media.
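    The profile-and-match idea in this paragraph can be sketched as follows. The site-to-interest mapping, the example domains, and the visit threshold are all invented for illustration; a real system would be far more elaborate and, as the text stresses, would have to be anonymous and privacy-appropriate.

```python
# Hedged sketch of the paragraph's idea: browsing history is reduced to an
# anonymous interest profile, segmented, and compared to advertiser goals.
# All site names and categories here are hypothetical.

from collections import Counter

SITE_CATEGORY = {                 # hypothetical site -> interest mapping
    "caranddriver.example": "autos",
    "espn.example": "sports",
    "recipes.example": "cooking",
}

def interest_profile(visited_sites):
    """Aggregate visits into an anonymous interest profile: no identity,
    just category counts."""
    return Counter(SITE_CATEGORY[s] for s in visited_sites if s in SITE_CATEGORY)

def matches(profile, advertiser_segments, min_visits=2):
    """Segment out the dominant interests and compare them to the
    advertiser's target segments."""
    segments = {cat for cat, n in profile.items() if n >= min_visits}
    return bool(segments & advertiser_segments)

history = ["caranddriver.example", "espn.example", "caranddriver.example"]
profile = interest_profile(history)
print(matches(profile, {"autos"}))   # True: an auto ad is relevant here
```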

    On the format side of the ad business, we will see big changes. Rather than restricting the audience from fast-forwarding over TV advertising, we will let them fast-forward over an ad but will show a “down-level” ad experience, such as a five-second version of the same ad, while they fast-forward. More interactive formats will evolve, giving the audience the ability to hyperlink from a short version of the ad into a longer version of the ad. We will let them request more information or choose their own path through the narrative of an ad. And all sorts of new unforeseen controls will fall into their hands.

    Many technology startups playing in the cable advertising space today are testing and developing these scenarios. We will see the winning solutions emerge in the market over the next few years. As the next-generation scenarios become reality, the chaotic change settles out, and the new technologies and platforms propagate, a new age of advertising will dawn—an age where the audience is in control. Ultimately, the power of hyperlinking will completely transform media and advertising as we know it.


    From Hyperlinks to Hyperties

    A new form of hyperlink is emerging, the “hypertie,” which bridges the gap between links created in computational media and those authored in the physical world when people interact with one another and the objects around them. The hypertie is an innovation in the interaction order, the result of the merger of existing social practices of association with the technical affordances of mobile networked information systems and the existing hyperlink infrastructure. A new era in social life is arriving when the ties that bind people can be inscribed with decreasing effort into forms similar to the ways hyperlinks create connections between resources on the Internet and World Wide Web. New mobile devices represent a novel innovation in an otherwise slow-to-change realm of social interaction—face-to-face encounters. The result is a shift from a social world in which much is ephemeral to one in which even the most trivial of passings is archival.

    The Interaction Order

    The sociologist Erving Goffman coined the term interaction order to label the realm of face-to-face naturally occurring social interaction.[1] Most social life takes place in this medium through various means of self-presentation and perception. Body posture and adornment, speech, inscription, and proximity are resources used to present oneself and interpret the presentations of others. Goffman studied this realm as a distinct domain of sociological inquiry and found a range of structural properties and practices. In Goffman’s eyes, people actively make presentations to one another, laboring with costumes, sets, and props to give a particular kind of impression to other people. Simultaneously, in slips and gaffes, through involuntary responses like blushing or eye motion, people also give off impressions that others are highly attuned to discovering and interpreting. This dance of symbols—authored intentionally and not, exchanged between actors in shifting roles with shifting audiences—is the setting for much of Goffman’s vision of the social world. He highlights a complex landscape with sophisticated signaling and artifacts.

    Tie signs comprise one element of the interaction-order landscape that Goffman describes in his book Relations in Public.[2] Tie signs are practices that indicate linkages between social actors and artifacts and that signal the nature of the relationship between them. Holding hands with someone is a good signal that you know them. Less explicit links are also commonly recognized as marks of a common bond or prior history. Shared costume, language, mannerism, and insignia are all good ways to tell if someone is from “around here” and is expressing a tie to a geographic region or social status. Related work from Edward Twitchell Hall defined and explored a realm he labeled “proxemics”—the study of proximity and orientation among social actors.[3] Hall highlights the ways cultures generate norms about how far different types of people should stand apart from one another, who has rights to look at whom, and how and when physical contact is permitted.

    In the history of the interaction order of the sort that Goffman and Hall describe, there have only been a few significant innovations. The basic equipment of speech and costume is an integral part of human societies. Amphitheaters expanded the population that could usefully interact in one place. Breakthroughs like the calendar and clock time allowed separated individuals to converge in space and time to engage in interaction, and innovations such as maps enhanced people’s ability to navigate and coordinate their actions. Innovations in the past century or so—the telegraph, radio, telephone, television—are predominantly technologies for interaction at a distance, not altering the primary (face-to-face) interaction order itself.

    The Web hyperlink, while it doesn’t directly impact face-to-face interactions, does point toward technologies that will do that. The hyperlink is a specific form of tie between resources or entities represented in computational media. These links, in aggregate, now affect most areas of commerce and culture. They are a new means of inscription for relationships, revealing connections that were previously latent or represented in ways that could not be aggregated and searched easily. When these ties are inscribed in computational media, new applications become possible for building connections, evaluating others, and gaining status and value from the accreted history of prior relationships. In contrast, many forms of social tie signs have been ephemeral or stubbornly physical. They have also lacked easily generated digital traces that describe their presence and dimension. Bridges, contracts, handshakes, and shared opinions are hard to catalog, aggregate, analyze, and track in near real time. In contrast to the digital qualities of hyperlinks, social ties remain mostly analog in nature.

    That is beginning to change. The growth and widespread adoption of computer-mediated communication channels illustrate a major way that the social world is becoming “machine readable.” Social networking sites (like MySpace and Orkut), Web discussion boards, e-mail lists, private instant messaging, and such emerging channels as graphical worlds are all examples of the expansion of the interaction order into machine-readable media. But they also illustrate the limits of these tools for impacting the primary interaction order of face-to-face encounters. Some edge toward the interaction order, as when people use mobile phones or laptops to instant message or e-mail one another while in the same meeting or room. But much of the activity of the face-to-face interaction order is not inscribed in a systematic and widespread manner.

    Ties in computational media take on new attributes that are distinct from ties in the physical world. Computational ties are machine readable; can be collected from a wide range of ongoing events and systems; and can be aggregated, searched, and analyzed in ways that reveal patterns and connections not previously visible. The patterns that emerge from the analysis of machine-readable linkages have a range of practical applications, from enhancing searches of the World Wide Web to predicting toll fraud on the commercial phone network. Hyperlinks, one of the most visible forms of computational tie, impact commercial and social practices in multiple ways, driving many toward search engine optimization, which seeks to improve the visibility of Web content in search results. Web content that is not well linked to is likely to underperform the investment made in its creation. The concept of “Google juice” expresses the need for explicit approaches to building positive patterns of linkage, patterns that stand, for many leading search engines, as a proxy for value.

    In view of their critical role in commerce, hyperlinks have become a new form of currency. Links to and from sites act as forms of endorsement and sources of traffic. When a high-traffic, high-status site links to another site, the link acts as an implicit endorsement and yields increased visibility for the target site. The result is often more traffic to the target site and increased ranking in search engine results. Since higher-ranked results often correlate with increased traffic and since traffic rates often map to revenue, more inbound links of the right quality can equal greater income and value for a Web site.
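    The way inbound links compound into rank can be sketched with a toy link-scoring function in the spirit of the PageRank idea; the graph, damping factor, and iteration count here are illustrative assumptions, not a description of any actual search engine's method.

```python
# Toy link-based ranking: a site's score is fed by the scores of the
# sites that link to it, so one inbound link from a high-status hub
# is worth more than many links from obscure pages.
def link_scores(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for t in targets:
                # Each outbound link passes along a share of the
                # linking page's own score.
                new[t] += damping * score[page] / len(targets)
        score = new
    return score

# A well-linked hub ("bigsite") linking to "smallsite" lifts smallsite
# above a page that receives no inbound links at all.
web = {"bigsite": ["smallsite"], "othersite": ["bigsite"], "smallsite": []}
ranks = link_scores(web)
```

    The endorsement logic the paragraph describes is visible in the result: the page linked to by the highest-scoring site ends up ranked above the sites with fewer or lower-quality inbound links.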

    Social network systems have become a rapidly growing form of computer-mediated social space. Systems like SixDegrees (launched by Andrew Weinreich in 1997), Friendster, LinkedIn, Plaxo, Orkut, Facebook, and MySpace—and, increasingly, any system for end-user content creation—have provided a means for individuals to link to other users of the same or related systems. The results are webs of associations that trace the connections between tens of millions of users, all explicitly authored at keyboards with mice and big screens. Studies of these systems have revealed highly structured behaviors, or roles, performed by users who occupy positions within an ecosystem of actors. These desktop- and laptop-bounded systems are about to spill over into the physical world. The interaction order is changing as these systems are extended into the site of face-to-face interaction, the “synapse of society,” the gap between people when they associate.

    The Hypertie

    The hypertie expresses relationships in a form similar to the hyperlink, one different in kind and quality from the ways social ties have previously been expressed. Social ties are widespread, created whenever people or other entities share or exchange resources. In some cases, these exchanges leave behind durable artifacts that represent the previous or continuing existence of the tie. A bridge is a good example, but so are artifacts like trade contracts, shared languages, and written citations linking one textual work to another. Among some animals, chemical pheromones are another form of tie, linking nest mates and conveying information about resources like food and water. Simple behaviors toward common objects, like two people emerging from a swimming pool at different times and using the same suntan oil, can indicate the presence of a linkage.

    The mobile digital device, the replacement for the cell phone, is a recent and emerging innovation in the interaction order in that it enables novel forms of tie signs to be created and displayed. The mobile device is the first artifact with any awareness of events in the interaction order. That awareness comes from the device’s use of a number of sensors, such as radios, GPS, infrared light, and sound, which allow the detection of other similarly enabled mobiles in the device’s proximity. Given the intimate association of many mobile devices with individuals, these technologies allow for the mechanical sensing of the presence of people and the creation and inscription of ties—perhaps better thought of as hyperties—in increasingly implicit, passive, automatic, and pervasive ways.

    The emergence of mobile devices in the forms of cell phones, PDAs, MP3 players, cameras, personal video players, and navigation devices like GPS provides a new platform for novel classes of devices able to author finely detailed social ties. As the sensor capacities of these devices are developed, their ability to note their location in absolute terms as well as in terms of proximity to similar devices will become highly accurate and widespread. And given the personal nature of these devices, detecting them is a reasonable proxy for detecting a person, with useful levels of precision. In some cases, existing widespread technologies like Bluetooth and WiFi, while suboptimal for a variety of technical reasons, provide an already broadly available base for devices to sense the presence of one another. Wireless devices can be programmed to monitor the ongoing stream of passing equipment like themselves, each piece of which is often provided with a unique identifier. When these data are stored and analyzed, the result is a self-documenting social world in which casual encounters are noted with the same detail as long-term relationships. Projects like the Jabberwocky system from Intel Research, along with commercial systems like nTAG and Spotme, explore this implicit hypertie concept.[4] These and other related projects and products are described in the next section of this essay.

    When mobile devices are widespread, social ties can be authored and inscribed in a number of ways, most of which are passive and require no explicit intervention by the participants. Machines accomplish this sensing in two broad ways: they can directly link to one another, sensing the presence of other radios, light beacons, or sound sources; or they can independently determine their location using a variety of technologies—from GPS to terrestrial radio and other location beacons—and share that information with a common repository, such that their proximity can be calculated from the joint data set and reported back to the mobile devices.
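    The second, repository-based approach can be sketched as follows; the flat-grid coordinates, distance threshold, and data shapes are assumptions made for illustration, not details of any particular system.

```python
import math

# Each device independently reports its position to a shared repository;
# proximity is then computed from the joint data set rather than by
# direct device-to-device sensing.
def nearby_pairs(reports, threshold_m=25.0):
    """reports: {device_id: (x, y)} positions in meters on a local grid.
    Returns pairs of devices within threshold_m of each other."""
    ids = sorted(reports)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            ax, ay = reports[a]
            bx, by = reports[b]
            if math.hypot(ax - bx, ay - by) <= threshold_m:
                pairs.append((a, b))
    return pairs

# alice and bob are a few meters apart; carol is down the block.
repo = {"alice": (0.0, 0.0), "bob": (10.0, 5.0), "carol": (400.0, 0.0)}
```

    The repository can then push the computed pairs back out to the devices, which is how proximity is "reported back" without the devices ever sensing one another directly.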

    Hypertie Systems

    Here are capsule descriptions of early examples of hypertie systems.


    “Life logging,” a concept championed by Gordon Bell at Microsoft Research, describes a set of technologies that allow a large number of people to continuously capture many aspects of their lives from cradle to grave. The resulting data, compiled from video and audio recordings as well as from the capture of, for example, every keystroke, mouse tap, GPS reading, and heart rate data point, would amount to a manageably low number of terabytes. The recognition of people in the resulting data stream is just one of the many applications being considered for exploiting this new data resource. Given the existing commercial availability of terabyte storage for a few hundred dollars (and dropping rapidly) and the low cost of video cameras, microphones, and other sensors, the prospects for the vision of complete life logging seem bright. Here I describe the fragments of this vision that are already in demonstration form and a few that are already in more stable commercial use.

    Trace Encounters, deployed at the Ars Electronica conference in Linz, Austria, in 2004, was a system built around a small computerized tag in the form of a lapel pin, which contained an infrared mechanism for exchanging data with similar devices. When one person wearing a tag encountered another person who also wore a tag, each transmitted a string of data that represented the wearer’s interests and prior encounters. The result was a display of one or more LED lights that indicated to what extent the two individuals shared common interests, perhaps encouraging them to engage in interaction to discover those shared interests. When an individual’s tag came into range of a PC at a base station, it also provided information about its previous encounters with people wearing other tags, which was collected and aggregated with information from all other tags that linked to the base station. The resulting data set painted a macropicture of the encounters between each tag and thus between each person who attended the conference and consented to wearing the tag.
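    The tag-to-tag comparison at the heart of such a system can be sketched as a simple set overlap; the interest lists, the Jaccard similarity measure, and the three-LED scale are illustrative assumptions, since the actual encoding Trace Encounters used is not described here.

```python
# Each tag broadcasts its wearer's interests; on an encounter, the
# receiving tag lights 0-3 LEDs according to how much the two
# interest sets overlap (Jaccard similarity, binned onto the LEDs).
def leds_to_light(mine, theirs, max_leds=3):
    mine, theirs = set(mine), set(theirs)
    union = mine | theirs
    if not union:
        return 0
    jaccard = len(mine & theirs) / len(union)
    return round(jaccard * max_leds)

a = ["sensors", "privacy", "art"]
b = ["privacy", "art", "music"]
```

    Two wearers with partially overlapping interests see a partial display, while identical profiles light the full row, giving passersby an at-a-glance reason to strike up a conversation.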

    The nTAG system is a commercial product that extends the core concepts explored in the Trace Encounters system by making the device’s display of information far richer. The nTAG device was designed to be worn in the same way a name tag at a conference would be displayed, replacing the paper name card with a thin LCD display. Where Trace Encounters displayed only a series of LEDs, nTAG displayed grayscale text and images, allowing the device to send more sophisticated messages beyond the general rate of overlap between two users’ profiles. The extra signaling space was used to exchange information about topics of possible mutual interest, creating a kind of context-aware form of the “ticket to talk” concept described by sociologist Harvey Sacks. “Tickets to talk” are signs or behaviors that invite others to engage in conversation. A sports team’s emblem, particularly when worn far from the team’s home city, is an example of a ticket that invites a kind of recognition behavior. The nTAG “ticket” is more aware of the context, displaying specific messages depending on the viewer and behaving much more like its socially aware wearer, who also shapes interaction presentations to fit each interaction partner. Subsequent to a meeting, users of nTAG can recall a list of whom they interacted with and for how long, highlighting the frequent short meetings and the significant long ones.

    Spotme is a related commercial product available predominantly in Europe. The Spotme device is a handheld, similar in form to a PDA, and is not intended to be worn and displayed the way the nTAG device is. The handheld device uses radio frequency (RF) communication rather than the infrared (IR) technology used in Trace Encounters and early versions of the nTAG system. The use of RF rather than IR has important implications: IR is a line-of-sight technology that requires that two devices be within modest range (ten feet or so) and in proper orientation toward one another in order to exchange data. These limitations are also affordances, in that they require that social proximity and orientation be achieved prior to the exchange of data. In contrast, RF systems are often omnidirectional and may have greater range than IR systems. The absence of direct line-of-sight requirements means that the devices and their bearers need not be aware of one another, or even in sight, to connect and exchange data. RF devices can thus exchange data between people separated by walls, a capability that may or may not be desired. Spotme also generates “dwell time” reports on whom the user interacts with and for how long.

    The Jabberwocky project explored an alternative RF mechanism, Bluetooth, widely available on millions of mobile devices worldwide, to illuminate the population of social beacons already present in the wilds of the Bay Area of California.[5] Bluetooth radios are designed to discover other Bluetooth devices in order to facilitate the pairing of devices like headsets and phones. An unintended consequence of this design is the ability to monitor the region of ten to thirty feet around a radio for the presence of any other Bluetooth devices. Since many users of Bluetooth devices make their radios discoverable or never change the default settings with which the device shipped, an avid listener to the Bluetooth bands is able to hear a wide range of radios emitting identifiers that often reveal aspects of their human owners’ identities. The Jabberwocky system processed the aggregate data about past discoveries of Bluetooth radios and presented users with a “familiarity” display that indicated how many of the people nearby had ever been seen before.
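    The familiarity logic can be sketched in a few lines; the identifiers and the scan interface are hypothetical stand-ins for what a real Bluetooth inquiry would return, not Jabberwocky's actual implementation.

```python
# Jabberwocky-style "familiarity": log every radio identifier ever
# seen, and for each new scan report how many devices in range now
# have been encountered before.
class FamiliarityLog:
    def __init__(self):
        self.seen = set()

    def update(self, current_scan):
        """current_scan: iterable of device identifiers in range now.
        Returns the count of familiar (previously seen) devices."""
        current = set(current_scan)
        familiar = len(current & self.seen)
        self.seen |= current
        return familiar

log = FamiliarityLog()
first = log.update(["aa:01", "bb:02"])   # first outing: nothing familiar
second = log.update(["bb:02", "cc:03"])  # bb:02 reappears: one familiar face
```

    Over weeks of scans, the rising familiar count turns the anonymous crowd of a commute into the "familiar strangers" pattern the display was designed to reveal.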

    SenseCam is a prototype device developed in the Microsoft Research laboratory in Cambridge, England.[6] The device resembles a digital camera the size of a credit card and has significant enhancements in the form of sensors. Accelerometers, thermometers, visible and IR cameras, and Bluetooth radios are combined in the SenseCam to provide the device with the means of recording ongoing sensor data and then determining when to take a picture. Its programming selects for volatility events, or points of transition between states, such as those generated when a walking person comes to a halt or when a sitting person stands and begins to walk. When worn from rising through to bedtime, the result is between two hundred and four hundred photographs of the transition points in each person’s day. Research on the SenseCam’s utility in therapeutic contexts explored its use by Alzheimer’s patients. Initial findings showed improved recall of prior events when each user reviewed the day’s images and events each evening. SenseCams are likely to be able to detect one another and, through techniques like facial recognition, to identify the people seen by a person’s device throughout the day. These sightings could be transformed into the kinds of social reporting delivered by systems like nTAG and Spotme.
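    The volatility-event idea can be sketched as a threshold on changes in a stream of accelerometer readings; the window size, the 0.5 g jump threshold, and the sample stream are illustrative assumptions, not SenseCam's actual trigger logic.

```python
# Sketch of a transition trigger: watch a stream of accelerometer
# magnitudes (in g) and flag "volatility events", moments where the
# level of motion changes sharply, e.g. a walker stopping.
def transition_points(magnitudes, window=3, jump=0.5):
    """Return indices where mean motion over the next `window` samples
    differs from the preceding `window` samples by more than `jump`."""
    events = []
    for i in range(window, len(magnitudes) - window + 1):
        before = sum(magnitudes[i - window:i]) / window
        after = sum(magnitudes[i:i + window]) / window
        if abs(after - before) > jump:
            events.append(i)  # a camera would snap a photo here
    return events

# Jittery walking (~1.8 g peaks) followed by sitting still (~1.0 g):
# the handoff between the two states is the moment worth photographing.
stream = [1.8, 1.9, 1.8, 1.0, 1.0, 1.0]
```

    A day's wear yields a few hundred such flagged moments rather than a continuous video stream, which is what keeps the resulting photo diary reviewable in a single evening.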

    SlamXR, a research project developed in the Microsoft Research lab in Redmond, Washington, explores the automatic inscription of space-time trails and hyperties. It extends the scenarios explored in the other hypertie systems described here, in that it incorporates a range of sensors in addition to the radios and IR beacons used in other devices. Sensors like accelerometers, thermometers, altimeters, GPS, and devices that measure biological input (e.g., heart rate and blood oxygen levels) are increasingly affordable and miniaturized and may soon become standard features of many consumer mobile devices. Each sensor can measure aspects of the user’s state in surprisingly refined ways. Accelerometers measure acceleration, or movement. A three-axis accelerometer can generate data about the patterns of force applied to it and, by extension, to its owner. Motions like standing, sitting, walking, and riding in a variety of vehicles all apply distinct force patterns that can be machine interpreted and identified with high levels of accuracy. The forces applied to a person by an elevator ride and an airplane are very distinct. Recording the output of an accelerometer over time results in a continuous map of a person’s (or their device’s) motions. Research using accelerometers suggests that rich diaries of activity can be generated cheaply and efficiently for vast numbers of people. Combined with GPS and related technologies like altimeters (which help correct the altitude errors often generated by GPS devices), a package of sensors can locate a person precisely on the surface of the planet while simultaneously characterizing the range of forces and motions applied to that person. The recent release of a joint product from Apple’s iPod line and a Nike running shoe is an early intimation of this trend.[7] The Nike + iPod product is intended to measure a runner’s footfalls and thus map the runner’s exertion over time. These data are recorded on the iPod and can later be uploaded to a shared Web application where people can contrast their progress with others. When combined with biological sensors that determine, for example, heart rate, temperature, and blood oxygenation, a detailed picture of where a person is and what their physical state is can be generated at reasonably low (and dropping) costs. SlamXR highlights the ways hyperties can be created even in the absence of direct device-to-device detection. Colocation can be calculated from individual devices’ reports of their location. Interestingly, this allows hyperties to be created even when colocated individuals are not cotemporal; that is, people who were where you were, but not when you were there, can be linked together by matching their location data without regard to time.
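    The colocation-without-cotemporality idea can be sketched by coarsening location reports into grid cells and matching people on cells, with time optionally ignored; the cell size, one-hour time bucket, and trail format are illustrative assumptions rather than SlamXR's actual design.

```python
from collections import defaultdict

# Link people who passed through the same place, with or without
# regard to when they were there. Locations are coarsened to grid
# cells so "same place" becomes a cell match.
def colocated(trails, ignore_time=True, cell_m=50.0):
    """trails: {person: [(t_seconds, x_m, y_m), ...]}.
    Returns the set of linked pairs."""
    visits = defaultdict(set)  # cell (or hour+cell) -> people seen there
    for person, points in trails.items():
        for t, x, y in points:
            cell = (int(x // cell_m), int(y // cell_m))
            key = cell if ignore_time else (int(t // 3600), cell)
            visits[key].add(person)
    pairs = set()
    for people in visits.values():
        for a in people:
            for b in people:
                if a < b:
                    pairs.add((a, b))
    return pairs

# alice at noon and bob at midnight visit the same street corner.
trails = {"alice": [(43200, 10.0, 10.0)], "bob": [(0, 20.0, 30.0)]}
```

    With `ignore_time=True` the midnight and noon visitors are linked; with it off, only people there in the same hour are, which is the distinction between cotemporal and merely colocated ties.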

    Implications of Hypertie Systems

    Some affordances of these technologies are already relatively clear. Copresence is about to be documented automatically and ever more extensively, in such a way that our blurry social backgrounds will likely resolve into a detailed pattern of passing profiles, while our primary relationships will be documented in remarkable detail. As even casual crossings become increasingly visible, existing patterns that are latent or previously ephemeral are made visible and available for collection, aggregation, and analysis. Once generated in machine-readable form, sensor data can be merged with a wide range of other data and correlated with selected collections of traces from other people or groups. From credit and census records, to crop and weather patterns, to Web browsing and system configuration patterns, mesostructural and macrostructural patterns will emerge from the collective behavior of millions of people moving through the spaces and places they inhabit. The result is an unprecedented explosion of social science data.


    The resulting data will have many implications. One in particular is the amenability of hypertie data to social network analysis. This form of inquiry focuses explicitly on the patterns created by ties or links between people or entities of any kind. The resulting directed graph data structures are considered social networks when people are among the connected entities. These networks can be complex and large and can be summarized in a number of ways that capture dimensions of their level of interconnection and the key people or nodes that occupy significant roles, as indicated by their patterns of connection with others. The resulting analysis can highlight the range of different roles people play in the social world and show their change over time, making individual behaviors visible at the population scale. Social network analysis, when fed sufficient data, can create a more global view of a society’s interlocking social networks than has ever been perceived by any individual observer.
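    A minimal illustration of treating hypertie data as a directed graph: in-degree serves as a crude measure of prominence and out-degree of activity. Real social network analysis uses far richer measures (betweenness, clustering, and the like), but the underlying data structure is the same; the names and ties here are invented.

```python
from collections import defaultdict

# Summarize a list of directed ties (who linked to whom) into per-person
# in-degree and out-degree counts, the simplest network-analytic measures.
def degree_summary(ties):
    """ties: iterable of (source, target) links between people."""
    indeg, outdeg = defaultdict(int), defaultdict(int)
    for src, dst in ties:
        outdeg[src] += 1
        indeg[dst] += 1
    people = set(indeg) | set(outdeg)
    return {p: {"in": indeg[p], "out": outdeg[p]} for p in people}

ties = [("ann", "bea"), ("carl", "bea"), ("bea", "ann")]
summary = degree_summary(ties)  # bea is the most-linked-to node
```

    Even this crude summary surfaces structural roles: the person everyone links to looks quite different in the data from the person who only links outward, which is the kind of pattern that scales up to the population-level views described above.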

    The digital quality of these observations introduces other issues as well. Once collected within the context of a specific social setting, these observations are likely to be available to people a world away. The erosion of control over audience is a critical shift that is already in play as people upload video captured from mobile devices to video sharing sites on the Internet, making the potential audience for an event far larger than the population present at the actual occurrence. Given Goffman’s focus on the careful crafting of interaction presentations for a specific audience, loss of control over the possible audience is a significant hurdle. Almost any event can be recast into a less flattering frame, increasing the uncertainty and risk of social encounters. Alternatively, the possession of a personal “black box recording” of moment-by-moment events allows a counterperspective to be offered, providing a different framing of the event (i.e., “I did not say that, and here is the tape to prove it”).

    The sum of these changes could be considered a kind of pervasive inscription revolution, an era in which practices of inscription explode to include almost all human actions. The signs of the expansion of inscription are visible in the behavior patterns seen in many online services. For many of these systems, the hurdle to cross for minimal active contribution has been systematically reduced over time. Early systems, like e-mail, required active contributions of content in order for a user to be visible in the space. A widespread concern was the disproportionate number of “lurkers,” read-only users who contributed no visible content. Over time, computer-mediated interaction systems have evolved so that the hurdles preventing users from leaving traces are smaller, allowing even the act of “viewing” a piece of content to be visible to others. Making objects into “favorites,” adding someone to a watch list, and similar features allow people to browse content as before but now leave a series of traces behind that are visible to others. As a result, writing is easier than ever: we are all writers now, if only because reading is now writing. Few systems allow for the unnoticed and unrecorded consumption of content. Such behavioral traces are valuable, socially and practically interesting, and cheap to collect. In such a situation, privacy issues are sharpened. The walls have ears and eyes, and others’ eyes and ears are now high-fidelity and archival.

    Notes

    1. E. Goffman, Relations in Public: Microstudies of the Public Order (Harmondsworth: Penguin, 1972).

    2. Ibid.

    3. E. Hall, The Hidden Dimension (New York: Doubleday, 1966).

    4. “Jabberwocky,” http://www.urban-atmospheres.net/Jabberwocky/.

    5. Ibid.

    6. “Current Project—SenseCam,” http://research.microsoft.com/sendev/projects/sensecam/.

    7. “Nike + iPod,” http://www.apple.com/ipod/nike/.
