
    Part 1 • Processes

    One • Adoption of New Media

    How do new media make their way into users’ hands? This chapter reviews several common patterns, both recent and historical, associated with the adoption of new media products and services and suggests ways in which these patterns can inform the development and introduction of new media. In assessing patterns of development, the chapter examines price trends, rate of adoption, categories of early and later adopters, the attainment of critical mass, applications that drive adoption, replacement cycles, failures, cyclical technologies, and lifestyle issues, among other topics. Contrary to popular wisdom that the present new media environment differs completely from that of the past, many parallels exist in the adoption processes for the telephone and advanced mobile phones, for TV and HDTV, for VCRs and DVRs, and for dial-up and broadband Web access.

    Everett Rogers and the Diffusion of Innovations

    The late Everett Rogers is a seminal figure in the study of how individuals and organizations adopt innovations, and his theory provides a foundation for this chapter. His work is broad in scope, covering farming, health campaigns, and changes in the workplace as well as consumer adoption of new media technologies.[1] He made several key contributions to our understanding of the diffusion of innovations in general and of the adoption of media in particular that are relevant here. Rogers has pointed out that the diffusion of an innovation is a process that takes place over time; it involves people who learn about the innovation in different ways and who operate in a social context into which the new technology or way of doing things may or may not fit easily. He raises a series of questions that are useful in assessing what barriers must be overcome for an innovation to be accepted and at what rate it will diffuse if they are overcome. For example, how much complexity is involved—do people understand what a new technology does, and must they learn or be trained how to use it? Is the new technology or the new way of doing things compatible with the existing values, past experiences, and needs of prospective adopters, or does it require them to make significant changes in their values or behavior? Is the adoption
    a matter for individuals or for organizations? In the latter case, many bureaucratic and political processes may enter into the process. For example, schools have hierarchies of decision makers, budget cycles, and even union rules that may affect the adoption of a new computer network.

    Rogers has also pointed out that exposure to information about a new technology or other innovation can be direct or indirect. People can learn about it from mass media, such as advertising, or by word of mouth. Both generally are present, but advertising is typically a stronger force early in the rollout of a new technology, when few people own it, and word of mouth typically becomes stronger later, when more people are likely to own it. Further, some products generate a lot of word of mouth, positive (e.g., the iPod) or negative (e.g., early DSL service), while other products sneak under the radar, thereby affecting the relative impact of advertising and word of mouth.

    A concept to which Rogers ascribes particular importance, especially in the case of interactive technologies, is critical mass—when adoption reaches this level, additional promotion becomes unnecessary because diffusion is propelled by the innovation’s own social momentum. Many variations on this concept have emerged in the academic and popular business literature—for example, takeoff point, tipping point, and (a misnomer) inflection point.[2] Attaining critical mass is clearly a major goal of those who introduce a new technology.

    Rogers has emphasized the importance of examining the entire adoption process and all groups of adopters, not just those who are the first to use a technology. Early users and later users often differ substantially, but how many categories of adopters are necessary to capture a complete picture? Figure 1.1 presents his model of adopters at different stages in the process. Drawing on a number of studies, he makes some generalizations about each group. It must be borne in mind that these are generalizations, and the type as well as the
    number of groupings can vary depending on the technology, the social or organizational context, and the time frame.

    In Rogers’s framework, Innovators often are willing to take risks and accept uncertainty. As individuals, they tend to have higher incomes, communicate with other innovators, and act as gatekeepers for those who will adopt later but who are not part of the innovator group. Early Adopters are respected opinion leaders who advise others in their peer and near-peer network of contacts. They are not as daring as Innovators but are willing to try new products before they are widely accepted. The Early Majority is a large group whose members are deliberative and not likely to be opinion leaders; often, after waiting a while, they follow the advice of Early Adopters. The members of the Late Majority are generally skeptical and cautious and require that the uncertainties associated with new technologies or other innovations be substantially reduced before adopting them. Frequently, they wait until sufficiently motivated to adopt for economic reasons or as a result of pressure from peers. Laggards have traditional values and are reluctant to change. They have limited resources, are often isolated from social networks, and take a long time to come to decisions.
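
    Rogers’s categories are conventionally defined by statistical cutoffs on the distribution of adoption times: roughly the first 2.5 percent of eventual adopters are Innovators, the next 13.5 percent Early Adopters, then 34 percent each for the Early Majority and Late Majority, and the final 16 percent Laggards, corresponding to one and two standard deviations around the mean adoption time. The sketch below applies those standard cutoffs; the percentages come from Rogers’s general framework, and the numbers in the example are purely illustrative.

```python
def rogers_category(adoption_time: float, mean: float, sd: float) -> str:
    """Classify an adopter by time of adoption, using Rogers's standard
    cutoffs at mean - 2*sd, mean - sd, mean, and mean + sd (roughly the
    2.5 / 13.5 / 34 / 34 / 16 percent split under a normal curve)."""
    if adoption_time < mean - 2 * sd:
        return "Innovator"
    if adoption_time < mean - sd:
        return "Early Adopter"
    if adoption_time < mean:
        return "Early Majority"
    if adoption_time < mean + sd:
        return "Late Majority"
    return "Laggard"

# Illustrative numbers only: with a mean adoption time of year 10 and a
# standard deviation of 3 years, someone adopting in year 3 is an Innovator.
print(rogers_category(3, mean=10, sd=3))
```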

    Rogers emphasizes that a term such as Laggards is not intended to be pejorative. In fact, he argues that insufficient research has examined why people fail to adopt innovations and that there has been a bias in much of the research implying that adoption is inherently the wiser course of action.

    We turn now to a series of factors that complement and build on Rogers’s model of the diffusion of innovation, focusing on new media adoption patterns.

    Price Trends

    Media Products

    The price of consumer electronic products has played an important role in their rate of adoption by the public and in determining the overall size of their market. Historically, new media technologies have been introduced at high prices, which drop over time, as shown in table 1.1. Early manufacturing of such a product is generally expensive, largely because it cannot realize the economies of scale that are possible in mass production. In addition, demand is often unknown, which can lead companies to try to maximize revenue from the first group of purchasers, many of whom are willing to pay high prices to be among the first to get it. Alternatively, companies can subsidize the price to try to build a user base rapidly and gain the benefits of the network effect (see The Network Effect and the Chicken-and-Egg Problem below), where the value of a service such as mobile phones is increased as more people acquire and use the technology.

    It is revealing to translate these actual price figures into a common element that spans time. Table 1.2 expresses the prices in terms of weekly household income—that is, for an average U.S. household, how many weeks of income were required to purchase the technology? (Data are not available for income prior to 1929.) Notice the similarity of costs to households in terms of weekly household income when radio, black and white TV, and color TV were entering half of U.S. households: 1.8 or 1.9 weeks of household income. Also, notice the very high cost of early black and white TVs, color TVs, and VCRs: approximately 6 weeks of household income. More recent technologies such as CD and DVD players were introduced at lower costs in terms of household income (1.8 and 0.8 weeks of household income, respectively) and declined to much lower costs (0.2 and 0.1 weeks of household income, respectively) at the point when they entered half of U.S. households.
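
    The conversion behind table 1.2 is simple arithmetic: divide a technology’s price by one week of average household income. A minimal sketch of that calculation follows; the dollar figures in the example are placeholders for illustration, not values taken from the table.

```python
def weeks_of_income(price: float, annual_household_income: float) -> float:
    """Express a product's price as weeks of average household income."""
    return price / (annual_household_income / 52)

# Placeholder figures: a $400 set against a $12,000 average annual household
# income comes to roughly 1.7 weeks of income.
print(round(weeks_of_income(400, 12_000), 1))
```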

    Manufacturers of more recent technologies face greater pressure to introduce them at a lower price and then to drive the price down rapidly. This development is partly a consequence of the greater number of electronic products in the marketplace compared to the 1920s or 1950s, together with greater competition for the information and entertainment dollars of the household budget. CD and DVD players offered simple enhancements to competitive predecessors (audiocassettes and VCRs), which had been in the marketplace for some time
    and had already reduced their prices, thereby putting considerable price pressure on manufacturers of CD and DVD players.

    Rapid decreases in the price of recent new media technologies also occur because of advances in the design and manufacturing of chip sets, which lie at the heart of these new technologies. Chips are more reliable than the tubes and transistors that were at the heart of earlier generations of radios and TV sets and in high volume are, on average, much cheaper to manufacture. Further, chip sets can be redesigned multiple times after a product is introduced, reducing their cost, size, weight, and heat emission and thereby enabling the product’s cost to be brought down more rapidly than was possible with earlier electronic products.

    The same pattern has occurred with many telecommunication devices—for example, fax machines, which declined in price dramatically between the early 1980s and late 1990s. During the 1980s and early 1990s, fax machines were
    predominantly a technology for business. They began to enter some homes in the 1990s, although they still served mainly business. Early fax machines had to overcome not only a price barrier but also slow transmission speeds and the lack of a single standard, which meant that a given fax machine could communicate only with another fax machine that employed the same standard. Only after a single standard (the Group 3 standard) was adopted in 1983 did fax machines become interoperable. This development also helped to lower their costs through greater economies of scale in manufacturing and made them much more useful to businesses and consumers. By the 1990s, it was cheaper in most cases to send a fax than to send a single-page letter through the postal service. Fax machines became so widely accepted that most businesspeople put their fax numbers on their business cards.[3]

    Personal computers (PCs) have followed a different price trajectory. Rather than reduce their price, the industry for a long time increased their capabilities each year. This approach was an appropriate response to the early market for PCs, represented by office workers and working professionals who used
    them at home. Consumers received the benefits of technological advances and economies of scale in the form of improved performance rather than as a decline in price. In the early and mid-1980s, some companies (for example, Acorn in the United Kingdom and Atari and Commodore in the United States) offered low-end computers to try to reach households, but they met with limited success. In late 1997, personal computers had not yet entered half of U.S. households, and some research indicated that the penetration of PCs into households was hitting a wall of resistance.[4] This barrier appeared to be related to price. Responding to this perception, the industry began to offer low-end computers at lower prices for mass market adoption in homes and schools. As a result, the average price dipped below $1,000, a new group of users adopted personal
    computers, and the PC surpassed the 50 percent penetration mark in homes. The lower price also helped to bring many more computers into schools.

    Media Services

    The price of media services often drops over time, but the pattern is not as strong as in the case of media products. Telephone service provides a good example. The adoption of telephone service in households was linked to reductions in the cost of both basic service and long distance calls. In 1896, the fee for basic telephone service in New York City was $20 per month; in 1902, the cost of a three-minute call between New York and Chicago was more than $5—a week’s wages for some households.[5] In 2005, the cost of basic telephone service in New York was approximately the same—$20—and the cost of a three-minute call between New York and Chicago had dropped to 30 cents under a typical calling plan (see figure 1.2).

    Table 1.3 provides examples of some media services where the price dropped over time and others where it did not. The key variable distinguishing between the two groups is content. Where there is no content or the service provider does not have to pay for content (as when users create it), the cost of the service has fallen. When the service provides and pays for content, the costs of talent
    and production increase over time. Further, competition and demands from end users often require that the supplier expand service offerings, which leads to greater costs. For example, while the price of cable TV increased over time, so did the number of channels offered. Many other factors, such as regulations, can intervene and affect the price of a service.

    A new technology or service must attract initial users who are able and willing to pay a relatively high price for it to move toward the economies of scale in manufacturing that will reduce the price for the general public. (Most new services have required the manufacture of new technology components.) Who purchases new products and services when they are expensive? The answer varies somewhat by product, but the initial group of consumer purchasers is often wealthy, has an insatiable desire for the product, or loves electronic gadgets and is willing to pay a high price to be one of the first to own “the latest.” In addition to individual consumers, many of the purchasers are businesses or schools that need the product. The adoption of new media technology first by businesses and later by consumers is illustrated in the case of mobile phones (see chapter 10).

    The S-Curve

    The cumulative purchases of a new technology over time typically take the form of an S-curve—a curve that rises slowly at first, then much faster after reaching some threshold, and finally slows down as a saturation level is approached. The elements required to reach the threshold are not the same for all technologies, and the timetable for reaching the threshold can vary from a few years to many decades. Indeed, the crucial question associated with these S-curves is the time required to move from the launch of a new technology to the threshold point
    where rapid growth begins (from point A to point B in figure 1.3). The many technologies that do not succeed in the marketplace simply fail to gain any acceptance or never reach the point at which rapid growth occurs.

    Of course, the ceiling at which the S-curve levels off does not necessarily represent all households. While TV, radio, and telephones reached nearly all U.S. households at their peaks (98, 99, and 95 percent, respectively), cable TV appears to have reached the top of its S-curve with approximately two-thirds of U.S. households (figure 1.4). The penetration of personal computers may fall well short of all households, though computing technology will likely enter all households in the form of appliances that have microprocessors. It is too soon to know where DVRs, MP-3 players, and satellite radio will top out.

    Many successful products and services are characterized by curves showing their adoption rates—e.g., annual adoption—that take the familiar bell-shaped form suggested by figure 1.1. However, curves showing the annual adoption rates of others—in particular, those discussed later in this volume that are failures or fads or that have cyclical patterns of adoption—may be very different, even though the curves showing their diffusion (i.e., cumulative adoption) will typically be broadly S-shaped.
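
    One common way to sketch these two curves together is with a logistic function: cumulative adoption traces the S-curve, while the period-by-period additions trace the bell shape of figure 1.1. The sketch below is a generic illustration rather than a model fitted to any technology discussed here; the ceiling, growth rate, and midpoint are assumed values.

```python
import math

def cumulative_adoption(year: float, ceiling: float, rate: float, midpoint: float) -> float:
    """Logistic S-curve: share of households that have adopted by a given year."""
    return ceiling / (1 + math.exp(-rate * (year - midpoint)))

# Assumed parameters: 90 percent ceiling, takeoff centered on year 10.
CEILING, RATE, MIDPOINT = 0.90, 0.6, 10
for year in range(0, 21, 2):
    total = cumulative_adoption(year, CEILING, RATE, MIDPOINT)
    added = total - cumulative_adoption(year - 2, CEILING, RATE, MIDPOINT)
    # "total" traces the S-curve; "added" (adoption over the preceding two
    # years) is roughly bell-shaped and peaks near the midpoint.
    print(f"year {year:2d}: cumulative {total:.2f}, added {added:.2f}")
```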

    Start-Up Issues

    What is involved in getting to critical mass—that is, the takeoff point on the S-curve? Here a distinction must be made between interactive technologies
    (e.g., telephones, email, and text messaging) that link people together and technologies that are used by people individually (e.g., an iPod or a satellite radio). In the case of products that stand alone, critical mass can be achieved when enough users talk about a product in a positive way, people see others
    using it in many settings, newspapers and TV proclaim it to be a hit, and so on, with the result that many nonusers want to acquire it. In the case of communication technologies that link people together, the core value of the product is associated with how many people are already using it. For example, a telephone has less value when only a small number of people have it as compared with when millions have it. This is called the network effect.
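
    A rough way to quantify the network effect is to count the connections a network makes possible: with n users there are n(n-1)/2 potential pairwise links, so the value of an interactive service grows much faster than its user base. This pairwise-connection heuristic, often associated with Metcalfe’s law, is an illustration added here; the chapter itself makes only the qualitative point.

```python
def possible_connections(users: int) -> int:
    """Distinct pairwise links an interactive network of a given size supports."""
    return users * (users - 1) // 2

# Doubling the user base far more than doubles the possible connections,
# which is why a telephone network with millions of subscribers is worth
# much more per user than one with only a handful.
for n in (10, 100, 1_000, 1_000_000):
    print(n, possible_connections(n))
```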

    Some innovations achieve critical mass more easily than others. We consider three cases: when the innovation is an enhancement of an existing service, when it can piggyback on replacement cycles, and a third category which we call “charmed lives.”

    How New Is New?

    A difference exists between enhancing existing technologies and developing entirely new services. Introducing qualitative enhancements to existing technologies often provides a good path for the development of new media technologies. Consumers have responded positively to enhancements such as adding color to black and white television, higher fidelity for recordings, stereo sound for television, touch-tone telephones for rotary dial telephones, and broadband Internet access as a replacement for dial-up service. From a supply-side perspective, consumers were adopting new technologies and services. From a demand-side perspective, they were simply upgrading to a better version of a familiar and desirable technology or service.

    Sometimes the introduction of new services does not mean that users need to acquire new technology. This was the case with telephone-based voice services (sometimes called audiotex), used, for example, to check a bank balance through an automated voice system, and more recently with streaming audio and video on the Web. Often, however, new services as well as enhancements require new user technology. Table 1.4 illustrates four different possibilities.

    The combination of new services with new user technologies is the most problematic of the four cells in the table. High cost and uncertainty occur when, as is often the case, new content must be created (for example, for early online services and interactive television) or even when existing content must be bought from outside sources or distributed in new ways, as in the case of prerecorded videocassette movies, which, while successful, required several years to achieve significant usage.

    Then there are the uncertainties surrounding whether the new technology will work well enough and whether consumers will pay for it. There are also issues associated with the necessity of consumers changing how they use media. For example, a person watching HDTV uses the content in the same way as someone watching standard-definition television even though there is a qualitative difference in their experiences. However, a teenager acquiring a video game console for the first time in 1985 or an adult using the Web for the first time in 2000 needed to learn and employ new behaviors. These changes were all the more significant because they required that users alter existing media habits. Such change often requires time. Indeed, both the growth in hours per week spent on the Web and the rise in time spent on video games have spanned a considerable number of years.

    Piggybacking on Replacement Cycles

    Sometimes, the adoption of one technology is linked to the purchase of another. For example, while few people in the 1980s bought a TV set or a VCR just to obtain a remote control device or stereo sound, many consumers chose these features as options when they purchased new VCRs or replaced old TV sets. Thus, replacement cycles for existing technologies may provide an important opportunity to introduce new technologies. In U.S. households, color TVs are replaced after an average of eight years, personal computers are replaced after two years, and mobile phones last a little more than one year before owners seek new models (or replace lost or broken ones). In the case of both mobile phones and PCs, the replacement cycle has shortened considerably over just the past few years to the levels shown in table 1.5. This more rapid rate of replacement has provided an opportunity for providers of new hardware and services to introduce their technologies more quickly. This is a conservative model of adoption in which new technologies piggyback on replacement cycles. It provides one way in which Rogers’s Late Majority and Laggards acquire new electronic technologies.

    In the mid-1970s, a variation of this process was associated with a cultural quirk in the United Kingdom in the years following the launch of teletext there (see chapter 7). At the time, the majority of consumers still rented their TV sets
    from local stores. Upgrading the sets occurred relatively frequently since it required little if any effort or outlay by users. This provided a highly favorable context for the rapid diffusion of teletext in the U.K. market.

    New models of an existing technology are purchased for at least four reasons: to replace an existing model that no longer works; to obtain an additional unit of the technology; to upgrade an existing model that works but does not have a desired feature or is of lower quality than the upgrade model; or as a by-product of another purchase. Upgrade purchases have been very important to products for which the pace of technological change has been rapid. For example, in the late 1980s, very few personal computers had modems, the technology necessary to link the computer to the telephone network, primarily for use of online services. While people who wanted to go online could purchase stand-alone modems that connected computers to telephone jacks, relatively few did so. In the late 1980s, the industry began building modems into most computers. Then, with people’s purchases of new or replacement computers, the number of computer-owning households that had modems grew from less than 10 percent of the total in 1988 to more than 60 percent in early 1997. This was one of the vital elements that made it possible for people to use the Web. The building of modems into nearly all computers by 1995 helped this key peripheral device to cycle through the population of PC owners. CD drives and DVD drives were later introduced into households in the same way.

    Charmed Lives

    On some occasions, a new service or technology seems to have come from nowhere and to have enjoyed an effortless ride to spectacular success, to the amazement even of those introducing it—and of independent analysts. The original Sony Walkman was one such product. Mobile phones have provided
    the platform for more recent examples: mobile phones incorporating still cameras, text messaging (SMS), and applications (Apps) such as maps or games. Far from being user-friendly, the user interface of early mobile phones for creating text was positively user-hostile, and before SMS took hold, it seemed bizarre to believe that people would find uses for it that would justify the effort involved. But they certainly did—billions of text messages are sent each month. It would be a serious mistake to regard these surprise successes as no more than lucky flukes. The phenomenon has important implications for large corporations that introduce new products or services as well as for makers of public policy (see chapter 3).

    There is a variation of “charmed lives” in the dynamic relationship between a dazzling but initially disappointing new service and a humdrum but commercially successful counterpart—for example, the Prestel videotex service (dazzler that disappointed) and teletext (humdrum success) in the United Kingdom (see chapter 7). Does the publicity surrounding the dazzler help promote its lesser counterpart? Very probably. Does the Cinderella service spoil the market for the dazzler? Over the years we have certainly come across bitter remarks to this effect from marketing executives who were failing to make sufficient progress with one of these dazzlers. Does the Cinderella service help prepare the market for the successful entry of the more powerful version at a later time? Possibly, but it is unlikely that the technology of the more powerful version will stand still in the interim.

    Rate of Adoption

    Although it is common knowledge that, over recent decades, the pace at which consumers take up new technologies has been accelerating, it is worth quantifying the phenomenon by examining how many years it took different media to reach the point at which the median household decided that they were affordable—that is, to reach levels of penetration of 50 percent of U.S. households. Newspapers were introduced in precolonial America, and more than 100 years passed before half of U.S. households read them regularly—in the 1890s. The slow growth resulted from high costs of production and distribution as well as low literacy rates among the American population. Telephones were introduced in the late 1870s and did not enter half of U.S. households until 1946. High early costs of telephone service (figure 1.2) were a major factor in the slow adoption.

    Radio achieved a 50 percent penetration of U.S. homes in 1929, nine years after becoming widely available. Black and white TV achieved a 50 percent penetration in 1955, eight years after it became readily available to the public.[6] Color television reached this level of penetration in 1972 after seventeen years. VCRs
    achieved a 50 percent penetration at the end of 1987, and CD players did so at the end of 1993, each ten years after becoming widely available. DVD players grew at a rate faster than any other new media technology, entering half of U.S. households in 2003, just six years after entering the marketplace (table 1.6).

    Why did the market penetration of DVDs increase so rapidly? From a pricing perspective, manufacturers introduced the technology at a lower price point, in terms of weekly household income, than many other technologies and were able to bring the price down very rapidly. DVDs also entered a marketplace where VCRs had already established a strong demand for movie rentals and sales. They offered better quality than videocassettes but could not record. Why didn’t this limitation reduce interest in them? At the time when DVDs were introduced, most households did not use the recording function of VCRs (only one in five households with VCRs did any recording), so the lack of a recording feature was not a major obstacle. Further, the movie studios saw DVDs as a major source of income (they quickly surpassed movie theaters) and released thousands of movies on DVDs in just a few years. From a consumer perspective, in addition to providing better picture quality, DVDs took up less space than videocassettes; lasted longer (at least in theory; in practice, durability depends on how people handle and store them); had extra features, such as outtakes; quickly became affordable; and soon had a large inventory of movies. All of these factors came together to make U.S. homes quickly adopt DVDs.

    In entering the market at high prices that drop over time, DVD players and DVRs have followed the same pattern as earlier technologies. However, a compression factor appears to be developing in which the introductory price is not as high relative to earlier technologies and the price drops more rapidly and to a lower level than with earlier technologies. This trend appears to hold for recently introduced new media technologies more generally, though some exceptions, such as HDTV, exist.

    False Starts

    Some successful new services or technologies suffered from false starts or languished for a long time on the initial, low-growth part of the S-curve. For example, television was launched as a commercial service in the late 1930s, but the high price of TV sets ($600) and the disruption caused by World War II led to a suspension of most service. The technology was reintroduced after the war and grew rapidly. Similarly, two home video recording technologies were launched and then withdrawn in the early 1970s (CBS’s EVR system and Avco’s Cartrivision system) before the modern VCR finally took hold in the mid-1970s. Fax technology wins the prize for false starts. It was invented in the 1840s and tested in the 1860s, with no significant adoption; reintroduced unsuccessfully in the 1930s and the 1950s; then achieved widespread adoption in the business market during the 1980s; and finally entered a moderate number of households in the 1990s.[7]

    The Advantages and Pitfalls of Being First

    There has been a long-running debate about the advantages and disadvantages associated with early entry into the media marketplace. Brian Arthur makes the case for early entry.[8] He starts by noting that economists generally view competition in terms of equilibrium. For example, a hydroelectric power company that gains an advantage in the marketplace will eventually run out of good locations to build new plants, thereby providing an opportunity for power companies using other sources of energy to compete effectively and thus restore equilibrium. He then argues that this classic economic model does not adequately explain what happens with new media technologies. Here, small competitive advantages gained early often escalate over time and lead to market dominance. Arthur describes this process in terms of positive feedback, citing many examples. From this perspective, a technology with a small marketplace advantage may benefit from positive market feedback that strengthens the advantage and generates more positive feedback, with the process continuing until market leadership is established. For example, when VHS and Beta formats were competing
    for the videocassette market, VHS had a longer recording capability and developed better marketing agreements with electronics manufacturers and movie distributors. These advantages, in turn, attracted more retailers to market VHS. With this positive feedback, VHS’s small advantage grew larger and larger over time, and Beta was essentially eliminated from the consumer marketplace even though many observers argued that it was technically superior.

    A technology’s small early advantage over its competition may arise from chance, a favorable geographic location, or a seemingly inconsequential event, such as favorable coverage in a magazine story. The importance of lucky breaks has been overlooked by many who have studied the adoption of media, in part, quite possibly, because the involvement of luck suggests that the process of adoption cannot be fully analyzed in advance and that the growth rate for a new service cannot be forecast. However, there are many examples of chance playing a crucial role in the adoption process, from the mom-and-pop videocassette rental shops that emerged spontaneously during the development of the VCR marketplace to the development of cybercafés that brought Internet service to millions of people worldwide who might not otherwise have experienced it.

    Arthur’s model of positive economic feedback can be used to support the case for early market entry as a means of generating positive feedback. Indeed, many instances of early market entry have escalated into market dominance. AM radio preceded FM into the marketplace and dominated radio for fifty years; HBO was the first to develop a national pay-cable service and quickly dominated the market; and the three broadcast networks that entered television in the late 1940s achieved a lock on the market that was not challenged for 30 years.

    However, for each example of early entry that led to marketplace dominance, there is an example of early entry that led to failure or weak market performance. Among other examples, a 45-RPM automobile record player developed in the 1950s (the “Highway HiFi”) failed to achieve any significant market; two-way video trials and services in the 1970s for business meetings and medical applications were largely unsuccessful; and a broadcast pay-TV service developed by Zenith in the 1950s failed. Yet each of these cases was followed by technologies and services for somewhat similar purposes that succeeded. There are many reasons why early market entrants fail. In some cases, the technology simply does not work properly: for example, the 45-RPM automobile record player skipped whenever the car hit a bump. In other cases, the costs associated with marketing and launching a service overwhelm an early entrant: several groups that planned to launch direct broadcast satellite services in the early 1980s abandoned their plans as they faced the huge costs associated with launching the services. In still other cases, an inhospitable regulatory climate can cripple an early entrant, or consumers’ lack of skill in using the new technology can lead to failure. Those who enter a market later may find
    that their technology works better, costs are lower, consumers have improved their skills in using the technology, the regulatory climate is more hospitable, and so on.

    A historical review of new media technologies suggests that early entry is an advantage in some cases and a disadvantage in others. It is an advantage when all the pieces are in place (or soon will be) to launch the technology successfully. It is a disadvantage when the technology suffers from one or more serious weaknesses. XM Satellite Radio is an example of the former, a new service that grew rapidly even though it was missing a number of vital pieces during its prelaunch planning phase. For example, satellite radio sets did not exist, and the company had no agreements with automobile manufacturers to place receivers in cars. However, the company put these pieces into place before or shortly after the actual launch of the service (see chapter 9).

    How Applications Can Change over Time

    Another example of the valuable concepts that Rogers introduced is reinvention, which he defined as “the degree to which an innovation is changed or modified by a user in the process of its adoption and implementation.”[9] Reinvention occurs far more frequently than is generally acknowledged. Mobile phones are a good example. When first introduced, they were intended for dispatching police and emergency calls. Later, usage evolved into general business calls and then to social chitchat by consumers. In the case of new media, modifications to an innovation can be introduced by the industry concerned as well as by users. In this sense, we would expand the definition of reinvention to include industry modification. The various combinations include new purposes (maybe associated with new types of user) but unchanged technology, new technology but unchanged purposes, and new purposes together with new technology. This definition can cause confusion: since the associated changes may be gradual rather than abrupt, it is not always clear whether and when a new product or service should be regarded as having become a different entity. For example, is a new model of a personal computer with an upgraded central processor a new product? In some cases, the upgrades are minor; in other cases, the new CPU adds significant capacity, such as the ability to use the PC for watching and editing video or playing fast-action games.

    In addition to the time-based distinctions that Rogers introduced between adopters (Innovators and so on), another type of time-based difference is associated with reinvention. There is a broad generalization that applies to many new technologies: the early uses as well as the early users for a new product or service often differ substantially from the later uses and later users. This difference may be explained with a staircase analogy.

    A Staircase Analogy

    For a technology to be adopted, several steps must be climbed. The first step represents one group of users and uses, but the group or mix of groups at the second and third steps may change. It follows that there must be a first step if a technology is to reach the second step. Ideally, those who are introducing a technology will anticipate the mix of users and uses at each step. Since doing so is very difficult, however, they must be prepared to shift strategies as they climb each new step. VCRs illustrate this process. When VCRs were first introduced in the United States, they were quite expensive—approximately $1,200. Early users included businesses and schools that had been using 3/4 inch U-Matic recorders, costing $2,000 or more, for training and education. For them, a $1,200 VCR was a cheaper alternative. The early adopters also included some high-income households, especially those with an interest in the latest electronic gadgets. Household usage included time-shift viewing of television programs and a considerable amount of pornography; a majority of videocassettes sold and rented in the late 1970s featured pornography.[10] Pornography or risqué content also played a part in the early adoption of books, films, 900 pay-telephone services (often called premium telephone services outside the United States), and various other media.[11]

    Businesses, schools, people who were willing to pay a high price for time-shift viewing of programs, and those who wanted to see pornography comprised the first step of users and uses. They made it possible for a second step of adoption to occur, at a lower price and with a different mix of users (more middle-income households) and uses (more videocassette rentals of Hollywood movies and later sales). This example suggests that some services might appeal to a mass market at the second or third step in the adoption process but never have the opportunity to test the mass market because no group is prepared to pay the higher price at the first step or because of some other early barrier. Teletext in the United States is an example. When tested in a number of markets during the early and mid-1980s, teletext generated positive consumer reactions, but few were willing to spend more than $150 for a teletext decoder attached to or built into a TV set. However, service providers and television set manufacturers were not able to agree on a standard, and the Federal Communications Commission (FCC) declined to set one. In the absence of a single standard, early teletext receivers cost approximately $700. At that price, there was no user group or application that could be used to climb the first step in the adoption process and bring the price down for a mass market.

    In the case of VCRs, some important unanticipated events also occurred. One was the emergence of local shops set up by individuals who purchased copies of videocassette movies and then rented them to the public. (Blockbuster
    and other large chains later replaced many of these small shops.) The major movie distributors tried to prevent these mom-and-pop rental shops from doing business independently, suing to stop them but losing in court under the “first sale doctrine,” which held that a store could buy a copy of a movie on videocassette and rent it to consumers. Ironically, this defeat led to tens of billions of dollars in revenue for those major distributors.

    These shops were critical for the second and third steps of VCR adoption to occur, suggesting that the growth of a technology is often a fragile, changing process. Early use can differ from later use, and the elements that are critical to success at various steps along the way can come from unplanned and unanticipated sources.

    Killer Applications or a Confluence of Factors?

    The staircase analogy does not preclude the possibility that a “killer application” can get a new media technology beyond the first and second steps to mass market acceptance. But how often does it happen? Those who develop and market new media technologies often herald killer applications and magic bullets that will lead to a decisive home run for a new technology or service. Indeed, some very popular applications helped technologies to gain quick acceptance in millions of American homes. For example, a few very popular radio programs drove the sale of radio sets, and a few early video games drove the adoption of video game consoles. Within the business community, spreadsheet programs played a similar role in the early days of personal computers.

    More commonly, however, a confluence of several factors is required for a new media technology to take off and gain widespread acceptance. This confluence may take only a few years, but a technology sometimes has rested on the first step for decades. DVDs and black and white TV (when reintroduced after World War II) provide examples of the former—they grew very rapidly after their introduction. Cable television and FM radio provide examples of the latter—they were in the marketplace for many years before they experienced a period of rapid growth.

    Cable television was introduced in Oregon and Pennsylvania during 1948. From 1948 to 1972, cable television grew from no penetration of U.S. households to 10 percent penetration.[12] In other words, it required more than 20 years to achieve a 10 percent penetration level. From 1972 to 2005, cable grew from 10 percent penetration to more than 65 percent penetration (figure 1.4). Why did penetration jump so rapidly in the 1970s and 1980s? The cause was not a single factor or killer application.

    In the 1950s and 1960s, cable TV represented a way to improve reception of over-the-air broadcast signals in communities that could not otherwise
    watch television or could not get a good signal. Generally, these were small rural towns and some suburban areas 50 or more miles from a broadcast transmitter. Cable offered very few extra channels or services, so it had little appeal in areas where the over-the-air reception was good. Indeed, one of the most common cable-originated channels during this early period was a channel that showed a clock 24 hours a day. While practical, such “programming” was not likely to attract new subscribers. In the 1970s, a confluence of several new elements acted as a starter motor for a powerful growth engine to kick in. For the first time, it appeared that cable could be profitable in large cities; at the same time, the FCC lifted a freeze on franchise awards in major markets. As a result, major media companies competed fiercely to win franchises in Boston, Cincinnati, Pittsburgh, Dallas, San Diego, and elsewhere and built large cable systems where none existed previously. In addition, satellite transmission made national cable program distribution easier and less costly, which gave rise to the launch of many basic channels, such as superstation WTBS, and significantly improved the distribution of the pay-movie channel HBO, which had previously mailed tapes of movies to cable outlets, a process called bicycling. Then, in the late 1970s, several popular new channels with specialized programming—for example, Nickelodeon and MTV—were developed as a result of a flurry of experimentation with program formats triggered by interest in interactive cable services.[13] This confluence of elements provided a powerful engine for growth.

    Similarly, FM radio experienced slow growth for many years before a period of rapid expansion kicked in. It was introduced in the 1930s and widely available to the public in the 1940s. For 30 years, audience share grew very slowly, but in the 1970s, a dramatic growth in audience share began. Between 1973 and 1985, FM’s share of the radio audience climbed from 28 percent to 72 percent. In other words, the FM/AM share of audience changed from approximately 30/70 to 70/30. Why did such a dramatic change occur over this period? The answer appears to be related not to any single element but (in the United States at least) to a confluence of elements that triggered a growth spurt.

    A variety of barriers and potholes along the way had created problems in the start-up of FM broadcasting. In 1945, the FCC changed its spectrum allocation for FM, with the result that FM radios purchased before that date were useless and the industry essentially had to start the adoption process over again. During the 1940s and 1950s, most FM stations were co-owned by an AM station operator, which carried the same programming on both stations. In addition, FM receivers were expensive, and there were decidedly fewer FM stations than AM stations. In 1960, for example, there were four AM stations for every FM station. With relatively few stations on the air, little original programming, and expensive receivers, consumers were reluctant to adopt FM even though it was
    technically superior to AM. Nonetheless, FM gained a foothold thanks largely to classical music devotees who valued its superior sound quality.

    Beginning in the late 1950s, a series of events helped FM gain strength in the marketplace. In 1958, AM frequency allocation reached a saturation level in major markets, thereby encouraging new groups that wanted to launch stations to apply for FM licenses in those markets. In 1961, FM stereo was launched, and in 1965 the FCC ruled that co-owners of AM and FM stations in the same market could not transmit the same content on both of them. Stereo provided a qualitative enhancement to FM, while the requirement to offer original programming led many FM stations to explore new formats. In addition, the price of FM receivers declined, and many manufacturers began to offer combined AM/FM receivers for both homes and cars. Many households then moved to acquire FM receivers. By 1970, the ratio of AM/FM stations was 3/2, and 74 percent of households had FM receivers. This confluence of elements placed FM in a strong position to challenge AM.

    Disruptive Technologies

    A dramatic example of the dynamics of changing uses and users is provided by what Clayton Christensen has termed a disruptive technology.[14] Sometimes a technology is introduced at substantially lower levels of price and performance than technologies already available in the market and creates a new market for itself among those who are particularly price conscious. Because of its low performance, it initially poses no threat to suppliers already in the market. However, with the positive feedback resulting from direct or indirect network effects, together with economies of scale, significant improvements in performance are progressively implemented without commensurate increases in price (as with personal computers). As it improves, the new technology attracts new types of users. At some point, it invades and takes an existing market away from its incumbents. Calculators were a disruptive technology for the makers of slide rules; other disruptive technologies have included low-cost photocopiers, ink-jet printers, and, in the corporate computing market, personal computers.

    Natural Selection—When a Better Format Comes Along

    Recorded music illustrates how adoption of new media can take place when a new format is introduced. Long-playing records (LPs) were the format of choice for a few decades. They were replaced by audiocassettes, which were replaced by CDs, which are being replaced by digital downloads. Figure 1.5 shows the transitions from one format to another.

    CDs were challenged by the MP-3 format in the early 2000s. However, attempted transitions to a new format do not always succeed. In the 1960s, prerecorded music on reel-to-reel tape was technically superior to LPs, but it was difficult to use (the tape could spill onto the floor if not handled carefully) and achieved little acceptance in the home market. Eight-track cartridges were introduced in the mid-1960s as a more user-friendly tape format. They were offered first for cars and later for homes. Eight-track cartridges competed for market share with LPs. They achieved modest success by the mid-1970s, but at that point, audiocassettes were growing in popularity, and they replaced both LPs and eight-track cartridges.

    For a new format to be adopted, end users must perceive advantages (more capacity, better quality, valued extra features, and so forth) sufficient to overcome the hurdles of purchasing new hardware and accepting the eventual obsolescence of recordings in the older format. More recently, high-definition DVDs challenged standard-definition DVDs and faced the same obstacles.

    Demographic Characteristics and Lifestyle Issues Associated with Adoption

    Adoption of new media technologies is associated with age (generally, younger and middle-aged groups are more likely to adopt a new technology), income (those with higher incomes adopt technology sooner), education (those with more education adopt technology more readily), gender (while evidence is mixed and patterns may be changing, a greater proportion of males were early
    users of household personal computers, VCRs, and the Web), and in some cases, moving patterns (moving to another location appears to trigger a reevaluation of what technologies or services are desirable). In a large metastudy of computer adoption research, William Dutton and his colleagues have argued convincingly that among all the demographic factors positively associated with adoption, income is the most important.[15]

    One of the barriers to mass adoption of many new media technologies is the distribution of income. The distribution of income in the United States has been characterized as a fishbowl, with a small top, small bottom, and large middle section where most of the fish swim. To a large degree, this image is correct and has been supported by the income data collected since the 1920s. The typical middle-income American household has also been characterized as a family unit in the suburbs with the husband working and the wife taking care of the house while raising their children. This latter image was accurate for many households in the 1950s and 1960s. However, as more women entered the workforce in the 1970s, 1980s, and 1990s, some characteristics of the typical American household unit began to change. From the perspective of technology adoption, the most important changes have been increasing wealth at the top of the fishbowl and increasing poverty at the bottom, along with the characteristics of those who compose these groups. At the top, a large group has annual incomes of $100,000 or more. This group of households consists predominantly of married couples in which both spouses work. They are generally middle-aged (35 to 55), are well educated, and have children. This is a very important target group for early adoption of new media technologies. At the bottom, a large group lives in greater poverty than it did a few decades ago. This group cannot afford many of the new technologies until their prices have dropped a long way but may experience these technologies in schools, libraries, the workplace, and other locations outside the home.

    Age is another important demographic characteristic that affects adoption. A distinction must be made, however, between age and generation. That is, when one age group has adopted a technology, did they do so because of their age or because they were a new generation? For example, it is reasonable to assume that young people have adopted MP-3 players because people between 15 and 30 like to hear music as they move from one setting to another, just as young people in past decades adopted portable radios, Walkman audiocassette players, and portable CD players. However, what about young people’s preference to get news from the Web rather than newspapers? Does this phenomenon relate to their age, or is it a generational shift in which this generation has adopted the Web and will continue to prefer it even as it gets older?

    Among the many lifestyle issues that affect adoption of new media are hectic schedules, home offices, mobility, and individualism. Two-income households
    with children are prime candidates for early adoption of new media. However, adult members of these households are also characterized by very hectic schedules and a feeling that they are pressed for time.[16] The evidence is mixed about how much free time they actually have, but it clearly shows that they believe they have too little. This is particularly true for working women, who have responsibilities for child rearing and household chores on top of work obligations. A key issue for this group is whether a new technology takes up or saves time. Further, does it take time to learn, and can it easily fit into their schedule, or must they change their schedule to use it? Some of the appeal of MP-3 players and the Web is that they are schedule-free; part of the appeal of the DVR is that it frees users from the restrictions of a television schedule.

    Households with home offices are also associated with early adoption of many new media. They include people who operate businesses from their homes, telecommuters who work part-time at home and part-time at regular offices, and professionals such as teachers or lawyers who maintain an office at home to complement their regular place of work. The number of home offices has grown sharply from about 20 million in 1990 to more than 50 million by the mid-2000s. Households with home offices have a need for many new media technologies. For example, they were among the first households to have two or more telephone lines, fax machines, Web access, and broadband. This example also highlights some of the limitations of the categories we use to classify types of adopters. A “business” category encompasses many different types of users.

    Two social trends, mobility and individualism, are coming together to encourage the adoption of new media. Mobility involves the greater use of cars, more air travel, jobs that are farther from home, and more activities outside the home. Individualism (as the term is coming to be used in the sociodemographic field) is a long-term social trend in which people are participating less in groups. In politics, citizens have been identifying less and less with political parties. In communities, fewer people identify closely with social organizations such as the Lions Club or the Kiwanis. This development carries over to media. In print media, general-interest magazines have declined, and special-interest magazines have increased. In television, ratings for general-interest programs that appear on the major networks have declined, and ratings for special-interest cable channels have increased. Individualism and mobility have combined to lead many people to consume personalized media anywhere and at any time. This long-term trend began with households acquiring multiple units of technologies such as radio, TV, and the telephone, thereby encouraging personal access to and usage of media. Walkman radios, mobile phones, PDAs, MP-3 players, and laptop computers later strengthened the habit of using personalized and customized media in any setting.

    With multiple units of technology in homes and many personalized and portable media, where do people put all this equipment? Part of the answer is that miniaturization of electronics allows more equipment to fit into much smaller spaces than it required in earlier decades. Perhaps more important is a changing social pattern in using media. More technology has been integrated into the social fabric of everyday life, especially in mobile settings, where many people wear technology such as mobile phones or MP-3 players in a manner similar to clothing (see chapter 10).

    Failures, Fads, and Cyclicals

    So far, we have been concerned primarily with new technologies that became successes. Inevitably, however, failures are much more numerous than successes, and much of value can be learned from them. Many analysts have noted that new technologies are often created by engineers who have little knowledge about whether there is a demand for these technologies.[17] In this sense, new services often result from “technology push” rather than “demand pull.” This practice has often been cited as a reason why many technologies fail. While the criticism is correct, it would be facile to leave it at that: many of the most successful communication technologies of the past 125 years—telephones, motion pictures, radio, phonographs, and television—entered the marketplace as technology push, in a context of uncertain demand. Technologies do not falter simply because they represent technology push; they fail because they cannot meet the challenge of finding or creating applications that people want. As one of the pioneers of videotex in the United Kingdom presciently asked, “Will we succeed in reaching takeoff speed before we run out of runway?”

    After a short discussion of failures, we turn to fads and cyclical technologies, which have patterns of adoption that differ from those of either successes or failures.

    Failures

    A common pattern is associated with the marketing of many unsuccessful products and services—a phantom S-curve. These technologies frequently languish with low consumer acceptance while their advocates proclaim that they are about to take off and project rapid near-term growth based on an S-curve pattern of adoption. Videotex, eight-track audio cartridges, and over-the-air subscription television (STV) are among the technologies that have followed this pattern.
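
    The S-curve invoked by such projections is typically some variant of the logistic diffusion curve. A minimal sketch, using illustrative symbols that are not drawn from this book (cumulative adoption N(t), eventual ceiling K, growth rate r, and midpoint t_0):

    \[
    N(t) = \frac{K}{1 + e^{-r(t - t_0)}}
    \]

    Adoption grows slowly at first, accelerates sharply around t_0, and then levels off toward the ceiling K. A phantom S-curve projection, in these terms, places t_0 just ahead of the present even though actual sales show no sign of the coming acceleration.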

    STV was a form of movie channel available by subscription, like HBO, but delivered by means of a scrambled broadcast signal. STV was tried in the

    Page  46

    1950s and 1960s with little success. Again in 1977, a few companies launched STV services and achieved modest penetration of homes. By 1980, more than 600,000 people subscribed to these services; two years later, that number had topped 1.3 million.[18] Some media analysts projected that STV was about to take off. At the same time, however, cable TV was entering major markets across the United States, offering technically superior service and multiple channels, not just one. STV soon faded away.

    More generally, many lessons can be learned from technologies that failed in the marketplace or lost ground after achieving a significant penetration of households. First, many technologies have failed because the benefit they offered was at best superficial. For example, quadraphonic sound (four-channel sound) was introduced in the 1970s but did not represent a significant advance in technology for the consumer market. Rather, it represented an attempted transfer of existing industrial technology (multitrack recording and playback) that provided a genuine benefit in an industrial setting (control of editing) to a home market in which no benefit could be demonstrated. In addition, very few recordings took advantage of the new system, thus further reducing its appeal to consumers. From a consumer’s point of view, quadraphonic sound offered no advantage over existing stereophonic sound. Though its failure could have been anticipated, proponents ignored its weaknesses and instead tried to market advantages that were ephemeral from a consumer’s perspective. Videophones (see chapter 6) and Smell-O-Vision are illustrative examples of failure. Smell-O-Vision was a 1950s gimmick to try to bring more people into movie theaters in the face of competition from television. The concept was to introduce scents into the theater that complemented the scene in the movie—for example, the smell of the sea in a scene on an island. It was short-lived. The problem was not so much introducing the scents but getting them out of the theater before the next scene, which might require an entirely different scent.

    In general, companies have ignored failures and the many lessons that can be derived from them. In addition to the fact that an understanding of failures can help technology start-ups avoid making the same mistakes as in the past, it is also possible for a phoenix to arise from the ashes of failure. For example, the failed videotex trials of the 1970s and 1980s offered many clues about how online services could succeed (see chapter 7). Unfortunately, when a technology or service fails, the company that initiated it often lays off the personnel who gained the learning and literally throws out the records showing what happened, including the research. Many years ago, communication researcher Harold Mendelsohn suggested that a good way to identify potentially successful new media services and applications is to look for near misses of the past.[19]

    Page  47
    Fads

    Some seemingly successful technologies prove to be fads. We are familiar with fads in leisure products such as hula hoops, yo-yos, tamagotchis, and pet rocks. Consumer electronic technologies and services, too, may be fads or have a faddish component. A good example is citizens band (CB) two-way radio, which in the early 1970s had a steady population of approximately 200,000 users. CB became a fad in the mid-1970s, and many consumers bought CB radios for their car or home. The population of users grew to a peak of 10 million in 1976 before declining rapidly and leveling off at approximately 1 million by the early 1980s (see chapter 10). Other media fads have included boom boxes, beepers for teenagers, and minidisc players.

    Cyclical Adoption

    Some technologies exhibit a cyclical pattern of adoption: strong adoption, followed by decline in usage, followed by periodic returns to popularity. 3-D movies, for example, were popular during the mid-1950s, then faded away, experiencing renewed interest in the 1960s and for brief periods in each decade thereafter. 3-D movies are essentially a one-trick pony. They are appealing when an object comes at the audience—an arrow, perhaps, or the hand of a monster—but have little value for most scenes. They found a niche market in exhibitions at amusement parks such as DisneyWorld and for some large-screen IMAX movies. 3-D technology has also been tried a number of times on television with the same results as in movie theaters.

    Video game consoles are another cyclical technology that has had much more success than 3-D movies. These consoles and associated software surged in the early 1980s, collapsed in the mid-1980s, and were resurrected in the late 1980s. From the 1990s onward, they have experienced cyclical growth and decline, although the fluctuations have been less extreme than in the 1980s. These peaks and valleys are associated with the introduction of new generations of equipment—8-, 16-, 32-, and 64-bit microprocessors, each of which was replaced by faster processors after a few years.

    Cyclical patterns of adoption and decline can sometimes be anticipated, as in the case of console video games. After two phases of the cyclical pattern, it could be anticipated that the pattern would continue. Cyclical phases can sometimes be controlled. For decades, the Walt Disney Company built cyclical phases into the distribution of its children’s movies. The company released a movie into theaters and later showed it a few times on television before withdrawing it, only to reintroduce it several years later when a subsequent generation of children would perceive it as new. When videocassettes and DVDs were

    Page  48

    introduced, Disney followed a similar pattern, distributing the movie on videocassette or DVD for a fixed period of time, withdrawing it, and redistributing it several years later.

    Some Practical Implications for Those Introducing New Media

    How can an understanding of patterns of adoption be applied to the introduction of a new media technology or service? The first step is to identify which lessons are likely to be relevant. A technical standard is important in some instances but not others. The initial price for a technology may be completely beyond the reach of consumers, meaning that businesses, schools, or government must be the early adopters or that the group introducing it must subsidize the price. An existing technology may be present in most homes, allowing the new technology or service to piggyback on the replacement cycle for the existing technology, or the new service may involve disruptive adoption and change in behavior. A close parallel may exist between a new technology and one that entered the marketplace in the past, so that several lessons apply, or there may be no close match, in which case it is necessary to pick and choose relevant lessons from several earlier technologies.

    The Context for Adoption

    It is important to understand the broad social, media, and business contexts in which a technology or service must take hold. What are the likely competitive technologies? (Those investing in satellite telephony did not see cellular telephony as a threat, but they miscalculated: cellular services expanded very rapidly throughout the world.) What social trends support adoption or present a barrier to adoption? Which organizations might be appropriate partners in the marketing of a new technology? Many social, economic, and demographic trends are likely to affect the demand for new media services, including the aging of the population, more work at home, and the harried lifestyle of middle-aged, middle-income households. In assessing these and other trends, it is important to ask what content needs may arise because of a change in population demographics or social lifestyle and what the implications are for the technology or service that is under development or in a planning stage. For example, will professionals working at home need new multimedia information services, and will households need more bandwidth coming into the home to receive new services? In addition, what long-term media technology and service needs does an aging population have?

    At present, perhaps the most intriguing social trend that can inform the planning of future technologies and services is the combination of mobility

    Page  49

    and individualism. These ingrained habits and desires have already had a major impact on society through the widespread adoption of mobile phones, laptop computers, MP-3 players, and portable video game players. What new media technologies and services are likely to follow? Will people carry even more devices around with them, or will new services be built into existing technology, as has happened with mobile phones? With individualism have come more personal and customized media and associated values of control, convenience, and links between technology and social identification. How much further will mobile devices go in the direction of being integrated with clothing and becoming fashion statements as well as functional devices? Taken together, these trends provide a road map for many new media devices that will emerge in the future. At the moment, the map is incomplete, and much more research is needed to fill it in.

    General Lessons

    A review of consumer adoption patterns for new media technologies and services suggests a number of general lessons that may help us to understand how consumers are likely to respond to new electronic technologies and services.

    1. Service matters more than technology. The technology used in delivering a service is far less important to ordinary consumers than the service itself and how they perceive it. Therefore, it is important to emphasize the services that will be delivered rather than the way they will be delivered. Similarly, consumers may not distinguish services in terms of technological characteristics such as the bandwidth or storage capacity, even though those characteristics are very important to network providers, equipment manufacturers, and the engineers who designed and built the technology. Consumers are certainly able to distinguish a text Web page with sports scores from a video sports segment on a PC or TV. They are more likely to think, however, about advantages such as convenient access, timeliness of information, and fun in watching video clips of games rather than about the bandwidth of the distribution channel or storage requirements for the file.
    2. Experiencing the new media product or service before adopting it is very important in some cases. This was clearly the case with early DVRs such as TiVo. Many people did not understand what the product did. On hearing a description, they thought it was a glorified VCR. However, experiencing it in a friend’s home turned many into adopters. It can also be argued that this was a problem in the early marketing of HDTV. Most people had never seen it, and television set manufacturers did a poor job of placing it in public locations where many people could see it. Even in electronics stores, many early HDTV sets displayed regular television signals that did not convey the true characteristics of high-definition television. Those who did see true HDTV were impressed. The remaining hurdle was price, which declined over time and led to widespread adoption.
    Transparency

    Design transparency is crucial if consumers are to adopt and use new media technologies and services. In the past, the term user friendly was employed to convey the idea that a technology could be used by anyone, with little or no training. However, the term has become a cliché, and many products that claim to be user friendly are anything but easy to use. Transparency may be a better term, indicating that the technology does not get in the way of its use—the equipment and technical aspects of the service are transparent. This is more difficult to achieve than many assume. It requires considerable effort by designers, engineers, and human-factor specialists, followed by extensive testing of the product with consumers and changes in design based on the research findings.

    1. Complexity has become a greater obstacle over time. One of the hurdles in achieving transparency is the increasing complexity of many new media technologies. Users often cannot figure out how to navigate through complex menus or work remote controls with a seemingly endless array of buttons. Furthermore, most people (especially men) have little tolerance for any form of written instructions. To some degree, these problems may be associated with the rapid dissemination of so many new electronic products. In the past, consumers were confronted with simpler technologies—for instance, tuning a radio or dialing a telephone—and most had years of experience (as children growing up) observing others use the device before being called on to make it work themselves. By contrast, digital television grew rapidly in the first decade of this century and brought video on demand, interactive program guides, menus of services, and hundreds of extra channels into the previously simple world of television. People had to learn how to use these new options within a short time, and the transition was not smooth.
    2. The skill level of the general public in using technology has improved over time but is not sufficient when technology is poorly designed. What is the skill level of the general public in using technology, and what strategies are available if target adopters lack the skills needed to use a device? The good news is that the skill level of an average person is much higher than was the case 20 or 30 years ago. Groups introducing new media services back then encountered many people who did not understand the concept of hitting an “enter” button or pressing buttons to control information on a screen.[20] School and work environments have introduced millions of people to computers and helped to develop technology skills that can be applied in the household. In addition, automated teller machines (ATMs) and automated information kiosks at airports or shopping malls have helped in training the mass public to use computer-based technologies. The bad news is that most people still lack advanced degrees in engineering, which some organizations appear to take for granted when designing new media devices and services. The first generation of DVD recorders is a good example—poor design made them very difficult to use.
    3. Good design takes time. Often there is a tradeoff between the time and effort put into the design phase for new media and transparency to a user. Companies pay one way or the other. Putting in the necessary effort to achieve transparency has a cost in personnel and time. Taking shortcuts in the design process can lead to more calls to customer service later as well as disgruntled customers.
    Targeting Early Adopters

    Who are the candidate early adopters for a new media technology or service?

    1. Technophiles, households with home offices, and households with two working spouses are strong candidates to be early adopters. A review of adoption trends for earlier technologies suggests that three important groups of consumer early adopters exist. The first is technophiles, who feel compelled to have the latest electronic gadgets and hot-rod delivery systems. Traditionally, most members of this group have been male and middle-aged and have had high incomes. More recently, many younger people, both males and females, have joined the ranks of early adopters for technologies such as next-generation mobile phones. A second group consists of households with home offices and a need for many of the new technologies. Third, households with two working spouses and children often have the financial resources and need to acquire many new media services. Many parents believe that their investment in some information technology, especially computers, and some services, such as broadband access to the Web, will help prepare their children to do better at school and to find better jobs in the future. Children also have shown a strong interest in computers, mobile phones, and the Web, although this interest may result as much from entertainment considerations and a desire to stay in touch with friends as from an interest in education.
    Page  52
    2. Businesses and education groups are target early adopters. Business and education are important target groups that may lead to consumer adoption of new media services. In the past, many technologies first entered businesses and schools, creating habits and appetites that people eventually brought home. The telephone and mobile phones, for example, were predominantly business services at first and then entered households of businesspeople who wanted the same services at home. Similarly, many people developed an appetite for personal computers and the Web in business or education settings and then brought those technologies home.
    3. Plan ahead for second and third waves of adopters. In targeting early adopters, it is important to plan ahead for the second and third wave of adopters and not to assume that later adopters will be just like the first group in terms of demographic characteristics or uses of the product. For example, later adopters of DVRs were not as fanatical about this technology as the first wave of adopters; the second wave of cable television adopters (who lived in cities and wanted more choice and better reception) was very different from the first wave (who lived in rural areas and were happy to have any access to TV signals).
    4. Be flexible. Since it is hard to plan far ahead, it is important to be flexible in product development, content creation, and marketing so you can adapt to the changing environment. An early example of poor planning and holding onto an early adopter group as the model for those to follow was the phonograph. Thomas Edison invented it as a device that could both record and play back. The early adopters were businesspeople who used it for taking dictation and for later playback to secretaries, who would type up the contents. When Edison began to offer the phonograph to the general public, he believed that the recording feature was very important and offered relatively little prerecorded content, such as songs. A rival, the Victor Talking Machine Company, emerged with a playback-only phonograph and extensive prerecorded songs. Stuck in his concept of business use for the phonograph based on the early adopters, Edison lost the battle.[21] A similar case can be made about IBM and the PC. IBM dominated the market when the PC was a tool for businesses and used for spreadsheets and word processing. However, the company lacked the vision to foresee the landscape of applications after the PC became a consumer staple and was used widely for entertainment.
    Motivations and Desires

    Why do consumers adopt new technologies and services? What are their motives and desires?

    Page  53
    1. Strong need is a motivator. One important motivation to adopt new technology and services is a strong need: a consumer has an existing unmet need in his or her life; a new service meets the need at an acceptable cost; the consumer adopts it. There are many examples of the adoption of new technology based on need: even in the 1980s, when home satellite dishes cost between $2,000 and $3,000, they were adopted by 20–25 percent of households in western states such as Montana and Idaho, where there were few local broadcast signals or cable systems. These people had a strong need.
    2. Latent needs can sometimes be identified. In the absence of a strong need, those marketing a new service can sometimes identify a latent need—that is, one that consumers do not recognize until it is explained through marketing or demonstrations or until they see friends using it. For example, ample evidence from the first decade of mobile phone use showed that millions of people, not just business executives and emergency workers, had a latent need to make and receive phone calls in many locations where no landline service was available (see chapter 10). Further, many new media, including the DVR, have created a demand by providing a very appealing service that consumers want after experiencing it.
    3. Insatiable appetites drive behavior. Another motivator is an insatiable appetite for some content or service. Many people cannot get enough of some content, such as movies, soap operas, or sports. Consumers with insatiable appetites often add the new technology or service to the old rather than substitute the new for the old. Many sports fans were early adopters of sports video over the Web. They did not stop watching sports on television but added new sports content, such as games not carried on their local cable system, to their viewing habits.
    4. Inconvenience and pain are motivators. Inconvenience is a notable motivation for adoption and change. Many models of change are based on positive motivations, but painful experience with an existing service can provide an incentive to adopt a new one. In planning advanced media services, it is useful to ask where consumers are experiencing inconvenience or bad service that might be relieved by high-capacity communication networks and new communication devices. Painful experience can lead to an erosion or churn in an existing service and an opportunity for a new service. For example, the inconvenience of going to video rental shops with limited inventories led many people to adopt video rentals delivered by mail, at a kiosk, or over the Web.
    5. Supplier pressure can lead to adoption and change but can also backfire. Suppliers of services may have sufficient control over markets to force consumers to change behavior. For example, a bank that dominates a market might increase the price of all services provided by tellers and thereby motivate consumers to make greater use of ATMs. Software suppliers that dominate a market have also been able to force change. However, in a competitive media environment, it is not clear if such a strategy would work. For example, a cable operator could require customers to adopt a new service, but they might switch to a satellite or a telephone supplier of video services.
    6. Motivations and desires can change over time. As a service is adopted more widely, the mix of users and their wants can change. For example, the wants of Internet users changed when it became a mass market service. While the early users of the Internet included mostly well-educated information seekers, it then became a mass market service with users from mixed education backgrounds, members of younger and older age groups, and people looking for entertainment as well as information.
    Content

    Who controls content creation for a new technology?

    1. Entrepreneurs or existing players can lead in content creation. Sometimes a new group of entrepreneurs leads content development, as with early personal computer software, much of the early content on the Web, and much video content when broadband was introduced. In other instances, existing players control content for the new technology, as in the cases of CDs, DVDs, and high-definition DVDs, which were produced by the same industry that created record albums, audiocassettes, movies, and television.
    2. Different groups bring different strengths and weaknesses. Entrepreneurs are more likely to bring creativity to the process and to generate new ideas; existing players are more likely to bring financial resources, organizational relationships, and marketing skills to help ensure that the technology gets a reasonable opportunity in the marketplace. Each group has strengths and weaknesses that are likely to affect the adoption process. In addition, amateurs and entrepreneurs can get the process started, only to be replaced or bought out by larger organizations.
    3. User-generated content is not free or carefree. In some cases, content is created by users, as with a mobile phone call or text message. Such content costs the service provider nothing to create—users pay for the right to create the content that they consume. However, costs are incurred in developing the device, and users must acquire software to generate content. Further, the management of user-generated content is not without problems. In the case of e-mail, person-to-person messaging opened up the door to spam. In the case of online forums and chat rooms, civil exchanges can be overpowered by crazies or used by predators to lure the unsuspecting. Developers of these services have to find ways to manage user-generated content without impinging on the rights of end users or becoming so heavy-handed with rules that users are driven away.
    The Trojan Horse Strategy

    Existing technology platforms and infrastructures can be helpful in the uptake of certain new services by making potential users effortlessly aware of them and letting people try them with little if any risk or cost. This is a Trojan Horse strategy. Harold Innis has noted that popular books in eighteenth-century England developed on a platform for the distribution of patent medicines.[22] Peddlers who went from town to town selling patent medicines began to carry and sell books as well. Many other examples of this strategy can be found: games and ring tones were added to existing mobile phone service; automated voice services (audiotex) were added to the telephone network; the sale of products on the Web built on top of a base established by e-mail and information services; text-based information services (stock quotes and sports scores) were added to satellite radio; and video games have been added to many cable systems. In each case, it was easy for people to see or hear the service and decide if they wanted it.

    The Trojan Horse strategy eliminates the need to develop a new platform or infrastructure for a service, though there may be a need to develop application software. It also reduces marketing costs, to some degree. At the same time, there is no guarantee of acceptance by end users.

    Stunning Innovations

    The Holy Grail in new media adoption is stunning innovation—a device or service that not only meets a need but dazzles people through its creativity, user interface design, and appealing content. It is a worthy but rarely achieved goal. It is more common to introduce a technology or service with modest enhancements over incumbents, better marketing of a similar product, or a confluence of services that collectively has appeal.

    Apple’s iPod, introduced in 2001, is an example of a stunning innovation. How does one plan such a device? Understanding needs in the marketplace, social trends, and principles of adoption can help bring a new media device several steps along the road to wide acceptance, but chance invariably plays a role. In the case of the iPod, the product appealed to values of convenience and control while on the move. There was a great deal of available content, both le-

    Page  56

    gal and illegal. It allowed people to customize their content by organizing it in several ways. The user interface was well designed and elegantly simple. Apple introduced the iPod at a high price but then provided lower-price models to appeal to second and third waves of adopters, who likely would not have paid for the expensive model. The later models were designed explicitly to be worn like clothing, adding to their appeal as fashion statements, and the white earbuds made a statement that the user owned a product everyone was talking about. Apple continued to innovate with new applications for the iPod, such as video. However, even superlative design and strong functionality need an element of good fortune, in the form of favorable reviews, photos of famous personalities carrying iPods, and so on, to achieve such a high level of status and become an icon for cool technology.

    Strategies for Assessing the Possibility of Failure and the Opportunity of Finding Success in the Ashes of Failure

    Failure is more common than success with new media as well as other products. There are strategies to assess the chances of failure, identify end users’ willingness to adopt a product that might fail, and capitalize on earlier failures that might have a second, successful life.

    1. Assess the risk of failure. It is possible to assess the risk of failure by identifying the key elements that are likely to influence success or failure and making judgments about where the new media product or service stands in relation to them. One can consider, for example, how much time is needed to establish a base of acceptance in the marketplace and whether the company or group supporting the new technology or service is likely to provide sufficient time before pulling the plug; how difficult it will be to compete with entrenched incumbents; whether there is a target early adopter group with a clear or latent need; and so on. This assessment can aid a decision about whether to introduce the new media product or service or what additional work is required before introducing it.
    2. Assess end user willingness to adopt a product that might fail. Innovators have a high tolerance for the possibility that a new media product might fail. The reward of being the first to own a product comes with that risk. Later adopters’ tolerance for possible failure is influenced by a few factors, such as expected length of ownership and the impact on use if the product is removed from the marketplace. If a product has a short life—for example, a mobile phone—it does not matter so much if a user is stuck with useless features for the short period of time until the phone would be replaced anyway. If it has a long expected length of ownership—for example, a high-definition DVD player—then getting stuck with a player for which there are no DVDs (should the player be withdrawn from the market and content suppliers no longer produce DVDs for it) is more consequential. If the product has a long expected life but can still be used without loss of functionality after being withdrawn from the market, the downside is also lower. In the case of an MP-3 player, for example, the risk from failure is low if a user can continue to download music and later transfer the music to another player; it is high if the user can no longer download music when the model is withdrawn or if the user subsequently loses the ability to transfer songs to another player because of digital rights restrictions.
    3. Comb the ashes of failure for near misses. Many new media products—early subscription television, online services, and VCRs, for example—failed, only to succeed when reintroduced later. In some cases, the product was introduced too early; in other cases, the product lacked one or more critical elements necessary for success. With this history in mind, recent failures can be assessed for near misses that might succeed if reintroduced at the right time and with all the elements necessary for success in place.
    Page  58

     Two //The Fragility of Forecasting

    ’Tis easy to see, hard to foresee
    —Poor Richard’s Almanack

    Generally, when dealing with the weather a few days ahead, the monthly demand for electricity a few years ahead, or the size of national populations two or three decades ahead, professional forecasters can justifiably lay claim to increasingly impressive results. Weather forecasts are considerably more accurate than they were a few decades ago, and population forecasts have proven to be reasonably accurate. There are exceptions, of course, and they can occur even when professional forecasters deal with well-known products or events and have many data points at their disposal. Exceptions do not, however, diminish the overall trend toward greater accuracy.

    One may be tempted to believe that forecasting methodology has been improving across the board and that, provided the necessary financial and intellectual resources are devoted to it, forecasting the demand for a new media product or service could be equally impressive. This belief, however, would ignore the fact that if it is still too early to know whether the product will succeed or fail, forecasting its future demand is a very different—and far more difficult—type of undertaking.

    The forecasts considered in this chapter are estimates of the level of demand for a new media product at some point or points in the future. (Except where otherwise indicated, the term product will be used in this chapter to include intangible services as well as tangible products.) Forecasts are made before the product has established itself in the marketplace—in many cases, even before it has been launched. There are three categories of forecasts: (1) forecasts before a concept has been turned into a concrete design; (2) forecasts in the immediate prelaunch phase, after it has been turned into a concrete design; and (3) forecasts after launch but before takeoff.

    Forecasts sometimes take the form of an estimate for only one period, but they usually provide estimates of the demand for each of a range of periods and are presented as graphs or as time series—estimated figures for each of a succession of periods. (For example, a forecast might estimate the sale

    Page  59

    of high-definition TV sets [HDTVs] in 2012 or the cumulative penetration of HDTVs in households from 2006 to 2012.) An alternative is to forecast an upper bound—a ceiling—on sales or uptake rather than actual sales or uptake, but such forecasts are much less common. Organizations have various good reasons to feel that they need such forecasts, but they may not be able to satisfy this need.

    One of the prerequisites for a soundly based forecast of the demand for a new product is an understanding of what it will be used for. How can one forecast the growth of a new media product when one doesn’t know how it will be used? Obtaining such an understanding is often more difficult than it might seem. If Alexander Graham Bell had commissioned a forecast of the demand for his new invention, the telephone, he would probably not have done particularly well: he would have told the forecasters that it would be used for relaying concerts and church sermons and for important conversations by elite businesspeople. Along the same lines, Western Union initially dismissed the telephone, turning down the opportunity to buy its patents for $100,000.[1] Should one be confident that today’s corporate world would do any better? In the late 1970s, about a century after Bell, the British Post Office, soon to be followed by its French counterpart, launched what came to be called videotex—a service that transmitted data over telephone lines for display on computers and, in its early days, television sets (see chapter 7). Both organizations saw videotex primarily as a service through which consumers would buy information in the form of screen-based text and graphics from a large array of different firms, each deciding for itself how it would price its information. Videotex had a messaging component, too, but it was treated as a relatively unimportant add-on when first launched. As things turned out, however, the person-to-person messaging component achieved moderate success; the one-to-many information access component was a failure (except in France, where the service enjoyed significant subsidy from the government). The providers of videotex made the same kind of conceptual mistake as Bell had made a hundred years earlier.

    Videotex illustrates a common pattern: when ambitious new communication products are launched into the residential or business market, the question is often whether they can reach takeoff speed before they run out of runway. For applications that succeed, those that turn out to provide sufficient value (reach takeoff speed) sometimes differ substantially from those originally envisaged by marketers and their supporting casts of consultants and experts from academe. And by the time these applications are found, demand forecasts may be much less important.

    Videotex was a radical and complex new service that required substantial change in users’ behavior. What of less bold ventures? Many other anticipated

    Page  60

    uses for new media products or services also were too limited or too expansive in scope or were just plain wrong. Some groups thought that a major use for the radio would be to deliver speeches to the public; the transistor was seen initially as just a replacement for vacuum tubes in radios, though its range of applications turned out to be very much wider; the mobile phone was envisaged primarily as a tool for emergency workers; and no one who developed HDTV anticipated that its principal use in the first few years of rollout would be to display DVD movies.

    While it is rather unlikely that those who introduce less radical products will confuse services that should be for user-generated content with services that should be for professionally created content, gross errors in forecasting the demand for new communication products are overwhelmingly the rule rather than the exception. When one examines the business context of forecasting efforts and the methods available to undertake them, it is not surprising that their track record in this field is as bad as it is. Significant early improvement appears unlikely; what is necessary is to make the best of a bad job.

    The Core Forecasting Problem

    The chapter will focus on forecasting the demand (in terms of revenues, number of units sold, number of adopters, or any other appropriate measure) for certain classes of new media products. New products are defined as ones which have not yet established themselves in the marketplace—as not yet having reached the take-off point in the diffusion curve defined by Everett Rogers as the point of critical mass.[2] (In this sense, videophones, first introduced in the 1960s, are new, but cellular telephones, introduced almost two decades later, are not.) We consider demand at the level of a market sector, not demand for individual products within a sector—the distinction is that between the demand for satellite radio as a whole and the demand for either XM Satellite Radio or for Sirius before their merger.

    A distinction can be made between discretionary and nondiscretionary products. The former are those for which the prospective purchasers or users are individuals or families who have the discretion to make for themselves the decision to purchase or use a product—for example, a digital video recorder. For nondiscretionary items, the purchase decision is typically made by an organization; those who work for it—and, often, those who wish to do business with it—may not have this discretion. For example, a corporation may decide to adopt a particular software platform, requiring all workers as well as outside organizations that wish to do business with the corporation to adopt the same platform. Though many of the forecasting issues are the same, there

    Page  61

    are some differences in the methods available. We focus here on discretionary products.

    A new product, in our sense, is either one that has not yet been launched, so there are no sales data to inform the forecast, or if launched already, one for which it is too soon for sales data to allow a good estimate of whether or when it will become established. The uncertainty about when, or even whether, the take-off point will be reached is the key characteristic distinguishing this problem from other demand forecasting problems. A corresponding mathematical formulation appears in the box. Although this is not the usual formulation, it provides the advantage of distinguishing the two types of uncertainty: when, if at all, demand for the product will take off, and if it does, what the demand will be at any time thereafter.

    This formulation suggests a question about how those who directly provide subjective estimates of the demand in successive periods incorporate the possibility that the new product will not be a success. A highly simplified example illustrates an interesting point. Suppose that someone feels that the probability a new product will succeed is 0.8; that the forecaster’s best guess is that, if the product succeeds, it will generate 100 units of demand in year N; if it does not, it will generate 0 units of demand in year N. Would the estimate of the demand in year N be 100 units or 80 units? If it were 100, the possibility that the product would fail would have been ignored. If, however, the estimate were 80 and the product were a success, then even in the event that the best guess of 100 units would have been exactly right, the estimate provided would have been 20 percent too low. More generally, those who provide subjective forecasts of the future demand for new products without separately estimating the prob-

    Page  62

    ability of failure can be right on average only if they deflate their estimates to allow for the fact that the probability of failure is not zero. They would thereby underestimate the demand of those new products that succeeded.
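
    The arithmetic of this example can be made explicit. In the notation below, which is illustrative rather than the author’s, p is the estimated probability that the product takes off and d(N) is the best guess of year-N demand conditional on success:

    \[
    \mathbb{E}[D(N)] = p \cdot d(N) + (1 - p) \cdot 0 = 0.8 \times 100 = 80.
    \]

    Reporting the unconditional expectation (80) understates demand by 20 percent whenever the product does succeed; reporting the conditional figure (100) ignores the 0.2 probability of failure. Keeping p and d(N) as separate estimates avoids having to choose between the two errors.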

    From a practical standpoint, forecasting the demand for new media products is at its core so rough and ready that this effect is unlikely to matter. Nevertheless, the effect appears not to have received attention from authors who believe there is a scientific basis for such forecasting and seek to improve it.

    In some cases, the object of interest is not the particular product but an upgrade of infrastructure or a new platform. An example of the former would be fiber-to-the-home (FTTH); videotex was and MP-3 players are examples of the latter. Here one must try to forecast the aggregate demand for all the new products or services that will be supported. As a result, the forecasting problem acquires two additional aspects. One is the risk of entirely overlooking new products that will succeed in the not-too-distant future. This is a matter not of developing too low a forecast for any particular product but of failing to realize that a viable service or application needs forecasting. An example was the failure to predict the success of a variety of new touch-tone services when rotary dialing was to be replaced. When an upgrade in infrastructure is justified primarily by greater efficiency in the provision of established services, such oversights may not matter. In the context of FTTH, however, where the cost of an upgrade may be $1,000 per household, or more than $100 billion across the whole United States, these oversights probably matter a great deal.

    When considering an upgrade in the infrastructure, the other distinctive aspect of the problem is that even though a new service may establish itself, some other new technology may siphon demand away from the enhanced capability in the infrastructure. Bandwidth compression technology made it possible to offer some of the services touted by the promoters of FTTH in the late 1980s at rates up to 400 kilobits per second (within the compass of ISDN, the basic digital telephone offering of that time); such a mistake, failing to anticipate that compression would allow a lesser infrastructure to deliver the same services, should have been easy to avoid. For an upgraded infrastructure, it becomes necessary to forecast the demand not only for a new service but also for the portion of that demand that would be carried on the infrastructure in question but could not be carried on some lesser infrastructure.
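
    The decomposition just described can be written compactly. With illustrative symbols not taken from the text, let D_s(t) be the total demand for a new service s and D_s^base(t) the portion of that demand that some lesser infrastructure could carry; the demand that actually bears on the upgrade decision is then

    \[
    \sum_{s} \bigl( D_s(t) - D_s^{\mathrm{base}}(t) \bigr),
    \]

    summed over all the new services the upgraded infrastructure would support.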

    An upgraded infrastructure is likely to support many new services, some of the less profitable of which may compete for attention and revenue with the principal services for which the system was upgraded. For example, in upgrading cable networks to digital transmission, cable operators sought to gain new revenue from video-on-demand (VOD) services. The digital networks supported not just VOD but many extra regular channels and HDTV programming. The latter compete with VOD for viewers’ attention and can affect the revenue from VOD.

    Page  63

    Track Record

    Who Creates Forecasts?

    Relatively few forecasts of the demand for new media products are created by academics or those working in research institutes and published in the open literature. In his valuable review of the research literature—such as it is—on telecommunications demand forecasting, Robert Fildes comments, “In particular there has been limited discussion comparing the usefulness (and accuracy) of alternative approaches when applied to the same problem.”[3]

    Many forecasts are prepared by what journalists often term “consulting firms” and are featured in market intelligence reports that are sold to whichever corporations choose to buy them. (Some are produced after a core group of purchasers has made commitments; others are produced on a speculative basis.) A seeming advantage they offer over in-house forecasts is that the outsiders are seen both as expert in forecasting and as more objective and hence more credible than those who have a direct stake in the market concerned. While the reports are not in the public domain, the forecasts they contain generally feature prominently in press releases sent to journalists covering the industry. Moreover, it is understood that purchasers and journalists can quote from the reports, so the forecasts generally become public information. However, the full reports are not generally available to journalists for scrutiny about assumptions and methodology, and many reporters lack the training to assess them as an academic might.

    Other forecasts are prepared by appropriate departments for in-house use at large corporations or are prepared by outside consultants for the use of a single client. Naturally, many of these are initially treated as confidential. (An exception occurred in the days when forecasts had to be provided to regulators who needed to ensure that a monopoly would not cross-subsidize a new venture at the expense of existing captive customers.) They, too, often remain unpublished: if the new product is a success, it speaks for itself; if it is a failure, managers seek to bury the past and move on without adding further to the embarrassment.

    Discussions of forecasts run the risk of being disproportionately influenced by forecasts sold to multiple users, which tend to become public, at the expense of those prepared for single users, which tend to remain private. By necessity, most of the examples used in this chapter come from the former domain. However, our observations over the years—when acting as consultants to companies in the communication sector—as well as discussions with those employed in this sector have provided us with no evidence to believe that single-user forecasts are less prone to error than the multiuser variety.

    Page  64
    How Accurate Have the Forecasts Been?

    Forecasting has almost certainly become far more difficult in the past 20 years or so as fierce competition and turbulence have replaced the earlier tranquility of the new media field. However, forecasting the demand for new consumer communication and information products and services has always been problematic. The past century is littered with erroneous forecasts and predictions. Examples of underestimation have included the telephone, VCRs, answering machines, cellular telephones, personal computers, and the Web.[4]

    While some forecasts have seriously underestimated the demand, most have overestimated it.[5] This finding is in line with the first half of a widely quoted observation, attributed to a variety of sources, that people generally overestimate the short-term impact of a technological change but underestimate its long-term impact. Table 2.1 shows six different forecasts from the early and mid-1980s for the household penetration of videotex in the United States in 1990. The median forecast predicted a penetration level of around 9 million households, but only about 1 million households subscribed to a videotex service by 1990. In other words, the forecasts had errors ranging between a factor of 7 and a factor of 20 or more. It may be more revealing to regard these forecasts for videotex as primarily qualitative rather than quantitative failures. All indicated that by 1990, videotex would be a success or on its way to becoming one. In fact, neither was the case.

    Table 2.2 shows six late-1980s forecasts of the penetration of HDTV in households. As in the case of videotex, the forecasts show a remarkably wide range. Two strongly overestimated demand, while one slightly underestimated it. Presumably, all of the forecasts anticipated that people would buy HDTV sets to watch high-definition television. However, this was not the case in the early 2000s. In

    Page  65

    2003, 9 percent of U.S. households had sets capable of displaying HDTV, but most were used to watch DVD movies, not HDTV. HDTV signals were not yet widely available on cable or satellite, and few households chose to buy separate HDTV receivers and install HDTV antennas to pick up digital terrestrial television from local broadcasters.

    Forecasts for similar products often err in assuming which features of the competitive products will have appeal or how marketplace conditions will influence adoption. Figure 2.1 shows a forecast made in 1980 for VCR and videodisc player sales. In 1980, many people assumed that the videodisc, which offered better picture quality, would eventually win out over VCRs. However, VCRs had already been in the marketplace for five years, and the price had dropped considerably below the price of videodisc players, which were just entering the marketplace. Further, VCRs could record, and movie studios had by this time released thousands of movies for distribution on videocassettes. Videodiscs were an unknown, struggling to license movies for rental or sale. In the end, that generation of videodiscs achieved a very small market share among movie aficionados who valued the slightly better picture quality.

    In other cases, wildly optimistic forecasts of demand have proven even more embarrassing—actual penetration of the product was zero. In the 1960s, AT&T forecast that one million Picturephones would be in use by 1980 and two million by 1985.[6] However, picture telephones failed in the marketplace, achieving virtually no penetration. Similarly, Link Resources forecast in 1983 that 1.8 million U.S. households would subscribe to direct broadcast satellite services (DBS) by 1985.[7] However, DBS proved too costly to launch, and the service had no subscribers in 1985. In 1995, a telecommunications consulting group forecast that 50 to 60 percent of U.S. households would have enhanced “screen phones” by 2005. In reality, screen phones, which were intended as an alternative to a PC for accessing online information, never took off and had zero penetration of

    Page  66

    households in 2005. In 1997, Jupiter Communications forecast that smart cards (a more advanced form of credit card) would account for 10 percent of online transactions by the end of 1998; in reality, smart cards were slow to develop and accounted for almost no transactions in 1998.[8]

    Context

    When a corporation is considering whether to launch a new product or planning to do so, a forecast of its demand would seem to be essential to rational decision making. Estimating the return on investment in a new product, for example, requires a forecast of its future revenues and hence of its future demand. Or if a new service will require new infrastructure, a decision on the capacity of the infrastructure will have to be made ahead of the launch, and the necessary capacity will depend on demand. A sad illustration of the need for forecasts can be found in the satellite-based mobile telephony services Iridium, Globalstar, and ICO. In 1998, Wall Street analysts forecast that 30 million people would be using satellite phones by 2006.[9] Reality differed greatly:

    The failure of the mobile-phone satellites is legendary: aside from their huge costs and complexity, they underestimated the speed at which terrestrial competition, in the form of wireless networks, would take off. By the time Iridium came to market, most of the customers it was hoping for were well served by ordinary mobile phones. . . .

    This highlights a big problem with satellites: the long wait between design and profitability. Manufacturers must “lock down” the technology at

    Page  67

    least three years before launch; but many satellites do not make money until 10–12 years after they go into orbit. So satellite firms must make a bet on a market as much as 15 years in the future. For new services with untested demand, the risk that the market will shift dramatically between design and orbit—or never emerge at all—is huge.[10]

    In addition to the vendors of new products, potential investors—possible partners or venture capitalists—use forecasts, as do those who need to assess the future social impact of new technologies. Journalists, too, can be regarded as users of forecasts since they like to include forecasts in articles about new technologies.

    The Failure to Account for the Possibility of Failure

    Users of forecasts must realize that the majority of new communications products fail. The same holds true in other, long-established, and better-understood markets. According to Clayton M. Christensen et al., of the 30,000 new consumer products launched each year, more than 90 percent fail.[11] It would be surprising if a higher success rate were found in the much less well understood markets for new media. Table 2.3 presents a short list of new media failures.

    We are not aware of any published forecasts that predicted the failure of a new media product, but a simple line of reasoning illustrates why failure is likely. An estimate of the probability that a new product will attain a specified level of demand by a certain date is less complex than most forecasts (which usually deal with a range of time in the future, not just a single date). Suppose that corporations considering major investments in bringing new products to market could estimate these particular probabilities with a high level of confidence. Above what threshold of probability would proceeding become rational? The level clearly would vary from one situation to another, but it often might

    Page  68

    be quite low. Provided that a corporation has deep enough pockets to afford some failures, what matters is the ratio of the value of a success to the cost of a failure. If this ratio is, say, 10 to 1, then it could be rational to proceed with any project that had little more than a 10 percent probability of success. Moreover, a corporation may need to weigh the risk that delay associated with an attempt to improve its assessment of the probability would enable a competitor—maybe from a different industry sector—to beat it to market in such a way as to reduce its probability of success or eventual market share. This risk clearly is considerably greater in today’s much more competitive environment than in the fairly recent past.
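
    A back-of-the-envelope version of this threshold, writing V for the value of a success, C for the cost of a failure, and p for the probability of success (symbols added here for illustration, not taken from the text): proceeding has positive expected value when

    \[
    pV - (1 - p)C > 0, \qquad \text{that is, when} \qquad p > \frac{C}{V + C}.
    \]

    With the 10-to-1 ratio mentioned above (V = 10C), the breakeven probability is 1/11, or roughly 9 percent, which is why a project with little more than a 10 percent chance of success can still be rational to pursue.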

    This analysis represents a considerable oversimplification for a variety of reasons: it ignores both the fact that there would be degrees of possible success and the fact that it might be necessary to select from a set of candidate proj-

    Page  69

    ects each of which has a probability of success above the appropriate threshold. Nevertheless, the conclusion that it may well be rational to bring new products to market even if their chances of success are closer to 0 than to 100 percent is valid. No need to tell this to a venture capitalist or to a professional gambler! In an advanced economy, particularly in the United States, a relative lack of failures would be a sign of excessive timidity and thus a matter for concern.

    Bias

    Bias may be attributable to the fact that a dramatically high forecast produced on spec, rather than commissioned, will advertise itself. For some, the name of the game is getting the forecasting firm’s name prominently into as many newspaper articles as possible and preferably into the headlines, thereby generating numerous telephone inquiries that will lead to the sale of consulting services.[12]

    A serious risk of bias comes from the fact that forecasts accurately reflecting low probabilities of success would be decidedly unwelcome to certain of their users—in particular, to the subset of users from whose budgets they are bought. (Indeed, a sponsoring organization may suppress results from a study that yields a conservative forecast. We are aware of a number of cases in which this phenomenon occurred.) A forecast is not just a tool for making decisions about a product; it may also be a useful tool for promoting that product. As noted in chapter 1, new products are more likely to succeed if others—in some cases, prospective suppliers of content, in some cases prospective purchasers—judge that they will succeed. In addition, at an earlier stage in the process, those who want to bring new products to market must secure the necessary investment funds, whether from boards of directors or from outsiders. For this purpose, realistic forecasts may well be interpreted as indicating that promoters lack confidence in their projects, especially in an atmosphere of pervasive hype surrounding the prospects of new products.

    Whether they will be produced within the corporation concerned or by outside consulting firms, forecasts are generally ordered or purchased by the promoters of new products rather than by boards of directors or outside investors who might wish to strike a safer balance between bullishness and realism. Bias is also present in the government environment, where there also appears to be an inclination to support studies that will yield optimistic forecasts. In the United States, a member of Congress or a state legislator may have taken a position favoring the deployment of certain technologies, committee staff may see future job prospects in the industry, and, equally important, optimistic forecasts imply increased revenues from taxes on the industries concerned.

    Moreover, government studies tend to request forecasting data that no legitimate forecasting organization could possibly provide. A particular absurdity was one federal agency’s late-1980s request for a forecast of penetration levels for all present and future information/entertainment services by race, income level, and other demographic characteristics of households—25 years into the future!

    Those who produce forecasts can be assumed to know on which side their bread is buttered. If there were proven techniques for forecasting that did not require a great deal of judgment, structural biases might not matter so much; it would then be possible in principle to examine any forecast and determine whether it had been “properly” produced. But such techniques do not exist (and even if they did, a comparison with the more codified field of accounting would not be encouraging in light of its scandals around the turn of this century).

    Self-serving bias toward overly optimistic forecasts is built into the way decisions about new media products are made. Corrective mechanisms might appear if boards of directors or venture capitalists commissioned independent forecasts or if journalists more often treated forecasts with a questioning attitude rather than as convenient copy. Conversely, the fact that the time horizons of these forecasts are generally years—often many years—into the future means that embarrassment or financial loss resulting from inaccuracy contributes little as a possible corrective mechanism. By the time that any inaccuracies are evident, the individuals responsible may well have moved on to other jobs or retired, and probably few others will remember the name of the consulting firm from which the forecasts came.

    Notwithstanding the built-in biases favoring hype, gross errors in forecasting are not always in the direction of excessive optimism. Some very costly failures have occurred when forecasters did not predict that particular products would be highly successful. In the early 1980s, AT&T commissioned a forecast from McKinsey and Company, a highly respected management consulting firm. The forecast suggested that the total worldwide market for mobile phones in 2000 would be 900,000 subscribers. (In reality, the United States alone had 106 million mobile phone subscribers by 2000.) AT&T subsequently pulled out of the mobile phone market. In 1992, it reentered the market by buying one-third of McCaw’s cellular holdings for $3.8 billion; in 1993 it paid $12.6 billion for the remainder.[13] Such failures rightly suggest that much more is wrong than bias alone. In the case of the mobile phone forecast, the analysts apparently focused too narrowly on the mobile phone as a tool for emergency calls and lacked the vision to foresee that it might become a ubiquitous device for communicating with others about nearly anything from nearly anywhere.

    Assumptions

    Another weakness in the field is the pervasive neglect of the assumptions on which forecasts are based. In some cases, the assumptions are never stated; in other instances, they are only weakly alluded to. Even when they are made explicit in the initial statement of any forecast, they tend to be somewhat boring and to turn a pithy statement or graphic into something far more unwieldy. That is probably the main reason why they so often and so soon fall by the wayside. But since forecasts extend years into the future, assumptions about associated markets and technologies, about the economy, and about society in general can be crucial.

    Many of the assumptions are of the form that no significant changes will occur in this or that external factor. The example of how cellular telephony destroyed the potential market for satellite telephony offers a good illustration of the associated perils. The problem with assumptions that other things will be equal often is not that they fall by the wayside but rather that they are not made explicit.

    Some assumptions have rested on false premises or comparisons that should have received more careful scrutiny before they were built into forecasts. For example, in the early days of online services (see chapter 7), some providers reasoned that if many customers were willing to pay $100 per hour for an online information service, a mass market would consider $20 per hour a bargain. This idea seems ludicrous in hindsight, but many forecasts were built on this kind of assumption. The problem was that the many customers who were willing to pay $100 per hour for online information were businesses with very specific information needs for which they would pay a great deal, but the price comparison had little meaning for consumers. A more recent example is VOD. Many analysts assumed that the movie studios would readily license their content to VOD on the same basis and timescale that they licensed movies to Blockbuster and other video rental houses. Why would they not pursue this additional revenue stream? However, some studios feared that the new service might cannibalize the $20 billion in yearly revenue from DVD sales and rentals without replacing the revenue lost from this distribution path with equal or greater revenue from VOD.

    Other assumptions presume a neutrality of attitudes—about a specific company, for example—when there may actually be hostility or concern. Microsoft has made a number of forays into the market for set-top boxes (for television), seemingly assuming that existing industry groups consider the company a neutral or positive organization. However, it encountered much suspicion based on concerns that it might try to use its new platform to build a monopoly, just as it did in the personal computer industry. The company’s attempt to enter the market for operating systems for advanced mobile phones encountered a similar reaction, and players in that industry formed the Symbian alliance to counter Microsoft.

    Language and Definition

    Imprecision in the use of language is more of a problem in forecasting than one might expect. A classic example was an early 1970s forecast in which a leading expert predicted that the cost of long-distance telephony would fall by a factor of ten by the end of the decade. He was referring to the cost of the trunk portion of the transmission. Before long, however, his prediction was applied to the price of end-to-end service. Since the latter had to cover the costs of the local portion of the transmission and the costs of switching, neither of which was falling at such a rapid rate, the prediction was transformed into nonsense. However, this failure to use language with sufficient precision did not detract from the popularity of the forecast for a few years.

    Other problems of language are less obvious. For example, slippery definitions commonly plague forecasting studies. When videotex first appeared in North America, its supporters were quick to differentiate it from existing online services such as Compuserve. Videotex is different, backers said: its page format is much more user-friendly; it supports color and graphics; and its tree-and-branch organization makes searching easy. A great deal of money was spent promoting the new concept, and its name became a valuable property in its own right. Not for the first time, however, a newcomer failed to live up to the hype. Supporters then found it convenient to broaden the definition so that preexisting services—not formatted into pages, without color and graphics, not structured in tree-and-branch form—could be encompassed and a more respectable penetration claimed. Not surprisingly, those associated with the more rudimentary predecessor services did not complain. Why should they object to the sex appeal conferred by the new term?

    In similar fashion, in some quarters in the late 1970s, the term videoconferencing grew to encompass the combination of audioconferencing and freeze-frame television (see chapter 6). (Early in the decade, AT&T apparently decided that all that could be salvaged from its Picturephone Service was its name, so it confusingly christened its new public studio videoconferencing service Picturephone Meeting Service despite the fact that the technology, the concept, and their applications differed completely from those of the picture telephone.)

    The importance of care in defining exactly what is being forecast cannot be overstated. Sony introduced its Betamax VCR in 1975. After a promising start, VCRs using that standard lost the market to those on the VHS standard, introduced two years later. An optimistic 1975 forecast for VCRs could have proved either fairly accurate or wildly inaccurate, depending on whether it was meant to apply to VCRs based on Sony’s standard or VCRs in general.

    Forecasting Methods

    Many forecasts are based on indications by a sample of the public that they would buy a product or use a service. In this case, the critical issues are whether respondents have sufficient experience with or knowledge of the product to make an informed response and whether the verbal response is a reliable indicator of future purchasing behavior.

    In the case of products that are well known to the public, the issue of how to pose the question is straightforward—e.g., “Do you plan to buy a new car in the next six months?” While respondents may overestimate or underestimate the likelihood that they will buy a car in the next six months, at least they understand the question and can provide reasonably informed responses. The issue is much more complex when people have no experience with a product and may not have any understanding about what the product is or what it can do, as is the case with many new information and communication products.

    In response to this problem, researchers have employed a number of techniques. One of the most common is to use a standard telephone survey and try to describe the new product or service over the telephone. This technique is the least expensive way to approach the problem—and probably the weakest. It tends to inflate positive responses, since verbal descriptions can emphasize positive attributes, and respondents have no experience with the product that would help them to understand potential negative attributes. They may also mistake the new product for an existing one. This problem occurred in early surveys about intent to purchase high-definition TV sets. Some respondents thought they were being asked merely if they intended to buy a new TV set with a good picture. Of course they did!

    A second technique is to intercept individuals in a mall setting and bring them into a room or recruit a large group into a theater setting and show them still photographs, drawings, or a simulation of the service. This approach is used when the product itself is not yet ready or if it would be very expensive to bring the technology into the research setting. The same problem arises here: respondents do not really experience the product or service, and verbal descriptions that accompany the drawings or simulation tend to emphasize positive attributes. With little chance to understand potential negative attributes, respondents tend to report high interest. A major multiclient proprietary study in the early 1980s used a simulated home banking and shopping service in a theater setting. The results were extremely positive, as were the forecasts derived from the study. They indicated that a majority of consumers would readily subscribe to electronic home banking and shopping services. A number of large corporations used the results to develop and launch these services. However, when they were introduced, the general public showed very little interest.

    A somewhat better technique is to allow a sample of the public to try the service or product in a laboratory or field setting. Here, people can experience the product for 20 or 30 minutes or longer and provide more informed responses about potential interest. While this method is clearly superior to using responses based on verbal descriptions, it is subject to a novelty effect that can last for several weeks: when people first start using a product, they often respond positively based on its novelty and use it more than they will subsequently (i.e., after several weeks of experience with it). One way to overcome the novelty effect is through a field trial, in which the product is placed in dozens or hundreds of homes and usage is tracked over a long enough period of time for the novelty effect to have worn off; once it has done so, indications of users’ willingness to pay for the product are more meaningful. This approach, described in the appendix to chapter 4, is time-consuming and expensive, however. In a competitive market, many companies hesitate to postpone the launch of a new media product for six months or longer (as well as incur costs that can run into millions of dollars) to gain a better understanding of potential demand.

    It is generally accepted that even when respondents have seen or tried a new product, market research studies of their intentions to purchase it have not had great success. People’s answers to questions about whether they would buy the product provide very unreliable indicators of future behavior.

    Those who produce forecasts of demand for new products have an even harder task than those who undertake market research studies of these items; the former must provide quantitative estimates of what demand will be at a specific time or times in the future. What tools are available to them?

    Where the One-Eyed Man Is King

    The Delphi Method is one of the best-known approaches to forecasting the demand for new communication products. The technique seeks primarily to derive a consensus forecast from a group of experts.[14] In an iterative process, members of the group independently and anonymously provide subjective forecasts; the aggregate results are fed back to each participant, sometimes together with an indication of how his or her forecast deviated from the average. Each then independently provides a revised forecast, perhaps with some justification for its difference from the previous aggregate forecast. The new group forecast is fed back, perhaps with some of the anonymous justifications, and so on. After three to five iterations, the process usually converges, in the sense that differences from the average diminish to a level at which a consensus can be declared. Some observers would argue that, in essence, the technique is less a forecasting technique than a technique for bringing a group to a consensus.[15] Variations include allowing members of the group to educate one another by meeting for group discussion between iterations (this is sometimes termed the Modified Delphi Technique) and weighting initial inputs to reflect participants’ varying degrees of expertise or confidence.
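
    The mechanics can be illustrated with a minimal sketch; the adjustment weight, stopping rule, and initial estimates below are illustrative assumptions rather than part of the method as usually described:

```python
import statistics

# Minimal illustration of Delphi-style iteration: after each round, every
# participant revises his or her estimate toward the fed-back group average.
# The pull factor, tolerance, and starting values are illustrative assumptions.
def delphi_rounds(estimates, pull=0.6, tolerance=0.05, max_rounds=6):
    estimates = list(estimates)
    for round_no in range(1, max_rounds + 1):
        group_mean = statistics.mean(estimates)
        estimates = [e + pull * (group_mean - e) for e in estimates]
        spread = max(estimates) - min(estimates)
        new_mean = statistics.mean(estimates)
        print(f"round {round_no}: mean = {new_mean:.2f}, spread = {spread:.2f}")
        if spread <= tolerance * new_mean:   # small enough to declare consensus
            return new_mean
    return statistics.mean(estimates)

# Hypothetical initial forecasts, e.g., millions of subscribers in year 5.
consensus = delphi_rounds([2.0, 5.0, 8.0, 20.0])
print(f"declared consensus: {consensus:.2f} million")
```

    Note that in this simple version the group average never moves; the iterations merely shrink the spread around it, which is one way of seeing the observation above that the method may be more a consensus-building device than a forecasting one.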

    The Delphi Method and its variants have proved very popular as a means of forecasting demand for new media products, and these methods do have some value. Interaction among the participants in the modified version can certainly produce insights that may be of practical value to marketers. The techniques bring some discipline to guesswork. And they have a certain credibility: How can one do better than to rely on acknowledged experts in the field? But is this credibility merited? Though evidence indicates that group predictions are less inaccurate than those of individuals,[16] little evidence shows that experts’ guesses about the future are better than anyone else’s. Indeed, there is some reason to expect that experts’ guesses may be worse if, as may well be the case, they are likely to be biased by personal interest in the success of the technology in question. And although the question is less relevant than it has been in the past, one can also ask how a person can be an expert in something that is radically new. Where should one have looked for experts in residential picture telephony in the 1960s or videotex in the 1970s? Expertise in the technology would surely have been of little value. Maybe one should have turned to experts in human behavior. But what kind of behavior, and from what disciplinary perspective? There was and is an enormous variety of specialties from which to choose. It is now possible to find experts with decades of research experience relating to the marketing of new media products. But it may not have been easy to know where to look if one had been creating a Delphi panel in the early 1990s to forecast the demand for personal video recorders or in the early 2000s for VOD. (The box on binary predictions suggests the kinds of expertise that may be relevant in the type of assessment of a new product that should be undertaken before any forecasting process is started.)

    In his definitive work on long-range forecasting, J. Scott Armstrong has commented on the use of experts:

    Expertise beyond a minimal level in the subject that is being forecast is of little value in forecasting change. . . . Do not hire the best expert you can—or even close to the best. Hire the cheapest expert.

    . . . [T]heir place is in saying how things are (estimating current status), rather than in predicting how things will be (forecasting change). The estimation of current status does, of course, play an important role in forecasting. (Expertise in forecasting methods is also valuable.)[17]

    His conclusions rested on a wide range of studies and he indicated that they applied to the use of experts in Delphi studies as well as in other frameworks.

    We are aware of no evidence to confirm the accuracy of Delphi forecasts of the demand for new media products. Table 2.4 shows the results of a 1994 Delphi research project that tried to forecast the adoption of several new media in 2005. The participants came from several countries in Europe, North America, and Asia (although the results were not broken down by country). For comparative purposes, we provide the actual 2005 penetration figures for the United States.

    The Bass Model

    The most sophisticated method of forecasting the demand for new products was created by Frank Bass and those who have built on his work.[18] (Since we will now explain why this sophistication should not blind those with little experience in mathematical models to the method’s substantial limitations if used in making the kind of forecasts considered here, less technically minded readers may wish to skip this subsection.) In line with Everett Rogers’s theory (see chapter 1), Bass developed a mathematical model of the diffusion of new products that assumes that potential adopters are influenced by one of two types of communication channels. One group is influenced only by mass media (external influence). Though their decisions to adopt occur continuously during the process of diffusion, they are concentrated in the earlier periods. The other group is influenced only by interpersonal word of mouth (internal influence). Adoptions by members of this group increase during the first half of the diffusion process and then decline (see figure 2.2).

    The result is an S-shaped diffusion curve characterized by a formula with three parameters, which Everett Rogers, a supporter of the approach, describes as

    • “a coefficient of mass media influence,”
    • “a coefficient of interpersonal influence,”
    • “and an index of market potential, which is estimated by data from the first few time periods of diffusion of a new product.”

    Rogers summarizes the “assumptions necessary for the basic simplicity of the original work”:

    1. That the market potential, m, of a new product remains constant over time.
    2. That the diffusion of the new product is independent of other innovations.
    3. That the nature of the innovation does not change over time.
    4. That the diffusion process is not influenced by marketing strategies, such as changing a product’s price, advertising it more heavily, and so forth.
    5. That supply restrictions do not limit the rate of diffusion of a new product.[19]

    With the possible exception of the last assumption, it is most unlikely that any of these would fit the cases of interest to us. Consider, for example, the assumption that the market potential of a new product remains constant over time. In 1982, the market potential for the personal computer would have reflected the fact that it was then used primarily for spreadsheets and would have changed subsequently to reflect a succession of new uses such as for word processing, for accessing the Web, and for entertainment. Or consider the omission of marketing variables such as price. Regarding diffusion models in general, Robert Fildes remarks, “The consistent failure to estimate marketing effects illustrates the problem [of limited data on potentially important (and often unmeasured) variables] as no economic model of adoption justifies their omission.”[20] However, as described in the valuable review of diffusion models by Vijay Mahajan, Eitan Muller, and Frank Bass, others have shown how—at the cost of making it more complex—the original model may be refined and extended to relax these and other assumptions.[21] The model’s weakness for forecasting demand for a new product that has not yet established itself in the marketplace can be seen in the estimation of its parameters rather than in its assumptions.

    Discussing statistical methods for estimating the parameters from sales data, Mahajan, Muller, and Bass state, “Parameter estimation for diffusion models is primarily of historical interest; by the time sufficient observations have developed for reliable estimation, it is too late to use the estimates for forecasting purposes.” They cite studies that “suggest that stable and robust parameter estimates for the Bass model are obtained only if the data under consideration include the peak of the non-cumulative adoption curve” (which is roughly halfway through the period of rapid growth after the curve takes off—that is, when the product is no longer new in our sense of the term). They also state, however, “If no data are available, parameter estimates can be obtained by using either management judgments or the diffusion history of analogous products.” They also refer to some proposed methods for deriving estimates of parameters from judgments made by managers.[22]

    Forecasts will be no better than the judgment about which product to use as an analogy or than the more detailed management estimates of the curve’s parameters. Moreover, either type of judgment almost certainly rests on the implicit—and frequently incorrect—belief that the new product in question will be a success. As a consequence, this family of models should not inspire confidence when used for forecasting the demand for products that have not yet established themselves in the marketplace.
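
    To make the mechanics concrete, the following minimal sketch generates a basic Bass curve; the parameter values are illustrative assumptions, not estimates for any real product:

```python
import math

def bass_cumulative(t, p, q, m):
    """Cumulative adopters at time t under the basic Bass model.

    p: coefficient of external (mass media) influence
    q: coefficient of internal (word-of-mouth) influence
    m: market potential, assumed constant over time (assumption 1 above)
    """
    decay = math.exp(-(p + q) * t)
    return m * (1.0 - decay) / (1.0 + (q / p) * decay)

# Illustrative parameters only; estimating them reliably is exactly where the
# approach is weakest for a product that has not yet established itself.
p, q, m = 0.03, 0.4, 50_000_000
cumulative = [bass_cumulative(t, p, q, m) for t in range(11)]
for year in range(1, 11):
    new_adopters = cumulative[year] - cumulative[year - 1]
    print(f"year {year:2d}: {new_adopters / 1e6:5.1f} million new adopters")
```

    The cumulative curve is the familiar S-shape, and the implied non-cumulative adoptions peak partway through the run, which is the very point that, as quoted above, the data must reach before the parameters can be estimated reliably.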

    Page  80
    The Seductive S-Curve

    Other approaches to forecasting are simpler. The starting point for some is the assumption that, through time, cumulative demand will follow an S-shaped curve—a safe assumption provided that the product is a success. Demand starts slowly, at some point develops rapid momentum, and eventually flattens out as the saturation level is reached. A typical way of proceeding is to estimate the saturation level, to estimate when it will be reached, maybe to estimate when the takeoff point will occur, and then to fit an S-shaped curve to these estimates, perhaps borrowing the particular S-shape from some past product that shares attributes with the one being forecast.
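
    A minimal sketch of that procedure, using a logistic curve and entirely hypothetical estimates of saturation level, midpoint, and steepness:

```python
import math

def s_curve(t, saturation, midpoint_year, steepness):
    """Logistic S-curve for cumulative demand.

    saturation:    estimated ceiling on demand
    midpoint_year: year of fastest growth (roughly half of saturation reached)
    steepness:     how quickly the curve moves from slow start to saturation
    """
    return saturation / (1.0 + math.exp(-steepness * (t - midpoint_year)))

# Hypothetical estimates: 20 million households at saturation, fastest growth
# around year 6, and a shape "borrowed" (by assumption) from an earlier product.
for year in range(0, 13, 2):
    households = s_curve(year, saturation=20e6, midpoint_year=6, steepness=0.9)
    print(f"year {year:2d}: {households / 1e6:5.1f} million households")
```

    The exercise is only as good as the handful of estimates fed into it, which is exactly the weakness discussed next.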

    This approach avoids two ways of failing: having an unrealistically shaped curve for the progression of demand through time, which is a most unlikely mistake whatever method is used, and having an unrealistically high level for demand at saturation. But other—easier—ways of failing remain open. In particular, there may be little basis for estimating when, if ever, the curve will start to rise sharply; the service may never come near its projected saturation level; and even if it does, there may be no basis for estimating how long it will take to get there. At its weakest, this kind of approach assumes the success of the product and uses the S-shaped curve as a framework for a set of guesses. Strengthening this method would require explicit consideration of the probability that the product will succeed and some quantitative justification for estimates of time to takeoff, time to saturation, and level of saturation. There are no credible techniques to fill these gaps.

    Gilt by Association

    Another seductive and widely used approach to forecasting involves deriving a forecast for the total demand in an established market that will contain the new service among other existing services and then multiplying this large number by an estimate of the market share of the new service. So, demand for videotex was seen as a percentage of consumers’ expenditures on information gathering (note the misconception of how videotex would provide value to its users); demand for teleconferencing was seen as a percentage of the projected number of in-person and electronic business meetings; VOD was considered as a percentage of expenditures on cable television and video rentals/purchases. Forecasting the already established base market is not the problem. A reasonable base of understanding and data exists, meaning that one is unlikely to go seriously astray. The problem lies in obtaining a static estimate, let alone a dynamic forecast, of the market share of the new product.

    This approach to demand forecasting would almost certainly have yielded erroneously positive forecasts for 3-D movies in the 1960s, residential picture telephony in the 1970s, and videotex in the 1980s. It would also have underestimated the demand for MP-3 players, including iPods, since the base would have been existing portable music players, and for mobile phones in developing countries, since the base would have been existing landlines.

    Some forecasters appear to avoid the problem by offering seemingly conservative forecasts. They estimate a small fraction for market share, thus appearing to be highly cautious. But a small fraction applied to a large base can yield a number that looks quite respectable. For example, if, by 1996, only 10 percent of revenue from cable television and videocassette rentals and sales in the United States had moved over to DBS services, $4 billion in revenue would have resulted—quite impressive. However, the possibility that the fraction might be zero, thus yielding a forecast of zero demand for the newcomer, must also be taken into account.

    In the 1970s, a collaborative project of several European telecommunications administrations attempted a scientific approach to estimating the share of a larger base market when forecasting future demand for teleconferencing.[23] The research team disaggregated business meetings into different types and used the results of controlled laboratory experiments on media effectiveness to derive estimates of the fraction of meetings for which users should in the future substitute different forms of teleconferencing for each type. The fractions were derived from models of user choice—in this case, normative models of the rational economic man or woman—informed by the results of an impressive program of psychological research. The model was intended to provide a forecast of the demand for each form of teleconferencing if potential users were to choose rationally. It could not produce forecasts of what the demand would actually be since it was too early to develop and test models of the processes by which potential users of teleconferencing would actually make their choices. Moreover, as chapter 6 explains, assumptions about how potential users should make rational choices were substantially flawed at that time.

    Bottoms Up

    Rather than work downward from a larger market to the one of interest, other analysts attempt to proceed in the opposite direction. They start with the individual using or purchasing unit (an individual or a family) and, with widely different degrees of sophistication, build a model of this unit’s choice behavior in a market in which the new service is present.[24] For different values of its parameters, the model represents the behavior of different kinds of individuals or families that will be present in the marketplace. The model may be expressed mathematically or may take the form of a computer simulation. Either way, aggregating the individual decisions to obtain estimates for the market as a whole is methodologically straightforward, although extensive survey data may be necessary to obtain estimates of how many purchasing/using units of each kind exist. Further, considerable uncertainty may arise about estimates of the values of certain variables (e.g., the future price of competing products). Here, too, however, the main problem lies in the model of choice behavior.
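
    The aggregation step itself is indeed simple, as the following minimal sketch shows; the segments, their sizes, and the adopt-if-willingness-exceeds-price rule are all illustrative assumptions, and the choice rule is precisely the part that cannot be validated for a genuinely new service:

```python
# Minimal sketch of bottom-up aggregation. Segment names, sizes, and the
# willingness-to-pay figures are illustrative assumptions only.
SEGMENTS = [
    # (segment, number of households, assumed monthly willingness to pay, $)
    ("early adopters",       3_000_000, 25.0),
    ("mainstream families", 60_000_000, 12.0),
    ("budget-constrained",  40_000_000,  5.0),
]

def forecast_subscribers(monthly_price):
    """Aggregate individual adoption decisions: a household adopts if its
    willingness to pay meets or exceeds the price (a deliberately crude rule)."""
    return sum(count for _, count, wtp in SEGMENTS if wtp >= monthly_price)

for price in (20.0, 12.0, 8.0):
    subscribers = forecast_subscribers(price)
    print(f"price ${price:5.2f}/month -> {subscribers / 1e6:5.1f} million subscribers")
```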

    Such a model cannot be properly tested if one is dealing with a new service. Nevertheless, using historical data, one can and certainly should test the generic model to see how well it would have forecast demand for comparable new products and services in the past, of course including failures as well as successes. Such models also are likely to require forecasts of other variables, such as disposable income, interest rates, or price levels. Difficulty in accurately forecasting these economic measures is well known and does not lend confidence to this technique.

    The construction of a valid model of this kind clearly requires an understanding, at an appropriate level, of how the user will derive value from the product in question, an area in which many failures have occurred.

    Prediction Markets

    Will prediction markets be the “next big thing” in forecasting the sales of new media products and services? A prediction market is a specially created market for trading assets

    whose final cash value is tied to a particular event (e.g., will the next US president be a Republican?) or parameter (e.g., total sales next quarter). The current market prices can then be interpreted as predictions of the probability of the event or the expected value of the parameter. Prediction markets are thus structured as betting exchanges, without any risk for the bookmaker.

    People who buy low and sell high are rewarded for improving the market prediction, while those who buy high and sell low are punished for degrading the market prediction. Evidence so far suggests that prediction markets are at least as accurate as other institutions predicting the same events with a similar pool of participants. Many prediction markets are open to the public.[25]
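
    The logic in the quotation can be made concrete with a minimal sketch; the contract, the market price, and the trader’s beliefs below are purely illustrative:

```python
# A binary prediction-market contract pays $1 if the event occurs and nothing
# otherwise, so the market price can be read as a probability. A trader who
# believes the true probability is higher than the price expects to profit by
# buying (and vice versa). All numbers here are illustrative.
def expected_profit_per_contract(market_price, believed_probability):
    profit_if_event = 1.0 - market_price      # bought low, event occurs
    loss_if_no_event = -market_price          # contract expires worthless
    return (believed_probability * profit_if_event
            + (1.0 - believed_probability) * loss_if_no_event)

# Suppose the market prices a "product reaches 1 million units in year one" contract at $0.30.
for belief in (0.20, 0.30, 0.45):
    ep = expected_profit_per_contract(market_price=0.30, believed_probability=belief)
    print(f"believed probability {belief:.2f}: expected profit {ep:+.2f} per contract")
```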

    Devotees of this method of forecasting have suggested its application to much more unlikely topics than the sales of new media products. One of the more creative plans was that the U.S. Department of Defense should operate a “Policy Analysis Market.” In the summer of 2003, however, the public outcry provoked by the proposal that topics for this market could include future terrorist attacks soon caused the department to cancel the plan.

    One variation is the “corporate prediction market,” in which corporate employees receive “virtual trading accounts and virtual money [to] buy and sell ‘shares’ in such things as project schedules or next quarter’s sales.” Hewlett-Packard has experimented with applying the method to sales forecasting; Intel, Microsoft, and Google have also tried it for various purposes. In 2005, Yahoo, in partnership with O’Reilly & Associates, an organizer of technology conferences, launched the “Tech Buzz Game,” described as “a fantasy prediction market for high-tech products, concepts and trends.”[26]

    If a new media product is about to be or recently has been launched, and if one wishes to forecast whether it will reach a particular level of sales within a year or two, it would be possible to formulate the issue with sufficient precision to make a prediction market applicable. It will be interesting to see whether this approach is used in such situations; if it is, it is very much to be hoped that it will be evaluated relative to more conventional methods. Since it may be less prone to bias, particularly if it is open rather than, say, restricted to corporate employees, it is conceivable that it will perform reasonably well relative to other approaches to forecasting—but that is not saying much.

    If, conversely, one wished to look further into the future, forecasting the sales of the same product when it is still at the concept stage, formulating the issue appropriately would be very much harder. The costs of mistaken forecasts will, unfortunately, generally be much higher in this situation, since this is when winners may be missed or large but fruitless investments may be made. Also, it will unfortunately but inevitably take longer for evidence to accumulate regarding the effectiveness of this approach for forecasting further into the future. At least in this field, however, lack of evidence that a forecasting technique works has not inhibited its use (especially when the stakes are high).

    Making Good Use of Early Postlaunch Information

    A forecaster can, of course, do better when early sales data for the new media service are available because it has already been launched, even though it has not (yet) taken off. Nigel Meade provides an intriguing and instructive treatment of such forecasting problems that involves estimating the probabilities that the available data sets (sales) could have been generated by each of four different forms of diffusion curve and then using a weighted sum of the forecasts individually yielded by each of these curves.[27] The forecast formed from the weighted sum of the four component forecasts has been shown to be more accurate than the forecast provided by the best of the component forecasts.
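
    The combination step can be sketched as follows; the candidate curve families, the early sales figures, and the crude error-based weighting are illustrative stand-ins for Meade’s actual procedure, which derives proper probabilities for each curve form:

```python
# Minimal sketch of combining diffusion-curve forecasts. The curve families,
# fitted values, year-8 forecasts, and weighting scheme are all illustrative.
observed_sales = [1.0, 2.1, 4.0, 7.2]   # early annual sales, millions of units

candidates = {
    # name: (fitted values for the observed years, that curve's year-8 forecast)
    "curve A (logistic-like)": ([0.9, 2.0, 4.1, 7.5], 22.0),
    "curve B (Gompertz-like)": ([1.2, 2.4, 4.3, 6.9], 17.5),
    "curve C (Bass-like)":     ([1.0, 2.2, 3.8, 7.0], 20.0),
    "curve D (slower growth)": ([0.7, 1.8, 3.5, 6.0], 14.0),
}

def fit_weight(fitted, observed):
    """Higher weight for smaller squared error (a crude stand-in for a likelihood)."""
    sse = sum((f - o) ** 2 for f, o in zip(fitted, observed))
    return 1.0 / (sse + 1e-9)

weights = {name: fit_weight(fit, observed_sales) for name, (fit, _) in candidates.items()}
total_weight = sum(weights.values())
combined = sum((w / total_weight) * candidates[name][1] for name, w in weights.items())
print(f"combined year-8 forecast: {combined:.1f} million units")
```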

    The Leading Edge Method is used in business settings, where it has enjoyed some success. The technique develops forecasts based on the experience of pioneering corporate users of new systems that have established reputations as successful trendsetters in the use of new technologies. It may be adaptable to products for the residential market. It might also be useful in connection with field trials, which make users’ experiences with a new product available for research before the product has become established. However, the users involved in trials generally will not be the types of people who have established reputations as trendsetters.

    For new residential products, certain types of users may provide a useful early experience base—e.g., male teenagers, consumers of adult entertainment, and other early-adopter groups. At the same time, care must be exercised in applying what is learned from these groups to other segments of the population. Similarly, caution is necessary in applying what is learned from a group in one culture to a comparable group in another. For example, when text messaging (SMS) became popular among teenagers in Japan well before it did among teenagers in the United States, how safe would it have been for the former’s usage patterns to provide the basis for forecasting adoption in the United States?

    Boundary Markers

    One family of techniques, of which the Historical Analogy Method and Income and Expenditure Analysis are key members, can be useful after a service has started to establish itself. Care is needed, however, because they assume the market success of the product in question. Consequently, for products that have not reached this point, these techniques can only provide an upper bound to a forecast.

    The Historical Analogy Method is based on time series data on the early sales growth of past communications products. It provides a means of making projections using only the first few years’ sales data. The technique was developed by Roger Hough at the Stanford Research Institute around 1969 for use in the National Aeronautics and Space Administration’s forecasting of the demand for new telecommunications services that might be relevant to its future programs. He used sets of time series data of the annual sales of a number of new “information-transfer” products and services, starting from the year of their introduction. He focused on the average annual growth rate from the year of introduction through a particular number of years later. He noted that (in the United States) this rate always remained at or above 200 percent in the first and/or second year before decreasing over time.[28]
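
    In practice the method amounts to borrowing an analog’s year-by-year growth rates, as in the minimal sketch below; the growth rates and first-year sales shown are hypothetical, not Hough’s figures:

```python
# Minimal sketch of the Historical Analogy Method: apply the year-by-year
# growth rates observed for an analogous earlier product to the new product's
# first-year sales. The rates and starting figure below are hypothetical.
analog_growth_rates = [2.5, 1.2, 0.7, 0.45, 0.3]   # e.g., +250% in year 2, then slowing

def project_sales(first_year_sales, growth_rates):
    sales = [first_year_sales]
    for rate in growth_rates:
        sales.append(sales[-1] * (1.0 + rate))
    return sales

for year, units in enumerate(project_sales(100_000, analog_growth_rates), start=1):
    print(f"year {year}: {units:,.0f} units")
```

    Because the analog’s trajectory presupposes success, a projection of this kind is best treated as an upper bound, as the discussion above notes.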

    Table 2.5 shows some of the figures he derived for the United States and Canada. For the two products that can be compared, the growth rates were lower in Canada. Such rates clearly must be expected to vary across countries.

    Income and Expenditure Analysis is based on constancies or trends in the proportion of a household budget that is spent on certain forms of consumption—e.g., entertainment and information.[29] It is, therefore, a more disciplined variant of the method described earlier in which demand for a new product or service is estimated as a fraction of the total demand in a larger market. Unless used to provide an upper bound, this method is subject to the same underlying problem of estimating what value (other than zero) the fraction should take. In this sense, it is not really a forecasting technique.

    Other regularities, too, can be employed to obtain upper bounds. For example, spending patterns for communication information and/or entertainment services can be translated into a percentage of weekly household income, as in chapter 1. However, while useful in understanding historical trends and helping to rule out unrealistically high forecasts, this form of analysis cannot help in estimating whether or when a new product or service will take off. Like the study of political history, it can inform an understanding of what may happen but cannot predict it.

    Making the Best of a Bad Job

    Progress in improving the methodological tools available for the kind of forecasting considered here is likely to be both difficult and very slow at best. Nevertheless, a focus on related nontechnical issues could substantially reduce the risks that forecasts will lead people astray. These issues include becoming an informed consumer of forecasts, integrating forecasting with decision making, and reducing bias.

    The main reason for lack of optimism on the technical front is that forecasts of new media products will continue to rely primarily on judgment, although complicated mathematical formulas or complex computer models may sometimes obscure this fact. Another reason is the role of sheer luck combined with positive feedback in the success of product launches, as Brian Arthur observes.[30] (How, for example, will reviewers treat it? Will talk show hosts showcase it? Will celebrities use it in a public forum?) Those to whom the forecasts are served up need only ask themselves a few simple questions to appreciate the inevitable subjectivity of the process: for example, how did the forecasters deal with the probability that the product would not reach its takeoff point, or what potentially significant external factors were submerged under assumptions—probably implicit—of ceteris paribus?

    Becoming an Informed Consumer

    Some users of forecasts—that is, those who receive them direct from the forecasters—are better placed than others to be informed consumers. These users may be the managers who purchase the forecasts from an outside source, the managers who commission them from an in-house department, or specialized journalists who quote them in articles. These users should already know—or should be sure to ascertain—the forecasters’ qualifications and track record. They should not need to request that any forecast be accompanied by clear statements both of the methodology used and of the underlying assumptions. When either of these statements is omitted, users should downgrade the credibility of what has been provided, even if they insist that the omission be rectified.

    These opportunities are not enjoyed by those who do not order forecasts unless they are journalists in a position to withhold publicity. Beyond never trusting press release forecasts, what can they do? They would be wise to consider carefully information provided regarding source, methodology, and assumptions—or, too often, to consider the implications of the omission of such information—before deciding how much, if any, weight to give a forecast. When writing about some new technology or some proposed service, journalists like to include forecasts to give concrete (or seemingly concrete) indications of its potential. It is as though citing the source along with the forecast is enough: the quality of the forecast is assumed to be the responsibility solely of the source and not of the journalist. But surely this is too lazy a practice. Many readers will not be as well placed as journalists who have been following the communication industries for years to appreciate just how fragile forecasts are. The Wall Street Journal has done a commendable job in creating a new beat, “The Numbers Guy,” a journalist who scrutinizes forecasts and other data served up by organizations that otherwise would pass unchecked.

    Integrating Forecasting and Decision Making

    For corporate planning regarding new products, some researchers have suggested the value of employing more than one methodology.[31] We strongly agree. Multiple perspectives can provide a range in forecasts rather than a single estimate and can highlight any differences in the assumptions that underlie each forecast. Planning must not be based on the approach that says, “We will make (or obtain) the best forecast possible, then we will make the best plan we can in light of it.” It is far more productive to recognize from the start that any single forecast can very easily turn out to be seriously in error and to plan accordingly.

    The use of scenarios can be a good way to construct a long-term plan. A scenario provides an internally consistent picture of a possible future based on a particular set of broad assumptions. If a family of alternative scenarios is constructed, one can explore how well the plan that created the need for a forecast would work for each member of the family.[32]

    If a forecast appears necessary, the first thing to do is to search for ways to reduce the need rather than consider ways to meet it. A key means to this end is to build in as much flexibility as possible to adapt very quickly in the light of what emerges following the launch of a product and to minimize the costs of its possible failure. We were once contracted to conduct research on the likely characteristics of demand for an electronic messaging service to be offered on a proprietary videotex platform soon to be introduced in the United States (see chapter 7). The research was commissioned to help the company decide on the capacity required in certain computer-related subsystems of the platform. The results were intended to contribute to forecasting the demand for the messaging service, which, in turn, would be used to estimate the necessary capacity. Realizing how prone to error any such forecast a few years into the future would be, we emphasized the importance of designing the system so that capacity could be added within a time frame consistent with what could be expected from short-term forecasting of demand after the service had been introduced.

    Setting the need for a forecast in the context of the decisions to which it will be relevant should indicate the uncertainties on which focusing will be most useful. It could, for example, be much more important to estimate the probability that sales will reach their takeoff point within four years of a product’s launch than to estimate the level of sales in each of the first ten years. In principle, the more precisely one can specify what needs to be estimated, the better the estimate is likely to be.

    Within a corporation considering or planning the launch of a new media product, forecasting must not be divorced from decision making. Much less progress is likely to be made in improving forecasting methodology than in the design of relevant decision options and the design of mechanisms for speedy gathering of information from the field.

    Reducing Bias

    Boards of directors, investors, and others who must assess proposals in which demand forecasts are significant should certainly consider the extent to which projects have been planned to minimize reliance on the accuracy of the associated forecasts. Difficult as it may be in a business climate of bravado and hype, those responsible for investment decisions should beware of the easy assumption that if the authors of a proposal explicitly address the possibility that a forecast is wrong, they are betraying a lack of the confidence necessary to make a new product a success. Responsible parties could also consider whether to commission either independent forecasts or expert critiques of forecasts presented to them.

    Such actions may help counter self-serving biases toward underestimation of the probability that a product will fail in the marketplace. Other means of reducing bias also exist. Corporations that are regular purchasers of multiclient forecasts could use simple performance indicators to track their accuracy, as could journalists who specialize in the communications industries. Another possibility would be for a corporation to commission forecasts simultaneously for a set of several essentially unrelated products. If, as a set, the resulting forecasts suggested that the associated probabilities of failure were appreciably lower than those observed historically, the company would have a strong case for concluding that the forecasts in the set had a strong bias toward optimism and away from realism.

    It appears unrealistic, however, to expect change in the established pattern in which forecasts produced within the private sector—the vast majority—are hardly ever released in full to the press or the academic community. Instead, a press release is prepared with major findings from the study, while the study itself, the methodology employed, and the assumptions made are not subject to the scientific scrutiny that a university or government study would receive. As a result, weak or even fraudulent research may pass under the guise of a press release and be published in major newspapers and trade magazines.

    When Prelaunch Forecasts Are Needed for Decision Making: Some Practical Guidelines

    By now, it is abundantly clear that we have very little confidence in forecasts of the demand for new media products before they have established themselves in the market. Nevertheless, we can offer some guidelines to those who have to make the best of a bad job. While they inevitably fall far short of providing a strategy for success, we are confident that ignoring them will likely make a difficult situation even worse—probably a good deal worse.

    1. Follow good practice in obtaining judgmental forecasts. If the product in question has not yet been launched, sales data will be unavailable, and one or more judgmental methods will have to be used. The two most frequently employed approaches are the Delphi Method and group meetings intended to result in an agreed forecast; the former seems likely to produce the better results. Whatever method is used, good practice will very probably involve the following:[33]
      • Decomposition—dividing the forecasting problem into subproblems, solving them separately, and combining these individual solutions to obtain an overall solution.
      • Appropriate structuring of group meetings if they are used to generate forecasts.
      • Appropriate selection of experts if expert opinion is used to generate forecasts. Here, the major danger to be avoided is bias. One should particularly avoid experts who would be affected by the success of the product being forecast. Nor should one expect experts inside an organization for which the forecast is being produced to be unbiased. Beyond a minimal level, degree of expertise is unlikely to improve accuracy.
      • Critical examination of the assumptions underlying any forecast. This includes using “what if” questions to challenge assumptions, which may well be implicit, along the lines of “other things being equal.”
      • Combining forecasts obtained by different methods or from different sources. For example, if expert judgment is used, different forecasts may be produced by using different groups of experts or by posing questions in different forms. This is not a matter of selecting what one hopes is the single-best candidate from two or more forecasts obtained by different means; rather, it involves combining these different forecasts appropriately. There is good evidence that the combined forecast will outperform the best of its component forecasts.[34]
    2. Make an explicit forecast of the probability that the product will fail after it has been introduced into the marketplace. This could be seen as repetition of the advice about decomposition. (The forecasting problem can be divided into two: what is the probability of takeoff being reached by a particular point in time, and conditional on that answer, what will the sales be at some particular point thereafter?) Even so, the advice is more than important enough to bear repetition.
    3. Retain a focus on the decision-making context. Why is the forecast necessary? What exactly needs to be forecast? What are the possible costs of error in the forecast? How can the possible adverse consequences of error be minimized (possibly by increasing flexibility of downstream decision options and increasing the speed of gathering and analyzing market data)? Follow accepted good practice in using forecasts for decision making, including the following:
      • Obtaining different forecasts and considering the possible reasons for variation between them (see “Combining Forecasts”).
      • Assessing the uncertainty in the forecasts.
      • Conducting sensitivity analysis—varying the assumptions on which a forecast using any one method is based and examining how the decision seemingly implied by the forecast would work out in practice.
      • Considering the applicability of criteria such as robustness as alternatives to optimality when developing a multistage plan. (When plans are conceptualized as a series of decisions to be made at different points in the future, the robustness of any decision is a measure of the extent to which it leaves open later decisions that would be well suited to any of a range of situations that may develop after the decision in question has been implemented.)[35]
    4. Capture the potentially useful by-products of the forecasting process. The process of forecasting is likely to produce insights of potential value to those responsible for marketing: for example, regarding contingencies that may arise and options for addressing them or consumer groups that should be targeted at an early stage and ways of reaching them. Indeed, such by-products may turn out to be more useful than the forecast itself. They need to be captured.
    5. Conduct the forecasting and associated decision making in a way that will provide information about how to improve the processes. If it is probable that the company will again be involved in prelaunch forecasting of the demand for new products, it should organize these processes to learn in light of how events turn out. At a minimum, processes should be documented, and that documentation should be retained in spite of possible embarrassment if results are disappointing—in which case the documentation becomes even more important.

     Three// Implementation

    “If you build it, will they come?”

    Implementation, which is where the rubber hits the road for new media projects, suffers from an image problem. It is widely considered to be a rather dull, largely technical process that, when successful, stretches between approval of the funding for an exciting concept and the concrete realization of a new service with actual, satisfied users. Yet on the basis of more than 25 years’ experience with a wide variety of new media start-up services—sometimes conducting research on their implementation, sometimes involved in other capacities—we have found the reality to differ greatly from this image. Implementation usually involves far more than selection and installation of equipment, along with an as-needed sprinkling of changes in procedure and a little training. Rather, it is often a complex, broadly defined process with multiple phases encompassing social, technical, and organizational components that, in addition to installation of a technical system, include managing relationships with partners, dealing with end users, marketing, interacting with equipment suppliers or manufacturers, overcoming obstacles, managing costs, and much more. As such, it presents significant challenges, failures are frequent, and it can often provide the research community as well as media professionals with valuable opportunities for learning at a time when the media concerned are not at all well understood.

    While a project management literature treats implementation of information technology (IT) infrastructure and services, issues that are distinctive to the implementation of new media technologies and services have received relatively little attention in academic or industry circles.[1] In part, this gap may reflect a reluctance on the part of companies and public service agencies to report on the many problems encountered in implementing new media services. If the implementation succeeded, it did so because of a brilliant vision; if it failed, bury the evidence! This is a shame: implementation projects that have been dismissed as failures as well as those that succeeded have often provided us with valuable pointers to the future of new media. We have also encountered many unsung heroes.

    This chapter is directed toward a broader audience than just practitioners directly involved in the process. If those responsible for funding implementation do not understand the types of challenges that typically arise, their expectations will be unrealistic. If those who conduct research, write, or teach about new media do not understand the major challenges, they may draw incorrect conclusions from a project that fails, perhaps seeing the concept of a particular new medium or service as having failed, whereas the true reason for failure lay at the level of implementation rather than in the concept. Furthermore, as we discussed in chapter 1, innovations often undergo reinvention; this process may be facilitated during implementation.

    A comprehensive treatment of the implementation process would require much more space than is available here. Instead, we discuss the major topics that arise, emphasizing aspects that are distinctive to today’s new media and their users. The projects with which this chapter is concerned cover a wide range in each of three characteristics:

    • technology,
    • users, and
    • purpose.

    The service being implemented may be based on any of the rapidly proliferating new media technologies, or it may be an innovative service based on a mature technology. An example of the latter was a mid-1970s trial telemedicine service whose considerable success was based on radical redesign of service delivery along with use of the simple telephone (see “The Value of Simplicity: Lessons from Telemedicine”). We are not concerned with technologies whose implementation has become routine in the settings considered.

    Users may be a closed community formed by those working in a defined organization or set of related organizations, as when a new service is introduced for internal communication. Users may come from a combination of two different kinds of community: a closed community of service providers within an organization and an open community of members of the general public who are their clients, as when a new media service is introduced into processes by which the clients are served. Or users may form an open community: members of the general public who are the intended consumers of, say, a new interactive television service.

    Intended users also differ in their degree of discretion in deciding whether to use a new media service. Although discretion is associated with whether the user community is closed, open, or mixed, members of the general public do not always have the most discretion and employees within an organization do not always have the least. For example, members of the general public may be unable to avoid interacting with a user-hostile interactive digital response system when telephoning a corporation, while managers may well be able to avoid using a new videoconferencing system installed within an organization.

    At an obvious level, the purpose of an implementation project is to reach the point at which the proposed service is working as intended for enough users at an acceptable cost. The question also arises of why the service is being introduced. It may be a pilot, field, or market trial; a demonstration project; or the rollout of a new service to the public. This aspect, too, should be regarded as a part of the purpose of a project that affects how it should be implemented.

    The next section deals with planning an implementation project. It covers the importance of identifying and addressing possible conflicts among the interests of different stakeholders, together with the sometimes associated matter of resistance to change as well as more obvious issues such as estimating cost and time components. Implementation of new media is necessarily concerned with what users need at a time when their relevant needs may not be clear enough, so the section that follows is concerned with needs, including the process of needs assessment. We then turn to the intended users of the innovations. Here, somewhat greater emphasis is placed on the challenges of implementation within organizations, such as turf issues, staff turnover and training, product champions, and budget cycles, but many of the challenges in introducing new services for the general public are also discussed, including the technology skill levels of consumers and difficulties in providing training for end users who are not working within an organization but sitting at home and seeking an entertainment experience. The final section examines a set of issues relating to equipment, such as the location of equipment that is to serve a group of users and its reliability, as well as matters of users’ comfort. These might appear to be a set of obvious and simple issues to resolve, but our research has found them to be surprisingly problematic in many instances.

    While the chapter is divided into sections on “Planning,” “Needs,” “Users,” and “Equipment,” these compartments are anything but watertight. Most of the considerations raised in the final three sections should be considered at the planning stage. Most of the matters discussed under “Planning,” “Needs,” and even “Equipment” relate directly to users.

    Planning

    It may seem unnecessary to suggest raising at the start the simple question of who’s in charge. Experience suggests otherwise, however, at least for the many large media conglomerates in which two, three, or ten people may have responsibility for “new media.” In such organizations, people can often compete for control of a project, with all those responsible trying to put their stamp on the plan.

    An implementation plan identifies where one wants to go and how one intends to get there. A good plan will also describe who will accompany one (partners), how long it will take, what it will cost, and what one expects to happen (outcomes) after one gets there. Planning should be a rigorous and disciplined process, built on a broad vision of what the project seeks to accomplish.

    In any but the simplest projects, there are two straightforward reasons why there is unlikely to be a single implementation plan, laid out before the process starts and implemented as is. First, it is normal for unanticipated delays or difficulties to arise, requiring plans to be revised in light of events. Second, the full information necessary for detailed planning of some aspects may not be available until implementation is well under way; one of many possible examples occurs when locations for equipment are to be decided jointly with users, whose participation in the planning does not start until after implementation has commenced.

    This means that a plan should build in flexibility so that the project can adapt to the unanticipated. It is also important to monitor the implementation process and change the plan as needed.

    One of the distinctive features of new media projects is the way that technophiles—i.e., people who love the latest gadgets—take to them like wasps to honey. These technophiles bring enthusiasm and often strong technical skills, but many technophiles cannot distinguish features and services that people want from those that make little sense other than to boost the total number of features offered. If a technology can offer 33 features, why settle for 8, even if users want only 8 and the others will both get in the way of transparent usage and increase costs? It is important to seek out technical people who are not only competent but also understand the needs of end users and can speak in plain language to the implementation team. Whether utilizing internal resources or outsourcing the project implementation, it makes a great deal of sense to meet directly with the technology team and assess their skill levels, attitudes about features in the product or service, and ability to communicate with a wide range of managers and end users.

    Research Objectives, Demonstration Objectives, and Funding Purposes

    Some issues need to be thought through at the outset of planning the implementation process. One is agreement on and formulation of the project’s objectives. Alongside the points usually emphasized in the wider project-planning literature, others can arise from the fact that new media implementation projects may include research or demonstration objectives.

    Some new media projects evidence a surprising failure to agree at the outset (and then to remember) what the implementation team is conducting—i.e., a trial, a demonstration project, or the rollout of a service. Is there to be a pilot phase, or is the product or service going directly to full implementation? Disconnects at times arise between the implementation team and senior management. If they do not agree, proper planning can become difficult or impossible, and end users and journalists who follow the industry can develop false expectations. This was one of the problems with Time Warner’s Full Service Network (see chapter 8). The team implementing the interactive television service thought that it was a trial, whereas senior management told the press that it was the start of a national rollout.

    In field trials in particular, planners need to be aware of the possibility of conflicts between research objectives and service objectives after the project is under way (see chapter 4).

    As in other kinds of implementation projects, one needs to consider the purpose of any document containing an implementation plan. The document as a whole may really be a proposal to raise money. For a university or nonprofit agency, the document may be a proposal to a funding agency. For a business, the document may be a proposal to corporate management or outside investors for support for a new venture. Plans included within a proposal for funding must be taken with a grain of salt. Proposals are written to sell projects and often ignore obstacles and known problems. Time and budget constraints may, therefore, prevent entirely predictable stumbling blocks from being overcome unless sufficient resources are hidden under other budget headings. The gamesmanship of proposal writing does nothing to help an implementation team deal with the real problems it will face in developing and launching a service. To plan well, a team must identify what may go wrong and then take steps both to reduce the corresponding risks and to cope with the problems if they nevertheless occur.

    While implementation plans included in or accompanying business plans may have weaknesses, there is another side to the story for nonprofit organizations, especially in the fairly frequent cases when the implementation of a new media service is funded by an external source. It is much less usual for nonprofit organizations than for corporations to produce business plans. However, many project managers of new media ventures in nonprofit organizations told us that a formal business plan can be very helpful even to nonprofits. The process of developing it focuses attention on costs, revenues, target audiences, marketing, and other vital issues that affect noncommercial as well as commercial ventures.

    Outside funding of new media projects in nonprofit organizations generally has a time limit, after which the organization is expected to pay for the service from its regular budget. From Day 1, the planning process should address the sustainability of the service after the start-up funding ends. One should generally avoid high-end services that can be supported by an initial grant but would be beyond the means of the user organization after the grant period. User organizations need to be aware of costs early on and to know when grants will run out. If they come to perceive a new service as free, they are less likely to retain it later when they would have to pay for it.

    Few authors write about an ethical aspect of funding that can arise in field trials and demonstration projects. While it would definitely be a mistake to take success for granted when embarking on an implementation project in a closed or mixed-user community, it would also be a mistake to overlook the possibility that one will enjoy success—and along with it a far-from-welcome problem. The user community will very probably have been cajoled into making a significant investment of time and effort to adjust—maybe in connection with associated changes in organization and procedure, maybe in learning how to use new equipment well. Users will be enjoying the benefits of this investment when the funding, which was sufficient to get the new service up and running only long enough to serve the project’s research or demonstration objectives, runs out. If the new service must then be withdrawn because there is no longer money to support its technology component, that withdrawal surely amounts to breaking faith with the user community.

    There are examples of transitions from project funds to regular budgets or other forms of financial support for services that have proved successful in field trials and demonstration projects; unfortunately, there are also examples where seemingly worthwhile services have had to be withdrawn for lack of continuing funding. What seems to distinguish the implementation of the former from that of the latter is that in the former, the need for the transition was identified and planned for from the outset.

    Conflicts of Interest and Resistance to Change

    When new communication technologies are to be introduced into an organization, one of the issues that needs to be thought through when planning the implementation process and kept under review thereafter is the possibility of conflicts between the interests of an organization and certain groups or individuals within or closely associated with it.

    In a closed or mixed-user community, some stakeholders can easily perceive the success of a new media service as likely to disadvantage them. Even if they do not perceive this danger in advance, they may discover it later. Net benefits overall need not mean net benefits for all. In some cases, it is—or should be—hard to miss the possibility. For example, a telecommunication service that allows meetings at a distance and thereby reduces travel may benefit the organization concerned but not employees who like to travel or who feel that too much valuable in-person contact would be lost in a teleconference meeting. Other examples arise when a new service increases scrutiny of job performance. A new use of telecommunications to support distance education may make a teacher more open to evaluation by others. In some cases we have studied, nurses were apprehensive that a new telemedicine system would lead to more frequent checking of their work by physicians. More generally, by offering an additional means of communication, a new media system brings with it the possibility of reducing the autonomy of those who may have derived some benefit from traditional barriers to communication. Associated with actual or perceived loss in autonomy may be an actual or perceived drop in status.

    We have encountered an interesting variation of this phenomenon when those who enjoy more power in a system are at the losing end. Organizations often come to operate in ways that suit the convenience of those who are at the top of their hierarchies more than those who are lower down; this development may be accepted on the grounds of efficiency, since the time of the former costs the organization more. In health care systems, the convenience of doctors may take precedence over that of nurses (and patients); in health care systems in prisons, the convenience of personnel can be expected to take precedence over that of prisoners; and in educational organizations, the convenience of teachers may be favored over that of students (though probably not over that of senior administrators). As a result, when service delivery is reorganized around the use of a new media service, some of the previous privileges of position may be at risk. In our research on implementation problems in field trials and demonstration projects of new telecommunications services in educational organizations, health care systems, and prisons, we have found that students, patients, and prisoners were more likely to be satisfied with the innovation than were teachers, physicians, and prison personnel.

    Real and perceived conflicts between an organization’s interests and those of some of its stakeholders may exacerbate the widespread but natural phenomenon of resistance to change and can even result in the emergence of saboteurs. It can therefore be quite helpful to identify and consider those risks early on.

    Product champions—energetic change agents who strongly support and promote a new media innovation—can provide a powerful force in overcoming resistance to change.[2] For new media ventures within organizations, these champions may be senior managers who back the innovation and make sure that the right people within the organization provide whatever support is needed and that personnel use it. Senior managers can also take the necessary steps to remove or work around obstacles. Product champions are often people who see successful implementation of the innovations in question as advancing their careers; in a new media start-up, they are often high-energy salespersons who can inspire staff and promote the product effectively in the marketplace. It is difficult to create or train product champions. They have to be found within an organization or recruited from the outside.

    If product champions can boost the chances for successful implementation of a new media product or service, saboteurs can throw a monkey wrench into the process. When implementation is within an organization, saboteurs may be disgruntled employees who do not want others in the organization to succeed, members of a group that is angry because funding did not go to a project they advocated, or workers who had resources diverted from their groups to the new project. Political rivalries within an organization can also breed saboteurs. Unless a successful appeal can be made to their altruism or they are co-opted, bought off, or kicked out, trouble will arise. Saboteurs can drain energy, time, and life out of new media innovations.

    Opposition to a particular project can also arise from those who are not themselves intended users of the technology. Within an organization, for example, opposition may result from associated organizational change that reduces the status of some department. Losing parties can also come from outside the system in question, at least as it is defined by those planning a project. For example, a local college may lose students to distance learning courses to the point that the college’s viability is threatened, thus creating a problem for future local students wishing to enroll in other courses for which there is no distance learning option. Should those responsible for implementation of new distance learning programs consider this issue? We believe they should—and not only because local colleges sometimes turn out to have powerful political allies.

    Time and Money

    When dealing with new media, levels of uncertainty are likely to be relatively high. How well will a new media service and its technology work? How can the service best be used, or how should content for it be designed? What hidden problems may emerge? Will intended users be prepared to make the necessary adjustments in their established habits and procedures? These uncertainties can make for additional difficulty in estimating the time and budget required for implementation.

    There are no standard periods for implementing new media services, but many project managers have reported that implementation took appreciably longer than they expected. Indeed, time appears to be a more precious commodity than money: fewer project managers reported insufficient funding for implementation than reported insufficient time. Common sources of delays have included environmental impact studies, gaining regulatory approval, and partners who did not meet deadlines. As is to be expected, having insufficient time was much more likely to be a problem when organizational innovation was involved.

    When implementation occurs within an organization, rapid technological change can create difficulties. A new technology may be surpassed before the implementation of a service based on it is completed. It may well be necessary to follow technology trends carefully and be prepared to shift technologies or adapt to new ones that come along, which can take up additional time. In consumer markets, the existence of competitors may result in considerable time pressure, causing organizations to “get it out the door” or “open for business” before a system or service is fully debugged. Such actions can lead to unsatisfactory initial experiences for users. Time constraints can also set off a chain reaction, as when a shortage of time leads an organization to eliminate a pilot stage in a project, which in turn leads the service to be launched with defects that could have been identified and remedied in a more controlled pilot setting.

    A recurring issue that can affect both timescale and cost is whether to use internal or external resources to manage technology or create content. For example, should a text-based Web group that wants to add a video service to its site build the necessary capability internally or utilize an outside production group? There is no single answer. It depends primarily on the organization’s capabilities and the relative costs of the two approaches. One way to resolve these questions is to examine what other Web sites have done and how well it worked. If using an external resource, it is important to do more than a cursory check on its reputation for reliability. Some observers suggest maintaining a backup external resource in case a primary provider cannot meet project requirements or deadlines.[3]

    Some costs can be anticipated and accurately built into the project budget—e.g., new equipment and regular staff. Other costs sometimes are hidden or escalate quickly beyond the budget. In organizational settings, the unanticipated or higher-than-expected costs that occur frequently include training end users, retrofitting old equipment, internal wiring in buildings, ongoing equipment maintenance, and network connection costs. There are accepted approaches to dealing with these issues—if not always resolving them completely. In the case of retrofitting old equipment, it is important to decide which equipment is worth the cost of upgrading; if the cost of an upgrade begins to approach the cost of new equipment, it is clearly not worth retrofitting. One needs to be aware that the cost of internal wiring in buildings and ongoing maintenance can be higher than expected. Network connection costs can also be significant. The high demand for training end users can cause costs to escalate unexpectedly.

    When new media products and services are introduced into consumer settings, the unanticipated or higher-than-expected costs that occur frequently are acquiring or creating content, gaining regulatory approval, replacing faulty parts, and providing warranty service for products that have been mishandled. Satellite radio is an example where content costs escalated sharply for the service providers; telephone companies such as Verizon and AT&T experienced very high costs in winning regulatory approval to provide television services; a number of digital cameras had to be recalled as a result of faulty batteries; and MP-3 player manufacturers experienced high service costs when the units were mishandled by consumers and required service under warranty.

    Boring practicalities receive almost no attention in the literature. Nevertheless, these examples of factors causing unanticipated or higher-than-expected costs in organizational and consumer settings occur fairly frequently. For how much longer will they remain unanticipated? Implementation is often difficult; it may be too much to expect to avoid all mistakes. But it is better to make new mistakes than to repeat the old ones.

    Market Issues at the Planning Stage

    Planning must be based on a sound understanding of outside forces and organizations that can shape a new media product or service and their likely impact on implementation. In the case of satellite radio, the early service was shaped in part by radio manufacturers and the features they were willing to build into the first generation of radios. For high-definition DVDs, the early product was shaped by movie studios that licensed their content for each format and manufacturers that received licenses and built players. For manufacturers of portable music devices such as MP-3 players or mobile phones that store and play music, the iPod, the dominant product in the marketplace, shaped nearly all competitors since they would be measured against it in product reviews and consumers’ minds.

    New media innovations within organizations are shaped by many factors, such as the personnel available to implement the innovation and budget cycles. The planning process can be used to identify and develop a strategy to deal with many of these factors. Such contextual elements are often specific to particular markets. When planning a new media service for the primary and secondary school market in the United States, for example, it is essential to understand the relevant budget cycles: it often takes a year from a purchase decision to the actual purchase.

    In consumer markets, the cost of making the first-generation product often exceeds what most end users would be willing to pay. In some cases, it is possible to identify innovators and early adopters who are willing to pay a high introductory price so that a second generation of the product and economies of scale in manufacturing can bring the price down to an acceptable level (as discussed in chapter 1). In other cases, there may not be a large enough group of people who are willing to pay a high price and sustain the product until the price can be reduced. Here, the organizations introducing the new media product commonly have to consider subsidizing it. Satellite radio providers Sirius and XM subsidized the first generation of their radios; Sony subsidized the first-generation PS3 game consoles, as did Microsoft with its Xbox game console. From a planning perspective, the goal is to sell enough of the product to be viewed in the market as a success while containing the loss that results from the subsidy. Some organizations limit the number of units introduced in the marketplace to control the loss and try to create buzz that the product “sold out” and cannot meet the demand from consumers, as Sony’s PS3 did.[4]

    Needs

    Observers have often commented that a new media product or service failed because it did not meet a “real need.” This assessment is generally understood as a stronger statement: there was no real need it could have met. Such statements may make nice epitaphs, but it is not particularly easy to turn them into practical advice for the living. In part, this seems to result from confused thinking about the concept of need.

    Needs for new media products and services of the kinds that concern us here are often relative rather than absolute. They can be created by providing people with something that they subsequently are not prepared to do without. They can also be created by removing alternative means to desired ends. In this sense, perceived needs or wants matter; real need is not a useful construct.

    The term need can also be used to denote a logical necessity: to speak with someone from one’s car, one needs to use a mobile phone. A new media service with certain characteristics may be logically necessary to some new activity—for example, to watch favorite television shows whenever I want, I need a device such as a personal/digital video recorder (PVR or DVR) that can record and store them for schedule-free viewing. The new activity may or may not become a perceived need. Conversely, it can become a habit into which people fall because the new technology is available to them—e.g., many have become addicted to Blackberries though their actual need for portable e-mail is not strong; many others say they could not live without their PVRs, though these users had little apparent need for such devices before trying them. Or an activity may be perceived as an opportunity seemingly presented by a new use of technology, rather than a need.

    Sometimes it is necessary to consider needs at two levels. An example from the early 1970s involved a projected interactive cable television service to provide in-home courses on parenting and other topics. At one level, the question arose as to whether such in-home courses were needed in the community; at a lower level, the question arose whether, if such courses were needed, an interactive cable television service was needed to provide them. In the case of two-level needs, it is important to consider the possibility that some other approach will be more cost-effective in meeting the higher-level need (see “Looking in the Shadows” in chapter 2).

    Needs may range from strong to weak. The strength of a need can have a major impact on a new media project, particularly when technical problems exist. In one situation, users who experienced only a few technical problems reported that those problems were a major deterrent to use of the equipment. In another situation, however, users who encountered many technical problems reported that the problems did not deter use. These seemingly contradictory reports are explained by the fact that the second group was widely separated physically and had no adequate option other than using a teleconferencing system that had a number of bugs. The strength of their need affected their willingness to live with technical problems.

    Need is often treated as something static and unambiguous. However, needs change over time, and some needs are surrogates for others—e.g., the desire to make money or advance one’s career. It is useful to understand when a seeming communication need is a surrogate for something else. This knowledge may suggest an alternative approach to a particular problem. One should also not assume that members of an organization always want to meet any of its “obvious needs.” For example, an organization might have an obvious need to earn more revenue. However, even if a particular new media service would allow the organization to do more business and earn more revenue, it does not follow that the organization’s employees would necessarily desire that new service. If they perceive the new business to entail extra work for them with no extra compensation, some of them may oppose or reject it.

    It is commonly assumed that better communication is both a need and a want for an organization. This often proves not to be the case, however. For example, a teleconferencing service may encourage more meetings than are necessary or bring people into encounters under the wrong circumstances. Even if an “objective” need for such communication exists, users may not want it. Put simply, “What’s in it for me?” They may not want to do their jobs better, or they may oppose any change in their current pattern of work. Similarly, a planner may assume that people will value more communication, saving time and saving money. Not always. Many new media services for consumers involve spending a lot of money for something that takes up time for no extrinsic purpose and discourages people from communicating with others—e.g., video games.

    New media technologies and services often are purchased to meet one need but are used to meet another. Many parents purchase mobile phones for their teenage children so that the teens and parents can reach each other in case of emergencies, to arrange pickups after school, and to coordinate other activities. However, the teens primarily use the phones for social networking with friends.[5] Nor can perceived needs be taken at face value. In the past, potential users of a teleconferencing service often expressed a need for full-motion video. Yet experience led many of these users to be satisfied with an audioconferencing service supplemented by still images and graphics distributed via the Web. In sum, analysis of assumptions about need is a decidedly necessary but demanding activity. When considering need, it is a useful mental exercise to complete the phrase “need in order to . . .”

    One alternative is to appeal to fear rather than need. Fear can be a powerful motivator, but strategies based on it can backfire. As the year 2000 approached, many information technology companies instilled fear of Y2K (remember that?), the idea that at the turn of the century, computer programs that were not designed to turn their internal calendars to the new century would crash and disaster would follow. Banks would lose track of billions in currency, airplanes would go out of control, company databases would be erased, and so on. Fear led many corporations to invest heavily to overhaul their computer systems. When disaster failed to strike those that were less prepared, many of those same corporations became cynical about information technology companies and for the next several years hesitated to invest in system changes where genuine needs for greater security existed.

    The Substitution Paradigm

    Much current thinking about new media technologies and services is based on a substitution paradigm. In organizational applications such as corporate Web sites for in-house use, the service may be perceived as a substitute for corporate brochures, training manuals, and other documents. In consumer applications such as PVRs, the technology may be perceived as a substitute for VCRs. There is good reason to challenge this thinking. In the case of corporate Web sites, the information provided often duplicates that in printed documents, which continue to be needed even if they circulate less widely. The site is likely to increase rather than reduce overall costs, especially if it utilizes advanced features such as video clips.[6] If designed well, a corporate Web site will stimulate new thinking and provide a central location for corporate information in a way that would not be possible with printed documents. PVRs foster a new style of television viewing that differs substantially from that of households with VCRs. (In the 1990s, when VCRs were widely used, only one in four VCR households recorded any programs at all; most used them to watch rented movies.) In PVR households, most television programs are time-shifted or watched in a time-delayed buffer so that the viewer can fast-forward through commercials. New communication technologies offer a relaxation of constraints: they make it possible to undertake new activities or to undertake old activities in a wider range of situations. Thinking based only on the substitution paradigm focuses on a technology’s enabling users to perform existing activities better or at lower cost; it is likely to miss the potential benefits of uses other than substitution and to overestimate the degree of direct substitution that will occur. How well would the need for photocopiers have been understood if people had seen them only as substitutes for carbon paper? How well would the need for personal computers have been understood if organizations had seen them only as a substitute for adding machines?

    The substitution paradigm has been used extensively and often successfully in marketing new media services, though little evidence supports the contention that significant cost savings accrue when communication technology is used to replace print. In addition to the limitations in the argument for substitution already noted, at an organizational level, substitution often requires that funds shift from one budget to another, which can be problematic and provoke turf wars.

    Needs Assessment

    A needs assessment may precede implementation, as when a company is conducting research to determine whether a particular product or service is worth developing. Alternatively, such an assessment may be part of the implementation process, as when a decision has been made to conduct a field trial or a demonstration project and a needs assessment is conducted to determine the most suitable sites and applications. On the surface, the term suggests that it provides a way of avoiding the criticism that some new technology or service is a solution in search of a problem: needs pull sounds much better than technology push. But that is only on the surface. A needs assessment rarely starts with a blank technological slate. It is an empirical process for systematically identifying and investigating possible needs for a particular technology or family of technologies. Since its starting point is generally a limited range of possible ways (sometimes only one way) to meet any needs that are identified, it is, after all, very much a process in which a solution or limited range of solutions is looking for a problem.

    This fundamental weakness of needs assessments, coupled with our remarks in chapter 1 that many successful new media were introduced into the market in the absence of an apparent need, causes us to have reservations about the methodology. Nevertheless, any systematic, empirical process that focuses on the possible relationship between potential users and a proposed technology or service can certainly be useful, provided one bears its limitations in mind.

    A case can at times be made for conducting a needs assessment before new media technologies and services are created, when one of the assessment’s roles may be to filter out schemes that would not be worth pursuing. At a later stage, there is a greater likelihood of unintended bias, as when it is part of the process of product development—often to justify funding for the project. Bias is less likely if a needs assessment includes sufficient consideration both of users’ other needs—maybe higher priority needs that do not involve the technologies in question—and of the possibilities of meeting any needs that are found with other, more cost-effective solutions. Such possibilities too often are overlooked.

    Sometimes one of the purposes of a needs assessment is to identify organizational units in which to conduct a demonstration project or to start the rollout of a new service. In such cases, one must beware of the fact that where the need for the improvement in effectiveness that the new service might bring seems to be highest, successful implementation may be the hardest. This is the case when, for example, current effectiveness is low because management is weak, morale is low, or turnover is high. Ironically, implementation may be easiest where the need is lowest: unto him that hath shall be given.

    Realistically, needs assessments are a necessary evil in the process of new media product development, often serving to justify funding for a project. A needs assessment certainly is substantially better than nothing in filtering out schemes that would not be worth pursuing and in providing information useful in the planning of those that would.

    Users

    The end users of new media technologies or services are a surprisingly neglected group. In the whirlwind of activities to fund, develop, and launch new media, it is easy to take the intended users for granted, to think only of the average user without appreciating that the heterogeneity of users may matter, to treat users simply as inanimate and predictable components in an organization, or to consider them merely as buyers. Moreover, when potential buyers are studied, market research is likely to focus primarily on demographics—i.e., age, gender, race, income, and other social categories. However, a much wider range of variables is associated with the successful implementation of new media. Even the study of demographics should take time into account. The demographic characteristics of early adopters may differ markedly from those of people who purchase or use new media later (see chapter 1).

    There is a difference between purchase and use. Some new media are purchased, used a few times, and then left to gather dust in closets. The distinction may not matter—a company may need to be concerned only about sales if, for example, its product is a novelty watch that has a built-in FM radio. In other cases, however, successful implementation requires that the new media product or service continue to be used over time—for example, a satellite radio, an MP-3 player, or a video mobile phone. Continuing use matters because it will generate revenue (e.g., monthly or per-use fees), content will be sold that plays on the device (e.g., songs for an MP-3 player), or the organization’s mission is associated with continuing use (e.g., with telemedicine applications).

    A focus on the user reminds us that innovations in which people operate or interact with new media are always social as well as technological. Among the many neglected issues is skill level in using technology. The broad public in developed countries generally has much greater skills in using technology than was the case 20 or 30 years ago. The ubiquitous presence of ATMs, personal computers, and even TV remote controls has taught most of the public basic skills in interacting with technology. The same can be said for many people in developing countries as a consequence of the widespread adoption and use of mobile phone technology.

    Large differences exist in skill levels, however, and some people’s skills are quite low, raising a few questions for those implementing new technologies or services. How much skill do intended users need? What are their present skill levels? Could better design of the product or service reduce the required skills? A related question is whether there is an opportunity to train users. In the absence of direct training, the available tools are user manuals, quick-start guides, online tutorials, and customer-support help lines. Few people read user manuals, and those with weak skills are not likely to use online tutorials. This leaves quick-start guides, which may not be sufficient, and telephone help lines, which are expensive to operate.

    Much closer to the other end of the scale are users in organizational settings, who, although they have a good deal of experience in using computers, are less knowledgeable than they think they are. Patience and diplomacy may be necessary in responding to their “helpful” suggestions. Listening to their suggestions is useful; following all of them without careful scrutiny can be a formula for disaster.

    The issue of user training illustrates a divide between open and closed settings. In the former, a new media product or service is made available to the general public; in the latter, implementation takes place in an organization such as a company, a school, or the military, where users can be trained and usage can be controlled to some or a large degree.

    User Issues in Organizations

    Our discussion of planning has already introduced the subject of resistance to change, which has been well documented as it applies to users of new media.[7] Exceptional leadership and people skills may be required to overcome this resistance.

    Resistance to change can take many forms. The group for which a new service is intended may refuse to use it or may do so only when forced. Intended users may complain that the equipment does not work although it actually works well, or engage in negative viral marketing—bad-mouthing the project to others in the organization. (Many people assume incorrectly that viral marketing is always good.) There are several reasons for resistance. Some people may have had bad experiences with technology in the past—it did not work well or did not meet their needs—and fear that the new service will also be disappointing. Others may perceive the new service as disruptive and simply do not want to change the way they currently do things. In a number of cases, we have seen resistance occur because people had to change their schedules. Many professionals—teachers, doctors, and lawyers, among others—function under heavy workloads that constrain their time. It is difficult for them to find the time to learn about new technology services, receive training, and integrate the services into their work environment. Resistance is almost certainly higher when people perceive a new media service as initially requiring too much time relative to the time that subsequently will be saved. Schedule issues can also arise when a new technology anticipates real-time communication among people in different time zones and members of one group must adjust their schedules.

    The problems of resistance to change are mitigated to some degree if the workers concerned can be required to use the system, if they can treat or serve more people and thereby make more money by using the system, or if the technology relieves them of unwanted travel assignments.

    Two different types of training are often required for new media systems and services: training to use the technology and training for new organizational roles associated with the introduction of the technology. Training in the use of technology will depend on the system and users’ existing skills. In projects we have studied, technology training often worked best when it was in two phases: an initial wave of training to develop basic skills and a second wave a few months later to develop more advanced skills and answer questions that emerged from users’ early experience with the technology. It is important to recognize the limits of training and to avoid pushing it beyond the point at which technical professionals should instead operate the equipment or assist regular users in performing certain technical tasks.

    A few additional lessons stand out from our research about training within organizations. First, even if a system is very easy to operate, there is often a value in allowing intended users to try it out on a demonstration basis, when mistakes do not matter, before using it for real. Second, whether instructions are written or presented in person, they work better if offered on a colleague-to-colleague basis rather than being handed down by specialists. A third and related lesson is that peer training seems to work well in a variety of projects. Some projects have adopted a “train the trainers” approach in which an initial group of users is trained in using the technology, and they in turn become peer trainers for others. The advantages of peer training are that a peer shares a common language with other users, there is less concern about looking foolish in the presence of a peer, and most users have ready access to peers for subsequent assistance.

    The chances for successful organization-wide implementation are enhanced when the initial groups that use the service have a high chance of success, thereby fostering positive word of mouth. Choosing candidate groups for early implementation may be based on perceived need, enthusiasm within the group, and knowledge gained through the training process. Training can be used to identify both people who are likely to experience early success and other people who will require more time and support.

    When staff members need to be recruited to manage and operate a new media service within an organization, it is important to define their roles clearly before hiring them. Staff should also understand and be comfortable with the project goals. In many situations, these employees will have to interact both with technical people and with those who know absolutely nothing about technology. In such projects, they need to be able to function comfortably in both worlds. Technical staff members need to be skilled in explaining technical issues in plain language for nontechnical colleagues and end users. A positive attribute of new media projects within organizations is that they often have high status, which helps to attract skilled staff. In an organization that will continue to implement new media services, the other side of the coin is that staff then acquire skills that are in high demand and may be recruited away to other companies or to other groups within the organization. For this reason, it is important to ensure where possible that key staff can advance their careers within the new media project group.

    Many nonprofit organizations that undertake new media projects have special staffing requirements. A strong need often exists for staff members who are skilled community organizers, since much work involves coalition building and marketing to other nonprofit organizations. A straightforward but often overlooked need is for someone who knows how to obtain purchase orders and generally navigate the bureaucracy, which can be daunting in large nonprofit organizations such as universities. Nonprofit organizations with limited funding can rely on students and community volunteers for some tasks, but such people typically have a high turnover.

    In some user groups, we have observed problems with turnover and low morale. We have studied a number of new media services that were set up to provide previously unavailable services for organizations in remote rural areas. High turnover commonly occurs among personnel working in such areas as a consequence of professional isolation, social isolation, low pay, or other factors. Some user organizations estimate turnover rates to help plan how often a new training or promotion phase will be required. High turnover can also lead to problems if subsequent user groups do not support the application as the first group did or if they have problems with the system that were not present earlier. One telemedicine application had high turnover among a group of doctors using an interactive video system. The later users included many foreign-born doctors who had difficulty being understood over the video link and ultimately rejected the system.

    Low morale within an intended user group can also prove to be a serious problem. In some cases we have observed, groups felt that new media services were taking money away from other projects that, they felt, were more needed; in other cases, low morale made intended users generally negative about any changes in the way they worked.

    In many organizational applications of new media, the goal is an information exchange between experts and those in need of the information they can provide. This can raise some subtle but important issues. First, in some cases, there can be different perceptions about whether the flow of advice should be one-way or two-way. We saw one example of this in a service in which a group of research doctors were consulting with a group of practitioner doctors. The research doctors viewed themselves as experts who were providing advice to the practitioners. The practitioner doctors, however, felt that they were the equals of the research doctors and could provide advice as well as receive it. It is also worth noting that information exchanges on the Web often blur the distinction between expert and novice. The Web as a medium fosters a sense of equality and a—sometimes dangerous—perception that all opinions are valuable. Second, in some community settings, people assume that all expertise needs to be imported from outside, but important latent resources or relevant expertise may exist within the community. Such resources may be more acceptable than outside experts, and utilizing those resources may strengthen the community.

    User Issues Arising Both in Consumer Applications and within Organizations

    Many user issues such as security, privacy, and the initial experience of the technology or service arise both in applications for consumers and within organizations. Security of information about users is a large and growing problem. Unfortunately, many of the measures undertaken to increase security get in the way of a good user experience. The issue is highlighted by “password hell.” At the workplace and on consumer Web sites, more and more new services require passwords. We know many people who have more than 50 passwords. At some universities, each student or faculty member must have 10 or more passwords just to use the university’s Web site. Many corporate networks are even more onerous, requiring multiple passwords and mandating that users change them every month. As a further protection, systems require long passwords that mix letters, numbers, and symbols. In the real world, we have observed that many workers, often in open cubicles, write down all their passwords on post-it notes and stick them onto their computer monitors in plain sight for all to see.

    Many consumers avoid sites with strong password restrictions. In one research study, a new Web site for a credit card company had particularly strong password requirements. Users were brought into a laboratory and asked to create a password and navigate through the site. After hearing the password requirements, one person after another said, “I’ll never use this site.” The company’s director of security, who was observing the users’ behavior from an adjacent room, responded, “Well, people are just going to have to change their behavior.”

    Users’ privacy involves a complex set of issues. Many people (for example, those who worry how information collected about their visits to Web sites will be used) view privacy as a major concern, while others (for example, those who display intimate videos of themselves on Web sites such as YouTube or Facebook) do not. New media can threaten privacy, as in the case of ubiquitous mobile phone cameras, camcorders, and surveillance cameras in public locations, or provide privacy, as in the case of Japanese teenagers who use text messaging at home to have private communication with friends. (Telephone conversations can be overheard by parents.) We have encountered some privacy concerns in organizations that use video teleconferencing systems. These concerns often are related to whether people out of the view of the camera can see and hear what is being said or to whether a video recording is being made. Such concerns may be justified or unjustified. Where they are unjustified, they may be alleviated if the system and its procedures are explained to users. One criminal justice application used a two-way video system for arraignments. The person being arraigned was in one location, while the public defender and judge were in another. The public defender’s office demanded a means of private communication with clients, outside the earshot of the judge and police. Meeting this requirement (with a telephone call that others could not overhear) was essential for the acceptance of the new service even though the private telephone links were rarely used. User privacy needs to be addressed in implementing new media services, but it has not been an insurmountable obstacle in the applications we have studied.

    We have touched on the importance of having a user’s initial encounter with a new media service be a positive experience. Disappointed users are unlikely to return or to recommend services to others. At a later stage, within the context of successful prior experience, occasional difficulties are less serious. Yet problems are most likely at the start: technical malfunctions are more probable; users are more likely to make mistakes; and there is less understanding of how to manage or work around problems. The lessons are clear. It is not wise to release a new media service that has significant technical problems; more technical support is needed at the launch of the service, when bugs are more likely; and, in many cases, it is helpful to release beta versions of services to groups of technically proficient users, who often enjoy playing with the latest technology, identifying bugs, and making recommendations about how to improve it. Some new media technologies or services can be introduced in stages, ensuring that the first stage provides a positive experience before later components are added and thus buying time to develop them properly.


    The implementation process, through product design and training, needs to address how users interact with the new media technology or service. Two of the issues that arise here are social conventions and the user interface. Social conventions are the unwritten rules about how people interact with others face-to-face or through technology. They tell us to say “Thank you” if someone holds a door open for us or “Hello” when answering a landline telephone. Social conventions may change over time or vary by culture or region, and, central to our discussion, they are linked to specific media. In an organizational setting, it is sometimes possible to develop and teach social conventions through training sessions. Consumer applications generally develop their own social conventions over time, a process that can be a problem during the implementation stage. When the Web was first introduced to the public, social conventions in user forums were inconsistent. In the early days of e-mail, there were no emoticons such as :) to indicate wry comments. And SMS abbreviations such as cu (see you) took time to develop. In the short term, when there are few or inconsistent social conventions, users sometimes misinterpret what others intended to communicate. During the early years of e-mail and online bulletin boards, people would often angrily flame about messages they had misinterpreted.

    The user interface is a principal component of new media design that affects how people interact with technology and with other people through technology. The user interface includes the design of the hardware (size, shape, number and location of buttons, and so forth) and the design of on-screen navigation features. In a rational world, the design of a new media product or service would precede implementation. In our experience, however, many new media products are rushed to market with incomplete or bad designs that must be changed during the implementation phase. Generally, insufficient time and attention are given to the user interface before and during implementation. Another problem can arise when designers want to make radical changes from what users have experienced with other products and services. New design features that may enhance a product or service must be balanced against consistency with the navigation conventions users already know from other services.

    Designing an effective user interface takes time. We know of one company that assigned 12 people to the task of designing a remote control. It took six months, but they got it right, and the product was launched to rave reviews. We know of a publishing group that assigned 10 people to the task of designing a Web site that could handle complex transactions. They required 18 months, testing the product with end users in three waves and adjusting the design based on what they learned, but they got it right and the service worked flawlessly as soon as it was launched.

    When a team is pressed for time and forced to roll out a weak or incomplete user interface, the chances for successful implementation are reduced. Even when the user interface appears to be adequate, it is important to get user feedback during the implementation phase and make adjustments. Most applications do not get the user interface completely right by the start of implementation.

    Equipment Issues

    Many technical issues are associated with the implementation of new media equipment—for example, reliability and the speed with which necessary repairs can be made. Social and psychological issues associated with equipment, such as feeling comfortable using it or how location can influence people’s use of it, have received less attention. We will treat both. Differences also exist between equipment issues in organizations and in households.

    Nuts-and-Bolts Equipment Issues

    Matching equipment characteristics to user needs and situational constraints requires considerable attention. We have observed many mismatches: a remotely controlled zoom lens was too slow for a crisis situation in a hospital setting; the wrong microphones were selected for a situation in which there was much background noise; a Web site used advanced software and high-end graphics for an education application directed toward schools with older computers that could not display the content; and an application used video monitors that were too small for users with poor eyesight. Knowledge about the user group and the situation where the equipment will be placed must inform the criteria for selecting equipment. One constraint is a policy in many organizations, especially government and publicly funded institutions, of accepting the lowest bid for equipment. It is foolhardy to choose equipment based exclusively on the lowest bid. While cost is obviously an important criterion, it is only one element in the decision-making process.

    In general, it is better to purchase off-the-shelf equipment, which is cheaper and more readily available than custom or untried equipment, as long as it meets the application’s needs. Installation and repair of equipment are so important that the quality and availability of those services should be part of the criteria for selecting equipment. Equipment frequently needs to be changed after projects are under way for a variety of reasons: the original choice was not appropriately matched to the needs of users, was too costly to operate, or did not perform as specified by the manufacturer. Anticipating at least the chance that this might occur, the project team needs to build flexibility into the budget and the implementation timetable.

    A related issue is whether to choose a high-tech or low-tech solution. In general, high-tech solutions provide more options, are often easier to use, and are more attractive to groups such as young adults who like bells and whistles. However, they are generally more expensive and can bring reliability problems with them. Low-tech solutions are typically less expensive, are more reliable, and provide a familiarity that is attractive to many adults. However, low-tech solutions sometimes involve older equipment that needs more frequent repair, and the technology may become obsolete.

    In order to choose sensibly between a high-tech and a low-tech solution, it is important to ask what technical skills and resources we have and whether our group can manage the technology and repairs itself or will rely on a vendor. In general, it is wise to match the demands of the technology with the technological skills within the organization concerned. Bleeding-edge technology (the latest and most advanced) should probably be limited to groups with significant technological and financial resources. For others, it may be better to stay a few steps behind the cutting edge while trying to avoid technology that will become obsolete overnight. It is important to monitor technological change and distinguish what is significant and relevant from what is marketing hyperbole.


    What are the weaknesses or flaws in a technology that is being considered for use in a new media project? Some of them are hidden by the manufacturer and may not show up until a project is launched. They can often be detected through testing or by talking to people who have already installed the technology. The weakness may be deadly for the intended tasks or it may be something that is not crucial to successful use. Voice recognition technology, for example, works for some applications but is not ready yet for free-flowing speech from an unknown person calling an automated service center.

    Users generally expect a high degree of reliability from equipment. When they do not get such reliability, their reactions vary in relation to their expectations (e.g., physicians are less tolerant of equipment breakdowns); the degree of their need (if the new media technology is the only way to get a vital service, problems are more likely to be tolerated); whether use is voluntary; and other positive aspects of the experience (e.g., whether they enjoy using the system). To safeguard against unreliable equipment, it is useful to build redundancy into the technical system. In this way, alternative or backup equipment can be used rather than losing service entirely or continuing with bad equipment until repairs can be made. In some cases where no immediate fix or alternative is available, it helps to identify how or where equipment is unreliable. One company we studied began to use an advanced mobile phone for field workers but had problems with gaps in coverage areas, resulting in many complaints. When the company provided workers with maps detailing areas where coverage was poor, unreliability became more predictable and complaints were reduced.
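    As a rough illustration of why redundancy helps, the sketch below estimates the availability of a service when backup units are added, under the simplifying (and often optimistic) assumption that units fail independently; the 95 percent single-unit figure is hypothetical.

```python
# Rough availability estimate for redundant equipment, assuming each unit
# fails independently -- a simplification, since shared power, wiring, or
# software faults can take out all units at once.

def service_availability(unit_availability: float, num_units: int) -> float:
    """Probability that at least one of num_units identical units is working."""
    prob_all_down = (1.0 - unit_availability) ** num_units
    return 1.0 - prob_all_down

single = 0.95  # hypothetical: one unit works 95% of the time
for n in (1, 2, 3):
    print(f"{n} unit(s): service available {service_availability(single, n):.2%} of the time")
# 1 unit:  95.00%
# 2 units: 99.75%
# 3 units: 99.99%
```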

    Many managers of new media projects have told us that the installation of equipment presented more problems than they expected. Relatively few individuals in large companies (and even fewer, of course, in small companies or nonprofit organizations) are highly experienced in installing advanced new media systems. Installation nearly always involves debugging. Often, insufficient time and personnel are budgeted for this task. Debugging consists of at least two phases. The first phase involves those problems that the installers or technical staff uncover. The second phase involves those problems that are discovered only when users get their hands on the equipment.

    We have heard many nightmare stories from projects that depended on installation assistance from a faraway company. A potential installer who will not be available to come back many times for repairs and debugging should not be regarded as adequate. Project managers who have reported satisfaction with how equipment works have a common element: a first-rate technician on-site or available on a regular basis.

    Location, Location, Location

    The location of equipment can have a significant impact on usage of technology in organizations. There are at least five issues associated with it. The first is the physical distance between a user and where the technology is located. Proximity encourages usage. It does not appear that any precise distance can be assigned across situations or technologies beyond which individuals will not use a system. When users report that equipment is “too far away,” it is a matter of perception more than of measurable distance. Selection of locations for equipment is often a matter of accepting what is available. It is unlikely that good space is ready and waiting to be used in most existing office buildings, hospitals, schools, and so on. In one case, the space for the new media system was not secure. To prevent theft, the organization locked the equipment in a closet and provided the key to a supervisor. Finding that person and arranging to unlock the closet was a significant nuisance and negatively affected the service. Difficulties in finding nearby and easily accessible locations for new media equipment have arisen in many of the projects we have studied.


    Turf is a second issue. In most organizations, different groups have different areas of a building assigned to them—a floor or a row of cubicles that they consider their turf. Locating equipment on the home turf of intended users encourages them to make greater use of it. A third issue is the normal traffic patterns of intended users. Locating equipment where people normally travel in their everyday routines affects perceived ease of access and encourages usage. A related issue is the existence of perceived barriers such as stairways, elevators, and security checkpoints. We have heard people say, “If I have to take the elevator downstairs, walk to another building, go through security, then walk upstairs, there is no way I’m going to use it,” even though the journey might take only a few minutes. Users also expect that technology will be portable and therefore available wherever they want it. In some cases (e.g., teleconferencing equipment), equipment must be placed on movable carts. In other instances (e.g., Blackberries, PDAs, and advanced mobile phones), each person must receive a unit of the technology and carry it with them.

    It is useful to determine the social definition of the space where equipment will be located and whether the definition is compatible with the usage intended. For example, are executives expected to use a modern Web conferencing system with whiteboards and other new gadgets in what was previously a storage room? This may pose less of a problem if the room will be altered significantly for the new application. Also, some groups (e.g., students) are more malleable than others (e.g., bankers). If a space is used by multiple user groups, who are they, and will they have an impact on others who would not want to be associated with them and their applications by sharing the space? Dedicated space is clearly better. Problems may also arise from proximity to undesirable spaces. For example, outside noise and smells drifting in from an adjoining cafeteria have negatively affected some applications.

    If a new media service links groups in separate physical spaces, as in audio-, video-, or Web conferencing, are they compatible? We know of one case where videoconferencing linked two groups of doctors, one in a university setting and one in a rural setting. Each group behaved appropriately for the space it was in, but this led to problems. The rural doctors dressed and acted casually, while the university doctors dressed and acted more formally. The rural doctors thought the university doctors were snobbish, and the university doctors thought the rural doctors were boorish. Had either group visited the other, they would likely have adapted to the other’s social space and been more compatible.

    The Web provides an opportunity and a challenge to create a virtual space that users enter into, leaving the social space they are in, at least psychologically. Avatars (iconic representations of a person in the virtual space) can further reinforce the entry into another space. Video games are all about virtual space, and many people derive great enjoyment when entering these fantasy worlds to battle demons, take part in adventures, or build virtual communities. We have seen fewer examples to date of successful virtual space applications for more down-to-earth tasks in settings such as education, business, or health care. However, virtual worlds have received renewed attention. Many observers advocate the use of virtual worlds in a broader range of settings, and much research is under way to measure their effectiveness and impacts.[8]

    Comfort

    Users need to be comfortable in both a physical and a psychological sense. Physical elements that affect comfort include the design of chairs, glare from lights, and cramped space. These factors can affect perceptions about the system or service as a whole. Many issues related to physical comfort are easy to identify. If members of the implementation team find the lighting or chairs uncomfortable, users probably will too. The difficulties that have arisen generally resulted not from a lack of concern about physical comfort but from the project team’s inability to do enough to correct the problem—for example, a cramped room may be the only available space. It is a different matter with ergonomic issues such as the angle of a computer monitor or the height of a chair at a workstation. We have been surprised by the number of business, university, and library computer environments that have terrible ergonomics: for example, users may have to look up at an angle to see the monitor and then down sharply to see the keyboard, or chairs may be too low for a desk, forcing people to type with their forearms raised above their elbows. Many implementation teams apparently do not notice that something is wrong, and users often notice only that after a few hours at a workstation, their backs and necks ache or their wrists are painful. Simple guides about appropriate ergonomics for computer use are readily available.

    Seemingly minor psychological elements in a new media environment can also affect users’ attitudes toward a system. When using a two-way video teleconference system, some users worry that outsiders who are not part of the teleconference can see them. Potential embarrassment is another concern, especially when a person is using a system for the first time, since few people want to look foolish in front of others. The social definition of the space, discussed earlier, can also influence psychological comfort. Does it match the tasks to be performed? For example, a room with open access and people passing by is not psychologically appropriate for tasks that require privacy or a high level of security.

    Consumers and Equipment

    Many of the equipment issues that pertain to organizations affect consumer applications of new media as well, but people at home or in mobile settings have some unique requirements. Many of these are known and can be incorporated into product development and planning. Others will be discovered only after the new media product or service is introduced—another argument for building flexibility into the implementation phase so that changes can be made. Our treatment of consumer issues emphasizes the U.S. environment, with a caveat that household environments in other countries often differ markedly.

    U.S. homes vary from small studio apartments to very large houses. In smaller apartments and homes, space for equipment is an important issue. On the one hand, the miniaturization of equipment (e.g., laptops replacing desktop PCs) and the combination of multiple devices into one box (e.g., a personal video recorder inside a cable box) have helped to alleviate space problems. On the other hand, large-screen HDTVs and larger computer monitors have increased space problems in some households. Open floor architecture, where walls do not divide a space into separate rooms, presents a challenge for new media with sound, which will leak into other areas unless listeners use headphones. If two or more devices are playing at the same time, the sounds mix. Some people tolerate such mixing, while others find it distracting.

    Wireless networks have provided a solution to wiring problems. The combination of a laptop, broadband Internet access, and a wireless network is a powerful tool for accessing information and entertainment. The wires needed to connect different items of equipment or to recharge portable devices still present two problems. One is that they can take up space—for example, many households set aside space on a counter or a desk as a recharging station for multiple devices. The other problem is that some people find wires ugly. We have studied households where one spouse resisted a new media technology unless a partition or custom cabinet was built to hide the wires.

    The social definition of a space and issues of turf apply to households as well. Some spaces are work areas where people may resist technologies that are purely for entertainment. However, many homes mix work and entertainment in the same space. Turf issues arise when some household members are not welcome in certain rooms—e.g., teenagers generally do not want adults to spend time, let alone hang out, in their rooms; a home office may be off-limits to all but the people who work there. Equipment that is placed in these off-limits spaces is not likely to be used by multiple household members. A useful question to ask of a new media technology is: will it be a personal unit used by one person or a multiple-user device? Many new media technologies—TVs, telephones, and home computers—have evolved from multiple-user to single-user devices. Households, in turn, adopted multiple units of the technology to serve individuals rather than the family.

    As noted, design and implementation of products are interrelated, as demonstrated by issues such as the reliability of equipment in households, which is surprisingly complex. Shouldn’t all technology be as reliable as possible? It depends. Who will use the equipment, and how are they likely to handle it? A 15-year-old male might be expected to treat a product more harshly than a 40-year-old female. What is a product’s expected life cycle? TVs are expected to have a long life cycle; mobile phones a short one. It may be argued that mobile phones are replaced frequently because they break easily. This is partly true, but they are also replaced because new features are released at a rapid rate, and many people want to get the latest. Reliability affects price. Many parts in a new media product come with a rating, which may be expressed as an expected lifetime in months or years or as amount of use. For example, an earphone jack has a rating for how many times the earphone can be plugged in. A 50-cent jack may have a rating of 300 uses; a $3 jack may have a rating of 5,000 uses. The implementation team must estimate how often the jack is likely to be used (higher for an MP-3 player, lower for a laptop). Designing a product may involve 30 to 50 such decisions, each of which affects cost and overall reliability. Poor reliability can have many negative effects, among them bad word of mouth, negative reviews, and costly recalls.
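    A back-of-the-envelope version of the jack decision might look like the following sketch; the part prices and ratings ($0.50 for 300 insertions, $3 for 5,000) come from the example above, while the usage estimates are hypothetical guesses.

```python
# Back-of-the-envelope component selection: does a part's rated life cover
# the use expected over the product's life cycle?  Part prices and ratings
# come from the example in the text; the usage estimates are guesses.

def expected_insertions(plugs_per_day: float, lifetime_years: float) -> float:
    return plugs_per_day * 365 * lifetime_years

jacks = {"50-cent jack (rated 300)": 300, "$3 jack (rated 5,000)": 5_000}
products = {"MP-3 player": (2.0, 3), "laptop": (0.2, 3)}  # (plugs per day, years)

for product, (plugs, years) in products.items():
    need = expected_insertions(plugs, years)
    print(f"{product}: roughly {need:.0f} insertions expected")
    for jack, rating in jacks.items():
        verdict = "adequate" if rating >= need else "likely to wear out"
        print(f"  {jack}: {verdict}")
# A real team would add a safety margin and repeat this reasoning for the
# 30 to 50 parts whose ratings and prices shape overall cost and reliability.
```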

    It is generally not possible to train consumers in the use of a new media product. Our research indicates that quick-start guides and calls to customer service are the most widely used instructional aids. Alternatives exist in some cases. If the new media technology or service is installed (e.g., by a cable or telephone company), the installer can provide some training. However, installers are often under time pressure and may skip training or give it short shrift, and the person who needs training may not be home when installation takes place. Trainers sometimes emerge naturally: a member of the family is technically proficient and trains others; or, a neighbor or coworker drops by to help. The need for training is affected by a technology’s complexity and design. Resisting all the bells and whistles that are possible in a device and spending time on a transparent user interface can reduce the amount of learning necessary to use the technology.

    One way to provide support for consumers is through customer service. Customer service calls tend to be front-loaded—i.e., most calls come in the first 30 days after a new media technology or service is acquired. Such calls can be very expensive, and many companies try to divert calls from live staff to automated telephone response systems and to offer relevant information online. Some compromise by providing live online help, which is cheaper than telephone calls to customer service but still provides contact with a real person who can respond to users’ specific questions. Automated response systems and online help can answer many questions and may be reasonable parts of the solution. Many companies, however, go to extremes, making it very difficult for users to contact live customer service representatives. This option often is not on the menu of a voice response system, and customers must try to figure out how to break through the wall of canned responses. Indeed, third-party Web sites have been created to help consumers learn the secret codes that provide access to live people at customer calling centers. From an implementation perspective, it is important to find the right balance between saving on customer service costs and making users angry.
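    The trade-off between support cost and user frustration can be sketched with some rough arithmetic; all of the figures below (call volumes, per-contact costs, channel shares) are hypothetical placeholders, not data from the studies discussed here.

```python
# Rough cost comparison of support channels during the front-loaded first
# 30 days after purchase.  Every number here is a hypothetical placeholder.

COST_PER_CONTACT = {"live phone": 8.00, "live chat": 3.00, "automated/self-serve": 0.25}

def support_cost(contacts: int, mix: dict[str, float]) -> float:
    """Total cost for a given number of contacts split across channels."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "channel shares must sum to 1"
    return sum(contacts * share * COST_PER_CONTACT[ch] for ch, share in mix.items())

first_30_day_contacts = 10_000
generous = {"live phone": 0.6, "live chat": 0.2, "automated/self-serve": 0.2}
aggressive = {"live phone": 0.1, "live chat": 0.2, "automated/self-serve": 0.7}

for label, mix in [("generous live support", generous), ("aggressive deflection", aggressive)]:
    print(f"{label}: ${support_cost(first_30_day_contacts, mix):,.0f}")
# The savings from deflection are easy to compute; the cost of the anger it
# can cause (churn, bad word of mouth) is not, which is why balance matters.
```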

    Equipment standards affect consumers in several ways. A lack of standards—for example, two or more incompatible technologies providing the same service—can confuse people and make them reluctant to adopt the technology until a unified standard is created or a winner emerges among the competing standards, as was the case for the two high-definition DVD standards. Standards issues arise not only with the core technology but also with components built into a device. Here, a lack of standards can increase costs in ways small and large. Many mobile phones have a jack for an earpiece that is not compatible with the standard plug for earphones. Anyone who wishes to use such a mobile phone for audio or video must purchase an adapter or a second set of earphones, adding cost and decreasing convenience. Why don’t companies standardize parts? Many see nonstandardized parts as a revenue generator; others are motivated to use proprietary technology so that consumers become locked into the service provider. The batteries and chargers for mobile phones, digital cameras, MP-3 players, and other portable devices, for example, are rarely compatible with one another. People traveling must carry two, three, or more chargers to provide for all their portable media.


     Four //User Research

    In telecommunications research, too much emphasis has been placed on technology and not enough on human behaviour and organizational dynamics.
    Alex Reid

    This chapter discusses research that seeks to understand users at the individual rather than societal level. Such research may be conducted either at the level of a particular service or type of service (e.g., e-mail) or at the level of a medium (e.g., asynchronous computer-mediated text communication). It may focus on actual users of services that are already available or on potential users of services that are as yet available only on a limited basis or that are still to be developed. Using the term in its broad sense, we will refer to such research as user research. The term is also frequently used to refer to the much smaller field of activity centered on “usability” testing—for example, testing a prototype Web site to find out if real-world users understand what is offered and can navigate comfortably within it.

    While user research is just one of many different types of research—pure and applied, quantitative and qualitative, from a wide variety of disciplines—that have been undertaken to investigate new media, it is the most relevant to the issues with which this book is concerned. Moreover, the understanding this research seeks to provide is essential to the analysis of a variety of important issues at the intersection of new media and such fields as mass media, public policy, health care, and many others. While some user research is pure research (in the sense that its objective is to develop theory rather than to contribute to decision making or the development of policy), much is applied research. Before many new media services became established, applied user research was typically directed to such questions as

    • For what purposes might it make sense for different kinds of people to use a particular service?
    • How would this service’s effectiveness for specific purposes compare with that of established ways of pursuing those purposes?
    • How acceptable would the service be to different types of people for different purposes?
    • If intended to serve a specific function, how well does it serve that function; if intended to entertain, how entertaining do potential users find it?
    • How should the service be used to best effect?
    • How should associated equipment and content be designed?
    • How should the service be priced and promoted?
    • What effects would use of the service likely have on people’s perceptions, attitudes, or relationships with one another?
    • What would be the probable side effects of using the service?

    Now that the use of new media is widespread, corresponding questions are also asked about actual rather than potential services.

    Although differences exist between pure research that seeks implications for theory and applied research that seeks implications for action, whether a particular study was undertaken as one or the other usually matters little, if at all, to readers who encounter it in the open (i.e., nonproprietary) literature. What matters are the actual findings and the rigor with which they were obtained; the latter can be assessed in the customary fashion. (Often, of course, proprietary research undertaken within a corporation or by a private consulting firm is not published.) Leaving aside market research, the same range of research methods has been used in each type of research (though field trials, since they require very large budgets, have naturally tended to be undertaken as applied research).

    From the 1930s until the late 1960s, user research on new media was limited to—one could say bottled up within—educational research and ergonomics. The former goes back to the 1930s and was principally concerned with the relative effectiveness of different media for teaching, rarely finding much difference in effectiveness between use of the face-to-face mode and the media of radio, television, and later teleconferencing.[1] The latter, also termed human factors, was pioneered at Bell Laboratories in the 1940s and focused mainly on the design of the telephone handset.[2] (It is of little consequence here that the early ergonomics should probably not be regarded as relating to new media inasmuch as the telephone was well established by the 1940s.)

    From the late 1960s to the mid-1970s, the field expanded rapidly, driven mainly by interest in whether and how emerging communication systems could be put to worthwhile use. In the United States, the United Kingdom, and Canada, government agencies played an important role in jump-starting research, often initiating it rather than responding to unsolicited proposals. Although the telecommunications industry spent considerable amounts on research and development for the new technologies, it was slow at first to recognize the contribution that user research might make. Until the arrival of the Picturephone, the huge monopolies that dominated telecommunications services perceived little need for marketing and hence for understanding users; they had moved from a rationing mentality (“You can have a telephone in any color, provided it is black”) to a Field of Dreams mentality (“Build it and they will come”). The industry was shaken out of this complacency for two reasons: (1) the unexpected difficulties encountered in selling the Picturephone in the United States and a few years later in selling videoconferencing on both sides of the Atlantic and (2) companies’ desire to have a presence in the rapidly growing field of user research on their technologies. Table 4.1 shows some of the major publicly funded research programs and projects initiated between 1969 and 1975 in the United States, the United Kingdom, and Canada.


    Looking back, one can distinguish three eras of user research. The first was dominated by work in the fields of education and of ergonomics. The second, starting at the end of the 1960s, was given over mainly to work on emerging media, which, with a few exceptions, did not become established as soon as proponents had expected. It was a time of research on potential services more than on actual services. In hindsight, it can be seen as the calm before the storm.

    In the third era, which started in the 1990s and was ushered in by the Internet and the online services that rode on it, the research scene has changed beyond all recognition. Drawing in many researchers from other fields as it continues to expand rapidly, it exhibits some of the same characteristics of vigor and unruliness as the Internet itself.[3] But while failure to accept conventional ways of doing things has lain behind much of the success of Internet innovators, thumbing one’s nose at the principles underlying sound research would hold little promise.

    In our view, it is too early for confident assessment of the value of the user research undertaken in the first decade or so of its present era. For this reason, our comments on differences between the second and third eras of user research will be somewhat brief. Most of the chapter focuses on the quarter century of research that started around 1970, with an emphasis on interesting lessons this research provides about methods rather than on findings. Much of the associated research literature is not freely available (in either sense) on the Web. Those who want to delve more deeply may need to rely on libraries for books or for their subscriptions to online journal archives.

    The Emergence of a New Field of Research: The Late 1960s to the 1990s

    As table 4.1 suggests, in the late 1960s and the early 1970s, government agencies’ practical concerns created a demand for user studies of new media. The 1973 oil crisis added to this demand, since telecommunications were seen as a way to reduce transportation and associated reliance on foreign sources of energy. At an international level, the cold war provided incentives for Western governments to develop advanced technologies, and competition among Western countries spurred the development of new telecommunication services. As a consequence of these domestic and international pressures, government agencies in Washington, London, Ottawa, and beyond began to believe that some emerging forms of telecommunication could play an important part in helping solve some of the problems that fell within the agencies’ areas of responsibility. The need to explore this potential created a situation in which funds sought researchers as much as researchers sought funds.


    Since the relevant technologies were for the most part interactive rather than one-way, mass communication theory and research were not well positioned to make a significant contribution. With the important exception of Everett Rogers and a few others, scholars of mass communication were strikingly absent from the field until the explosive growth of the Internet commanded their attention about 25 years later. User research might have grown within or branched out from the field of ergonomics, but despite some early studies on videoconferencing by celebrated ergonomist Alphonse Chapanis,[4] it did not do so, possibly because of the difference between focusing on people using a particular piece of equipment in a particular setting and focusing on people communicating at a distance via particular media. It is understandable that there was no significant contribution from research on the relative effectiveness of media for teaching and from those who conducted such research. The latter’s well-established research paradigm involved controlled experiments in which subjects were taught the same content via different media in natural environments, with effectiveness measured by, for example, test scores. It is vastly easier to use this approach with teachers and their students than, for instance, with health care professionals and their patients, salespeople and their customers, judges and defendants, or civil servants and other civil servants or the public. The educational setting offers an ample supply of subjects, readily available and accepted measures of outcomes, and little risk to subjects’ performance and well-being.

    Outside the field of education, researchers from a very wide variety of disciplines examined new communication media and technologies, generally working in interdisciplinary teams. Most research projects fell into one or more of the following categories:

    1. Studies of the perceptions, attitudes and intentions of potential users.
    2. Controlled experiments in a laboratory environment.
    3. Field trials and demonstration projects.
    4. Controlled experiments in natural environments.

    Different approaches had different strengths and weaknesses in providing indications of the future use and usefulness of new media in particular settings (i.e., for the specific purposes of certain types of users in certain situations). For some research questions, certain approaches could not be applied: for example, it was not possible to investigate two-way cable television by means of a controlled experiment with one group of towns having a two-way cable system and a control group of towns not having it—installing the necessary infrastructure in the treatment group would have been too expensive.

    Studies of Perceptions, Attitudes, and Intentions

    The simplest way of investigating the possible use of a new communications technology is to describe or demonstrate it to possible users or let them try it out and then use questionnaires, interviews, or focus groups to find out what they think. This approach is associated more with the market research field than with noncommercial research. Unfortunately, however, whether the studies were conducted by market research professionals or by others, responses usually provided seriously misleading indications of future use: even though participants were often positive about a technology, they—or others like them—typically failed to use it when given the chance to do so subsequently. Various possible explanations exist for this phenomenon: people can be dazzled initially by the glamour of a new technology; they may not be good at predicting how they will behave in unfamiliar situations; they may not take costs sufficiently into account; and they may not anticipate the technical glitches that will subsequently dampen their enthusiasm. Whatever the underlying causes, such studies commonly failed.

    As far as focus groups are concerned, there really was no good reason for hoping otherwise. Although they have their place in, for example, helping to raise research questions or providing information that may be useful to designers of equipment or content, focus groups, not surprisingly, were of no value in indicating the kind and number of people who would use a new technology. Focus groups may be biased by moderators, any group may be dominated by one or two participants, and because participants within any group influence one another, it cannot be assumed that comments by different participants are independent. In consequence, there is no basis for assuming that the output is statistically representative of what would be found in the population that the focus groups may have been intended to represent.[5]

    Controlled Experiments in a Laboratory Environment

    At the level of specific tasks that require communication either between people or between people and machines, controlled experiments in the laboratory are very well suited to the rigorous comparison of new and established media or technologies—for example, performance of a task undertaken using a videoconference may be compared with performance of the same task undertaken in a face-to-face meeting. Highly developed methodology is available for making comparisons in terms of the effectiveness with which the tasks were performed and of various other measures, such as subjects’ perceptions of effectiveness or their views about the outcomes. Creating a suitable technological test bed in a laboratory is very much simpler, faster, and cheaper than installing a service in the field for an experiment or trial. When a program of laboratory experiments is undertaken, the program can be adjusted as the research team proceeds, since team members may decide on the specific objectives of a subsequent experiment in light of the theoretical implications of findings earlier in the program.
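    A minimal sketch of the kind of comparison such experiments make, using task completion time under two media conditions as the single outcome, might look like the following; the data are invented for illustration, and real studies in this tradition used far richer designs and measures.

```python
# Toy analysis of a two-condition laboratory comparison (e.g., audio vs.
# video conferencing) on one outcome: task completion time in minutes.
# The data are invented; this only illustrates the shape of such an analysis.
import math
import statistics as st

audio = [12.1, 10.4, 14.0, 11.6, 13.2, 12.8, 10.9, 13.5]
video = [11.8, 10.9, 13.1, 12.4, 12.0, 13.8, 11.2, 12.6]

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = st.variance(a), st.variance(b)
    return (st.mean(a) - st.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(audio, video)
print(f"mean audio = {st.mean(audio):.2f}, mean video = {st.mean(video):.2f}, t = {t:.2f}")
# A small t statistic (relative to the critical value for the appropriate
# degrees of freedom) would be consistent with the "no significant difference"
# results that were common for many tasks in this literature.
```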

    In the first half of the 1970s and primarily through a program of laboratory experiments, the Communications Studies Group (CSG) at University College London played a pioneering role in shaping research on users of new media and influencing scholarly studies over the next quarter century. Broadly speaking, the CSG experiments indicated that for many tasks—for example, information exchange and certain types of problem solving—performance would not be significantly affected by the choice among the three media studied (face-to-face, audio, and video). This finding implied that a substantial proportion of business meetings that occurred face-to-face could be conducted effectively by some form of teleconferencing. When differences were found among the media—for example, in tasks involving conflict and bargaining—perception of the other and coalition formation were sometimes affected. On the whole, however, video communication came closer to audio communication than to a face-to-face meeting. As would be expected, in some asymmetrical cases, audio communication worked best for one of the parties—for example, for civil servants defending positions in which they personally did not believe.

    Actual experience in the teleconferencing field over the following 35 years—with the considerable success of audioconferencing and the lackluster diffusion of videoconferencing (see chapter 6)—was in line with the CSG’s conclusions. But broad generalizations do not do justice to the value of their detailed findings to the theoretician.[6] Applied research made a significant contribution to the agenda for pure research: according to Ederyn Williams, the CSG’s and similar findings have identified “many media differences . . . though not as many as might be expected from a reading of the nonverbal communication literature. However, a unitary theoretical explanation for these differences has yet to emerge.”[7]

    With more modest objectives, laboratory experiments may also be helpful as a preliminary step in a field trial. They were used, for example, to explore issues relating to the design of pages in a teletext system introduced in a field trial at the start of the 1980s (see chapter 4 appendix). For this purpose, it was straightforward to simulate the teletext system using a computer.

    This kind of use is a reminder of an important point made by Edmund Carpenter in his classic essay, “The New Languages.”[8] He and Marshall McLuhan had conducted a carefully designed controlled experiment comparing the teaching effectiveness of television and radio with the effectiveness of older media. They found—or so it seemed—that television was the most effective (for students and subjects of the kinds used in the experiment). However, in an exaggerated pursuit of rigor, they constrained the use of each medium in an attempt to ensure that the words used in each treatment were identical: the radio treatment was simply the sound portion of the television treatment, and the print treatment was the transcript. Consequently, they had the results of a comparison of the effectiveness of different media when none of them was used to best advantage. When they repeated the experiment with the form of the teaching presentation tailored to each of the media, radio was the most effective and television came second. The wider implication is that it can be a mistake in laboratory-based research to insist that the experimental treatment (for example, a message to the subject) must be exactly the same and that only the medium be varied.

    An associated difficulty has been little discussed in the research literature on media introduced since the mid-1960s (indeed, Carpenter’s essay rarely seems to have been cited in this literature): experiments on the use of new media may provide misleading results if the media are not used in at least a reasonably appropriate way. A variety of aspects may need to be considered here: how content is presented, how interfaces are designed, and how equipment is laid out, among others. But if a medium or technology is really new, how can one know when sufficient understanding exists of how to use it well? Though it may be impossible to provide a useful answer to this question with any confidence, it would appear sensible to have it in mind when conducting or reviewing empirical research in the field. It may be relevant not only in connection with qualitatively new media but also when substantial improvements have occurred in the associated technologies—when exploring the added value of high-definition video for certain purposes other than television programming, for example.

    For one important reason, the value of controlled experiments has been even higher than might have been expected: the evaluation of new media has frequently produced surprises both in controlled experiments and in other research. Such results raise the issue of drawing conclusions that would be valid in the real world from results obtained in an artificial environment. Here, two lines of thinking need to be distinguished. The popular line regards the laboratory as an inevitably unrealistic microcosm but hopes that the results will be sufficiently indicative of what would happen in the real world. The scientific line, in contrast, holds the principle that such experiments allow the rigorous testing of hypotheses drawn from theory; if the hypotheses fail the test, the theory needs to be adjusted in light of the findings. Theory sound enough to provide correct predictions of what would happen in the messiness of the real world should enable acceptably accurate predictions of what would happen in the much simpler environment of the laboratory. When such is not the case, the theory must be flawed. The scientific perspective, therefore, allows strong conclusions to be drawn from surprising results in the laboratory: for example, “The results from the laboratory showed that there is something wrong with our understanding of the use of this particular technology or medium, so assumptions based on the same understanding about what would happen in the real world are also likely to be wrong.” This statement also holds if common sense rather than theory provided the shared basis for both the hypotheses and the real-world predictions.


    Although the selection or development of outcome measures for the standardized tasks used in controlled experiments is often conceptually straightforward—how long it takes to retrieve required information, how accurately radiological diagnoses are made, how well instructional objectives are met, and the like—it is often necessary to think beyond a one-dimensional measure ranging from bad to good. The use of one medium may differ from the use of another in several dimensions—for example, how well a task is performed, the time taken to complete it, and the parties’ estimations of how well it has been accomplished—being more desirable in some dimensions and less desirable in others. Another situation in which there are effects on outcomes that cannot be placed on a single scale arises when there is conflict between the parties involved: what is better for one party may be worse for the other, as when the party with the stronger case prevails more often in the audio-only mode than in two-way video.[9]

    Valuable as experiments employing standardized tasks and outcome measures have proved in research on new media, usually they are possible only in the artificial environment of the laboratory. This limits their usefulness in exploring a variety of issues. In laboratory experiments, the task is taken out of the context in which it arises naturally, potentially affecting subjects’ motivation to perform the task or their confidence in how well they did so. Such experiments are rarely applicable to the study of other than immediate outcomes. Moreover, the framework provided by tasks may be inapplicable, as in the use of interactive television for entertainment purposes. And laboratory experiments are not well suited to questions regarding when, how, and with what effects a particular system would be used if it were available in a natural setting. Trials and experiments in the field do not face these obstacles.

    Another limitation of some laboratory studies is the use of available subjects—that is, college students—to compare media. Might the results of a laboratory study differ if another demographic group were used as subjects? Would such differences matter? If the end users of an intended service will be students or if the findings will be applied primarily to students, such discrepancies may not matter. If, however, the intended user group includes a significant proportion of senior citizens, for example, their skills, likes, and dislikes may differ in important ways from those of students.

    Field Trials

    In considering past research in this field, distinctions among the terms field trials, market trials, and demonstration projects are of little significance. Customary use of language would suggest that uncertainties regarding how useful an innovation will prove should be lower in demonstrations than in trials, but such does not appear to have been the case in practice. Market trials are private sector initiatives, while field trials are generally funded by public sector agencies. As a result, the former often include pricing issues among their foci, while the latter do not, and results from the former are much less likely to be publicly available unless they are subsequently thought to be useful for marketing or public relations purposes. Nevertheless, the commonalities among the three types of activity are much more important than their differences.

    In all three, an innovation involving the use of communication technology was introduced into a natural setting in which it was expected to be valued, and a combination of straightforward research methods was used to explore how well it worked in practice and the effects of its use. The fact that the setting was natural did not, however, mean that the field trial would be realistic in all respects: for example, similar funding mechanisms would generally not be available for the application elsewhere; a site particularly favorable for the innovation might have been chosen for the trial; implementation might have a much higher level of support than could be expected at subsequent sites; and the people concerned would generally have known they were participating in a trial.

    The assumption that the innovation would be valued was crucial: if the service in question was discretionary (those concerned could choose not to use it), little worthwhile learning would result if they did not choose to participate. Very often, however, preexisting options remained available, an unavoidable circumstance in the case of market trials and an understandable circumstance in many other field trials, since the innovation had not yet proved itself. As a result, use of the trial services was generally discretionary and disappointingly low. Hence the amount of learning of the kind that had been hoped for was also disappointing.

    For some research teams, the field trial had been preceded by other projects in an ongoing program; for others, it stood alone. Loosely speaking, the distinction was between treating the trial as part of a larger research process and treating the research process as part of a trial. Projects of the latter kind tended to fare poorly, which is hardly surprising.

    If an innovation failed in a trial because it had not been well enough implemented, it was very hard to draw any conclusions about the potential value or lack thereof of the underlying concept. Services did not fail in field trials because the technology was too expensive; funds had already been made available. They did not fail because outdated regulations needed to be changed. Observers sometimes concluded that a service featured in a field trial had failed because “it did not meet a real need,” with the implication that no real need existed. Such a conclusion was unwarranted if it was likely that the service simply had not been implemented in a way that provided a good enough chance of success.

    Implementation turned out to be much more difficult than expected for a variety of reasons (see chapter 3). In many cases, the process involved organizational change, collaboration among different organizations, and/or the installation and maintenance of immature technical systems. When researchers rather than experienced managers were in charge of implementation, the associated management challenges were not of a kind that the project leaders would necessarily welcome, be experienced in meeting, or have budgeted for.

    Underlying these challenges was the importance of service objectives relative to research objectives. The service had to perform well if utilization was to be meaningful, but this objective could conflict with research objectives, and not only because funds and time were limited. Conflict could arise, for example, if experience during a trial with a quasi-experimental design suggested that changes should be made to the technology or service in question; these changes in “the treatment” could make drawing conclusions about its effects more difficult. A different kind of problem lay in the fact that users sometimes needed to invest significant time and psychic energy to take advantage of a new service. Would it have been ethical to ask them to make such investments without making them aware that, however valuable the service proved, it might well have to be withdrawn when the research funds ran out?[10]

    Planners of trials seem to have assumed that summative evaluation (informing external clients how well an innovation worked in practice) and formative evaluation (conducted to assist internal clients in the successful implementation of the innovation) were distinct activities and that only the former had lasting value. This assumption is open to challenge on both theoretical and practical grounds. On theoretical grounds, it would be challenged by proponents of action research, which has recently started to be applied in the field of new media.[11] In addition, followers of operations research as it was originally developed in the middle third of the twentieth century rather than as it is practiced today might well observe that in its older form it could have provided credible evaluation as well as have minimized conflicts between research and service objectives. (Operations research was originally developed in World War II as a process in which teams of scientists from different disciplines supported military decision making.[12] Today, it can be regarded as a specialty within management science, which emphasizes the use of mathematical models.) On practical grounds, one might comment that what usually matters most in a field trial is that the service in question should be used (an exception being if it is like a fire extinguisher, for use only in an emergency); otherwise, little useful research can be conducted. The complexity that is often involved in the implementation of new media means both that future use should not be taken for granted and that much of value to external clients may be learned from formative research.

    After research (or demonstration) funding came to an end, the continued operation of those trial services that had attracted a sufficient level of worthwhile use and whose future running costs would not have been unexpectedly high might have been expected. It might also have been expected that success would have led relatively quickly to the initiation of somewhat similar services at other sites. In fact, however, survival, let alone transfer, was decidedly problematic. The fates of four acknowledged successes from the mid-1970s to the mid-1980s are instructive. Although the city of Phoenix was more than willing to pay to keep the Picturephone system in operation following the criminal justice system trial, AT&T withdrew the system after concluding that the overall market for Picturephone would be too small for economic viability and that the cost of maintaining the experimental system would be prohibitive (see chapter 6). Warner Cable installed its interactive cable television system, Qube, in Pittsburgh, Cincinnati, Milwaukee, and Dallas after the moderate success of a market trial in Columbus, Ohio. Nevertheless, a few years later it started renegotiating its franchises and phasing Qube out in all these cities on the grounds that it was not commercially viable (see chapter 8). With funding mainly from community arts grants and local commercial sponsorship, the two-way cable television system for senior citizens, installed in a field trial in Reading, Pennsylvania, continued in operation after National Science Foundation funding came to an end, albeit with considerably less ambitious use of its two-way capability. However, the application never transferred to other cities, primarily, it would appear, because funding such a service was not regarded as a sufficiently high priority and the organizational costs of developing and maintaining such a system are high. For two reasons, the highly successful Boston Nursing Homes Telemedicine project (a controlled experiment in the field rather than a field trial, briefly described in the following section) almost expired when the experiment ended: budgetary inflexibility meant that federal monies could not be used to fund it even though the project would save the federal government more than it would cost, and it was almost impossible to find a hospital-based physician who would agree to be on call 24 hours a day. At the last minute, the state of Massachusetts stepped in with the necessary funding, and a suitable physician was found.

    One of the lessons regarding public sector trials was the importance of preparing from the start of the project for possible success and transition to more secure funding (as had been done for the Reading project). A second lesson was that lower-cost, readily available technology often trumps its more sophisticated and costly cousins, as in Reading and the Boston nursing home project. Unfortunately, it is often harder to obtain government and foundation funding for projects using simple off-the-shelf equipment and easier to gain funding for state-of-the-art yet untested systems.

    A wide variety of methods were used to gather data about users and uses in this period. General concerns included the use of unobtrusive methods and the avoidance of intrusions on privacy. Most of the methods were well established—questionnaires, interviews, logs—and need not detain us here, though ethnographic methods also began to be used. Where a trial service was computer-based—for example, computer conferencing—a considerable amount of data was generated automatically; in other cases, it was necessary to prevent record keeping from becoming too much of a burden. Where a service was not interactive—for example, teletext—planners could consider installing a meter in terminal devices to capture usage data. All of these projects were conducted before the era of institutional review boards—and their stringent privacy requirements—at universities. The boards might have killed some of these research projects. How, for example, could one have protected the privacy of senior citizens in the Reading project when programs were transmitted to all cable subscribers in the city?

    Controlled Experiments in a Natural Environment

    For the new media of interest in the 1970s and 1980s—principally, Picturephone and two-way television in business, government and health care, interactive cable television, computer conferencing, and videotex—it would generally have been impossible or far too expensive to meet the methodological requirements of controlled experiments in the field (in particular, treatment and control groups containing sufficient numbers of independent experimental units, no risk of contamination between the two groups, and a treatment that would be held constant for the duration of the experiment). However, two early telemedicine projects made good use of this approach.

    In the early 1970s, the Boston Nursing Home Telemedicine Project, conducted by Roger Mark, who was at the same time a practicing internist and a member of the electrical engineering faculty at the Massachusetts Institute of Technology, showed that it was possible simultaneously to improve the quality of care provided to nursing home patients and to reduce the cost of the care with a telemedicine approach based on a team of nurse practitioners supervised by a hospital-based physician and using decidedly modest communication technology (principally telephones and Polaroid cameras). In a rigorously designed experiment, patients in the treatment group were drawn from 13 nursing homes, those in the control group from 11 others.[13]

    Also in the early 1970s, David Conrath and Earl Dunn, a management scientist and a physician, respectively, carried out a multistage research program on telemedicine in Canada. The first stage was an observational study focusing on the extent to which primary care physicians used different senses when examining patients; having found that the sense of touch was important, they concluded that nurses should be with patients when physicians were engaged in remote diagnosis. In the second stage, the researchers conducted a controlled experiment comparing four modalities for linking physicians to patients and nurses—color television, black-and-white television, still-frame black-and-white television along with a hands-free telephone, and a hands-free telephone alone. Patients who came to a clinic seeking medical attention were invited to take part in a trial of telediagnosis as well as to receive an examination and, as appropriate, treatment by a physician in the usual way. More than a thousand patients accepted the invitation, enabling comparison of the physicians’ diagnoses and decisions made via telecommunication links with those made in person.[14] The researchers found no statistically significant differences in diagnostic accuracy, proportion of supporting investigations requested (e.g., laboratory tests and X-rays), time taken for the diagnostic consultations, or effectiveness of patient management across the four communication modes, though patients slightly preferred the more sensory-rich modes of communication. Conrath and Dunn then designed a third stage: a telemedicine field trial in which hospital-based physicians used still-frame black-and-white television and hands-free telephones for communication with distant nurses and patients. The trial took place at six sites in the Sioux Lookout Zone in northwestern Ontario and at two hospitals in Toronto.[15]
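
    Analyses of this kind are commonly reported as tests of independence between communication mode and outcome. The sketch below is a minimal, hypothetical illustration of that approach, not a reconstruction of Conrath and Dunn’s analysis; the counts are invented, and the four mode labels simply echo the conditions described above.

```python
# Hypothetical illustration: testing whether diagnostic accuracy is
# independent of communication mode, in the spirit of the comparison
# described above. The counts below are invented for the example.
from scipy.stats import chi2_contingency

# Rows: communication modes; columns: [accurate diagnoses, inaccurate diagnoses]
observed = [
    [118, 12],   # color television
    [121, 11],   # black-and-white television
    [117, 13],   # still-frame TV + hands-free telephone
    [119, 12],   # hands-free telephone alone
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, degrees of freedom = {dof}, p = {p_value:.3f}")
# A large p-value would be consistent with finding no statistically
# significant difference in accuracy across the four modes.
```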

    A Time of Rapid Transition: The 1990s Onward

    The explosive growth of the Internet and Web-based services caused dramatic change in the user research scene during the 1990s. Previously, most user research had been directed toward answering questions about services that were not yet in widespread use—if they were in use at all. Subsequently, however, those seeking understanding for its own sake faced an abundance of opportunities to study how new media were actually being used. At the same time, naturally, rapid growth occurred in private sector and public sector demand for research that would contribute to decision making. Also playing their parts in driving the expansion of research activity were other new media services—in particular, wireless telephony, which was also growing explosively.

    Changes in the industries involved with new media and in the associated infrastructures created a very different research environment. And as a result of the Internet, new types of data became available and new research methods were developed.

    The Environment for Applied Research: The Need for Speed

    By the 1990s, regulatory changes, together with versatile and rapidly improving new infrastructures, had caused fierce competition to replace slow-moving monopoly as a characteristic of nearly all markets for new media and created a need to bring new services to market much more quickly than in the past. At the same time, it became possible to a much greater extent than in the past to launch new media services without the heavy cost of installing new or improved infrastructure, whether locally or nationally. For many new services, expensive new terminal devices were unnecessary. Some new services could be provided via the Web, making them immediately available to subsets of users with adequate local connections and sufficiently up-to-date computers. For others, would-be users had only to buy relatively inexpensive new terminal equipment. Associated services could be launched or test-marketed without the long lead times and heavy infrastructure costs that had been associated with, for example, Picturephone in the late 1960s or two-way cable television services in the 1970s. Further, rapid turnover in some categories of equipment—e.g., mobile phones—eased the process by which new generations of equipment and associated new services were brought into homes and businesses.

    Mobile telephony somewhat resembles the Internet in both creating the need for speed and enabling rapid service launches, though it also requires expensive upgrading of infrastructure. Although cable television companies still receive some protection from their status as local monopolies, their market is increasingly open to competition from landline telephone companies (and satellite television companies), so they too face the need to introduce new services quickly on their new two-way digital infrastructures.

    The introduction of other new services requires substantial investments in new infrastructure. From a carrier’s perspective, the whole point of a set of new services may be that they cannot be accommodated to an acceptable standard using existing infrastructure: making them available would require an expensive upgrade. The claimed demand for such services played a large part in carriers’ arguments for the substantial investment in infrastructure that would be required for both integrated broadband networks around 1990 and 3G cellular services about ten years later. Even in such cases, a need for speed exists if another new infrastructure may compete with the proposed one. 3G cellular services, for example, faced potential competition from an infrastructure combining WiFi and 2.5G.

    User research is often undertaken to improve people’s ability to predict or to answer “what-if” questions. In doing so, researchers risk failing to take proper account of relevant changes that will occur in related areas. With the greater interconnectedness typified by convergence of technologies and services (think of the Web, though convergence encompasses much more) and the increasing pace of innovation, the number of relevant related areas increases and assumptions of ceteris paribus become riskier. Chapter 2 brought up the example of corporations investing in satellite-based mobile telephony but failing to allow for the explosive growth of much less expensive terrestrial mobile telephony. This case, however, was relatively simple: underestimating the success of a competitive technology that already existed. Other cases may be far less straightforward. For example, telephone companies investing in fiber-optic networks to or near the home need to be concerned about changes in people’s television viewing habits.

    Within the private sector, the need for speed has created a motivation to “get it out there” as quickly as possible and hope all goes well, without waiting for the results of research on whether and how to get it out there—for example, computer software is often launched both with many flaws and with uncertainties about demand. Companies bet that demand will exist and hope that they will have time to fix flaws and offer downloadable patches to users before the level of pain reaches a threshold where people abandon the product. Much less of a case can be made for the applied research that might have been undertaken in the more sedate era of a few decades ago. Either no research or a different kind of research with a faster turnaround is undertaken.

    On their domestic fronts, governments do not face competition in the way that companies do, so it might be thought that they would lack a similar perception of the need for speed. To a lesser degree, however, governments seem to share this perception, whether they are providing new systems on behalf of their citizens or developing new public policy. As an example of the former, it would seem—at least with the benefit of hindsight—that various e-government initiatives around the turn of the century, including e-voting in the United States, might well have profited from more prior research. The hype that surrounded these initiatives may have provided an incentive for unwarranted speed.

    Even in the slower-moving era of the 1970s, the manager of the relevant program at the U.S. National Science Foundation described the daunting difficulties inherent in undertaking user research to guide important public policy decisions, particularly the incompatibility between the timescale within which policy decisions needed to be made and the generally longer timescale needed for the design, funding, and execution of sound research programs.[16] As technological change speeded up, that incompatibility became more severe.

    In today’s environment, in which rapid technological change and fierce competition combine to put pressure on companies to release new media technologies and services as quickly as possible, can research be conducted at a faster pace but remain sound? And are certain types of research more or less useful in the new environment? Some types, such as field trials, clearly require considerable time and are not suitable for highly time-sensitive research. Similarly, when research requires careful deliberation about the questions to be asked, as in much policy research, it is unwise to save time by cutting corners. However, we have been involved in a number of corporate research projects where the pace was very rapid indeed. Some of these involved usability research about new electronic products. The research questions were known, and participants were recruited beforehand. The engineering team delivered a prototype product on a Friday, intensive usability testing was conducted over the weekend, and the results were communicated to the engineering team on the following Monday morning. They then reworked the software and delivered a revised prototype on Friday, and the process was repeated. (Usability testing is often iterative—one tests, revises, and tests again two, three, or even more times, depending on the product, the complexity of the design, and the problems encountered.) After another week of revisions, the final specifications were sent to a company in Asia for manufacturing.

    We are also familiar with media companies that have established large “panels” of people—sometimes 20,000 or more—who agree to test new services, watch special programming, or give opinions about potential new offerings. Such an approach can provide very rapid feedback—e.g., several thousand responses in 48 hours.

    “Firehouse research” is another form of research in a hurry. If an unexpected event occurs and waiting to conduct research about it with a typical timetable might miss important knowledge as memories fade, can the research be deployed very rapidly? The first example of firehouse research with which we are familiar followed Orson Welles’s radio broadcast of War of the Worlds in the late 1930s. Using a documentary news format, the drama caused panic, as many listeners thought that the earth (more specifically, New Jersey) was being invaded by Martians. CBS, which broadcast the drama, was very concerned about reactions and potential penalties from government agencies, so the company’s head of research, Frank Stanton, immediately commissioned Princeton University researcher Hadley Cantril to investigate what had happened. Cantril rapidly deployed a research team, and produced the classic study, The Invasion from Mars: A Study in the Psychology of Panic.[17] Other events that have triggered firehouse research are the use of mobile phones and pagers on 9/11,[18] fax communications about the events surrounding the repression of demonstrators in Tiananmen Square, and the use of mobile phones to help topple the president of the Philippines (see chapter 10). Firehouse research has many limitations, but it is sometimes acceptable to learn what one can and accept weaknesses in the research design rather than miss the opportunity to study important events while or immediately after they occur.

    New Types of Data and Research Methods

    From their introduction at the start of the 1970s, computer conferencing services offered their operators data of unprecedented richness about how individuals were using the services, since a record of every interaction with the computer hosting the service was created and stored. All online services—but especially Web sites—have this natural capability. Even when sites do not solicit information that would identify users—e.g., for registration or in electronic commerce—individuals (more accurately, the groups of people sharing a browser) can be tracked through multiple visits to the site by means of cookies.
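
    As a concrete illustration of what such automatically generated records make possible, the sketch below groups the page requests in a simplified server log into visits per browser cookie. It is a minimal, hypothetical example: the log format, the cookie identifiers, and the 30-minute visit cutoff are all assumptions made for the illustration, not a description of any particular site’s logging.

```python
# Minimal sketch: grouping logged page requests into visits by cookie ID.
# The log records and the 30-minute session cutoff are hypothetical.
from datetime import datetime, timedelta
from collections import defaultdict

# Each record: (cookie_id, timestamp, page). In practice these would be
# parsed from a Web server log rather than written out by hand.
log = [
    ("c101", datetime(2006, 3, 1, 9, 0), "/home"),
    ("c101", datetime(2006, 3, 1, 9, 5), "/products"),
    ("c101", datetime(2006, 3, 1, 21, 30), "/home"),
    ("c202", datetime(2006, 3, 1, 9, 2), "/home"),
]

VISIT_GAP = timedelta(minutes=30)  # a gap longer than this starts a new visit

visits = defaultdict(list)  # cookie_id -> list of visits (each a list of pages)
last_seen = {}
for cookie_id, when, page in sorted(log, key=lambda r: (r[0], r[1])):
    if cookie_id not in last_seen or when - last_seen[cookie_id] > VISIT_GAP:
        visits[cookie_id].append([])      # start a new visit
    visits[cookie_id][-1].append(page)
    last_seen[cookie_id] = when

for cookie_id, v in visits.items():
    print(cookie_id, "made", len(v), "visit(s):", v)
```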

    Numerous e-commerce firms use analyses of data generated by users’ online activities to optimize marketing offers and their designs; other Web site operators, too, use such data to optimize the content conveyed on their Web pages. The data and the results obtained in these studies are generally proprietary. Although one would not expect employees of firms in competitive markets to publish results of such research, reports from which information identifying the firm and its specific market has been omitted are likely to trickle out. When outsiders—for example, consulting firms and consultants from the academic world—are important partners, publication of a sanitized version is generally in their interests.

    Organizations for which such research is conducted may know or be in a position to know the identities of the individuals whose behavior at particular sites has been studied. Clearly this poses risks to individuals’ privacy. These risks are greater if individuals’ actions are to be tracked across Web sites operated by different organizations or if individuals’ actions on one site are to be linked to information about them held in other databases.

    The Web can also provide a very convenient platform for experimental research designed to optimize the content and form of the communication it carries. Because different versions of a Web page can be served to different users, ongoing experiments can compare the effectiveness of a message with that of one or more variations on it. Along with practical guidelines for applying this approach, R. Kohavi et al. provide two examples in which this process has produced useful surprises.[19] One involved the comparison of nine different designs for an e-commerce checkout page; the other compared alternative designs for a page on which users could rate articles provided in response to the use of Microsoft Office’s “help” function. Kohavi and colleagues also briefly discuss the limitations of the method—for example, it identifies effects without identifying their causes and can measure only short-term effects—and how to compensate for them.
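
    The sketch below illustrates the basic mechanics of such an experiment in a hypothetical setting: users are assigned deterministically to one of two page variants by hashing an identifier, and the resulting conversion rates are compared with a simple two-proportion z-test. The variant names, the conversion counts, and the hashing rule are assumptions made for the example; they are not drawn from the Kohavi et al. studies.

```python
# Minimal sketch of a two-variant Web page experiment (an "A/B test").
# Assignment rule, counts, and variant names are hypothetical.
import hashlib
import math

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to page variant A or B."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Example: hypothetical checkout-page results.
print(assign_variant("user-42"))               # which page this user would see
z, p = two_proportion_z_test(conv_a=310, n_a=5000, conv_b=365, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")             # a small p suggests a real difference
```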

    This kind of research method is far from new. In the 1960s, for example, it was applied to compare commercials carried on cable television systems. The theory on which it is based goes back at least to the work of eminent cybernetician Stafford Beer in the middle of the past century. Its relevance at this time lies in the fact that both the Web and interactive television provide such inviting platforms.

    Some less radical changes in data collection methods are also apparent. One is the growing use of the Web for long-accepted research activities such as focus groups and surveys. If the latter were used only as substitutes for conventional surveys, their statistical weaknesses—relating to self-selected samples, for example—would probably have doomed them. However, their offsetting advantages—very rapid turnaround, modest cost, and ability in many instances to generate very large samples—mean that the alternative would often be no survey or one that produced results much later than desirable. In some cases, participants in Web surveys are recruited ahead of time and stand ready to respond to research questions. These panels are often very large and can provide a rapid turnaround. The problem of self-selection can be mitigated in a few ways. Some research groups use random-digit telephone calls to recruit the panel. Others filter the panel (i.e., select a subset of responses by using specific criteria) to try to obtain a sample that represents the general population (for example, asking people if they are early, middle, or late adopters of new technology and selecting a sample that matches the general population on this attribute). The use of Web surveys and panels has also become more acceptable, with the limitations noted, because of a growing problem in telephone surveys—poor response rates (the percentage of people called who can be reached and agree to participate in the survey). As response rates for telephone surveys have declined, the chances of achieving a truly random sample that is representative of the general population have been reduced.
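
    Returning to the panel-filtering approach mentioned above, the sketch below selects a subsample from a hypothetical panel so that the proportions of early, middle, and late adopters match assumed population figures. The panel data, the target proportions, and the category labels are all invented for the illustration.

```python
# Minimal sketch: filtering a large opt-in panel so that the shares of early,
# middle, and late adopters match assumed population proportions.
import random
from collections import defaultdict

random.seed(1)

# Hypothetical panel: (respondent_id, adopter_category)
panel = [(f"r{i}", random.choice(["early", "early", "middle", "middle",
                                  "middle", "late"])) for i in range(20000)]

# Assumed population proportions used as quotas (illustrative only).
target_shares = {"early": 0.16, "middle": 0.68, "late": 0.16}
sample_size = 1000

by_category = defaultdict(list)
for respondent, category in panel:
    by_category[category].append(respondent)

quota_sample = []
for category, share in target_shares.items():
    needed = round(sample_size * share)
    quota_sample.extend(random.sample(by_category[category], needed))

print(len(quota_sample), "respondents selected to match the target shares")
```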

    The use of Web surveys is a double-edged sword on many counts. It is now much simpler to conduct surveys on the Web, using a variety of well-designed software packages and Web services that make it easy and cheap to create questions, send out links to the survey, and tabulate results. Almost anyone can create and administer a survey. But without professional training, many people and companies create very bad surveys with biased questions. It is easy to understand how a marketing group within an electronics firm might unknowingly ask about a new media technology in such a way that people would be encouraged to say that they would like to have it.

    Better measurement of the fragmenting linear and interactive television audiences has been made possible by the continuing improvement of meters designed for this purpose. These meters automatically keep records of what is being viewed. Earlier systems of measurement, still employed by some researchers, involved telephoning people and asking what they had watched the previous day or week or asking household members to keep a record of what they had watched. These earlier systems were subject to memory errors and to gaps in knowledge when the person answering the telephone or filling out the diary reported inaccurately on the viewing of others in the household. Like computer conferencing in earlier periods and Web usage measurement, digital cable and satellite systems and DVR systems can track every button pushed by a household. Putting aside for the moment the question of privacy, these systems can provide much more information about how new media services are being used.
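
    As an illustration of how such button-level records translate into viewing measures, the sketch below turns a hypothetical set-top box event log into minutes viewed per channel for one household. The event format and channel names are assumptions made for the example, not the output of any actual metering system.

```python
# Minimal sketch: converting a hypothetical set-top box event log
# (channel changes and power off) into minutes viewed per channel.
from datetime import datetime
from collections import defaultdict

# Each event: (timestamp, action), where action is a channel name or "OFF".
events = [
    (datetime(2007, 5, 1, 19, 0), "News 4"),
    (datetime(2007, 5, 1, 19, 22), "Movie Channel"),
    (datetime(2007, 5, 1, 20, 45), "News 4"),
    (datetime(2007, 5, 1, 21, 10), "OFF"),
]

minutes_by_channel = defaultdict(float)
for (start, action), (end, _next_action) in zip(events, events[1:]):
    if action != "OFF":
        minutes_by_channel[action] += (end - start).total_seconds() / 60

for channel, minutes in sorted(minutes_by_channel.items()):
    print(f"{channel}: {minutes:.0f} minutes")
```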

    Although very much overshadowed by changes in the research scene associated with the use of automatically generated computerized records, a trend at the opposite, qualitative end of the range is notable: ethnographic methods were very rarely used in the 1970s but have since gained much greater acceptance in corporate research on new media as well as in academic research in the field.

    Challenges

    On the surface, the present might seem like a golden age for user research on new media. For those conducting pure research, so much is new, there is so much to be found out, and fairly straightforward methods can be used to seek it. What did different types of people do when offered a new kind of service or a new feature in an existing service? What effects did certain experiences of new media have on different types of users? And so on. But of course, discovering why people acted as they did or why the experiences had the effects they did may well be far harder.

    Those conducting applied research have access to a vast wealth of automatically generated data and new techniques to gather other data that are not generated automatically. The range of software with which to conduct analyses continues to expand, and powerful new options continue to become available. The Web can provide a platform for experimentation and for simulation. For all researchers, the combination of powerful search engines, the Web, and specialized databases greatly eases the otherwise impossible task of keeping up with research activity that has been expanding so rapidly.

    While the explosive growth of the Web has created a cornucopia of opportunities for conducting original research (which differs greatly from conducting research originally) and while other new media have created additional opportunities, major challenges exist in obtaining as much value as could be desired from the research that is done. These challenges will be all too familiar to readers with research experience, but others may not appreciate them as clearly.

    Scholars regard research primarily if not solely as a means to advance theory (and the authors believe there is much truth in the cliché that nothing is as practical as good theory). Especially when the media environment is changing so rapidly, findings from diverse studies about how, say, particular types of users behaved in a particular situation may have little if any value unless either they inform theory now or the right variables pertaining to the situation were measured so that they can inform theory in the future. At a more fundamental level, assuming that the answers to research questions are valid, they are much more useful if they were good questions. As Sheizaf Rafaeli wrote in 2001 about the Internet as an area of research,

    I wish to propose some constructs to guide our thinking and study. These constructs, I hope, should shed some light on what can be investigated, and suggest manners to do so. Constructs can suggest what should be asked. . . .

    I propose focusing on several defining qualities of communication on the Net: multimedia, synchronicity, hypertextuality, packet switching, interactivity, logs and records, simulation and immersion, and the value of information.[20]

    For applied researchers, the main challenge in the short term is likely to arise from the reduced timescales on which decisions need to be made. When possible, it may be better to minimize errors by decreasing emphasis on using research in advance of decision making and by increasing emphasis on making decisions that are likely to be approximately right and can be adjusted quickly in light of fast-turnaround research on what happens subsequently in the field.

    A longer-term challenge, especially for in-house researchers, arises from the fact that the cumulative value of a corporation’s or government agency’s research studies is likely to go far beyond their value relative to the decisions they were intended to inform. This potential is unlikely to be realized unless those in a position to act on it take it seriously. In this respect, our personal experience in the less pressured decades at the end of the last century was not encouraging: remarkably often, far from any synthesis being conducted, there was no corporate memory of earlier research; the people had gone, and the research reports had been tossed. It will not be enough for the knowledge management systems that started to become fashionable in the mid-1990s to succeed in retaining and locating knowledge; synthesis will be necessary, too.

    Appendix: Teletext in the United States: Design and Implementation of a User Research Project

    This sketch of a research program on teletext conducted between 1979 and 1982 illustrates the problem of using field trials in policy research; the value, when the unanticipated occurs, of the flexibility offered by a loosely coupled multicomponent research program; the complementary roles of laboratory experiments and field trials; and the value of using multiple methods in gathering data on user behavior.[21]

    Background. By the late 1970s, teletext was successfully established in Europe, and international rivalry existed among the competing national teletext standards of the United Kingdom, France, and Canada. In the United States, where all three standards were being promoted vigorously, the air was thick with competing technical and economic claims, but little else appeared to be happening. The situation caused concern at the Corporation for Public Broadcasting (CPB) and the National Telecommunications and Information Administration (NTIA) in Washington, where officials believed that teletext was of particular relevance to the Public Broadcasting Service (PBS).

    Planning study. In 1979, the Alternate Media Center at New York University undertook a planning study to identify the public policy issues related to a public broadcasting teletext service and to make recommendations as to whether and how a field trial should take place.

    Research program. The National Science Foundation and the CPB eventually funded a research program, with additional support provided by the NTIA and the U.S. Department of Health, Education, and Welfare. The research program confronted a number of challenges. Because the teletext field trial would be costly, funding would have to be obtained from four different agencies, each of which had somewhat different expectations for the project. In particular, a conflict existed between some agencies’ desire for scientifically rigorous research and others’ goal of a successful demonstration of the new technology. Various technological hurdles also needed to be overcome, including the scarcity of teletext decoders that could be used with the U.S. television standard and significant reception problems that would affect field testing in general and systematic sampling in particular. Further, the PBS station where the service was to be based (WETA in Washington, D.C.) had very little experience in the type of work required for teletext, which is akin to that of a small newspaper or a radio news service. The project needed (and developed) public information providers such as federal agencies, local libraries, and community service agencies, but these organizations were experiencing budget cutbacks that reduced the effort they could put into the project.

    The overall plan called for the project to be split into two phases. The first would have two components: a pilot field trial (involving 40 households and 10 public sites) and laboratory studies. The division into two phases was an attempt to deal with funding problems, to iron out technological problems before large-scale field research began, and to explore general research issues before formulating specific hypotheses that could be tested during the second phase.

    Laboratory studies. The laboratory studies, which were intended to start after the pilot field trial was under way, sought to investigate a series of general issues that appeared both practically significant and suitable for testing in the laboratory: for example, how long people would be prepared to wait for a page of information to appear before irritation set in. Some of these issues had been identified prior to the start of the project, while planners anticipated that others would be identified during the pilot.

    The most important product of the laboratory studies was methodological: a set of scales that could be reduced to three major factors that accounted for 60 percent of the variance in users’ reactions to teletext pages. As a cross-check on validity, one of the experiments was repeated some months later using the same page designs in the homes of users taking part in the pilot. Results were reassuring.
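
    For readers unfamiliar with this kind of scale reduction, the sketch below shows one standard way to estimate how much of the variance in a set of rating scales is accounted for by a few underlying components: compute the correlation matrix of the ratings and examine its largest eigenvalues (a principal-components view of the data). It is a generic illustration with simulated ratings, not a reconstruction of the teletext study’s factor analysis; the number of scales and respondents is invented.

```python
# Generic sketch: estimating the share of variance in a set of rating scales
# captured by the first three principal components. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 respondents rating pages on 12 scales that are driven by
# 3 underlying latent dimensions plus noise.
latent = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 12))
ratings = latent @ loadings + 0.8 * rng.normal(size=(200, 12))

# Principal components of the correlation matrix of the scales.
corr = np.corrcoef(ratings, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

share = eigenvalues[:3].sum() / eigenvalues.sum()
print(f"First three components account for {share:.0%} of the variance")
```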

    Engineering tests and problems. Teletext is more vulnerable to reception problems than is regular television; furthermore, the project was to use a UHF channel with above-average reception problems. Engineering tests had difficulty identifying suitable locations, but three neighborhoods eventually were chosen from which a sample of homes would be selected. Since the trial service would be carried on the local PBS station, the research team accepted the possible demographic bias of using its membership list as a sampling frame and selected the 40 matched pairs of households needed for the treatment and control groups. However, reception ultimately was much more sensitive to location than had been assumed, causing further delay and expense.

    The control group was abandoned (it was raided for additional subjects). The consequent damage to the research design was not serious, since the pilot incorporated a control group primarily because the research team wanted to check the procedural problems and costs of incorporating a control group in the following phase.

    Research data on use of the trial service. The research instruments used for the household subjects included a series of face-to-face interviews, diary records kept by users, and purpose-designed meters that recorded each page request by time of day. The research instruments used at the public sites included the meters, a one-time written survey of user reactions, and nearly 200 hours of ethnographic observation of user behavior at and around the teletext terminals.

    The multiplicity of instruments provided useful information on a wide variety of specific topics. Meter readings, for example, tracked the novelty effect within households and determined whether subjects switched to teletext during commercial breaks. Comparison of diary records with meter readings illustrated the systematic biases in the use of diaries. The ethnographic study provided firsthand observations of usage behavior.
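
    As a simple illustration of the kinds of comparisons such instruments allow, the sketch below tallies a hypothetical household’s metered page requests by week (a falling count would be consistent with a novelty effect) and compares the weekly totals with what the household recorded in its diary. All figures are invented for the example.

```python
# Illustrative sketch: weekly metered page requests for one household
# (to look for a novelty effect) compared with the household's diary totals.
from collections import Counter

# Hypothetical meter data: (week_number, page_requested)
meter_log = ([(1, "news")] * 42 + [(2, "weather")] * 27 +
             [(3, "news")] * 15 + [(4, "sports")] * 12)

# Hypothetical diary reports: pages the household said it requested each week.
diary_totals = {1: 30, 2: 25, 3: 20, 4: 18}

meter_totals = Counter(week for week, _page in meter_log)

for week in sorted(meter_totals):
    metered = meter_totals[week]
    reported = diary_totals.get(week, 0)
    print(f"week {week}: meter={metered:3d}  diary={reported:3d}  "
          f"diary/meter={reported / metered:.2f}")
# A declining metered count over the weeks would suggest a novelty effect;
# a diary/meter ratio far from 1.0 would point to systematic diary bias.
```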

    Change of plan. The trial service started in June 1981. Six months later, a combination of considerations demonstrated that a major change of plan was required. Public research funds had become very tight, and evidence was mounting that the pilot service was more seriously underfunded than its designers had realized. On the positive side, more progress was being made in the research than had been anticipated. The research team decided, therefore, not to proceed to the planned second phase but to extend the pilot by a few months. In the end, the project generated a great many useful research findings, even if the journey to get there had some twists and turns. It demonstrated the value of building flexibility into a research project and using multiple research components. The project also illustrated the complementary roles of laboratory experiments and field trials and the difficulty (but not impossibility) of working with multiple funders.