Presented at MIT Workshop on Internet Economics March 1995

1 Introduction

This paper argues that however much anathema the notion of regulating the Internet may be, there is a strong need to start putting the appropriate regulatory structures in place as the commercialized Internet moves incrementally toward a usage-based pricing system. Various factors such as new bandwidth-hungry applications; the massification of the net; the concerted entry of the telephone, cable, and software companies; and the proliferation of electronic commerce all imply enormous potential growth rates for the Internet and a resultant scarcity of bandwidth, thus making it imperative to put in place a pricing system that would effectively ration scarce bandwidth.

As has been argued by many, a usage-based pricing system seems an innovative way to effectively ration scarce bandwidth. In this context, this paper examines the Precedence and the Smart Market models of Internet pricing. We note that (a) the perceived homogeneity of the Internet's load, and (b) the threat of market-power abuse through the artificial creation of a high network load by those who control the bottleneck facilities, remain the fundamental weaknesses of usage-based pricing. However, given that usage-based pricing is inevitable, and that the Smart Market mechanism presents an innovative and potentially viable solution, it is important to consider the appropriate safeguards that need to be put in place. In this context, the paper argues that a usage-based, free-market pricing system needs to be combined with some form of regulatory oversight to protect against anti-competitive actions by the firms controlling the bottleneck facilities and to ensure non-discriminatory access to emerging networks.

2 The Different Dimensions of Growth

The Internet, which has hitherto been restricted as a resource for high-level researchers and academics, is "expanding to encompass an untold number of users from the business, lower-level government, education, and residential sectors" (Bernier, 1994, p. 40). Studies done by Merit Network Inc. [1] indicate that the Internet has grown from 217 networks in July 1988 to 32,370 networks in May 1994. The number of hosts has increased from 1,000 to over two million over the same period, with about 640,000 of these located at educational sites, 520,000 at commercial sites, 220,000 at governmental sites, and the remaining 700,000 at non-US locations. Traffic over the NSFNET backbones increased tenfold in three years, from 1,268 billion bytes in March 1991 to 12,187 billion bytes in May 1994. The traffic history of packets sent over the NSFNET shows similar exponential growth trends. As against 152 million packets in July 1988, 60,205 million packets of information were sent over the system in May 1994, an increase of almost 400 times. [2]

These stunning growth figures are just a precursor to the boom in Internet traffic that is expected to take place in the near future. As will be laid out in this paper, a set of factors in combination are threatening to dwarf even these exponential growth rates in the near future.

3 The Causal Model of Internet Congestion

As illustrated in the chart, a set of forces working together are threatening to create unprecedented levels of congestion on the Internet. It is argued that three main factors—incompatibility of the newer applications with the Internet's architecture, massification of the Internet, and privatization and concomitant commercialization of the Internet—are responsible for an inherent change in the Internet's dynamics, thus mandating a reexamination of the economic system that surrounds the Internet.

Figure 1

3.1 Incompatibility issues

New network applications increasingly require heavy bandwidth in near-real time. As Bohn et al. note, "one may argue that the impact of the new, specifically real-time, applications will be disastrous: their high bandwidth-duration requirements are so fundamentally at odds with the Internet architecture, that attempting to adapt the Internet service model to their needs may be a sure way to doom the infrastructure" (p. 3).

Their technical characteristics and, consequently, their demand on the network are very different from those of the more conventional electronic communication and data transfer applications for which the Internet was designed. [3] While conventional electronic communication is typically spread across a large number of users, each with small network resource requirements, newer applications such as those with real-time video and audio require data transfers involving a continuous bit stream for an extended period of time, along with network guarantees regarding end-to-end reliability. Even though the data-carrying capacity of the networks is constantly being enhanced through upgrades in transmission capacity and switching technology, current developments in communication software, especially those related to multimedia, are creating network applications that can consume as much bandwidth as network providers can supply (Bohn, Braun, Claffy, & Wolff, 1994).

Multimedia Netscape applications, Internet fax, and Internet radio are becoming large consumers of resources (Love, 1994). Russell (1993) reports that while only 2.4 kbps are required for communication of compressed sound, 3,840 kbps are required for CD-quality stereo sound. Real-time video needs bandwidth ranging from 288 kbps to 2,000 kbps, while studio-quality non-real-time video could require up to 4,000 kbps. HDTV requirements range from 60,000 to 120,000 kbps. [4] Bohn et al. (1994) report that many videoconferencing applications require 125 kbps to 1 Mbps. Although compression techniques are being developed, the requirements are still substantial: CU-SeeMe, developed at Cornell University, uses compression, yet its requirements are in the region of 100 kbps.

In essence, the trend is towards applications that are, first, heavy bandwidth consumers and second, require near real-time transmission—both characteristics that are essentially incompatible with the inherent architecture of the Internet.

3.2 Privatization, Commercialization, and Massification

Simultaneously, we are witnessing a privatization of the Internet's facilities, increasing commercialization of the net, and a political agenda promoting the rapid deployment of the NII. All these are resulting in a massification of the Internet, as it becomes easier to get "wired" in. The bottom line implication is that the demand for bandwidth is possibly rising beyond current levels of supply.

Prior to 1991, the net's physical infrastructure was government-owned and operated. On December 23, 1992, the NSF announced that it would cease funding the ANS T3 backbone in the near future. The Clinton Administration's thrust on private-sector investment in the NII implies that very soon, possibly by 1996, the Internet's facilities will be largely privatized. In 1994, the NSF announced that the developing architecture of the Internet would utilize four new Network Access Points (NAPs), and the contracts for operating them were awarded to Ameritech, PacBell, MFS, and Sprint. In addition, MCI has been selected to operate the Internet's new very high speed backbone (vBNS).

The traditional telecommunication companies, operating in a nearly saturated and increasingly competitive domestic market, are turning their focus toward advanced data services, a market where the "number of data relationships is growing at more than four times the number of voice relationships" (Campbell, 1994, p. 28). Spurred on by the promise of the NII, a variety of communication companies are getting into the act. "(T)elephone companies, cable companies, information service companies, television networks, film studios, and major software vendors are all maneuvering to ensure that they are well positioned to profit from the NII in general and the Internet in particular" (Business Editors, 1994).

Of all these players, the telephone, software, and cable companies are in a position to strongly affect one critical aspect of the market: accessibility. User-friendly software, enhanced services, and marketing skills are together likely to have a dual effect: one, to give computer-literate users who have so far remained outside the periphery of the net the opportunity to connect; and two, to drive the development of user-friendly tools of navigation, which would have a multiplier effect on both network usage and the number of people able to navigate the Internet effectively and access desired information bases productively.

Bernier (1994) reports that the telephone and the cable companies have already rolled out their plans for the Internet. In March 1994, AT&T announced a national InterSpan frame relay service and Internet Connectivity options, both dial-up methods for accessing the Internet. MCI offers access over its frame relay services. Sprint, which offers a nationwide Internet access service along with international Internet connections, is now offering ATM access to the net. Several regional Bell companies are getting into the act. US West offers end users access to two Internet providers via its frame relay services. Pacific Bell, in collaboration with InterNex Information Services, now offers Internet connections, while Ameritech has won a contract to be one of the four Network Access Providers and plans to offer Internet protocol pipes over its frame relay and switched multi-megabit data services. Many cable operators are also getting into the market. Continental Cablevision and Jones Intercable are using cable modems hooked onto their coaxial lines to bring broadband Internet connections to businesses and homes. Continental, a Boston-based cable company, launched a service in March in collaboration with Performance Systems International, a national Internet access provider, to bring high-bandwidth service to residences and businesses in Boston. [5]

The bottom line implication is that the number of Internet users is going to increase manifold, as opportunities to interconnect with the network become ubiquitous through the efforts of the telephone, software, and cable companies, and as user-friendliness and utility of the applications develop further.

4 Implications & Key Issues

The implication of these forces—the incompatibility of the new bandwidth hungry applications, infusion of new users, and the privatized and commercialized nature of the Internet—is that the demand on network resources will increase exponentially, and will possibly be much more than the supply of bandwidth. As network resources become scarcer and as the system is driven towards a free-market model, resource rationing through a change in the pricing system is inevitable.

The key issue is that the pricing mechanism should be able to (a) preserve the inherent discursive nature of the net, (b) send the right signals to the marketplace, and also (c) be flexible and adaptive to changes brought about through technology, political initiatives, and software development.

4.1 Pricing Alternatives

The major fear in some quarters is that the present system of flat-rate, predictable pricing for a fixed-bandwidth connection will be replaced by some form of vendor-preferred, usage-based metered pricing. Users feel that the Internet should continue to function primarily as a vast, on-line public library from which they can retrieve virtually any kind of information at minimal cost.

According to some, a transition to metered-usage would make the NII "like a Tokyo taxi, so that for every passenger who takes a ride on the national data superhighway, the first click of the meter will induce severe economic pain and the pain will increase with each passing minute" (Judith Rosall, International Data Corporation's Research Director quoted in Business Editors, 1994).

Consumer advocacy groups opposing metered-usage pricing of the Internet [6] feel that the NSF should create a consumer advisory board to help set pricing and other policies for the network, to ensure that the free flow of information and democratic discourse through Internet listserver and fileserver sites is preserved and enhanced. In addition to the fear that a popular discussion list would have to pay enormous amounts to send messages to its members, it is feared that usage-based pricing would introduce a wide range of problems regarding the use of ftp, gopher, and Mosaic servers, since the providers of the "free" information would be liable to pay, at a metered rate, the costs of sending the data to those who request it. This would have a negative effect on such information sites, and would eliminate many such sources of free information.

In essence, the argument is that usage based pricing would imply severe economic disincentives to both users and providers of "free" information, and would therefore destroy the essentially democratic nature of the Internet.

4.2 The Arguments against Flat-rate Pricing

The paper argues that flat-rate pricing in the current context of the Internet is likely to run into severe problems. Paradoxical as it may sound, the continuance of flat rate pricing is likely to severely impair the current discursive nature of the Internet.

The basic role of a pricing mechanism is to lead to an optimal allocation of scarce resources, and to give proper signals for future investments. The mechanism in place should lead to the optimization of social benefits by ensuring that scarce resources are utilized in such a manner as to maximize productivity in ways society thinks fit. As Mitchell (1989) notes, "in a market economy, prices are the primary instrument for allocating scarce resources to their highest valued uses and promoting efficient production of goods and services'' (p. 195). One critical issue however is the basis on which an appropriate pricing scheme can be designed.

Given that the marginal cost of sending an additional packet of information over the network is virtually zero once the transmission and switching infrastructures are in place, marginal cost pricing in its simplistic form is inapplicable. Cost-based return-on-investment (ROI) pricing is both infeasible, given the multiplicity of providers who would have to chip in to bring about an end-to-end service, and inefficient, given the chronic problem of allocating joint costs. [7] A "what the market can bear" policy would be likely to have unforeseen implications, especially if the markets are not competitive in each and every segment of the network.

The principle that is most likely to be effective in this scenario is a modified version of the marginal cost approach, where the social costs imposed by the scarcity of bandwidth—the bottleneck resource—are taken into consideration. Since bandwidth is the rate at which data are transmitted through the network, its scarcity implies delays due to network congestion. This, then, is the social cost that needs to be incorporated into any efficient pricing scheme.

4.3 The Costs of Congestion

The packet-switching technology of the TCP/IP protocol embedded in the Internet has an essential vulnerability to congestion. A single user, overloading a sub-regional line that connects to the regional-level network, can overload several nodes and trunks, and cause delays or even data loss due to cell or frame discarding for other users. The specific manner in which the problem manifests itself depends on the protocols used, and on whether the network is simply delaying or actually discarding the information (Campbell, 1994). Since backbone services are currently allocated on a randomized, first-come-first-served basis, users now pay the costs of congestion through delays and lost packets (Varian & MacKie-Mason, 1994). [8] The problem is likely to become even worse as powerful PCs such as a $2000 Macintosh AV combined with a $500 camcorder would enable an undergraduate to send real-time video to friends on another continent, pumping out up to 1 megabyte of data per second onto the Internet and thus tying up a T1 line (Bohn et al., Love).
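The arithmetic behind that last claim is easy to check. The sketch below simply restates the cited figures (1 megabyte per second of video against a T1's 1.544 Mbps and a T3's 44.736 Mbps); the Python rendering is ours, and decimal megabytes are assumed.

```python
# Back-of-the-envelope check on the congestion claim above.
# Assumes decimal units: 1 megabyte = 8,000,000 bits.
video_stream_bps = 8_000_000     # one real-time video stream at ~1 MB/s
t1_bps = 1_544_000               # T1 access line capacity
t3_bps = 44_736_000              # T3 backbone trunk capacity

print(video_stream_bps / t1_bps)   # ~5.2: one stream swamps a T1 several times over
print(t3_bps // video_stream_bps)  # 5: an entire T3 trunk carries only ~5 such streams
```

A single undergraduate's video stream thus exceeds a T1 line several times over, and a handful of such streams would saturate a backbone trunk shared by thousands of conventional users.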

The cost of congestion on the Internet is therefore a tangible problem, and not merely the pessimistic outpourings of a band of dystopians. Some have argued that it does not matter if users fill up their leased line, and even less the manner in which they do so (Tenney, telecomreg, 4 May 1994, 18:42:09). However, the Internet is not designed to allow most users to fill their lines at the same time. Also, as new applications such as desktop videoconferencing and new transport services such as virtual circuit resource reservation come in, it will become more and more necessary for the network to provide dedicated and guaranteed resources for these applications to operate effectively (England, telecomreg, 7 May, 1994 08:04:26). In the Internet system, which is essentially designed for connectionless network services, the requirement of bandwidth reservation implies that an incompatible class of service needs to be provided over it, thus necessitating costs in developing added functionality to its edges (Pecker), and in decreasing its overall efficiency.

In essence, the changing nature of network traffic implies a social cost, largely due to this inherent incompatibility between new applications and the Internet architecture. A social cost is imposed by those making unlimited use of the newer bandwidth-hungry, incompatible applications, and it is borne by others, in the form of delays and data dropouts, while they make use of the more traditional applications such as email, ftp, and gopher. [9] The flat-rate pricing mechanism is therefore inefficient both as a resource allocator and in sending out corrective signals to minimize social costs, since it can hardly be argued that the social benefits of democratic discourse are less beneficial to society than an undergraduate sending real-time video to his friends. [10]

There is a potential danger here. Continuance of the current pricing system may result in a situation where the new applications drive out traditional uses. The inherent bias of flat-rate pricing, whereby heavy users are subsidized by light users, is a threat to the more traditional forms of net usage as applications requiring heavy bandwidth are coming of age. It is however clear that a new form of pricing scheme needs to be developed in order to ensure that the net retains part of its original character as it evolves into a more potent and futuristic medium of communication.

4.4 The Pricing Options

At the far end of the spectrum is pure usage-based pricing. Given the shortfalls of the flat-rate based scheme, it seems certain that there will eventually be "prices for Internet usage, and the only real uncertainty will be which pricing system is used" (Love).

4.4.1 The Telephone Pricing Model

One form of usage based pricing would be to use the system of posted prices as in telephony. One way to do this would be to adopt the telephone model of computing interLATA prices, where the cost of Internet usage is based on the distance between the sender and the receiver, and on the number of nodes through which data need to travel before they reach their destination. This however would be difficult to implement given the inherent nature of the connectionless net technology, which is based on redundancy and reliability, where packets are routed by a dynamic process through an algorithm that balances load on the network, while giving each packet alternative routes should some links fail (Varian & MacKie-Mason, 1993, p. 3). The associated accounting problems are also enormous. In addition, the sender would prefer that packets are routed through a minimum number of nodes in order to minimize costs, while the algorithm in the Internet would base its calculations on the concept of redundancy and reliability, and not necessarily on the fewest links or the lowest costs.

The telephone model of pricing is not likely to work for another reason. Posted prices are not flexible enough to indicate the state of congestion of the network at any given moment (Varian & MacKie-Mason, 1993, p. 19). As we have seen earlier, congestion in the network can peak from an average load very quickly, depending on the kind of application being used. Also, time-of-day pricing means that unused capacity at any given moment cannot be made available at a lower price at which it would be beneficial to some other users. Conversely, at moments of congestion, the network stands to lose revenue because users who are willing to pay more than the posted rates are being crowded out of the network through the randomized first-in-first-out (FIFO) process of network resource allocation.

In essence, the system of posted fixed prices implies multiple problems: while it does not allow for revenue maximization under the "market can bear" philosophy or lead to optimal capacity utilization, it also does not address the social costs of congestion because it cannot allow for prioritization of packets. It is thus clear that the answer to the Internet's pricing problem does not lie at either end of the pricing spectrum defined by flat-rate pricing and pure usage-based pricing, but possibly in an innovative approach.

4.4.2 Innovative Pricing Models

Two innovative pricing schemes have been suggested recently. Bohn et al. have proposed the "Precedence" model, while Varian & MacKie-Mason have developed the "Smart Market" mechanism.

4.4.2.1 The Precedence Model

The Precedence model proposes "a strategy for the existing Internet, not to support new real-time multi-media applications, but rather to shield ... the existing environment from applications and users whose behavior conflicts with the nature of resource sharing" (Bohn et al., p. 4). The authors propose that criteria be set to determine the priority of different applications, which will then be reflected in the IP precedence field of the different data packets. Packets would receive network priority based on their precedence numbers. In the event of congestion, rather than rely on the current randomized decision, the Precedence model presents a logical basis for deciding which packets to send first and which to hold up or drop. While noting that their proposed system is vulnerable to users tinkering with precedence fields, the authors feel that this approach would "gear the community toward the use of multiple service levels, which ... (is) the essential architectural objective" (p. 10).
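The scheduling discipline implied by the Precedence model can be sketched in a few lines. This is a hypothetical illustration rather than the authors' implementation; the precedence values, application names, and capacity figure are invented for the example.

```python
# Sketch of precedence-based scheduling at a congested router.
# Under congestion, packets are served highest-precedence first,
# rather than by the current randomized first-come-first-served discipline.

def schedule(queue, capacity):
    """queue: (packet_id, precedence) tuples in arrival order.
    Returns (sent, held) for one service interval of `capacity` packets."""
    # Stable sort: packets of equal precedence keep their arrival order.
    ranked = sorted(queue, key=lambda pkt: pkt[1], reverse=True)
    return ranked[:capacity], ranked[capacity:]  # held packets wait or are dropped

arrivals = [("video-1", 1), ("email", 6), ("ftp", 4), ("video-2", 1)]
sent, held = schedule(arrivals, capacity=2)
# email and ftp are sent first; the bulk real-time video packets are held back
```

The point of the sketch is that the dropping decision becomes deterministic and policy-driven; everything then turns on who assigns the precedence numbers, which is precisely the weakness discussed next.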

However, this model has some inherent weaknesses. Given that the Precedence model rests on priority allocation of packets, the central issue is how these priorities will be set and who will set them. There seems to be an inherent assumption of an increased governmental role in regulating content, and as Varian and MacKie-Mason point out, "Soviet experience shows that allowing bureaucrats to decide whether work shoes or designer jeans are more valuable is a deeply flawed mechanism" (1994, p. 16).

The system would also require continuous updating of the priority schemes as newer products and applications become available. Real-time video may be assigned a lower priority than ftp, but it is possible that the video transfer concerns an emergent medical situation. Application-based priority will be limiting, and it would not be possible to define each and every usage situation in a dynamic environment.

Also, the model relies heavily on the altruism of net users, counting on computer-savvy netters to report precedence correctly and to refrain from tinkering with the precedence fields. The continuing survival of such a system is at odds with current social trends.

4.4.2.2 The Smart Market Mechanism

Proposing the Smart Market mechanism as a possible model to price Internet usage, Varian & MacKie-Mason (1994) suggest a dynamic bidding system whereby the price of sending a packet varies minute-by-minute to reflect the current degree of network congestion. Each packet would have a "bid" field in its header wherein the user would indicate how much he is willing to pay. Packets with higher bids would gain access to the network sooner than those with lower bids, in the event of congestion. The authors acknowledge that this mechanism is preliminary and tentative and is only one approach to implementing efficient congestion control; moreover, it would only ensure relative priority without being an absolute promise of service.

The Smart Market mechanism has great theoretical potential as a basis for implementing usage-based pricing. Since charges apply only to priority routing during times of congestion, traffic that does not claim priority status, such as a large Internet mailing list of a listserv conference, would travel for free during off-peak hours. During congestion, users would bid for access and routers would give priority to packets with the highest bids. A great deal of consensus would be required across the network for smooth functioning and to ensure that priority packets are not held up.

Users will be billed the lowest price acceptable under the routing "auction," and not necessarily the price that they have indicated as their bid. An admitted user thus pays the bid of the marginal user, which is necessarily no higher than the bids of all admitted packets. As a result, the Varian and MacKie-Mason model ensures that everyone has the incentive to reveal his or her true willingness to pay, while there are systemic incentives to conserve scarce bandwidth and effectively free services are allowed to continue.
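The allocation and billing rules just described can be sketched as a per-router auction. This is a hypothetical illustration of the Smart Market idea, not code from Varian and MacKie-Mason; the packet identifiers, bids, and capacity figure are invented.

```python
# Sketch of the Smart Market allocation at a single congested router.

def smart_market(packets, capacity):
    """packets: (packet_id, bid) tuples; capacity: packets per interval.
    Admits the highest bidders; every admitted packet pays the bid of
    the marginal (lowest admitted) packet, not its own bid."""
    ranked = sorted(packets, key=lambda pkt: pkt[1], reverse=True)
    admitted = ranked[:capacity]
    if not admitted:
        return [], 0.0
    clearing_price = admitted[-1][1]  # the marginal user's bid
    return [pid for pid, _ in admitted], clearing_price

bids = [("a", 0.9), ("b", 0.1), ("c", 0.5), ("d", 0.0), ("e", 0.7)]
winners, price = smart_market(bids, capacity=3)
# winners are a, e, c; each pays 0.5 (the marginal bid), even though a bid 0.9
```

In an uncongested interval, capacity exceeds the number of bidding packets and the clearing price falls to the lowest bid, so zero-bid traffic travels free, which is the behavior the mechanism promises for non-priority traffic.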

5 Discussion: Building a Case for Regulation

We argue that although the dynamic bidding mechanism is very attractive as a theoretical basis for pricing usage, it leaves the system wide open to potential abuse by those who control the system bottlenecks. A case is therefore made for establishing some form of regulatory oversight to guard against anti-competitive activities and abuse of market power. In essence, this paper argues that a usage-based pricing scheme needs to be combined with some form of regulatory oversight that ensures both that the access of emerging networks to the Internet is open and nondiscriminatory, and that the firms which control the bottleneck facilities in the emerging structure do not indulge in anti-competitive behavior. [11]

Interestingly, in the Internet debate, we seem to have lost sight of the fact that dynamic pricing of network services has been advanced and debated before, as a mechanism to balance loads, limit congestion, and avoid the high costs of adding capacity (Mitchell). Vickrey (1981) proposed that telephone networks could manage congestion during peak-load times by alerting subscribers through a higher-pitched dialing tone and charging premium rates for calls made at those times. Mitchell notes that as the local networks of telephone systems evolve into broadband systems and become even more capital-intensive, the gains from allocating capacity dynamically on demand will be larger. Dynamic pricing would enable higher overall use of network capacity, while allowing price-sensitive users to access telephone services at lower prices on a dynamic and daily basis.

5.1 The Weakness of the Dynamic Bidding Model

The essential weakness of the Smart Market proposal as a stand-alone, free market pricing system that does not need any regulatory oversight for its proper implementation lies in its assumptions, summarized below.

5.1.1 Perceived Homogeneity

First, the model proposes to price the scarce network resource based on the perceived network load. Prima facie, a uniform load factor seems to be presumed across all points of the network as the basis on which bandwidth is priced. However, this is simply not true. The Internet is not a homogeneous network. The load factor, and the resultant level of congestion, will be very different along the different nodes, switches, and lines between the sender and the receiver.

It may be argued that the price of sending a message can be based on the most congested point of the network. However, the path that a packet will take cannot be predicted with any degree of certainty. It is thus close to impossible to base pricing on an algorithm tied to the network load at the most congested point along the path that the packets must traverse to reach their destination.

Also, network load is unpredictable, and is prone to sudden peaks and troughs. It is entirely possible that the load at a particular node changes rapidly and a bid that would earlier have received priority from that node is simply no longer good enough. It may be argued that through consensus a system could evolve where "regional" congestion is calculable, and the price determined on the basis of an algorithm that considers all possible routings and all possible levels of network load. However, given the diversity of the Internet and the multiple levels of players, this sounds extremely far-fetched and difficult to achieve without a neutral oversight agency.

5.1.2 Manipulation of network load

Second, and more importantly, a pricing system based on network load opens itself up to potential abuse by those who control the facilities at the system bottlenecks. It may be argued that any system would be vulnerable to abuse, but the anonymity of data transferred along the Internet would make this system especially vulnerable: for example, unscrupulous firms in control of the various nodes would have both the incentive and ability to manipulate the network load to keep it artificially high so as to create an upward pressure on the price of network usage. Given that marginal costs are almost zero, the firm would attempt to maximize revenue. It can do this by tracking network usage and artificially keeping the network load at a point where overall revenue realization is maximized.

The system is therefore open to abuse by bottleneck-controlling firms who peg the network load at high levels in order to maximize revenue, thereby manipulating the price of network usage upwards. For the system to operate fairly and efficiently, there would either have to be no motivation for exploitation of market power, or a strict system of controls against abuse.

5.2 Internet Pricing: A Case for Regulation

These two issues—the perceived homogeneity and the possibility of manipulation—are the fundamental reasons why the Smart Market mechanism, or any variation of it, needs to be combined with an institutional form that is responsible for (a) consensus-building, and (b) guarding against manipulation, anti-competitive behavior, and abuse of market power. Given the experience of the telecommunication industries, it should be amply clear that there is an essential contradiction in free-market operations: the greater the degree of freedom, the greater the role for regulation. [12] Taking the example of the telephone industry, it should be clear that potential bottlenecks and potential for abuse need to be considered well in advance so that the necessary safeguards may be put in place.

It is important to address the control of bottlenecks and their role in influencing the pricing mechanism. Although an oversight agency could, hypothetically, ensure that the consumer surplus [13] generated is not collected as excess profits by the firms and is returned to consumers (MacKie-Mason, 1994 [14]), it is more desirable to design a system wherein the transfer of excess funds does not happen in the first place. While it is true that competition is the best form of regulation, the privatization of the Internet's facilities and the emergence of the NAPs indicate that the owners of the underlying trunks and access paths (the Regional Bell Operating Companies, the Inter Exchange Carriers, and the CAPs) are likely to have more market power than any private organization has had over the Internet to date.

Whether one envisions Internet carriage emerging as a competitive industry or as an effective oligopoly, there seems to be a role for regulatory agencies. If the industry turns out to be less than competitive, pricing will need to be regulated and anti-competitive behavior controlled. Even if the system is highly competitive, however, the dynamics of network pricing need to be implemented by some form of nonprofit consortium or public agency, to ensure consumer protection on the one hand and coordination and consensus among the different service providers on the other. Absent such consensus-building activities, and in an imperfect market, dynamic pricing is likely to prove chaotic, with extremely high costs of accounting and regulatory oversight; that prospect could itself discourage the implementation of such a scheme in the first place.

Some may argue that if a purely competitive situation emerges, then it does not matter what form of pricing scheme emerges (Bohn, 1994 [15]). But this overlooks the fact that every pricing scheme has its own inherent bias, and different levels and kinds of associated social benefits.

An added factor that needs to be assessed is how technology is expected to develop over time. Like pricing schemes, every technology has its own bias. Since technological development is likely to be unbalanced, and breakthroughs can be expected to be sporadic in both time and space, the pricing schemes that are implemented need to be tailored accordingly to reflect or obviate the effects of technological imbalances.

For example, transmission technology, which rests on fiber optics, is slated to develop much faster than switching technology, which is currently electronics-based. Should switching technology develop quickly and be widely implemented, the fear of congestion at the nodes would no longer be a valid one. The bottleneck would then shift back to the transmission lines, not in terms of the physical capacity of the fiber-optic trunk lines, but in the costs associated with overlaying all user lines, especially the last loop that connects the customer's premises to the nearest switch.

In all likelihood, the market will be transformed incrementally. Initially, some form of usage-based pricing, possibly dynamic pricing, may be combined with flat-rate pricing. For applications that require resource reservation, usage-based pricing would be necessary to control their proliferation and to protect network performance. For more traditional forms of net usage, such as email, flat-rate access would continue to be the norm. In other words, the pricing system that evolves is likely to move the industry towards multiple service levels. While it would be difficult to predict the exact form of pricing that will emerge, it seems clear that there will be a role for oversight agencies and regulators as the Internet evolves.
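The hybrid outcome sketched above can be expressed as a simple two-part tariff: a flat fee covering best-effort traffic such as email, plus usage-based charges for traffic that reserves resources. The fee and per-megabyte rate below are hypothetical, chosen only to make the structure concrete:

```python
# Hypothetical hybrid tariff: flat-rate access covers best-effort usage
# (email, file transfer); reserved-bandwidth traffic is billed per megabyte
# at the prevailing congestion price. All rates are invented for illustration.

FLAT_MONTHLY_FEE = 20.00  # covers unlimited best-effort traffic

def monthly_bill(reserved_mb, congestion_price_per_mb):
    """Flat fee plus usage-based charges for resource-reserving traffic."""
    return FLAT_MONTHLY_FEE + reserved_mb * congestion_price_per_mb

# A user sending only email pays just the flat rate...
print(monthly_bill(0, 0.05))     # 20.0
# ...while a heavy real-time video user also pays for reserved capacity.
print(monthly_bill(500, 0.05))   # 45.0
```

The design choice here mirrors the argument in the text: the usage-sensitive component falls only on the traffic class that actually contends for scarce capacity.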

References

Bernier, P. (1994). Opportunities abound on the Internet. Telephony, 226(13).

Bohn, R. (2 June 1994, 20:35:25). Future Internet pricing. Posting on telecomreg@relay.adp.wisc.edu.

Bohn, R., Braun, H.-W., Claffy, K. C., & Wolff, S. (1994). Mitigating the coming Internet crunch: Multiple service levels via Precedence. Tech rep., UCSD, San Diego Supercomputer Center, and NSF. Available at ftp://ftp.sdsc.edu/pub/sdsc/anr/papers/precedence.ps.Z.

Business Editors. (March 11, 1994). Competition, controversy ahead in era of Internet commercialization. Business Wire.

Cocchi, R., Shenker, S., Estrin, D., & Zhang, L. (1993). Pricing in computer networks: Motivation, formulation, and example. Tech rep., USC, Department of Computer Science, Hughes Aircraft Company, and Palo Alto Research Center. Available via Web from [formerly] http://gopher.econ.lsa.umich.edu.

Campbell, A. (April 4, 1994). Distributed testing: Avoiding the Domino effect. Telephony, 226(14).

England, K. (7 May 1994, 08:04:26). Future Internet pricing. Posting on telecomreg@relay.adp.wisc.edu.

Love, J. (4 May 1994, 00:02:55). Notes on Professor Hal Varian's April 21 talk on Internet economics. Posting on telecomreg@relay.adp.wisc.edu.

MacKie-Mason, J. K. (2 June 1994, 13:37:03). Future Internet pricing. Posting on telecomreg@relay.adp.wisc.edu.

Mitchell, B. M. (1989). Pricing local exchange services: A futuristic view. In Perspectives on the telephone industry: The challenge of the future. Edited by James H. Alleman & Richard D. Emmerson. Harper & Row: New York.

Pecker, C. A. (1990). To connect or not to connect: Local exchange carriers consider connection oriented or connectionless network services. Telephony, 218(24).

Russell, J. D. (1993). Multimedia networking requirements. In Asynchronous Transfer Mode. Edited by Yannis Viniotis & Raif O. Onvural. Plenum: New York.

Tenney, G. (4 May 1994, 18:42:09). Future Internet pricing. Posting on telecomreg@relay.adp.wisc.edu.

Varian, H., & MacKie-Mason, J. K. (1993). Pricing the Internet. Tech rep., University of Michigan, Department of Economics. Available via Web from [formerly] http://gopher.econ.lsa.umich.edu.

Varian, H., & MacKie-Mason, J. K. (1992). Economics of the Internet. Tech rep., University of Michigan, Department of Economics. Available via Web from [formerly] http://gopher.econ.lsa.umich.edu.

Wenders, J. T. (1989). Deregulating the Local Exchange. In Perspectives on the telephone industry: The challenge of the future. Edited by James H. Alleman & Richard D. Emmerson. Harper & Row: New York.

Vickrey, W. (1981). Local telephone costs and the design of rate structures: An innovative view. Mimeo.

Author Information

Mitrabarun Sarkar (sarkar@tc.msu.edu) is a Research Associate with the Institute of Public Utilities, The Eli Broad School of Management, Michigan State University, East Lansing, MI. Tel: (517) 355 8004.

Notes

1. Traffic statistics are available from Merit's ftp site at nic.merit.edu.

2. Varian and MacKie-Mason note that the actual growth has been faster. Internet usage is underestimated by the Merit figures, which do not incorporate data related to alternative backbone routes where the traffic is estimated to have been growing much faster.

3. For example, real-time video is closer to a connection-oriented network service (CONS) than it is to packet-switched connectionless network services. It does not exhibit the same stochastic burstiness that is characteristic of more conventional applications such as email. Russell (1993) notes that one way of distinguishing the kinds of applications is to think of them as being either "conversational" or "distributive" (p. 190). Conversational applications are interactive: delays are critical to the natural flow of communication, and a few hundred milliseconds can make a difference. In distributive applications, by contrast, delays are not so critical. The newer applications are skewed more towards the conversational than the distributive.

4. For a detailed overview of bandwidth requirements of different emerging applications, see "Multimedia networking performance requirements" by James D. Russell in Asynchronous Transfer Mode Networks, edited by Y. Viniotis & Raif O. Onvural, Plenum Press: New York, 1993.

5. For a more detailed discussion of the telcos' and cable companies' involvement in the Internet, see Paula Bernier's "Opportunities abound on the Internet" in Telephony, vol. 226 (13), March 28, 1994.

6. TAP-INFO is an Internet distribution list run by the Taxpayer Assets Project, a Washington-based organization founded by Ralph Nader. This letter, which was posted on various conferences across the Internet, requested a signature campaign addressed to Steve Wolff, Director of Networking and Communications for the NSF.

7. For a detailed and well-argued thesis on the difficulty of allocating joint costs in the telephone industry, see John T. Wenders' "Deregulating the Local Exchange" in Perspectives on the Telephone Industry: The Challenge of the Future, edited by James H. Alleman & Richard D. Emmerson, Harper & Row, New York, 1989.

8. They also report that the Internet experienced severe congestion in 1987, and during the weeks of November 9 and 16, 1992, when some packet audio/visual broadcasts caused severe delay problems, especially at heavily used gateways to the NSFNET backbone and in several mid-level networks. A posting by William Manning on the telecomreg list on 4 May 1994, at 20:50:46, reports that Rice University had to shut down their campus feed because some students were playing around and feeding live video signals into the Net, saturating the link and making it unusable for other users on the ring. Varian & MacKie-Mason also report that delays varied widely across times of day, but followed no obvious pattern.

9. One is tempted to include Mosaic and Netscape as traditional applications. However, the newer forms of multimedia applications over Mosaic and Netscape are tending to skew them towards an application base that is at loggerheads with the net environment.

10. It can also be argued that the real-time transmission of a heart surgery is more beneficial than academic browsing, and this is where the essential difficulty of assigning social values based on application software, rather than on specific uses, comes in. This point will be elaborated later.

11. In the emerging architecture, the Network Access Points (NAPs) will play a crucial role. The four NAPs, as mentioned earlier, are all operated by telephone companies, with the exception of MFS, which is a Competitive Access Provider (CAP). Historically, the telephone industry is replete with stories of monopoly abuse through the control of bottleneck facilities. It would be wise to recognize that the inheritance of years of management style cannot be shed very easily.

12. The form and focus of regulation may change, however.

13. Consumer surplus in this case would be the excess bottleneck facilities.

14. Posted on telecomreg on 2 June 1994.

15. In response to my posting on telecomreg, where I invited assessments of pricing mechanisms in the context of the systemic bottlenecks that are likely to emerge.