Presented at MIT Workshop on Internet Economics March 1995

Abstract

This paper is concerned with the relationship between the costs of Internet Service Providers (ISPs) and optimal interconnection arrangements. Section 1 is an introduction to the Internet. Section 2 provides a description of the costs of ISPs, including the costs of various support activities. Section 3 develops an economic history of interconnection agreements on the Internet. Section 4 describes the layered structure of transport services that constitute the infrastructure on which the Internet is built, and describes how costs of service provision can influence the choice of an architecture for interconnection. Section 5 concludes.

1. Introduction: What is the Internet?

The Internet is a network of networks connecting a large and rapidly growing community of users spread across the globe. Individuals use the Internet to exchange e-mail, to obtain and make available information on file servers, and to log on to computers at remote locations. In July 1994, there were 3.2 million host computers on the Internet. The number of hosts on the Internet has been doubling every year for the past five years. The number of users on the Internet is not known with any certainty. Estimates range from 2 million to 30 million users (New York Times, 8.10.94, page A1). The revenue generated by Internet Service Providers (ISPs) is not known with certainty, either. The Wall Street Journal of June 22, 1994 reported that NEARNet, an ISP serving New England, had annual revenues of $5 million in 1994. AlterNet, a national ISP, reported an annual revenue of $11 million for 1994, and a growth rate of 50% (Press Release, July 11, 1994). Maloff (1994) estimates that 1994 revenues for all ISPs will be $118 million, more than double the revenues in 1993.

The Internet is a loose federation of networks, each of which is autonomous. There is no central point of control and no overarching regulatory framework. More detailed descriptions of the Internet's history and organizational structure can be found in numerous sources, including Comer (1991) and MacKie-Mason & Varian (1994).

2. Internet Services and their Costs

Internet Service Providers (ISPs) offer their customers a bundle of services that typically includes hardware and software, customer support, Internet Protocol (IP) transport, information content and provision, and access to individuals and information sources on the Internet.

The service mix varies across providers and over time. Customers usually obtain an access link from their location to the ISP's nearest node. While many ISPs will arrange for this connection and pass the cost on to the customer, the access link is not usually considered a service offered by the ISP. The major exceptions are access through an 800 number and other arrangements in which the called party pays for the telephone call (such as Feature Group B access).

Access to "the Internet" is a minor miracle that is often taken for granted. There are dozens of commercial ISPs offering a variety of service options. At the low end are dialup accounts with limited electronic mail capability suitable for some individuals. At the high end is 45 Mbs connectivity that is suitable for institutions with sophisticated campus LANs, such as large universities. At the level of basic e-mail connectivity, all customers can reach, and be reached by, the same set of people and machines. In this sense, the Internet is like the Public Switched Telephone Network (PSTN). A difference between the Internet and the PSTN is that broad connectivity on the Internet resulted without explicit regulations or government mandates on interconnection. A major purpose of this paper is to describe the economic environment that resulted in this connectivity, and to analyze how fundamental changes in the economic environment will affect the connectivity of the Internet in the future.

As a prelude to this analysis, we describe the cost structure of Internet service provision. Two caveats are in order. First, we focus on the incremental costs of Internet service provision, with occasional references, where relevant, to examples of Internet costs that are borne directly by end users. The Internet has been built incrementally on a very expensive infrastructure, which includes the facilities of the telephone companies, the computing environment of end users (including department LANs and systems administration), and the campus LANs around which the original regional networks were built. ISPs pay for a part of these infrastructures through their purchase of leased lines and, in some cases, payments to universities for rent and local administration. For the most part, the joint costs of the infrastructures are picked up directly by end users and are not included in the prices charged by ISPs. Second, in the absence of results generated by a more methodical approach, we rely on anecdotal evidence. Below, we list the major categories of cost and describe which elements of cost are sunk, fixed, and variable. The analysis is a useful first step toward understanding Internet competition and interconnection arrangements.

2.1 Costs of Hardware and Software

Customers have a choice between dial-up and leased line access to the Internet. Dial-up access is of two types: shell accounts and SLIP (Serial Line Internet Protocol) or PPP (Point to Point Protocol) accounts. With a shell account, a customer uses his computer, a modem and communications software to log onto a terminal server provided by the ISP. The terminal server is connected to the Internet, and the customer can use the Internet services that the ISP has enabled for his shell account. As most potential purchasers of shell accounts already own a computer and a modem, the incremental hardware and software costs on the user's end are negligible.

An ISP offering dial-up service must purchase a terminal server, a modem pool, and several dial-up lines to the telephone network. In March 1993, the World (an ISP selling shell accounts in Boston) reported using a Solbourne SPARCserver as its host computer. At that time, the host computer was reported to have 256 MB of memory and 7 GB of disk space. The World had sixty-five incoming lines serving approximately 5,000 customers. By August 1994, the number of subscribers had doubled to over 10,000. The World's SPARCserver was reported to have 384 MB of memory and 16 GB of disk space. In 1994, the World did not report on the number of incoming lines, but there are reliable estimates that this number has doubled.

The costs of supporting shell accounts are partly fixed and partly variable. When the number of customers and the amount of usage increase, increases in computer memory, disk space, and the number of incoming lines may be necessary. The link from the ISP to the Internet (typically through one of the major backbone providers such as AlterNet, PSI, or the NSFNET) may also need to be upgraded. These upgrades are usually lumpy.

SLIP and PPP accounts require software in the customer's host computer to packetize data according to the IP protocol suite, and format the packet for transmission over a telephone line. A suitable modem is required. The hardware and software costs of SLIP connectivity are comparable to similar costs for shell accounts. However, SLIP software has been difficult to configure in the past, and has often been priced above dial-up shell accounts. At the ISP end, costs are incurred in purchasing dial-up routers and inbound telephone lines. No terminal server is required. Additional customers and additional use will eventually result in additional costs for ports on the router and upgrades in the link from the router to the rest of the network.

Leased line customers typically have multiple users who are already connected to an enterprise network consisting of one or more LANs. Internet connectivity for these customers often requires the purchase of a router and CSU/DSU (Channel Service Unit/Data Service Unit). AlterNet's prices for equipment suitable for 56 Kbs and T1 connections are about $2,500 and $5,700, respectively. At the ISP end, additional hardware costs may include the purchase of a matching CSU/DSU and a port on a router, or an additional router if existing routers are fully loaded. Hardware costs increase in a lumpy manner with the number of customers. Upgrades of the ISP's internal links may also be necessary. Prices for leased line service vary considerably among providers.

The hardware and software costs described above are part of the costs of obtaining Internet service, as is the cost of the users' computers, and the LAN infrastructure in which large customers have invested. This is an important feature of Internet economics: substantial elements of cost are borne by the user and not the ISP. Consequently, user costs are considerably higher than the charges set by the ISP. The incremental costs of Internet connectivity are small in comparison to the larger investments that potential customers have already made.

2.2 Costs of Customer Support

ISPs incur support costs when a customer is acquired, on an ongoing basis during the business relationship, and when the business relationship is terminated. Service establishment may require a credit check, consultation with the customer on the appropriate choice of service options, a billing record that accurately reflects the customer's selected options, facilities assignment, configuration of the ISP's network to recognize the new customer, analysis of the network infrastructure for possible upgrades to support the added load, and other activities necessary to maintain service at the level expected by customers. In addition, some initial debugging may be required to ensure that the hardware and software at both ends of the connection interoperate. Ongoing customer support is required during the business relationship. Large corporate customers may upgrade their LAN hardware and/or software, which may require reassignment of IP addresses and reconfiguration of their Internet link. Individual dial-up customers may upgrade their operating system software, or install new Internet search tools, and they may require help with configuration. ISPs must also undertake network management and maintenance activities to assure an acceptable quality of service. Costs at service termination include a final settling of accounts, and reconfiguring routers and domain name servers to ensure that the records accurately reflect the termination of the relationship. While all customers require some support, the level and cost of supporting customers varies widely across individual customers. A technical description of support activities can be found in D.C. Lynch & M.T. Rose (1993), Chapter 14.

BARRNet's service description in January 1993 (obtained via anonymous ftp) provides some information on the nature and cost of service activation. When it was founded in 1986, BARRNet did not view customer support as a component of its service mix:

BARRNet was conceived and implemented as a network of networks. It connects "sites" or "campuses" rather than individual computers. Our assumption has been generally that our member sites operate their own networks, and support their own users. BARRNet is then more a provider of "wholesale" network service than "retail service...".

This view had changed substantially by 1993, when a wide range of support services were offered. For example, by 1993, T1 connectivity was offered in two flavors. Full Service had a nonrecurring fee of $17,000. With this option, BARRNet owned, operated and maintained the hardware at the customer's end, provided spares, and upgraded the software as necessary. The Port-only option required the customer to provide the router at its location, and to assume responsibility for configuration, management, and maintenance at its end. The non-recurring fee for Port-only service was $13,000 (24% less than for Full Service). It can be inferred that the cost of configuring, managing, and maintaining a router added up to approximately $4,000 over the expected lifetime of the contract.

Other elements of customer support were offered as unbundled options. The Basic Internet Connectivity Package, priced at $1,500, included assistance in acquiring an Internet number and domain name, specification of a hardware platform for domain name service and e-mail, configuration of the platform, and training for one person in the maintenance of the platform. The Deluxe Internet Connectivity Package, priced at $3,000, offered the following additional services: specification of equipment to secure internal networks, configuration of packet filters, configuration of secure mail servers on the internal network, and configuration of a Network News server. Additional consulting services such as specification of technical platforms, remote monitoring of internal links, and training, were available at $125 per hour.

Currently, BARRNet's equipment and installation fee for a Full Service T1 customer is $13,750, and a two year prepaid contract for service is $22,800. For high speed connections (T1 and 56 Kbs), the nonrecurring charge for equipment and service activation exceeds the ongoing charge for a year's service. For low speed service, the installation fee is about half the annual service fee. BARRNet's current service activation fee varies from $13,750 for Full Service T1 to $1,300 for 14.4 Kbs leased line or dial-up SLIP or PPP access. Port-only T1 service costs $2,000 less at installation than Full Service. While the additional charge of the Full-Service option has fallen by 50% between 1992 and 1994, the hourly charge for consulting services has risen to $175 per hour, an increase of $50 per hour. This suggests that the costs of standard support tasks have fallen, while the cost of customized advice has risen sharply.

AlterNet provides another data point. It charges a nonrecurring fee of $5,000 for T1 service; the fee does not include necessary hardware and software, but does include help in configuring the customer's router. While these charges vary across ISPs, similar patterns emerge: the charge associated with account activation is a very significant component of a customer's cost. To the extent that there is active competition among ISPs, the structure of prices reveals, at least partly, the structure of ISP costs; and customer support appears to be significant. In sum, there appears to be considerable expense and effort involved in connecting a new customer to the Internet, and considerable variation in cost across support services and customer types.

2.3 IP Transport

In this section, we discuss some relevant aspects of IP transport: the method by which an IP packet is transported from one location to another. Within a campus LAN, the entire IP packet (data plus header) is treated as a data unit by the LAN, encapsulated in a LAN packet and transported in accordance with the LAN's protocols. An important consequence is that LAN protocols will determine access to the LAN's bus. From an economic point of view, the significance is that the IP header cannot be used to allocate a potentially scarce resource (access to the bus) without accommodation by the underlying LAN protocol. The identification of bottlenecks and the design of resource allocation mechanisms in a layered architecture is complex and beyond the scope of this paper.

On wide area networks built on private lines, contention for the scarce resource takes on a different form. A router receives packets from various interfaces, consults its routing table, and forwards each packet on the appropriate outward link. When the incoming rate exceeds the outgoing rate, packets can be temporarily queued. The IP header has a Type of Service (TOS) field that can, in principle, be used to manage queues in routers, but this function has not been implemented widely. Bohn, Braun, Claffy & Wolff (1993) describe an episode during the mid 1980's when router queues were managed to offer priority to some delay-sensitive uses of the network. They propose that the TOS field be implemented and used to manage access to congested links. When IP runs over private lines (or any time-division multiplexed service), the IP protocol can be used to implement smart markets or other congestion management schemes without any change in protocols at lower layers in the stack.
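
As a concrete illustration of the queue management idea, the following Python sketch shows how a router's output queue could drain packets in order of a TOS-like priority field during congestion. It is a toy model: the packet fields and the priority ordering are invented for illustration and do not reflect any deployed router or the actual TOS semantics.

    import heapq

    class Router:
        """Toy output queue that drains packets in priority (TOS-like) order.
        Field names ('tos', 'payload') are illustrative, not from any real router."""
        def __init__(self):
            self._queue = []
            self._seq = 0  # tie-breaker preserves FIFO order within a priority class

        def enqueue(self, packet):
            # Lower numeric value = higher priority; heapq pops the smallest first.
            heapq.heappush(self._queue, (packet["tos"], self._seq, packet))
            self._seq += 1

        def dequeue(self):
            # Called once per outgoing transmission slot on the congested link.
            if self._queue:
                return heapq.heappop(self._queue)[2]
            return None

    # Example: a delay-sensitive packet jumps ahead of a bulk transfer.
    r = Router()
    r.enqueue({"tos": 4, "payload": "bulk file chunk"})
    r.enqueue({"tos": 1, "payload": "interactive keystroke"})
    print(r.dequeue()["payload"])   # -> interactive keystroke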

Wide area networks that use so-called "cloud technologies" (also called fast packet services) such as Frame Relay (FR), Switched Multi-Megabit Data Service (SMDS) and Asynchronous Transfer Mode (ATM) raise a different set of economic issues. Fast packet services statistically multiplex packets over time slots carried on underlying physical facilities. The fast packet technology treats the IP packet as a Protocol Data Unit, just as LANs do. As was the case with LAN technologies, the IP header, by itself, cannot deal adequately with resource allocation issues. Below, we briefly describe the role of IP transport in the provision of Internet services.

2.3.1 Costs of Transporting IP Packets over Private Lines

In the late 1980's most ISPs had internal backbones consisting of routers connected redundantly to one another by private lines ranging in speed from 56 Kbs to 1.5 Mbs. Most lines were leased from telephone companies. There were a few exceptions. NEARNET, in Boston, had its own wireless Ethernet (10 Mbs) backbone connecting five nodes, and BARRNet had its own wireless link between the University of California at San Francisco and Berkeley. Statistical multiplexing of IP packets on the underlying leased lines led to significant cost savings. The cost of transporting IP packets was determined by leased line tariffs, the costs of the routing hardware and software at the nodes, and the ongoing costs of monitoring the network and remedying supply disruptions in a timely manner. These costs were fairly substantial. MacKie-Mason and Varian (1993) estimate that the costs of leased lines and routers amounted to 80% of total NSFNET costs. The cost of the Network Operations Center (NOC) amounted to another 7%. With high transport costs, the ability to use bandwidth efficiently through statistical multiplexing is a major benefit. However, it should be noted that the NSFNET service provided by ANS (a nonprofit joint venture of IBM, MCI, and the State of Michigan) was part of a research experiment in high speed networking funded by the NSF. Almost all NSFNET customers were large regional networks, not end users, and ANS's cost structure differed significantly from that of other ISPs. An estimate based on an analysis of several midlevels suggests that IP transport accounts for 25 to 40% of a typical ISP's total costs. While efficient use of bandwidth was important for these ISPs, it was not as important an issue as it was for the NSFNET.

For many ISPs, transport costs are sunk over the business planning horizon. A brief digression on leased line tariffs may be useful, both for an understanding of IP transport costs and because the evolution of leased line prices may foretell similar developments on the Internet. Currently, most long haul transmission links are provided over optical fiber. The major cost of constructing fiber optic links is in the trenching and labor cost of installation. The cost of the fiber is a relatively small proportion of the total cost of construction and installation. It is therefore common practice to install "excess" fiber. According to the FCC's Fiber Deployment Update (May 1994), between 40 and 50% of the fiber installed by the typical interexchange carriers is "dark"; the lasers and electronics required for transmission are not in place. The comparable number for the Major Local Operating Companies is between 50 and 80%. Private lines are provided out of surplus (lit and unlit) capacity available in the networks constructed by telephone companies. The incremental cost of providing private line service is determined by the costs of lighting up fiber if necessary (lasers plus electronics at the ends), the costs of customer acquisition (sales effort and service order activation), and ongoing costs of maintaining a customer account. Private line tariffs must recover these incremental costs and contribute to the very substantial sunk costs of the underlying facilities. Furthermore, these tariffs are set in a very competitive environment. The effect of this competition has been to drive down the price of leased capacity. According to Business Week, private line prices have fallen by 80% since 1989 (Dangerous Living in Telecom's Top Tier, September 12, 1994, page 90).

The cost structure described above, together with competitive forces, has resulted in two increasingly common features of the price structure: volume discounts and term commitments. The standard interLATA private line tariff consists of a non-recurring charge and a monthly charge based on the airline mileage between the two locations to be connected. Customers can select optional features at an extra charge. The standard charges vary with the bandwidth of the private line, but there are no usage sensitive charges. Private line tariffs offer discounts based on volume and term commitments. AT & T's Accunet 1.5 (T1) tariff offers a discount of 57% to customers whose monthly bill (at standard rates) for a specified bundle of services, including T1 lines, exceeds $1 million, and who commit to maintaining that level of expenditure for five years. The volume discount may reflect the fact that large customers are more desirable: it may be less costly selling to one customer with a $1 million bill than to 1,000 customers each of whom has a $1,000 bill. The term commitment may be the telephone company's response to the long run cost structure of building physical networks and the high cost of churn: as there are fixed costs of service activation and termination, companies seek to provide their customers with incentives to be loyal.
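
The tariff structure just described can be summarized as a simple pricing function. The sketch below is illustrative only: apart from the 57% discount and the $1 million, five-year thresholds cited above, the rates and the intermediate discount tier are hypothetical placeholders, not AT & T's actual schedule.

    def monthly_private_line_charge(miles, per_mile_rate, fixed_rate,
                                    monthly_volume, term_years):
        """Stylized interLATA private line bill: a flat charge plus an airline-mileage
        charge, with discounts for volume and term commitments. All rates and the
        intermediate discount tier are hypothetical; only the 57% figure is from the text."""
        base = fixed_rate + per_mile_rate * miles
        discount = 0.0
        if monthly_volume >= 1_000_000 and term_years >= 5:
            discount = 0.57          # largest discount cited in the text
        elif monthly_volume >= 100_000 and term_years >= 3:
            discount = 0.30          # placeholder for an intermediate tier
        return base * (1 - discount)

    # A large committed buyer pays well under half the standard rate per line.
    print(monthly_private_line_charge(500, 8.0, 2500, monthly_volume=1_200_000, term_years=5))
    print(monthly_private_line_charge(500, 8.0, 2500, monthly_volume=20_000, term_years=1))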

In a competitive environment with excess capacity, there is a tension between the large sunk costs of physical networks and very low incremental costs of usage. On the one hand, the need to recover sunk costs suggests using price structures with high up-front charges and low (or zero) usage rates. On the other hand, with significant excess capacity present, short-run profits can be increased by selling at any price above incremental cost. Economic theory would suggest that the pricing outcome in this situation might be unstable, unless regulatory forces or other influences inhibiting competition were present.

The consequence of the leased line tariff structure described above for the cost of IP transport is straightforward. Given a high nonrecurring service order charge, ISPs with leased line backbones have an incentive to size their needs over a three to five year period, and commit to a level of purchase determined by projected demand. In a rapidly growing Internet, this can result in substantial excess capacity among ISPs in the short run. The incremental cost of carrying IP packets will be close to zero. (If private lines charged for usage, this would not be true.) However, the sunk costs of IP transport can be substantial. An examination of ISP network maps in mid-1993 suggested that none of the national providers had backbones large enough to qualify for AT & T's largest discount. However, many ISPs were large enough to qualify for smaller, but nevertheless substantial, discounts on three to five year contracts. Competition among these ISPs may be subject to the economic tension present in the private line market. Indeed, the use of volume discounts and term commitments is emerging in the ISP market. ISPs typically charge their T1 customers twice the rate they charge their 56 Kbs customers, even though the T1 customers have 24 times the bandwidth. Term commitments can be seen in BARRNet's price structure. 56 Kbs customers are offered a 17% discount over monthly rates if they take a two-year prepaid contract.
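
The arbitrage opportunity created by prices that are not proportional to bandwidth can be made concrete with a small calculation. The dollar figures below are hypothetical, chosen only to match the 2:1 price ratio and the 24:1 bandwidth ratio mentioned above.

    # Hypothetical monthly prices chosen to match the 2:1 price ratio noted in the text.
    price_56k = 1000.0                 # dollars per month for 56 Kbs service
    price_t1  = 2 * price_56k          # T1 priced at roughly twice the 56 Kbs rate
    bandwidth_ratio = 24               # the text's "24 times the bandwidth"

    per_kbs_56k = price_56k / 56
    per_kbs_t1  = price_t1 / (56 * bandwidth_ratio)
    print(per_kbs_56k, per_kbs_t1)     # ~17.9 vs ~1.5 dollars per Kbs

    # Gross margin for a reseller that buys one T1 and resells it as 56 Kbs ports,
    # ignoring oversubscription, routers, and support costs:
    print(bandwidth_ratio * price_56k - price_t1)   # 22,000 dollars per month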

There are at least three types of ISPs whose cost structures do not fit the model described above. One type is represented by Sprint, which offers a national IP service, SprintLink. As Sprint owns a large national fiber-optic network with substantial excess capacity (45% dark in 1992), it faces a lower incremental cost for transport provision than other ISPs who lease lines. On the other hand, it has far higher sunk costs. The second type of ISP is represented by small mid-levels or regional networks. These ISPs obtain access to the global Internet by connecting to a larger ISP. Very often, this larger ISP is ANS, which historically provided inter-regional connectivity to the regional networks sponsored by NSF. The third type of ISP is the small reseller, a group that appears to have grown rapidly in the past year or so. To the larger ISP, the reseller often appears to be a customer with 56 Kbs or T1 access. The mid-level or reseller has a very small (perhaps non-existent) backbone: customers are responsible for the connections to the reseller's node (and there may be just one node), and the reseller purchases the connections to the larger ISP (and there may be just one connection). The small ISP/reseller has relatively small sunk costs and little excess capacity. According to an article in Forbes (November 23, 1993, p. 170), small providers working out of their basements require an initial investment of $30,000 for electronic gear and about $1,000 per month for telephone connections. For these providers, incremental costs for transport are relatively high as significant volume discounts are not applicable on the links to their customers and to the larger ISP.

Not surprisingly, the range of prices charged by service providers varies widely. In a competitive environment where the cost structures of different providers are radically different, where average costs are very different from incremental costs, and where there is substantial excess capacity in one key input (raw bandwidth), the equilibrium outcome is not obvious. The prices for shell and SLIP accounts vary dramatically among providers, and serve to make the point. AlterNet, which leases an extensive international backbone, charges $20 per month for the basic account, another $10 for e-mail service, another $10 for USENET news and $3 per hour for direct dial to an AlterNet POP. The one-time fee is $99. At the other extreme, The Connection, a local provider serving the 201 area code in New Jersey, charges $10 per month for a shell account, has no fees for e-mail or USENET news, no usage charge and will waive the sign-up fee of $20 for customers who sign a 12 month contract. Scruz-Net, a small network in Santa Cruz, offers single-host SLIP/PPP connectivity at 28.8 kbps for $25 per month, with an allowance of 100 free hours. JVNCnet, a commercial provider with a backbone spanning many states, offers SLIP access at $59 per month and $4.95 per hour. Whether these large price variations are accompanied by large quality variations is not known.

In sum, the cost structure of IP transport provision varies considerably among ISPs. The four broad classes include providers who own a physical network, national backbones based on leased facilities, small regional networks, and resellers. Sunk costs of transport are highest for the first type and lowest for the last. Variable (with number of customers and usage) costs are lowest for the first type and highest for the last. Prices across providers are highly variable.

2.3.2 The Impact of Fast Packet Technologies on IP Transport Cost

The introduction of new fast packet services such as Frame Relay, SMDS, and ATM may have a significant impact on an ISP's cost structure and its role as a provider of low cost transport. Fast packet services (also referred to as cloud technologies) statistically multiplex variable size packets or fixed size cells onto time slots carried on an underlying physical facility. IP packets ride on top of this statistically multiplexed service. As was true of LANs, a fast packet service will treat an IP packet (header plus data) as a data unit, add its own header, and transport it over the underlying network in accordance with its own rules. Currently, NEARnet and PSI run IP over Frame Relay, Cerfnet runs IP over SMDS, and AlterNet runs IP over Ethernet over ATM. The additional statistical multiplexing gains of IP transport over those obtained by the underlying "cloud" service will be less than the gains obtained when ISPs used private lines. The extent to which an ISP can offer additional multiplexing gains will be determined in part by the proportion of the traffic over the physical link that is generated by the ISP; the higher this proportion, the greater the potential gain generated by the ISP.

Early tariffs for fast packet services had monthly rates that varied with bandwidth. There were no charges for usage or for distance. One example is a Frame Relay tariff filed by US West in September 1992. This tariff offers a small discount (about 10%) to customers who commit to a five year contract. MCI's SMDS price structure (announced in August, 1994) is considerably different. There are usage and distance charges, but the usage charges are capped at relatively low levels (Business Wire, August 16, 1994). For large users (with an access speed of 34 Mbps), MCI's SMDS price can vary from $13,000 to $20,000 per port per month, depending on the customer's usage. There is no mention of a discount for term commitment. This may be because MCI is the only Interexchange Carrier to offer SMDS, and is not concerned with customer churn.

Both tariffs discussed above show a movement away from the deep term discounts that characterize private line charges. ISPs who lease fast packet services from others will have a weak incentive to sign long-term contracts for their backbones. A smaller proportion of their transport costs will be sunk. In addition, connectivity among multiple ISPs can be established at significantly lower costs than was possible with private lines. Once an ISP pays a flat rate to connect to a fast-packet cloud, the incremental costs of virtual connections to multiple ISPs are very small. With MCI's SMDS service, for example, there are no additional costs involved in communicating with multiple SMDS customers (though an ISP will have to pay for each port it connects to on the SMDS cloud). Small ISPs with a few nodes can reach out to anyone else on the national SMDS cloud without investing in a national backbone consisting of multiple private lines. With Frame Relay and ATM service, there are incremental costs of reaching additional sites on the cloud, as Permanent Virtual Connections (PVCs) must be configured and managed. However, these costs are relatively small (as little as $1.19 per month in US West's tariff for Frame Relay).
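
A back-of-the-envelope comparison suggests why clouds lower the cost of connecting many ISPs. In the Python sketch below, the private line and port rates are hypothetical placeholders; only the $1.19 PVC charge is taken from the US West tariff cited above.

    def mesh_cost(n_isps, line_cost=2000.0):
        """Every pair of ISPs leases a dedicated private line (hypothetical monthly rate)."""
        return n_isps * (n_isps - 1) // 2 * line_cost

    def cloud_cost(n_isps, port_cost=3000.0, pvc_cost=1.19):
        """Each ISP buys one port on the cloud plus a PVC to every other ISP.
        The port rate is a placeholder; the PVC rate is US West's figure cited above."""
        return n_isps * (port_cost + (n_isps - 1) * pvc_cost)

    for n in (5, 10, 20):
        print(n, mesh_cost(n), round(cloud_cost(n)))
    # The mesh grows with n*(n-1)/2 lines; the cloud grows almost linearly in n.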

Even if IP transport provides minimal multiplexing gains when it is run over a cloud technology, IP service will perform important functions. These include uniform global addressing (Frame Relay, for example, has reusable addresses with only local significance) and wide connectivity, protocol conversion across varying LAN, MAN and WAN technologies, and an important "bearer service" role that helps insulate embedded investments from ongoing technical change in network hardware. Most of these functions can be performed at the edge of the network, and IP may migrate to the network border over time. This may accelerate the evolution of some ISPs into systems integrators, or "market-makers," in the terminology of Mandelbaum & Mandelbaum (1991).

2.3.3 The Impact of Multimedia Traffic on IP Transport Costs

The share of transport in ISP costs is likely to change as new multimedia applications grow in popularity. Voice, video, data, and images differ in the requirements they place on the network, and raise difficulties for the use of IP transport in its current form.

Efficient coding schemes vary greatly for the different media; an excellent discussion of coding schemes can be found in Lucky (1991). Voice can now be digitized and compressed by a factor of 16 to 1 on commercially available chips, such as Qualcomm's Q4400 vocoder, which claims to achieve near-toll voice quality at less than 10 Kbs. Video compression using the MPEG standard allows for VCR picture quality at a bandwidth of 1.5 Mbs. Video applications over the Internet (Mbone and CU-SeeMe) use a different compression scheme. The Mbone uses the JPEG standard to digitize each frame and transmits data using UDP and IP tunneling. These techniques offer low picture quality at 100 to 300 Kbs (2 to 10 frames per second). Transmission of data files requires no specific bandwidth; there is a tradeoff between bandwidth and delay. For many current applications (e-mail, file transfer, and even fax) 9.6 Kbs is adequate, and store-and-forward techniques are acceptable. High quality image transfer (such as that needed in medical applications) requires considerably more bandwidth; schemes using lossy JPEG allow for the transfer at 56 Kbs in reasonable time.
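
The bandwidth-delay tradeoff for store-and-forward data is simple arithmetic, as the sketch below illustrates; the file sizes are invented for illustration.

    def transfer_seconds(file_bytes, link_kbs):
        """Time to move a file over a link of the given speed (Kbs = kilobits per second),
        ignoring protocol overhead and congestion."""
        return file_bytes * 8 / (link_kbs * 1000)

    one_page_email = 4_000           # bytes, illustrative
    medical_image  = 10_000_000      # bytes, illustrative
    for kbs in (9.6, 56, 1536):
        print(kbs, round(transfer_seconds(one_page_email, kbs), 1),
              round(transfer_seconds(medical_image, kbs) / 60, 1))
    # E-mail moves in seconds even at 9.6 Kbs; a large image takes over two hours
    # at 9.6 Kbs, about 24 minutes at 56 Kbs, and under a minute at T1 speed.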

If bandwidth were essentially infinite, variations in bandwidth use would not be a problem for mechanisms like IP that treat all packets equally. However, most ISPs pay for added bandwidth, and must treat it as a scarce resource. A simple computation highlights the problem. At 300 Kbs per video session, only 150 simultaneous sessions are needed to fill a 45 Mbs (T3) link on the NSFNET, the Internet backbone with the highest speed links. The congestion created by video use is pernicious; it destroys some valuable mechanisms that are part of the Internet's discipline and efficiency. Transmission Control Protocol (TCP) is used by host computers to provide a reliable byte stream to the applications that are run by an end user. TCP selects a window size, which determines the number of packets it can send to the other side before stopping for an acknowledgment. Large window sizes allow for faster throughput. With the implementation of the slow start mechanism, TCP monitors round trip times, and if it detects congestion, reduces the window size, and contributes to better system behavior. Video sessions use UDP (User Datagram Protocol). Unlike TCP, UDP does not reduce its transmission rate during periods of congestion. Other users running data applications over TCP pay disproportionately in delay when video sessions congest any link. If congestion should get severe, TCP users may have an incentive to stop using the slow start mechanism.
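
The asymmetry between TCP and UDP under congestion can be sketched with a toy model of window adjustment. This is a deliberately simplified caricature of slow start and congestion avoidance, not an implementation of any real TCP stack.

    def tcp_window_trace(rounds, congested_rounds, max_window=64):
        """Simplified growth/backoff behavior: the window grows each round trip but
        is cut in half whenever congestion (a lost or delayed acknowledgment) is detected."""
        window, trace = 1, []
        for r in range(rounds):
            if r in congested_rounds:
                window = max(1, window // 2)   # back off, relieving the shared link
            else:
                window = min(max_window, window * 2 if window < 16 else window + 1)
            trace.append(window)
        return trace

    print(tcp_window_trace(12, congested_rounds={6, 7}))
    # A UDP video source, by contrast, keeps sending at its chosen rate through
    # rounds 6 and 7, so the TCP users absorb the entire congestion penalty.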

ISPs recognize that there is a problem with high bandwidth users and uses. In the absence of pricing solutions that can be implemented with the IP header structure, they have resorted to blunt instruments. For example, anonymous ftp sessions on NEARnet's server begin with a welcome message announcing that only paying subscribers may access the Internet Talk Radio files. In view of the fact that some of these files are greater than 30 MB, the prohibition appears to be motivated by a desire to preserve bandwidth. If the Internet supported a more sophisticated billing mechanism, or if the cloud technology underlying IP service supported multiple quality of service types, blanket prohibitions such as this may not be necessary. Current work on IPng and real time protocols may solve the bandwidth allocation problem at some point in the future. In the meantime, IP remains a very cost-efficient transport mechanism for applications (like e-mail and image transfer) which are not affected by delay.

2.3.4 Summary of Transport Costs

The costs of providing IP transport represent a substantial fraction (25-40%) of an ISP's cost. This proportion will fall as less costly fast packet services are more widely deployed. However, the increase in the use of multimedia applications may result in a proportionally greater increase in the need for bandwidth. The tension between satisfying customers with bandwidth-intensive needs and satisfying customers with low-bandwidth applications cannot be efficiently resolved with current technology. MacKie-Mason & Varian (1992) have suggested a smart-market mechanism that allows bandwidth to be efficiently priced. An attractive feature of their pricing scheme is that it generates the correct signals for investing in new capacity. More work needs to be done in this area, taking into account the more complex layered structure that is now emerging in the Internet. In addition, the issue of bundling or unbundling transport, support, and information content needs to be addressed by new pricing approaches.
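
A minimal sketch of the smart-market idea may help fix intuition: each packet carries a bid, the congested link admits packets in bid order up to its capacity, and all admitted packets pay the bid of the highest rejected packet. This follows MacKie-Mason & Varian's proposal only in broad strokes; the packet names, bid values, and capacity below are invented for illustration.

    def smart_market(packets, capacity):
        """packets: list of (packet_id, bid) pairs; capacity: packets per interval.
        Returns the admitted packet ids and the single clearing price they all pay."""
        ranked = sorted(packets, key=lambda p: p[1], reverse=True)
        admitted, rejected = ranked[:capacity], ranked[capacity:]
        clearing_price = rejected[0][1] if rejected else 0.0   # marginal bid sets the price
        return [pid for pid, _ in admitted], clearing_price

    packets = [("email", 0.1), ("video-1", 2.0), ("video-2", 1.5), ("ftp", 0.3), ("telnet", 0.8)]
    print(smart_market(packets, capacity=3))
    # (['video-1', 'video-2', 'telnet'], 0.3): users pay only for the congestion they cause,
    # and a persistently positive clearing price signals that added capacity is worth buying.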

2.4 Information Content and Provision

There is usually a fixed cost associated with the production, formatting, and organization of information suitable for database applications and a low cost of duplication (Perritt (1991)). Some examples suggest that the revenue generated by information provision will greatly exceed the revenue generated by the underlying transport. Audiotex services (900 and 976 numbers) usually set per minute charges that are many multiples of standard toll charges. Further supporting evidence can be found in the price list published by Dialog. The connect time charge for transport has traditionally been in the $3-$12 per hour range, depending on the network used to access Dialog. Once the user has logged on, the charge for database access ranges from $15 to $300 per hour. Peter Huber, in the Geodesic Network (U.S. Department of Justice, 1987), reports that on-line services spend only 8-10% of their expenses on local and long distance transport. Approximately 40-45% of their expenses are spent in acquiring information content, and approximately 45% is spent on sales, marketing, and administration.
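
The disparity between transport and content charges is visible in a single hypothetical session. The rates below are drawn from the ranges quoted above; the session length and the particular points chosen within those ranges are invented for illustration.

    hours = 2.0                      # illustrative session length
    transport_rate = 12.0            # top of the $3-$12 per hour range cited above
    database_rate = 100.0            # within the $15-$300 per hour range cited above

    transport = hours * transport_rate
    content = hours * database_rate
    print(transport, content, round(transport / (transport + content), 2))
    # Even at the highest transport rate, transport is roughly a tenth of the bill,
    # broadly consistent with Huber's 8-10% figure for on-line services.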

For much of its life, the Internet has offered "free" information. This reflects the Internet's roots in the academic community, which encourages the free dissemination of scholarly research. In addition, various government agencies have begun to use the Internet as a means of making public information available in electronic form. For example, the FCC posts its Daily Digest on an ftp server. Archives of Usenet groups and the ongoing contributions to newsgroups provide another source of free information. The use of Web servers to establish company "presence" on the Internet represents yet another source of "free" information; the supplier of the information pays to display advertisements in a non-intrusive way.

There is, however, a rapid increase in the number of subscription-based information services on the Internet. All the major information services (Dialog, Orbit, etc.) offer telnet access to subscribers. Dialog provides an itemized bill which separately lists charges for transport and database access. For-pay service appears to be taking root in the university community too. The ORION system at UCLA charges for access and usage, with a minimum charge of $25 per month (CERFnet News, January 1991). The clari.* hierarchy in Usenet is available only to paying customers. The charge is $75 per site plus $1 per user at the site.

It appears likely that the Internet will see a variety of free and for-pay information services develop. While for-pay information services are accustomed to paying for all transport charges incurred by their clients, and billing back for network use, providers of free information may resist this arrangement for the recovery of transport cost, as they have no paying client to bill back. Currently, the Internet satisfies the academic community's needs. However, if budgetary pressures on academia should increase, universities may feel the need to charge for the use of information they produce, and the needs of this community may align with those of the on-line industry.

There is a considerable research effort under way on a variety of security, privacy, and billing mechanisms that will support commercial information provision over the Internet. The experience of the on-line industry suggests that the commercial potential of information content and provision will be significantly greater than the cost of the underlying transport, and the needs of information providers may have a significant influence on payment mechanisms for transport.

2.4.1 The Bottom Line on Costs

The early Internet was developed to meet a specific goal: the interconnection of academic sites for the purpose of open scholarly research and education. In meeting this goal, the Internet was not just successful, it was too successful. The rapid growth of the Internet into sectors of the economy that it was never designed to serve (such as banks and on-line information services) has revealed some gaps in capability that were not important to early users, but are very important to the new users. These include higher levels of customer service, greater reliability, assured security and privacy, and billing mechanisms. In response to these changing market needs, the nature of Internet service and the cost structures of service provision are being transformed. At the same time, the spread of fast packet services is reducing the Internet's value as a provider of cheap transport. As competitors continue to emphasize service quality as a differentiating factor, the share of transport in total cost will fall, and the share of existing and new support services and sales expenses will rise. As Noll (1994) has pointed out, AT & T's expenses on sales and advertising for voice services grew rapidly in absolute and relative terms after equal access was implemented and competition grew more intense.

3. Economics of Interconnection

This section develops a brief economic history of interconnection agreements on the Internet, with a view to understanding future developments.

3.1 The Early Years: 1986-1991

A logical starting point for a discussion of Internet interconnection agreements is the first NSFNET backbone, which was constructed in 1986. The NSFNET was the top tier of a growing network of networks organized as a three layer hierarchy. The second tier in the hierarchy was made up of mid-level networks, each consisting of ten to thirty universities. Many mid-levels were formed with some funding from the NSF. Each mid-level attached to a nearby NSFNET node. The bottom layer consisted of campus LANs, which attached to a mid-level network.

The Internet's hierarchical structure, crowned by a single backbone, resulted from early decisions by Internet architects. In discussing their work, Comer (1991) states: "They came to think of the ARPANET as a dependable wide-area backbone around which the Internet could be built. The influence of a single, central wide-area backbone is still painfully obvious in some of the Internet protocols that we will discuss later, and has prevented the Internet from accommodating additional backbone networks gracefully". (pp 33-4).

Early routing protocols, such as the Gateway-to-Gateway Protocol (GGP) and the Exterior Gateway Protocol (EGP), bear out Comer's observation. Internet routers typically use hop-by-hop destination routing. The router reads the destination address in the packet header, looks it up in its routing table, and forwards the packet on the next hop. Routing protocols specify the rules by which routers obtain and update their routing tables. Exterior or border gateways that connect a network to the rest of the Internet use only the network portion (the first 8 to 24 bits of the 32 bit IP address) of the destination address for routing purposes.

GGP partitioned Internet gateways into two groups: core gateways controlled by the Internet Network Operations Center and non-core gateways controlled by others. The core gateways contained full routes. All other gateways were permitted to maintain partial routes to destinations that they were directly connected to, and point a default route up the hierarchy towards a core gateway. This arrangement simplified routing at the core and non-core gateways, and reduced the amount of information that routers had to exchange over the network. The set of core gateways and the links connecting them together formed the backbone at the top of the hierarchy. It is worth noting that a major justification for a single backbone was simplicity in routing, and this was unrelated to economies of scale in transport. GGP does not easily accommodate multiple backbones connected to one another at multiple points.
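
The division of labor between core and non-core gateways can be sketched in a few lines of Python: a non-core gateway keeps routes only for the networks to which it is directly connected and points a default route toward a core gateway. The addresses and table entries below are invented, and the "network portion" is simplified to the first octet.

    def next_hop(destination_ip, routing_table, default_route):
        """Hop-by-hop forwarding on the network portion of the address.
        For simplicity the 'network portion' here is just the first octet;
        real gateways used 8 to 24 bit prefixes, as noted above."""
        network = destination_ip.split(".")[0]
        return routing_table.get(network, default_route)

    # A non-core gateway: partial routes for directly connected networks,
    # everything else defaults up the hierarchy toward a core gateway.
    partial_routes = {"10": "local-interface-1", "26": "local-interface-2"}
    print(next_hop("10.2.0.5", partial_routes, default_route="core-gateway"))
    print(next_hop("128.89.0.1", partial_routes, default_route="core-gateway"))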

We provide a brief example (based on Comer's discussion) that highlights a technical problem and a pricing puzzle that arise when multiple backbones are multiply interconnected. Suppose two coast-to-coast backbones are interconnected on the East and West coasts. Suppose Host 1 located on the East coast on Network 1 wishes to send a packet to Host 2 located on the East coast on Network 2. It would make sense for the packet to go through the East coast interconnect point. Suppose Host 1 wished to send a packet to Host 3 on Network 2 on the West coast. If Network 1 had a better (less congested) backbone, it may make sense for Network 1 to transport the traffic across the continent, and deliver it to Network 2 at the West coast interconnect. Routing will depend on the host address, not just on the network portion of the address. Routing schemes that require this level of detail are not scalable: the size of routing tables would increase too rapidly with the growth of the Internet.

Apart from such technical problems, there is an economic problem. For transcontinental communication between users on different backbones, which backbone should carry the traffic? In the absence of settlements, it is not clear that either network has any incentive to volunteer for the job. As long as backbones are not congested, the issue of transport will not be a weighty one. But in a world of rapidly growing traffic, when frequent upgrades are needed, there are good reasons for an ISP to advocate routing arrangements that use another ISP's backbone instead of its own. This is not merely a theoretical possibility. A relatively large proportion of traffic between sites in Mexico transits the United States over the NSFNET.

Of course, all ISPs cannot pursue the strategy of shifting traffic onto others' backbones successfully. An alternative is for some form of settlements among interconnected networks. The most important criteria for an efficient settlements mechanism are that it should not impose high administrative costs, that it should provide the correct incentives for routing, and that the net flow of funds should allow all suppliers to recover their costs. As the Internet goes through periods of substantial excess capacity, followed by periods of congestion and capacity expansion, different settlement mechanisms will be required. The smart market mechanism described by MacKie-Mason & Varian (1993) can adapt well to these changing circumstances. More work may be needed to accommodate routing protocols to the smart market mechanism.

The alternative to competing peer backbones is a single backbone. The drawback to this alternative is that there will be no competition for the provision of a key service: routing and long haul transport. While economists have developed sophisticated regulatory schemes to deal with the lack of competition, the practical difficulties in implementing these schemes can be enormous. The early architecture of the NSFNET apparently avoided these issues by selecting a simple routing scheme and the architecture it implied. As the net was not commercial, there was little danger of monopoly pricing, and much to be gained by centralizing the routing function.

The next generation routing protocol, EGP, addressed several weaknesses of GGP by introducing the notion of an autonomous system. Interested readers are referred to the discussion of EGP in Chapter 14 of Comer (1991). Despite its advances, EGP shared the key drawback of GGP. As Comer points out: " ...EGP is inadequate for optimal routing in an architecture that has multiple backbones interconnected at multiple points. For example, the NSFNET and DDN backbone interconnection described in Chapter 13 cannot use EGP alone to exchange routing information if routes are to be optimal. Instead, managers manually divide the set of NSFNET networks and advertise some of them to one exterior gateway and others to a different gateway."

3.2 The Current Framework: 1991-1994

From the very beginning, the business plans of key ISPs appeared to be inconsistent with one another. ANS provided a bundle of services that included full routing and long haul transport. The new commercial backbone providers had constructed national networks of their own, and had no need to purchase transport or routing from ANS. However, they did need to offer their customers full access to all Internet sites. As most of these sites were connected to ANS, an interconnection agreement with ANS would have met their customers' needs. The commercial ISPs argued that such connectivity (without routing and transport) should be settlement free. The rationale advanced by the commercial providers was as follows. When a customer of one ISP communicates with a customer of another ISP, both customers benefit. Each customer pays his ISP for his use of the ISP's network. Both ISPs are paid by their customers, and there should be no further need for the ISPs to settle. Proponents of this view recognized that their argument did not apply to transit traffic. But when there are only two networks involved, there is no transit traffic, and no settlements are required. It was this philosophy, together with the inability of the new entrants to obtain interconnection agreements with ANS on terms acceptable to them, that led to the formation of the Commercial Internet Exchange (CIX) in August 1991. The three founding members were CERFNet, PSI and AlterNet. The CIX members agreed to exchange traffic without regard to type (commercial or R & E) and without settlements. The CIX router was installed in Santa Clara and managed by PSI; the other founding members leased private lines from their networks to the CIX router.

Initially, ANS did not join the CIX. It formed a for-profit subsidiary, ANS CO+RE, and proposed a gateway agreement that would lead to full connectivity. At its core, the gateway agreement consisted of three parts. First, determine separate attachment fees for commercial and R & E customers. Second, use statistical samples of each attached network's traffic to estimate the proportion of R & E traffic. Third, charge each attached network a weighted combination of the commercial and R & E attachment fees, with weights obtained from the sample. A portion of the revenue generated by the gateway agreements was to be put in a pool that would be used to upgrade the NSFNET infrastructure. The proposals from ANS and the CIX members had little in common. Nevertheless, after some negotiation it was agreed that ANS and the CIX members would interconnect without settlements. ANS did not pay the CIX membership fee, and CIX members did not pay ANS for NSFNET services.
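
The fee rule implied by this description is a simple weighted average. In the sketch below, the two fee levels are hypothetical; only the weighting by the sampled share of R & E traffic comes from the proposal as described above.

    def gateway_fee(re_traffic_share, commercial_fee, re_fee):
        """Weighted attachment fee: the sampled share of R & E traffic weights the
        R & E fee, and the remainder weights the commercial fee."""
        return re_traffic_share * re_fee + (1 - re_traffic_share) * commercial_fee

    # Hypothetical fee levels; a network whose sampled traffic is 70% R & E
    # pays closer to the (lower) R & E rate.
    print(gateway_fee(0.7, commercial_fee=60_000, re_fee=30_000))   # 39,000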

In October 1993, the CIX, apparently without warning, blocked ANS traffic from transiting the CIX router. At this point, ANS (through its subsidiary CO+RE) joined the CIX and full connectivity was restored. In the 10 months following this episode, CIX membership rose from about 20 to about 70 ISPs.

The right of resellers of IP service to transit the CIX router has been, and continues to be, debated. The Membership Agreement has, for at least two years, contained some rules suggesting that assured connectivity was limited to the "direct" customers of member ISPs. The term was not defined in the Agreement. It now appears that, beginning in November 1994, resellers of IP services will have their packets blocked at the CIX router if they do not join the CIX and pay the $7,500 annual membership fee. The cost of resale has gone up. AlterNet requires resellers to purchase a special wholesale connection that costs about three times as much as a retail connection. In addition, AlterNet requires resellers to use a complex addressing scheme and routing protocol (BGP4) rather than the simpler PPP protocol used by end users. PSI does not sell wholesale connections. Sprint apparently treated resellers just like its other customers, but new CIX rules together with the ability of the CIX to filter routes may affect Sprint's policy on resale. The growth of resellers and the change in the way established ISPs treat them is an interesting and unsettled phenomenon.

In order to simplify the exposition, only the roles of the NSFNET and the CIX in Internet connectivity have been discussed so far. There are several additional arrangements that are important for assuring connectivity. The more complex arrangements described below were made possible by developments in routers and routing protocols. Routers can now accommodate large routing tables, reducing the need for default routes. The current routing protocol of choice, Border Gateway Protocol 4 (BGP4), has automated the maintenance of routing tables and is flexible enough to accommodate routing policy based on configuration information. BGP4 also permits the use of more flexible addressing and aggregation of routes. Routing technology is not an absolute constraint on the development of a multiply connected multiple backbone architecture.

3.2.1 The Metropolitan Area Ethernet-East

The Metropolitan Area Ethernet-East (MAE-East) began as an experimental interconnect arrangement developed by AlterNet, PSI and SprintLink. Currently there are fourteen members. MAE-East differs from the CIX in two important ways. First, MAE-East is a distributed Ethernet service (provided by Metropolitan Fiber Systems) spanning a wide geographic area. An attraction of a cloud service like MAE-East is its lower cost compared to that of a dedicated physical connection to a single router. Second, there are no multilateral agreements in place at MAE-East. ISPs need to work out a set of bilateral agreements that meet their needs for connectivity. Currently, none of the bilateral agreements are in written form. It has been unofficially reported that there are no settlements at MAE-East. Every provider accepts all traffic from and delivers all traffic to any ISP with whom a bilateral agreement exists. The transactions costs of multiple bilateral negotiations can be high.
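
The transactions-cost point is essentially combinatorics: a multilateral agreement requires one signature per member, while complete bilateral coverage requires an agreement between every pair of ISPs. The small calculation below uses the membership figures mentioned in the text.

    def bilateral_agreements(n_isps):
        # Every pair of ISPs needs its own negotiated agreement.
        return n_isps * (n_isps - 1) // 2

    for n in (3, 14, 70):   # MAE-East's initial 3, its current 14, the CIX's roughly 70 members
        print(n, bilateral_agreements(n))
    # 3 -> 3, 14 -> 91, 70 -> 2,415: a single multilateral agreement needs only n signatures.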

The CIX has announced that by September it will place the CIX router on Pacific Bell's SMDS cloud. The CIX will retain its multilateral connection agreement, which reduces the transactions cost of establishing connectivity. These developments suggest that there is broad agreement on the benefits of using cloud technologies as interconnection mechanisms. But the large ISPs have not yet settled on a common business agreement for interconnection. The simultaneous existence of two very different interconnection models (multilateral and bilateral) naturally raises the question: how many types of interconnection agreements can we expect to see in equilibrium? If a standard agreement emerges, will it look like the CIX agreement (multilateral), the MAE-East agreement (bilateral), or something else?

3.3 Analysis of Interconnection Agreements

Consider first an architecture based on the CIX model: multiple backbones interconnected at a single point (a XIX), exchanging traffic among the direct customers of all members without settlements. Members pay a fixed membership fee. Assume that ISP networks are based on leased lines and not on fast packet services. Advantages of this architecture are:

  • membership in the XIX is necessary and sufficient for connectivity to the Internet,
  • members whose routers cannot carry the full complement of routes can keep local routes and point a default to the XIX, and
  • competition among backbone providers is feasible (supported by the routing technology).

Disadvantages of this architecture are:

  • if too many networks use the XIX as the only interconnect point, the XIX router could experience congestion and provide poor service;
  • end users who are geographically close to one another, far away from the XIX, and on different networks will experience needless delay as their packets make the round trip journey to the XIX;
  • small regional providers who join the XIX will have the same reach as large ISPs who have invested in national backbones. However, small regional networks will have smaller sunk costs, and can offer lower prices than the national providers. Cost recovery may become a significant issue for the larger providers. However, if the national backbones cannot recover costs and go out of business, the small regionals lose the connectivity that their customers want; and
  • given their cost structures, large ISPs with national backbones will set prices that are not proportional to bandwidth. Small resellers can arbitrage the difference profitably for customers who do not require much support, as they do not have large sunk costs to recover. There will be little incentive for anyone to invest in a national backbone (facilities-based or leased).

The first two disadvantages can be avoided by setting up multiple points of interconnection between ISPs with national backbones. This will reduce the load on the XIX router, and reduce latency. The third difficulty can be removed by restricting membership to ISPs with national backbones, or requiring small regional ISPs who join the XIX to pay settlements to the larger ISPs. Both these possibilities are hinted at in the CIX's membership agreement. Arbitrage by resellers can be handled by prohibiting resale, or raising the price to resellers. Large ISPs have already taken this step. The current situation may be stable.
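The arbitrage opportunity can be made concrete with a stylized calculation. Suppose, purely hypothetically, that a large ISP prices a 1.5 Mbs connection at much less than 24 times its price for a single 64 Kbs connection, because support costs do not scale with bandwidth. A reseller with little support overhead can then buy the large pipe, carve it into 64 Kbs slices, and undercut the large ISP's own small-pipe tariff. All figures below are invented:

    # Stylized (invented) tariff of a large ISP whose prices reflect support
    # costs rather than bandwidth alone: the big pipe is far cheaper per Kbs.
    PRICE_T1_PER_MONTH  = 2000.0   # 1.5 Mbs, roughly 24 x 64 Kbs channels
    PRICE_64K_PER_MONTH = 400.0    # a single 64 Kbs connection

    channels = 24
    wholesale_cost_per_channel = PRICE_T1_PER_MONTH / channels   # about $83

    # A reseller serving low-support customers can price each slice well below
    # the large ISP's 64 Kbs tariff and still cover its wholesale cost.
    reseller_price = 250.0
    margin = reseller_price - wholesale_cost_per_channel
    print(round(wholesale_cost_per_channel, 2), round(margin, 2))   # 83.33 166.67

Prohibiting resale or charging resellers a special wholesale rate closes exactly this gap, which is why both responses are observed in practice.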

The network topology of an ISP using fast packet services may be quite complex. An ISP's customers may be scattered all over the globe, and different cloud technologies may be in use at the various customer locations. The ISP can use the IP protocol to integrate its network over these disparate clouds. The ISP will provide customer support, some network management, and possibly information content. The ISP will not provide the multiplexing function that reduces the cost of underlying transport; this function will be performed by the firms producing the underlying clouds. In this environment, the original CIX philosophy may be an attractive model for interconnection, and the distinction between large ISPs with national backbones and small regionals disappears. Every provider purchases access to the designated underlying clouds, and shares the costs of the underlying transport by paying the price charged by the supplier of the cloud. The costs of interconnection become symmetric. Because no (large) ISP has sunk costs that other (small) ISPs can leverage off, the incentives to interconnect will not be hampered by gamesmanship.

As transport charges faced by ISPs fall, the prices they charge their customers will not be proportional to customers' access speeds, and resellers may continue to find the Internet a profitable business. The treatment of resellers in this environment is difficult to predict. The economic forces governing resale in the Internet are not very different from those at work in the market for interLATA voice traffic. An excellent discussion of resale in the long distance market can be found in Briere (1990), Chapter 16. The rapid growth of resellers in both the voice market and the Internet may raise difficult issues regarding the stability of competition.

4. Architectures for Interconnection

A fundamental question that is not often asked is: why should networks interconnect? This question does not have an obvious answer when the networks in question are virtual and not physical networks, and when "network interconnection" is used to refer to a business relationship between network service providers. At a purely physical level, it is true that a continuous path is needed between the equipment used by the communicating parties. However, three examples presented below suggest that full connectivity among users does not require all intervening networks to establish business relationships with one another. The optimal degree of interconnection is part of a larger architectural decision.

Consider first the case of an end user who wishes to purchase a private line to connect two points, one on the west coast and one on the east coast. One option currently available to the customer is to lease (from the New York Local Exchange Carrier) a private line to an Interexchange Carrier's (IXC) Point Of Presence (POP), lease from the IXC a private line from the NY POP to the California POP, and lease a private line from the California Local Exchange Carrier linking the California POP to the customer's location on the west coast. The end user pays a separate charge to each of the three networks for their segments of the private line. While the three networks must agree on technical matters (timing and format of the signal) there need be no explicit business arrangements linking the network service providers that provide the private lines. Interconnection on the physical level does not require the network providers to maintain explicit business relationships. The customer may choose to designate one of the three networks to be its agent, and this would result in business agreements among the three networks; but such a designation is not required. The choice between the two arrangements would seem to hinge on transactions costs (who bills whom, who reports and coordinates repairs of outages, etc.).

For the second example, consider a community of electronic mail users who belong to networks that are not connected to one another. Suppose this community wants to establish a bulletin-board or newsgroup-like service to which they can post, and on which they can read each other's contributions. One solution is for the networks to interconnect and arrange for the transfer of e-mail. Another possibility would be for one of the users in the set (or even a third party) to establish e-mail accounts with all the networks to which the community is connected, and to operate a mail relay that passes e-mail transparently across networks. Which of these options is a lower cost solution? It is not clear that there are economies of scale in the customer support required for small operations of this sort. If all e-mail prices were based only on usage, with no monthly fees, then there would appear to be no cost differences between the two alternatives. Indeed, with pure usage-based pricing, each member of the list could join all the networks at no additional cost. If e-mail prices consist of flat monthly fees, with no usage charge, then the solution based on a mail relay may be more expensive. This solution requires that at least one user belong to all networks, and that user faces higher e-mail charges than he would if the networks were interconnected. This may (or may not) be offset by lower costs for customized support. In this example, the connectivity of the e-mail networks is not necessary to support full e-mail connectivity among users, and other alternatives can conceivably be more efficient than full network interconnection. The support costs associated with different arrangements are one important determinant of efficient interconnection arrangements.
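The comparison in this example turns entirely on the structure of e-mail tariffs. The sketch below (Python, with invented figures) computes the community's total monthly cost under the two arrangements when prices are flat monthly fees; under pure usage-based pricing the fee term is zero and the difference vanishes, as argued above:

    # Hypothetical parameters for the flat-fee case.
    MONTHLY_FEE = 20.0   # flat fee per account per network (invented figure)
    N_USERS     = 50     # size of the e-mail community
    N_NETWORKS  = 4      # number of unconnected networks the users sit on

    # Arrangement 1: the networks interconnect, so each user needs only the
    # single account he already holds.
    cost_interconnected = N_USERS * MONTHLY_FEE

    # Arrangement 2: no interconnection; one member acts as the mail relay and
    # must hold an account on every network, paying the fee N_NETWORKS times.
    cost_relay = (N_USERS - 1) * MONTHLY_FEE + N_NETWORKS * MONTHLY_FEE

    print(cost_interconnected, cost_relay)   # 1000.0 1060.0

The relay arrangement is more expensive by (N_NETWORKS - 1) times the monthly fee, an amount that must be weighed against whatever savings the relay achieves in customized support.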

Finally, consider a hypothetical situation where ISPs enter into bilateral interconnection agreements that result in a fragmented Internet. Suppose that there are two sets of ISPs, each of which is fully interconnected, but there are no interconnection agreements between the two sets. If all ISPs sit on a fully interconnected SMDS cloud, end users can be fully connected to one another by joining one ISP in each set. With a cloud technology like SMDS, a customer who joins two ISPs does not have to purchase two private lines or access ports, and there will be no additional costs associated with the need to reach two ISP nodes rather than one. If ISP costs are based on usage, and not on pipe size, the end user may see no additional costs to joining a fragmented Internet.
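A stylized comparison shows why the access technology matters here. With leased-line access, joining a second ISP means paying for a second private line; on a shared cloud, one access port reaches every ISP attached to the cloud, so the incremental cost of the second membership is only that ISP's (usage-based) fee. The prices below are invented for illustration:

    # Invented illustrative prices (monthly).
    PRIVATE_LINE = 500.0    # dedicated access line to one ISP node
    SMDS_PORT    = 600.0    # a single access port on a shared SMDS cloud
    ISP_FEE      = 300.0    # usage-based fee charged by each ISP

    def leased_line_cost(n_isps):
        # Each additional ISP requires its own access line.
        return n_isps * (PRIVATE_LINE + ISP_FEE)

    def smds_cost(n_isps):
        # One port reaches every ISP on the cloud.
        return SMDS_PORT + n_isps * ISP_FEE

    print(leased_line_cost(1), leased_line_cost(2))   # 800.0 1600.0
    print(smds_cost(1), smds_cost(2))                 # 900.0 1200.0

Under these hypothetical prices, patching over a fragmented Internet by joining one ISP in each set doubles the end user's cost in the leased-line world, but adds only the second usage-based fee on the cloud.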

On the other hand, if (as is true today) many SMDS and Frame Relay clouds are not interconnected, ISPs straddling these clouds can provide full interconnectivity among users. Thus, a customer in Boston who is connected to NEARnet over a Frame-Relay link can communicate seamlessly with a customer in San Diego who is connected to Cerfnet over an SMDS cloud.

The relative costs of the different interconnection modes discussed above appear to depend on a variety of prices, support costs, and transactions costs, and not on the cost of raw bandwidth alone. The earlier discussion of Internet costs suggested that support activities are becoming a larger component of overall costs. Owners of physical networks who provide a full range of services to end users may spend very little on the underlying physical facilities. According to Pitsch (1993), transmission costs account for only 3% of AT&T's annual expenses. The modeling of support and other service-related costs appears to be important not just because such costs are a significant component of the services end users pay for, but also because they have an important bearing on interconnection arrangements.

The economics of interconnection agreements is complex when there are multiple layers of virtual networks, built one over the other. Any layer in this chain has its costs determined by the prices charged by the virtual network below it, and its prices, in turn, determine the cost structure of the layer above.

What is the economic rationale for pricing in this layered structure? For illustrative purposes, consider a common set of services underlying the Internet today. At the very bottom of the hierarchy, real resources are used to construct the links and switches that constitute the first virtual network. In the emerging digital environment, Time Division Multiplexing (TDM) in the digital telephone system creates the most evanescent of outputs (SONET time slots lasting 125 microseconds) out of very long-lived investments (including conduits and fiber optic cables). The pricing of these time slots determines the cost structure of the first layer of virtual networks (currently, ATM services) created on top of the TDM fabric. When multiple providers with sunk costs attempt to sell a very perishable good (time slots), unit costs will not be constant, but will decline with volume. Perfect competition will not be viable. If there is considerable excess capacity, no equilibrium may exist, and some providers may exit the market. If providers at this level do reach an oligopolistic equilibrium, will their price structure involve volume discounts and term commitments? If they do (as is the case with private line tariffs), then providers of ATM services (the next layer in the hierarchy) may be faced with relatively large sunk costs and their unit costs will not be constant. Again, perfect competition will not be viable. If providers at this level do reach an oligopolistic equilibrium, will their price structure involve volume discounts and term commitments? If so, the next level of service (SMDS and Frame Relay) will not be characterized by constant unit costs, and a perfectly competitive equilibrium will not be possible. The same questions of the existence of equilibrium and the use of volume discounts and term commitments arise, and will keep arising as we move upstream.
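One way to see how non-constant unit costs propagate up the hierarchy is to write each layer's price schedule as a fixed commitment plus a per-unit charge, so that the buyer at the next layer faces an average cost that declines with volume. The sketch below (Python, with invented parameters not drawn from any actual tariff) illustrates the mechanism for the bottom layer:

    # Stylized two-part price schedule offered by one layer to the layer above:
    # a fixed (term or volume) commitment plus a per-unit charge.  All figures
    # are invented.

    def average_cost(fixed_commitment, per_unit, volume):
        """Average cost per unit faced by a buyer; declines with volume."""
        return fixed_commitment / volume + per_unit

    # Terms on which Layer 1 (TDM time slots) sells to Layer 2 (ATM):
    tdm_fixed, tdm_per_unit = 100000.0, 0.50

    for volume in (100000, 500000, 1000000):
        print(volume, round(average_cost(tdm_fixed, tdm_per_unit, volume), 2))
    # -> 100000 1.5    500000 0.7    1000000 0.6

Because the fixed commitment becomes a sunk cost for the ATM provider, its own schedule to the SMDS and Frame Relay layer will again have a fixed component, and the same declining-average-cost argument repeats at every layer above.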

The fundamental economic problem arises from the large sunk costs required to build physical networks, and the technological reality that optical fiber has so much capacity that it is prudent for a network to lay large amounts of excess capacity during construction. A possible conclusion is that there are economies of scale, and the industry should be treated as a natural monopoly. It is too late for this solution. There are four large nationwide fiber optic networks (and some smaller ones) and 95% of all households are passed by both telephone and cable TV wires. In addition, alternative access providers have built fiber rings in every major business district, and most business customers in these areas have a choice of alternative service providers for voice and data communications. How might competition evolve in these circumstances? Peter Huber (1993) has suggested one interpretation of the apparent stability of some prices and market shares in the long distance market. He concludes that competition is apparent and not real. A regulatory umbrella prevents instability. Eli Noam (1994) discusses the stability of open interconnection (more specifically, common carriage) in these circumstances, and concludes that non-discriminatory contracts cannot survive in a competitive environment similar to the one described above. The success of resellers of Internet access, and the strong reaction to them by incumbents with national backbones, are consistent with Noam's view.

The Internet is one component of a very complex environment, and shares costs with other services (such as long distance calling) that run on the same underlying transmission links. Of more immediate interest is the relationship of the Internet to some new services (SONET, ATM, SMDS and Frame Relay) over which IP can be run. There are clearly many alternative interconnection arrangements in this layered structure that can result in full connectivity among end users. What is the socially optimal set of interconnection arrangements? Is full interconnectivity of virtual networks at each layer of the hierarchy necessary for optimality? If not, what is the minimal acceptable set of arrangements? How will this be impacted by vertical integration and vertical partnerships? Will an unregulated market provide this level of interconnectivity at acceptable cost? If not, what forms of regulation are optimal? Is the bearer service proposal of the Open Data Network (in Realizing the Information Future: The Internet and Beyond, by the Computer Science and Telecommunications Board) an optimal form of regulation? Clearly, economic theory can contribute to the analysis of this problem, but the work has barely begun.

5. Conclusion

The market for Internet services is highly competitive, and cost structures are driving pricing decisions. As transport costs fall and firms seek to differentiate their services, support costs will tend to rise as a fraction of total ISP cost. Support costs are not proportional to the bandwidth used by customers, and so prices will not be proportional to bandwidth. Small resellers may be able to arbitrage across the tariffs offered by large ISPs, exploiting their own lack of sunk costs. Possible responses to arbitrage include a flat prohibition on resale, and special wholesale prices. Both these strategies are currently in use in the Internet. Nevertheless, ISP resellers represent the fastest growing segment of the ISP market (Maloff, 1994).

It is not clear that the current price structures and interconnection arrangements represent an industry equilibrium. The wide variation in prices for essentially comparable services, the growth of resale, and the current dissatisfaction with the CIX (as expressed on the mailing list compriv) are symptomatic of an evolving market where prices have not lined up neatly with costs. Part of the problem stems from the sunk costs of creating physical networks, and the resulting trend towards long-term contracts for services that resemble transmission links (i.e. private lines). This pricing strategy results in relatively large sunk costs for ISPs who create a national backbone using private lines. Competition among firms with sunk costs can be problematical, especially when there is excess capacity. At the physical level of fiber-optic links, there is a good deal of excess capacity; as owners of this fiber enter the market as ISPs (as Sprint has done and MCI is about to do), the distinction between high average embedded costs and low (or zero) short run incremental costs may lead to repeated and unstable price cuts. Owners of physical networks may decide to avoid potentially ruinous price competition by integrating vertically and differentiating their service. Customer support, information content, and reliability are three elements of product differentiation that are in common use in the Internet. The announcement by the CIX that it will filter resellers' traffic suggests that another differentiating factor may be assured connectivity: CIX members can guarantee greater connectivity than a reseller who may be blocked at the CIX router. This may be a cause for future concern if Internet connectivity comes to be viewed as another means of differentiating an ISP's service.

An important problem that remains to be solved is the determination of economically efficient interconnection agreements. It is argued in this paper that a careful economic analysis of this problem needs to focus on the layered structure of services and the support activities that are required to transform raw bandwidth into communications services that customers will pay for.


References

Briere, Daniel D. Long Distance Service: A Buyer's Guide. Boston: Artech House, 1990.

Comer, Douglas. Internetworking with TCP/IP, Volume 1. New Jersey: Prentice Hall, 1991.

Huber, Peter. "Telephones, Competition and the Candy-Coated Monopoly". Regulation, 1993, Number 2.

Lucky, R.W. Silicon Dreams: Information, Man and Machine. New York: St. Martin's Press, 1989.

Lynch, Daniel C. & Marshall T. Rose. Internet System Handbook. Massachusetts: Addison Wesley, 1993.

MacKie-Mason, Jeffrey K. & H.R. Varian. 1994. "Economic FAQs About the Internet". Journal of Economic Perspectives, Summer 1994, 75-96.

MacKie-Mason, Jeffrey K. & H.R. Varian. 1993. "Some Economics of the Internet", Working Paper, Department of Economics, University of Michigan.

Maloff, Joel. 1993-1994 Internet Service Provider Marketplace Analysis, April 1994.

Mandelbaum, Richard and Paulette Mandelbaum. "The Strategic Future of the Mid-level Networks" in Building Information Infrastructure ed. Brian Kahin, 1992, McGraw Hill.

Noam, Eli. "Beyond Liberalization II: The Impending Doom of Common Carriage". Telecommunications Policy, 1994, 18(6), 435-52. [doi: 10.1016/0308-5961(94)90013-2]

Noll, A. Michael. "A Study of Long Distance Rates: Divestiture Revisited". Telecommunications Policy. 1994 18(5), 335-362. [doi: 10.1016/0308-5961(94)90051-5]

Perritt, H.H. "Market Structures for Electronic Publishing and Electronic Contracting on a National Research and Education Network: Defining Added Value" Building Information Infrastructure, ed. Brian Kahin, 1992, McGraw Hill.

Pitsch, Peter. "Earth to Huber". Regulation, 1993 (3).


Acknowledgments

I would like to thank Jeff MacKie-Mason, Stewart Personick, Thomas Spacek, and Hal Varian for comments on an earlier version. All remaining errors are mine alone.