These are the notes from the Internet Economics Workshop that was held on March 9 & 10, 1995 in Cambridge, MA at MIT. These notes are available from the workshop's World Wide Web (WWW) server ([formerly]) in PostScript and ASCII form along with other information about the workshop. If you have any questions about this material or the workshop, please send electronic mail to

We want to thank the workshop's organizing committee: David Clark of MIT, Deborah Estrin of University of Southern California, Jeff MacKie-Mason of University of Michigan, and Hal Varian of University of Michigan. Much of the workshop's success is due to their efforts in assembling some of the foremost thinkers in the Internet Economics field. These notes would not have been possible without the support of the workshop participants, administrative staff of RPCP, and the graduate students who helped mold this document. We thank them for their efforts.

There will be more detailed information about the workshop to follow these notes. The "Journal of Electronic Publishing", published by the University of Michigan Press, will have a special issue based on the workshop. Please look at their WWW site ([formerly]) in the near future for more information. Also, an edited volume of papers from workshop participants may be published by MIT Press as early as January 1996. The Internet Economics Home Page ([formerly]) will have more information about these and other developments.

We will make the workshop videotapes available as well. First, we will rebroadcast the workshop in its entirety over the Internet via MBONE in the near future. We will make an announcement about that rebroadcast when we confirm a date. Second, we will be distributing an edited videotape version through Telecommunications Policy Roundtable—Northeast. Please contact Coralee Whitcomb ( for information about the edited video version of the workshop.

A discussion list ( and an announcement list ( are currently under development. An announcement will be made once the appropriate listserv software is working effectively.

We would like to thank the National Science Foundation for the grant for the workshop, NCR-9509244. Funding for related research from NSF, NCR-9307548, and the Advanced Research Projects Agency, N00174-93-C-0036, enabled the MIT Research Program on Communications Policy (RPCP) to organize this event.

Finally, we must take full responsibility for errors in the document that follows. We have received an overwhelming response from Internet users who want access to this information as soon as possible. Because of our hurried schedule, there may be mistakes and unclear sections in the document that follows. We felt it was better to disseminate the information as quickly as possible and to hope that you, the reader, would bear with the document's flaws.

Thank you for your interest in Internet Economics.


Lee McKnight Associate Director, RPCP

Joseph Bailey Research Assistant, RPCP

1. Introduction; Jack Ruina

I am new to this field, and mostly impressed with the amount of hype and all that is going on. It's a pleasure to have a program that is down to earth and leaves out most of the "vision thing" and gets down to some of the realities: what are we really going to have, how much might it cost, how are we going to price it, and the like.

Thanks to the National Science Foundation, Joe Bailey, Lee McKnight.

2. A Framework for Internet Technology and Economics; David Clark

David Clark is from the Laboratory for Computer Science (LCS) at MIT. There are three ways we could begin this: we could have an economist, a technologist, or a dignitary. Clark thought it would be best to start with a technologist, and he caused a few Internet iconoclasts to come as well.

Periodically people have said, "I wish I could have a bit that did...." Some of the workshop participants are involved in the construction of the next generation of IP headers so, if there are mechanisms that should be in there, Clark suggested now is the time to talk about them.

Clark said that he is going to try to talk about economics, although he does not know anything about it. He discussed the relationship between the technical mechanisms that we have in the Internet and those that we need in the network. He talked about the infrastructure and not content. Clark said that participants here want to understand the difference between technical controls and pricing controls. Today we regulate usage through technical controls—we slow the network down.

Clark added that the participants want to understand fundamental principles—if Internet engineers put a bit in the header, they want to know that the bit is fundamental. The engineers don't want a "Friends and Family" bit—that's not fundamental. (Friends & Family was built precisely because AT&T couldn't do it. It is an uneven playing field.)

When Clark tries to understand the Internet and Economics, his first reaction is that he can't even imitate an economist. What are the units of production? Clark wasn't so sure that "the next packet" is the right unit. If we were to change the Internet to make the customers more satisfied, what would we change? Maybe the customer gets happier if we send the packets faster. Maybe it has to do with delays.

If you understand what the service is that the Internet delivers, then the engineers and economists can have a dialog. Is the measure of service the number of bits per second we get? The average bit rate the Internet delivers to me is about 200 to 400 bits a second. But a network that goes that fast is not interesting. The beauty is that we can get the whole bandwidth of the net for short periods of time.

People want to send things of different sizes on the net, from a few characters for remote login to many bytes for images. There are two important numbers: how big the transfer is, and how long we can wait for it to be delivered. We can translate this into a bit rate, but we are more interested in the two factors separately.

Clark tried constructing a simplistic hypothetical network, with users looking at 2 KB web images, each at ten second intervals. 28,000 of these people can share a 45 Mbps link. When we ask how much the Internet backbone costs us, it's very small because so many people share it.
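Clark's sharing arithmetic can be checked directly. This is a minimal sketch; the 2 KB-every-ten-seconds figures are the round numbers used in the talk, with 2 KB taken as 2,000 bytes:

```python
# Statistical-sharing estimate for the hypothetical web-image users.
transfer_bits = 2_000 * 8          # one 2 KB web image, taking 2 KB as 2,000 bytes
interval_s = 10                    # one fetch every ten seconds
link_bps = 45_000_000              # a 45 Mbps (DS-3) backbone link

avg_rate_bps = transfer_bits // interval_s     # 1,600 bit/s per user on average
users = link_bps // avg_rate_bps               # users the link can carry on average

print(f"average rate per user: {avg_rate_bps} bit/s")
print(f"users sharing the link: {users}")      # roughly 28,000
```

The point survives any rounding: because each user is idle most of the time, tens of thousands of them fit on one DS-3, which is why the per-user backbone cost is tiny.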

"Instant" is within a second or so. The U.S. is 100 ms. wide; Australia is 300 ms. away. It's a nice flattening effect that everything is this close.

The reason the Internet works is statistical sharing. If subscribers couldn't count on this, they wouldn't use it because they could not afford it. But is this wonderful or is it a disaster? Will usage go nuts the next time something new happens, like the World Wide Web? The WWW was somewhat benign; it caused fanouts for connections to go up and sizes of data transfers to drop, but it did not actually consume the net. People have a paranoia that all the video streams will cause the Internet to be dead one day.

If you look at the difference between the behavior you see and the worst-case behavior you could imagine, it is about four orders of magnitude off. But a single machine could consume the entire bandwidth of the Internet backbone. How are we going to regulate this thing so that it works? How do we make it real?

Clark asked himself the question, "What have I learned from economics?" The fundamental principle is that we ought to sell at marginal cost, otherwise terrible things happen. But at marginal cost we could still go out of business. There are analogies to bridges. If you send a packet into an unloaded network, the marginal cost really does look like zero.

We run the Internet today essentially under-loaded. This is the basis for the argument that we should not do per-packet charges, and should recover our costs through fixed fees. But there is a minor fear that if we build a network and overprovision it then the network is always under-loaded, we always charge the zero marginal cost, and there is no way to recover our cost.

Clark suggested that there will always be a difference between peak and off-peak traffic, because when the network is congested there is an opportunity cost of sending a packet. We need to be able to separate out the peak and off-peak charges; we ought to base our costs on what is happening at the peak.

He then introduced a term called "expected capacity." The idea is that during congestion the users pragmatically have an expectation about what they can do, based on what happened yesterday and such. We can talk about this expectation. If we could measure it, we could actually assess the needs of users and charge them differently.

Clark pointed out that expected capacity is not a guarantee. Don't mistake it for the "committed information rate" of frame relay, which is a hard guarantee. Note that customers buy frame relay because of expected behavior; most of the customers set the committed information rate to zero.

The behavior you get is based on the provisioning, which is based on the amount of bandwidth you put into the network.

We ought to focus on expected capacity. We will try to make it concrete. This is how the bandwidth is allocated under congestion. The cost ought to be allocated according to expected capacity.

The model is that you pay for the expectation. If this is right, why don't we price the Internet that way today? What we find is that what most people pay for is the capacity of their access link. There is a relationship; if your access link is bigger, you can go faster and therefore your expected capacity is greater.

The other problem with expected capacity is that the concept is not well-specified. How do you make sure your customers are happy? You cannot put a meter on customers. But what WilTel does is to provision the network high enough and give users a money-back guarantee. Nobody has tried to parameterize expected capacity.

The second problem is that the access costs totally swamp the network costs. The backbone costs users between ten and fifty cents per month, we estimate. A T1 link costs $400 per month. We don't worry about capacity costs unless there is a customer out there whose expected capacity is worth much more.

There are two ways in which the customer can pay. You can pay in dollars, contending for access and bidding for more service when you need it. Or your service can simply get worse as more customers pile in. How do you trade off between these two? Some people say that the Internet is odd, that real people want to pay in dollars. But Clark is not convinced by this argument. He argued that you need both, but if you look today people seem to prefer the second option. We shouldn't preclude that from the pricing structure.

For a typical user, it doesn't seem that we should have this big argument. The concern is that there is a big user taking a free ride, whose expected capacity is worth thousands of dollars per month. Should we be concerned with telling big users from little users? Should we be concerned with telling single users from resellers? If one person bought a T1 for one user with a high peak rate and another bought it for a thousand users, we have an intuition that we should be charging the reseller with a thousand users more money because he is generating more traffic. Dealing with re-sellers comes up again and again. The first problem is to find them.

The third problem is controlling worst-case usage. Do we need to do it at all? Clark said that he isn't so sure and would entertain a debate about that. But he pointed out that worst-case behavior is substantially worse than average behavior. In his example of a prototypical network of web page cruisers, his arithmetic said that 28,000 people could share a DS-3 link. What would happen if people tried to recover this cost per bit? It would cost about four cents per megabyte, instead of 48 cents a month. If in the middle of the night we did a transfer and got T-1 rates, the charges come out to be 45 cents a minute. This is a rate that is 10^6 times the 48 cents per month rate we pay just sharing the fixed cost.

Back to the WWW examples: consider a small user cruising the web, looking at a 2 KB file every ten seconds. The average bit rate is very low, 1,600 bits per second, and 28,000 people can share a big link. What happens if we have big users who have a ten megabyte file and want to look at it every five minutes and transfer it in thirty seconds? The same link only supports 185 of these people; they are 150 times bigger. And we can invent users who need even more than this.

There is a legitimate difference between a small user and a big user. But the difference is not how many bits they send. TCP is an opportunistic protocol; things transfer as fast as they can. If the net is unloaded, why should we punish the users? We want to distinguish between the case where the net is unloaded and when it is not.

We could talk more about a reseller, but if you just count bits you cannot tell a reseller from a large user. What are we supposed to do with the flood of bits? The right thing to ask is if we clamp down on the flow of bits, at what point does the user start to squeak? That's the expected capacity.

The difference between a reseller and a single user is that if we have a single user there is typically a small number of flows going out. However, this is not true if there is a server that many people are accessing; in that case the behavior of a server is indistinguishable from that of a reseller. The network provider ultimately cannot tell you whether a reseller is different. Clark wishes that there were no difference, because otherwise we have to worry about whether we can tell the difference, and whether people will know.

The expected capacity is what you believe you can be provided, but it is not a guarantee. What we would really like to be able to say is that if you have a higher expected capacity we will charge you more. And if you don't have a higher expected capacity we will charge you less, but we may squeeze down the bandwidth, and if you then in fact need a lot of bandwidth you are going to be very unhappy. If a user can't turn the knob and make you unhappy then everybody is going to say, "I have zero expected capacity needs," and they are going to pay you zero money.

We need to figure out what the parameters are. As a starting point, for a particular customer we will look at typical transfer size and target delay. For example, a user may want to move gigabit files and he or she wants to do it in three hours, or they want to move web pages and want it to happen essentially instantaneously, which to me means about a half second.

The third parameter has something to do with how often you want to do it. The community has a policing mechanism called the token bucket. A token bucket is a way of accumulating tokens so that you can send a burst into the net. If your token bucket is measured over a very long period of time we tend to call it a quota, but it's all the same thing. (It's also called a duty cycle.) These can arithmetically be converted to average bandwidth, but if a network provider simply told you that you got average bandwidth this wouldn't tell you much.
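As a rough illustration, a token bucket of the kind described might be sketched as follows (the class and its `rate` and `burst` parameters are illustrative, not a mechanism specified in the talk):

```python
import time

class TokenBucket:
    """Token-bucket policer: tokens accumulate at `rate` per second up to a
    depth of `burst`, so a sender can burst into the net while being held
    to a long-term average allowance."""

    def __init__(self, rate, burst):
        self.rate = rate              # refill rate (e.g. bytes per second)
        self.burst = burst            # bucket depth: the largest allowed burst
        self.tokens = burst           # start with a full bucket
        self.last = time.monotonic()

    def conforms(self, size, now=None):
        """True if a packet of `size` is within profile at time `now`."""
        if now is None:
            now = time.monotonic()
        # accumulate tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

Measured over a very long interval with a very deep bucket, the same object behaves like the quota mentioned above; the two are arithmetically the same thing.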

Once you have the token bucket idea you can deal with it in two ways: you can pay in delay, or you can pay in cost; it doesn't matter. Within the network where the switches are, we can instruct the switches, when there is congestion, to allocate the bandwidth according to the expected capacity that the people have purchased. Anybody can send if the net is unloaded. The important thing to remember is that there is nothing wrong with exceeding this capacity (TCP is aggressive, and this makes the customers happy). There is no value judgment here on consuming more than your expected capacity. The value judgment is that if a network service provider throttles the bandwidth down to your expected capacity and you squeak, then you lied.

How can one do this? Clark proposed a concrete mechanism just for discussion. He mentioned before he began that there is a deep flaw here, but he went ahead anyway. Imagine a box that sits out at the edge of the network; he called this a traffic meter for lack of a better name (the word has the wrong connotations). The box has a profile of your expected capacity; it could be anything you want. Packets are either in or out of your expected capacity, and all the box does is color those packets red and green. Then it just forwards them.

What happens when these packets reach a switch and the switch is congested? The switch looks for packets that are red, and it says that the red packets are going to cause congestion. The Internet just throws congested packets away, but he suggests that we should use explicit notification of congestion. (It didn't work the last time we tried it, but there is a proposed strategy that could make source quench work.) When we discover congestion on the network, we selectively push back on the people who are sending red packets, which is to say you push back on people who are exceeding their expected capacity profile. If you bought more expected capacity you could be sending ten times as much as you are and the switch doesn't do anything complicated.
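A toy sketch of this division of labor between edge meter and core switch, under the simplifying assumption that the expected-capacity profile is a plain quota (all names here are hypothetical):

```python
class QuotaProfile:
    """Toy stand-in for an expected-capacity profile: the first `quota_bytes`
    are in profile, the rest are not. (A token bucket measured over a long
    period behaves much like this.)"""
    def __init__(self, quota_bytes):
        self.remaining = quota_bytes

    def conforms(self, size):
        if size <= self.remaining:
            self.remaining -= size
            return True
        return False

def traffic_meter(packets, profile):
    """Edge box: color each packet green (in profile) or red (out of
    profile), then forward everything unchanged."""
    for pkt in packets:
        pkt["color"] = "green" if profile.conforms(pkt["size"]) else "red"
        yield pkt

def switch(packets, congested):
    """Core switch: under congestion, push back on red packets first
    (modeled here as a drop); when unloaded, forward everything."""
    for pkt in packets:
        if congested and pkt["color"] == "red":
            continue
        yield pkt

pkts = [{"size": 100} for _ in range(5)]
delivered = list(switch(traffic_meter(pkts, QuotaProfile(300)), congested=True))
print(len(delivered))   # the three in-profile packets survive
```

The key property is that the meter only colors and forwards; all discrimination happens in the congested switch, and anyone can send anything when the net is unloaded.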

What do you do at the edge of the net? You can do anything that you want. But it's useful to have explicit congestion notification. What happens when someone wants to pay more, but only wants to pay under periods of congestion? There are two ways to do it. We can make long term measurements and determine when the congestion periods will occur, but it's well-known that peak pricing can cause the peaks to appear elsewhere. Or we can have dynamic information inside the network.

Where you have interconnection points, you ought to put these traffic meters there as well.

The traffic meters can implement a tremendous range of policy. All they are doing is emitting red packets and green packets, but the strategy they use to tell which is which is a private matter. You can be as silly as you want; someone will invent the equivalent of Friends and Family.

Clark said that he has gone off and talked to people in several areas about this, and they ask why we are wasting our time talking about this topic when we know that even after economists figure out the correct pricing model, someone will invent something like Friends and Family that has nothing to do with cost, only with extracting money from customers. But what we are trying to do with this system is to corral all of the weirdness in the edge of the net. If you want to invent Friends and Family, put it in your own traffic meter.

Another problem that is also fundamental: we have lots of providers out there, some of them have big capacity and some of them have small capacity. If there is demand for connections to Slovakia, then the guy in Slovakia should put in more bandwidth and somehow we should make the cash flow work. (Remember, we're not just forwarding packets, we are also forwarding money.) Notice that the packets can all flow west but the money may have to flow east. You cannot derive money flow from packet flow. How do you allocate the money properly? Deep down inside, Clark said, he believes that this idea of expected capacity is a rational way to think about the responsibility that each network has to each other.

The traffic meters can do a broad range of things. But setting the flags is a matter of policy. Why should we worry about this, when what companies are really trying to do is extract more money from consumers? We are building the traffic meter, operators can color the packets however they want. We are only providing the mechanism.

Today we don't do any separation of users at all. By and large this works okay. We can always find someone who wants more bandwidth, complaining in the periphery. But the point he stressed is that engineers should not do things that are terribly precise. We should do things that have a certain amount of characterization: buy expected capacity under what we call best-case external network behavior.

Clark proposed just one flag in the header. People have proposed all sorts of different flags. Small users versus big users: we have these mechanisms, but we have never used them. The difficulty is that people could lie. We could charge people more for the big user packets. Degree of assurance: we used to be able to buy big pipes to get the bandwidth, but now we want to hook to a packet net and get some guaranteed bandwidth. In the red/green packet net, everything that is happening is probabilistic. What you really want to look at is green packets; if you are throwing them away, then we know the switch is overloaded.

But how do we deal with a customer who doesn't want to play the statistical game, who actually wants to buy network bandwidth? We should imagine creating a bandwidth assurance where we don't have to keep track of the flows in the network. As long as we know that there is enough bandwidth there, we are okay.

Support for real-time traffic: This is outside the scope of the talk, but it is critical.

General design principles for the Internet: The stuff in the center of the net should be simple, should be a mechanism and not policy, should be a general building block, and should relate to basic economic principles. What we put at the edge of the net should be a local matter, should be as complex as you want to pay for.

We may have other propositions for how to solve our cost recovery and pricing problems. But the key is that we need to adhere to these principles.

3. Information Technologies, their Interoperability and Productivity

3.1. Lee McKnight, MIT Research Program on Communications Policy, "Economics of Interoperability"

McKnight began this presentation with the observation that the Internet is defined by its interoperability. To be on the Internet is to be interoperable with other things on the net. The challenge is to quantify the costs and benefits of designing interoperable architectures and systems like the Internet versus proprietary systems such as cable TV.

He mentioned that there will be a workshop at Harvard in July, 1995 specifically on the economics of interoperability.

McKnight presented some definitions of interoperability in other communities. To the defense department, interoperability is the condition when information can be exchanged between devices or users. Similarly, to the digital video and cable communities, interoperability is cross platform compatibility of media and components, a new idea for the cable industry, but one which is now being pushed by the FCC. Finally, in the Internet community, interoperability is using TCP/IP protocols to do "statistical sharing".

However, these definitions are not sufficient for quantitative modeling of the economics of interoperability. Although the desirability of interoperable systems may seem self-evident to the Internet community, quantitative evaluation of its value is required. As an example of what this modeling might look like, McKnight then discussed the Research Program on Communications Policy's work on the economics of interoperability for digital video systems.

McKnight's models of component costs for digital video (HDTV) systems are premised on two competing notions:

  1. The cost of production falls following some learning curve. As the production (or supply of a service) increases, the cost of production per unit falls over time.
  2. Interoperability might imply some short term cost penalty. At least initially, it may cost more to produce interoperable components.

The key findings of this work are that the cost penalty implied by interoperability disappears within just a few years, due to faster growth of the industry with interoperable components. Although the costs begin at a higher point, production moves more quickly down the experience curve, resulting in accelerated growth and lower prices compared to proprietary systems.
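The shape of this result can be sketched with a standard power-law experience curve. All parameters below, including the 20% penalty and the two growth paths, are illustrative placeholders, not RPCP's actual figures:

```python
def unit_cost(c0, cumulative_volume, learning_exponent=0.32):
    """Power-law experience curve: unit cost falls as cumulative
    production grows."""
    return c0 * cumulative_volume ** -learning_exponent

# Interoperable gear starts with a ~20% cost penalty (base 120 vs 100)
# but the market for it is assumed to grow faster (3x vs 2x per year).
volumes_proprietary = [10, 20, 40, 80, 160]      # cumulative units (thousands)
volumes_interoperable = [10, 30, 90, 270, 810]

for year, (vp, vi) in enumerate(zip(volumes_proprietary, volumes_interoperable), 1):
    cp = unit_cost(100, vp)
    ci = unit_cost(120, vi)
    print(f"year {year}: proprietary {cp:5.1f}, interoperable {ci:5.1f}")
```

Under these assumptions the interoperable line starts more expensive but undercuts the proprietary one within a few years, which is the qualitative finding: faster movement down the experience curve erases the initial penalty.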

Finally, he offered some thoughts on how this sort of work might be extended to measure the economic benefits and costs of Internet interoperability. First, there needs to be some way to measure value to different groups. This metric might be different for subscribers, networks, firms, industries, communities, services, or applications. Second, there needs to be some way to distinguish between different types of service. There will exist different levels or qualities of interoperability—not everyone or everything will necessarily be able to interoperate with everyone else. Finally, the models must relate questions of pricing, quality of service, and resource allocation to the costs of equipment, service, and bandwidth.

In closing, he noted that the Internet provides a wealth of material to work with. It is a system for which we have use experience and data.

3.2. Erik Brynjolfsson, MIT Sloan School of Management, "IT and Productivity"

Brynjolfsson surveyed the debate over the value of computers and networking. The key question here is, "Do computers pay off?" Is there an economic benefit to investment in computers and network infrastructure? Brynjolfsson first contrasted widely known notions of Moore's law and the so-called "productivity paradox" for computers. Then he introduced new research which suggests significant marginal benefit from investment in information technology (IT). He offered some explanations for the differences in these results. Without an accurate assessment of added value in an information intensive workplace, it is impossible for firms to correctly evaluate the optimum level of investment in IT. It is necessary to develop new metrics for the value of information, new accounting and economic principles, and new designs for organizations and markets.

He began with the conventional wisdom about computing and communications. There are real and measurable improvements in the cost-to-performance ratio of computer technology. Commonly known as Moore's law, the trend has been a doubling of complexity and capability every 18 months. On the other hand, there has been a great deal of hype about the economic benefit of information technology without much empirical support. In fact, some studies have found evidence of a "productivity paradox", since they indicate that investment in information technologies does not appear to have led to improvements in measured productivity at the level of the aggregate U.S. economy.

Although many of our own frustrations with computers might seem to support this view that technology has not improved our productivity, Brynjolfsson argued that anecdotal evidence is insufficient, and that we need to look more carefully at the impacts of IT in the workplace, especially using firm-level data.

His own work includes data collected from 367 large firms between 1988 and 1992. In joint work with Lorin Hitt, a doctoral student, he framed his analysis in terms of production functions, which relate the inputs of a business to its outputs (mainly Cobb-Douglas functions, in this case), and asked the question: does a change in computer inputs lead to changes in output?

The primary finding of this work was that gross returns to spending on computer capital exceed returns to ordinary capital by significant amounts. That is, spending on computer capital and IS labor translates into a gross marginal improvement of 80% in productivity. Even after accounting for the higher depreciation rate of computer equipment, the evidence suggests that the net returns to computer equipment exceed the net returns to ordinary capital for this sample.
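The production-function framing can be illustrated with a log-linear Cobb-Douglas fit on synthetic data. The data and elasticities below are invented for illustration; only the sample size of 367 firms comes from the talk:

```python
import numpy as np

# Cobb-Douglas production function: Q = A * K^a * L^b * C^c, where K is
# ordinary capital, L is labor, and C is computer capital.  Taking logs
# makes it linear, so the elasticities a, b, c can be estimated by OLS.
rng = np.random.default_rng(0)
n = 367                                    # number of firms, as in the study
K = rng.lognormal(3, 1, n)                 # ordinary capital (synthetic)
L = rng.lognormal(4, 1, n)                 # labor (synthetic)
C = rng.lognormal(1, 1, n)                 # computer capital (synthetic)
Q = 2.0 * K**0.3 * L**0.6 * C**0.08 * rng.lognormal(0, 0.05, n)  # output + noise

X = np.column_stack([np.ones(n), np.log(K), np.log(L), np.log(C)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
print("estimated elasticities (K, L, computers):", coef[1:].round(3))
```

In this framework a high marginal return to IT shows up as an output elasticity for computer capital that is large relative to computers' small share of input costs, which is the comparison behind the "gross returns exceed ordinary capital" finding.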

On the other hand, he has not found evidence that the financial performance (profits, stock values) of firms with higher spending on IT is improved. In fact, his initial results suggest that financial performance may be slightly lower for these firms. Brynjolfsson offered the explanation that increased spending on IT is associated with more competitive industries, which leads to higher productivity but also lower profits. This effect is an indication of surplus being passed along to consumers.

Finally, and maybe most significantly for the Internet community, when they included terms for externalities in the production function, he found very large positive externalities in productivity. That is, if you include terms for the use of computers outside of the firm (but in the same industry or sector) in the input-output function, you find a strong positive correlation between a firm's production and spending on IS outside that firm. So there is empirical evidence of large externalities, probably associated with networked computing and connectivity.

So what are the implications of these results? High gross returns on IT investment imply that current levels of investment may be far below the private optimum for individual firms. His field research suggests that this is due to the hesitation firms have in making the complementary organizational changes required by IT, slowing the diffusion rate of the technology. Furthermore, large externalities imply that investment may be far below the social optimum for the industry or society as a whole.

He added that making the changes needed to benefit more fully from IT will not be easy. The current wave of "reengineering" in the industry shows that organizational and institutional change has high costs. Furthermore, in many cases, changes are required beyond the boundaries of any individual firm. Recognition of the full potential of information technologies will mean an industrial realignment and "value chain redefinition". We are beginning to see that now as firms increasingly outsource parts of their production and ask, where is the value added.

In closing, he noted that our current accounting practices and economic concepts were invented for the industrial age a century ago. The so-called productivity paradox is a sign that we are not measuring added value properly at the national level. Just as we need new designs for organizations and markets, we also need new metrics for measuring value in the information age.

4. Internet Engineering and Economics; Marjory Blumenthal, moderator

Marjory Blumenthal, from the Computer Science and Telecommunications Board, National Research Council, started by commenting that she is struck by the differences in vocabulary among different groups talking about these issues. With that in mind, she introduced the speakers for the session.

4.1. Hal Varian, University of Michigan

His group maintains a WWW site at for everything you need to know about Internet economics. [Ed. note: That site is no longer active. Try instead.]

IP has one quality of service. However, we will want more qualities in the future for different kinds of data (video, voice) and different types of users who want performance assurances.

But if all qualities of service cost the same, why not choose the best? Therefore, we need pricing mechanisms to support prioritization. Furthermore, pricing generates revenue which can be used to expand existing resources.

One way to do this is to have multiple networks that provide multiple qualities of service. But one line to the home (instead of multiple lines), with many qualities of service, may be more cost advantageous. However, the one-line model may require some form of public utility regulation.

Congestion is a kind of quality of service. We see this now at Web sites, ftp sites, etc., especially within Europe, which does not have as many high-speed lines. The NAPs may be a source of congestion in the future. A settlements policy at the NAPs may come about, so there is some form of compensation for ISPs trading traffic. New Zealand has usage-based pricing now.

Here are three proposed pricing mechanisms:

  1. Bohn, Braun, Claffy, and Wolff have proposed a system which uses the precedence field in the IP header. Packets with higher precedence are served first.
  2. MacKie-Mason and Varian have proposed development of the "smart market". This system also uses the precedence field embedded in the IP header, treating it as a bid. The router establishes the precedence cutoff level: packets with precedence higher than the cutoff are admitted, while all others are not. Everyone admitted through the router pays the cutoff price (i.e., not their own bid price). This system is incentive compatible. Its basic idea is that price should be set based on the state of the network: people should only pay premium prices if the network is actually congested.
  3. The third pricing mechanism is based on the principle of droppable IP packets. If the network becomes too congested, then packets are dropped as necessary.

Note that each of these systems includes the current system as a special case: packets with 0 precedence, 0 bid, or that are "droppable" use the current "best-efforts" model and would continue to have a 0 usage price.
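The smart-market rule in item 2 can be sketched as a simple uniform-price auction. This is a minimal sketch; the bid values and router capacity below are illustrative, not from the proposal:

```python
def smart_market(bids, capacity):
    """Admit the `capacity` highest-bidding packets; every admitted packet
    pays the cutoff price (the highest rejected bid), not its own bid."""
    ranked = sorted(bids, reverse=True)
    admitted, rejected = ranked[:capacity], ranked[capacity:]
    cutoff = rejected[0] if rejected else 0   # uncongested: usage price is 0
    return admitted, cutoff

# Five packets bid for three slots through a congested router:
admitted, price = smart_market([5, 0, 9, 3, 7], capacity=3)
# bids 9, 7, and 5 are admitted; each pays the cutoff of 3
```

Because every admitted packet pays the cutoff rather than its own bid, overstating or understating one's value cannot lower the price paid, which is the sense in which the mechanism is incentive compatible.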

There are also various proposals for pricing connection-oriented systems like ATM. Pricing is a lot easier with a connection-oriented protocol that can plan the quality of service for a session.

How would prices be set? It could be done on a per packet basis. (Using current utilization factors and capacity levels from the NSFNET backbone, the cost would be 1/600 of a cent per packet.)

Billing and accounting are important issues. It is said that "80% of the cost of a telephone call is billing cost," but this is misleading. 100% of the *incremental* cost of an off-peak call is billing cost; some fraction of the incremental cost of an on-peak call is billing cost. Averaging over peak and off-peak usage, you could get a number for incremental cost that is around 50-80%. But this percentage is large because the denominator is so small: the incremental cost of a call is 0 off peak and tenths of a cent on peak. When you look at billing and accounting as a fraction of total cost you get a very different picture: these functions are about 6% of total cost.

But the debate about billing and accounting costs in telephony is somewhat beside the point, since IP and telephony are very different. Centralized billing is not the only model for IP. A "distributed accounting" system, like postage stamp meters, may be more appropriate because it allows the accounting costs to be spread around the network. In this model no centralized accounting is necessary, and users can monitor their own usage.

In summary, there is a need for some form of pricing to support different qualities of service. Network congestion may be a big problem soon. Technological progress may bail us out of that problem. However, technology will also bring more demand for bandwidth. It is better to think about the problem now and try to develop some creative solutions rather than burying our heads in the sand.

4.2. Scott Shenker, Xerox PARC

He likes the idea of FINDING a research agenda, not necessarily having one. He will show where engineering intersects with the issues discussed in the workshop.

First, we must ask—will bandwidth be plentiful or scarce?

There are three ways to deal with overload of the network:

  1. Overprovision—Provide for more bandwidth than can possibly be required.
  2. Quotas—Establish quotas on users that will share the pain.
  3. Kick off some people from the network when it becomes too congested.

The question to be asked, then, is—do we need a common admission control system? The answer seems to be yes.

We must examine the economics of admission control. When the load is high, is the second or third option better? The answer depends on the shape of the utility function of the application. If the function is concave (decreasing marginal returns), then we should never turn people away (option #2). If the function is convex (increasing marginal returns, e.g. video applications), then option #3 (removing users from the network) is the best option. Changes in the implementation will change the respective utility function. While most applications have a concave shape, some are beginning to have a convex shape.
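The concave/convex argument can be made concrete with a toy calculation. The square-root and squaring utilities below are illustrative stand-ins, not from the talk:

```python
import math

def total_utility(u, capacity, admitted):
    """Total utility when `admitted` users share `capacity` equally;
    everyone turned away gets utility 0."""
    return admitted * u(capacity / admitted)

concave = math.sqrt             # decreasing marginal returns (e.g. data transfer)
def convex(x): return x * x     # increasing marginal returns (e.g. some video)

C, n = 100.0, 10
# Concave case: sharing among all n users beats turning anyone away.
share_all  = total_utility(concave, C, n)     # 10 * sqrt(10) ~ 31.6
admit_half = total_utility(concave, C, 5)     # 5 * sqrt(20)  ~ 22.4
# Convex case: admission control wins; fewer users at a higher rate is better.
share_all_v = total_utility(convex, C, n)     # 10 * 10^2  = 1000
admit_one_v = total_utility(convex, C, 1)     # 1 * 100^2  = 10000
```

With the concave utility, spreading capacity thinly over everyone maximizes the total; with the convex one, total utility rises as users are removed, which is why convex applications want admission control.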

For the moment, let us assume the following conjecture: some applications will want admission control of overloads. Then, given that the optimal thing to do is admission control, does one exercise control or just throw more bandwidth at the network?

To answer this question, look at the economics. Economically, no admission control is cheaper and simpler. However, it requires more bandwidth. The question remains—how much more and who pays for it?

Using initial analysis, it seems that high bandwidth users will set provisioning levels even if they are a small fraction of the total load. For example, ten percent of the load of high bandwidth applications leads to ten times the provisioning level. Heavy users will always prefer admission control. However, there is a need for more economic analysis in this area.

How do we price network service? There are two reasons to set prices: cost recovery and congestion. There are different kinds of charges (e.g. access, capacity, and usage). The marginal costs are small compared to the fixed costs involved, so there is no perfect scheme, since marginal-cost prices will not recover costs. Using access charges artificially discourages connection to the network. The use of capacity charges discourages occasional high-bandwidth users. Usage charges discourage use in general.

One criterion is minimization of distortions. Different segments might use different solutions to this problem. Elasticities, risk aversion, and utilities are important factors to understand in answering the issue. Also, there are arguments for why usage-based pricing is not good. A key issue is with whom service providers will interconnect.

It is important to consider competing architectures. If you had five billion dollars to spend, what would you do? Would you build part of the Internet, Microsoft Network, or a cable-like video on demand system?

People want to know how the Net is used because that is how people make money off of it. Do we need to provide incentives for people to make money off of the network?

Today, we cannot assume that the Internet will continue to work properly. In the worst case, on-line services and cable-like services will capture most of the network bandwidth, and the Internet as we know it will become a relic. The Internet is the most socially desirable architecture. The key question is whether the Net can attract private investment (the key question is not whether we can recover costs). If investment cannot be attracted, can we articulate why not? There is not much to be optimistic about.

4.3. Steve Deering, Xerox PARC

Steve Deering yielded his time to open discussion.

4.4. Questions and Comments

Comment: The Internet is the first mechanism to recognize that bandwidth is not a cost.

Brynjolfsson: Bandwidth is not ubiquitous in that I cannot get infinite bandwidth into my home.

Reply: It used to make sense to consider bandwidth. Now, it is not sensible. It is an antiquated system.

Shenker: The Internet is revenue handicapped.

Deering: My computer can run up a large bill in one night using on-line services. It is not feasible to check every packet. How, then, is billing going to affect consumption under usage-based pricing?

Varian: In countries with time of day pricing of electricity, the water heaters are smart enough to heat water at night; if a water heater can be that smart, think how smart a workstation can be. People can use applications that manage the load of the network differently. Even now, there are devices in the home that can run up large bills (e.g. long distance telephony).

Shenker: The problem is that this resource is more complicated than simply night versus day. Did you mean to have a stamp on every packet? If so, the envelope of the packet will be large.

Varian: Not every packet will be stamped. The digital stamps only make sense for a protocol that has a setup stage, like ATM.

Question: What if there were lots of small investors totaling $5B instead of one large one? Also, the Internet is not a goal in and of itself. Therefore, should we change these assumptions?

Shenker: The assumptions will not change with a group of small investors since they will be aggregated. In regards to the second question, I have assumed that we want an infrastructure containing these services. So the applications will indeed be constrained.

Question: It becomes problematic as the number of networks connected to the Internet increases. It is easier with video dial tone, since we know what people want to watch and there is one source. Can this also be supported with Internet over one pipe?

Deering: One could probably charge more for movies over IP since it provides more functionality.

Question: You said that bandwidth is not a resource to worry about. But a megabyte used to be an infinite amount of memory, and that assumption was built into systems. In the future, terabytes may not be enough, so we will have to deal with congestion at some point. In regards to Scott, it is interesting that he has assumed that the Internet will be privatized and will not be considered a public good. That brings up the question of whether the Internet is a public good like the highway system. We may end up with a prisoner's dilemma if it is private, since providers will want to develop private channels and applications but will be unable to do so because of competition.

Shenker: I believe there should be some government intervention.

Question: If usage pricing is done based on the load of the system, then it is possible to artificially increase the load of the network. The network will be open to manipulation of billing. Second, I am concerned that with multiple routes of a packet over multiple routers, it will be difficult to bill. There is no uniform load across routers. How do you price in this system? When do you price? At the busiest point? At the first point?

Varian: These issues have been addressed elsewhere. Yes, there is always abuse. If there is a monopolistic supplier, then he can abuse the system. Therefore, we need to encourage methods to provide for a competitive market. Sensible pricing is necessary to support a competitive market for network provision. It is clear that you can't have a very sophisticated accounting and billing system in the IP world. (And you probably don't want one.) In a connection-oriented system, it is much easier to bill since the route is known.

Question: It is clear that the infrastructure can do simultaneous video dial tone (VDT) and Internet-type services. If I were to invest $5B, and if content is what is important, then I would want the cheapest method of transmission. So I envision that a cartel of companies will sell network connectivity. However, in this model, it takes only one person putting up a free open network to bring the market down.

MacKie-Mason: We should make a comparison between the information highway and the transportation highway. On the road, one person can put only one car on the road at a time. In the information world, one person can put lots of traffic on the network. This is precisely the place where public good provisioning fails. There will be costs if we want to preserve it as a private good.

Question: People are used to paying a flat fee. There will be a mix of users of the system. Using the highway analogy, on the Garden State Parkway, some people leave the toll road to struggle through free roads. But some people will pay more for the clearer toll road.

Question: What is the cost of operating the Internet? All cost estimates of operating the backbone do not include local area network (LAN) costs. The benefit of the network is a positive externality: more connection to the system adds value. I pay a high tax to the local township for part of the road near where I live. So people need to understand how public goods are paid for. Only once people understand is it possible to privatize.

Deering: Line rental is by far the biggest part of the cost for running the network. I would like to know the costs for private networks, but they will not say how much it costs.

Brynjolfsson: Usage fees on on-line services have gone by the wayside. Why is it different for the Internet? A lot of the value of the network is content and not how it gets there. There needs to be pricing for sharing information and incentives for producing information.

Varian: The problem is keeping services that adversely affect other users and applications off the network. The role of pricing is to allow traditional services to flourish.

MacKie-Mason: Also, on-line services provide content, not bandwidth. That is why pricing usage of bandwidth does not work.

Deering: To keep big users happy, we should provide for them and the small users will follow along. Big users prefer that high capacity demand not be blocked. They will not want transmission control.

Question: People don't watch movies in real time. If they are charged premium prices for real time video, it will not sell.

Shenker: What large users really want requires further study.

Varian: Internet telephone may cause problems in the near term since it will be most heavily used for routes that are already congested on the Internet (i.e. overseas calls).

Question: Audio does not tolerate dropped packets, so people will not use the network for it if it is congested. There are two types of traffic. The pricing models should be determined by the different utility functions. Two models will be developed, one for each function. If we try to turn the Internet into a connection-oriented service, then we will be taking it back into the past.

5. An Inquiry Into the Nature and Wealth of Networks; Linda Garcia

The problems facing telephony pale in comparison to the Internet economic problems. Linda Garcia discussed her work at the Office of Technology Assessment, which has given her insight into the problems of Internet Economics. Specifically, she has worked on a report on electronic commerce which looked at computer aided logistics and support (CALS) and other networks. They tried to identify criteria that define the framework for electronic commerce.

There were five major areas of change that were examined by this report:

  1. technology advance
  2. performance of business
  3. market structure changes
  4. culture changes
  5. legal changes

They also looked historically at transportation and communications markets.

Why have a market in the first place? There was a reduction in transaction costs—it became very efficient and even more efficient when information technologies were introduced. There was also an expansion of trade. Wealth of the European markets, for example, grew as commerce by sea expanded. Today, the wealth of commerce grows for businesses because of networks like the Internet. Adam Smith wrote The Wealth of Nations at a time when the road systems around Europe were growing. And similarly, today, the global networks are generating wealth because it is in these areas where transactions costs are reduced and there are economies of agglomeration.

The information infrastructure reduces three kinds of transaction costs (monitoring, searching, and exchange). Along with these transaction costs, there are three types of transactions that were explored: point to point, point to multipoint, and multipoint to multipoint. There is an incredible competitive advantage if you create applications in a multipoint-to-multipoint environment. Separating content and conduit is a difficult thing to do, so you need to deal with them as a system.

At one time efficiency was enabled by vertical integration. Today, however, you see the middleman removed, and that reduces transaction costs. There are different ways you can do this: change the organization structure, outsource job functions, or become a virtual corporation (outsource almost all of your work and become the manager). Technology reduces the need for the older organizational hierarchy. The report then looked at the "productivity paradox," the apparent failure of information technologies to produce productivity gains. The report went on to show that this productivity paradox may not exist, because information technologies bring new organizational benefits.

Access was the next big focus area of the report. The report stressed access to markets, not access to a bit stream, for example. In order to understand the value of a network, we need to go above the network layers and concentrate on the applications. Common carriage and universal service are two regulatory areas where the U.S. government may get involved. Government also gets involved in more market-driven roles, for example, standardization, education, and participation in product development. She stressed that we need standardization at multiple layers. Standardization at the network layer is fine, but we need to standardize the application layer as well.

We need to address many problems in the future: intellectual property, trade, etc. and we need to integrate these into a new framework when looking at issues surrounding Internet Economics.

6. Panel: Internet Resource Allocation: Congestion and Accounting; Hal Varian, moderator

6.1. Frank Kelly, University of Cambridge

Kelly introduced himself as a "mathematical modeler of telecommunications networks—a newcomer to Internet, but familiar with the hard problems of telephony such as routing calls and blocking probabilities."

His talk was entitled "Charging and Accounting for Bursty Traffic." In telephony, well-defined methods of traffic engineering are applied to aggregations of telephone calls to determine optimal network sizes and charges for usage. In contrast, Internet traffic is bursty and it is not obvious how to measure or characterize the traffic both for network design or usage charging purposes. For example, to avoid dropped packets, the same mean traffic level may require different network sizes depending on the variance of the traffic distribution (a.k.a. the "burstiness" of the traffic).

Kelly proposes that usage charges must be sensitive to this difference; specifically, he proposes using "effective bandwidth" as a way for users to characterize their traffic to the network. (Kelly observed that phone people think each traffic source should be characterized first, while computer people think the sources shouldn't have to be described at all—the network should figure it all out.)

The concept and definition of effective bandwidth is further developed in Kelly's paper. The proposed tariff is based on the user's estimate of the mean bandwidth to be used (a[m], a per-second charge) and the network's measurement of the actual mean (b[m], a charge per Megabit). The charge is minimal when a user's guess is closest to the actual usage, giving each user some incentive to accurately characterize his or her traffic source. The incentive is weak or strong according to how easy it is to statistically multiplex the source.

For example, as the user's peak load becomes a larger fraction of the network capacity, more of the usage charge is contributed by the "per second" factor. This is appropriate since in this situation, it becomes harder to serve others simultaneously.
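Kelly's tangent tariff can be sketched numerically. The concave effective-bandwidth function below is purely illustrative (Kelly derives the actual form from the traffic model in his paper), but it shows the incentive property: the a[m] and b[m] coefficients are the intercept and slope of the tangent to the effective-bandwidth curve at the declared mean, so an honest declaration minimizes the expected charge.

```python
import math

# Illustrative concave effective-bandwidth function of the mean rate m
# (a stand-in, not Kelly's exact form).
def alpha(m):
    return math.log(1.0 + m)

def d_alpha(m, h=1e-6):
    # numerical derivative of alpha
    return (alpha(m + h) - alpha(m - h)) / (2.0 * h)

def tariff_rate(declared_mean, measured_mean):
    """Tangent tariff: a per-second charge a[m] plus a per-unit-volume charge
    b[m]. Because alpha() is concave, the tangent lies above the curve, so the
    charge per second is minimized, and equals alpha(measured_mean), exactly
    when the declaration is honest."""
    a = alpha(declared_mean) - declared_mean * d_alpha(declared_mean)
    b = d_alpha(declared_mean)
    return a + b * measured_mean

true_mean = 4.0
honest = tariff_rate(4.0, true_mean)   # equals alpha(4.0)
low    = tariff_rate(2.0, true_mean)   # under-declaring costs more
high   = tariff_rate(8.0, true_mean)   # over-declaring costs more
```

The strength of the incentive is governed by the curvature of alpha(): a nearly linear effective bandwidth (an easily multiplexed source) makes misdeclaration almost free, matching Kelly's observation that the incentive is weak or strong according to how easy the source is to statistically multiplex.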

6.2. Nevil Brownlee, University of Auckland

Brownlee introduced himself as a physicist who would be describing "New Zealand's Experiences with Network Traffic Charging." New Zealand's network consists of a Frame Relay backbone linking 15 sites (government, research etc.) within NZ and, since 1989, a single link (now an undersea cable) to California. The connection to the rest of the world is called "NZGate." Brownlee's experience is that usage-based charging has worked well on the simple shared resource (i.e. the single NZGate) but not at all well on the more complex shared Frame Relay network.

The link to the rest of the world is too expensive (on the order of $100,000 per month) for any one site to cover its cost. However, between April of 1989 and July of 1994 it was gradually upgraded from 9.6 Kbps to 512 Kbps, with corresponding reductions in the cost per bit.

The algorithm used to share the cost of this link has the following goals and features:

  • Set charges so as to maximize link utilization.
  • Charge enough to cover actual costs plus a percentage for development, to fund more capacity as demand grows.
  • Charge by usage volume—which means that traffic through NZGate in both directions has to be measured.
  • Charging rates are divided into two components: committed and uncommitted. "Committed" refers to the amount of traffic each site contracts to use each month (i.e. a flat payment), while "uncommitted" rates are charged for amounts above that threshold. There is a one-month smoothing period, so that sites are forgiven for an overload in the first month it occurs, but have the overload amount added to their committed payment if it continues to occur. The actual charging rates (quoted in $NZ, which at the time of the conference were trading at approximately US$0.62 per $NZ) range from $4/MB for a 100 MB committed rate per month to $2.30/MB for a 6 GB committed rate per month for committed traffic, and $6/MB for uncommitted traffic.
  • Discounts for low priority traffic (such as email and FTP data) of 30%, and for normal priority traffic (everything else except interactive traffic such as telnet and FTP commands). The Cisco router managing the trans-Pacific link uses (low) priority queuing on this traffic.
  • An off-peak discount: 80% off between 8 PM and 9 AM, to encourage people to schedule ftp transfers off hours. In the accounting, this discount shows up as a reduced traffic count. No data was available on whether this discount produces the intended effect.
  • A 50% discount for out-bound traffic. This was added recently, because it was observed that more traffic was coming in (e.g. downloading web pages) than out. This change encourages Brownlee to give out his URL.
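The committed/uncommitted rule above can be sketched as follows. This is a simplified model: the priority, off-peak, and out-bound discounts, which show up in the accounting as reduced traffic counts, are assumed to have been applied to `actual_mb` already.

```python
def nzgate_charge(committed_mb, actual_mb, committed_rate, uncommitted_rate=6.0):
    """Monthly NZGate bill in $NZ: the committed volume is a flat payment
    (owed even if actual traffic falls short), and traffic above the
    committed threshold is charged at the uncommitted rate."""
    flat = committed_mb * committed_rate
    excess = max(0.0, actual_mb - committed_mb) * uncommitted_rate
    return flat + excess

# A site committed to 100 MB/month at $4/MB that actually moves 130 MB:
bill = nzgate_charge(100, 130, committed_rate=4.0)   # 400 + 30 * 6 = 580
```

Since the uncommitted rate ($6/MB) exceeds every committed rate, sites that persistently overrun are better off raising their commitment, which is exactly what the one-month smoothing rule forces.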

The resulting algorithm seems to work well for sharing this single common resource. To Brownlee's knowledge, no one yet has ever masqueraded as another site to evade charges. The algorithm has not yet dealt with multicasting.

Sharing the cost of the Frame Relay network is a different story. The 15 sites within New Zealand are divided into four management groups (universities, agricultural research, industrial research, and national library and others). Fixed cost shares are allocated among these four groups according to the size of the organizations involved. Volume charging did not work as well, since each site's service requirements kept changing. The group held a workshop in August of 1994 to try to come up with something more flexible, but was only able to agree on needs, not a mechanism. These needs include supporting:

  • "Common good" services (e.g. Domain Name Service, Network Time Protocol);
  • Broadcast services (e.g. network news);
  • High-bandwidth pipes (e.g. desktop video, two-way interactive for remote teaching and learning),
  • High-volume data transfer (e.g. FTP);
  • High-priority services (e.g. telnet, WWW).

Volume charging can't handle these needs, so instead each site simply pays its own access charges, and gets whatever size pipe it wants to pay for. Each management group specifies how much effective capacity they will pay for (i.e. within the Frame Relay cloud), producing a set of capacity matrices, which are not necessarily symmetric. The result is an overall capacity matrix.

In conclusion: volume charging works well for the single shared resource (NZGate), but not for the more complex network. Notice that quality of service needs to be considered, and actual flows have to be measured.

6.3. Jeff MacKie-Mason, University of Michigan

MacKie-Mason (an economist) described his talk, entitled "Internet Resource Allocation: Congestion and Accounting," as an attempt to reach common language if not common ground. The language barrier is especially evident when discussing "efficiency," since network and economic efficiency are totally different concepts.

How does economics fit in dealing with congestion? An economist values a network in terms of users' perceptions of it. Users have different (heterogeneous) valuations. So, for example, delay can cost user A more than it costs user B, or cost A more in the morning than A in the evening, or cost A more when sending video than when sending email. Therefore, network design has to support multiple service qualities, and it should be flexible, i.e. not lock applications into specific qualities. (For example, my email may be more important than your video at times.)

The Internet is a better metaphor for integrated services networks, since, unlike telephony and cable, it does support multiple application types—but with only one service class. It works best with excess bandwidth or with applications that can adapt to the particular service class provided.

Congestion, however, reduces resource slack, which implies resource allocation. It is happening: the question is not whether, but how. For example, when the Internet is congested now, TCP applications give up their bandwidth (they back off), but UDP applications do not, *regardless* of user valuations. Conversely, when real-time applications suffer enough delay, they become worthless, also regardless of user valuations. Faced with this situation, users have a choice of "shouting louder" (e.g. requiring that the Hz allocated to the MBONE be reduced), or setting up a separate network.
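The TCP-versus-UDP asymmetry can be illustrated with a toy additive-increase/multiplicative-decrease loop. The rates and capacity below are arbitrary, and real TCP dynamics are far more complex; the point is only that the allocation is set by protocol behavior, not by valuations.

```python
def simulate(capacity, tcp_rate, udp_rate, steps=20):
    """Toy congestion loop: when offered load exceeds capacity, the TCP-like
    flow halves its rate (backs off) while the UDP-like flow keeps sending
    regardless; otherwise the TCP-like flow probes upward. User valuations
    play no role in who gets the bandwidth."""
    for _ in range(steps):
        if tcp_rate + udp_rate > capacity:
            tcp_rate = max(1.0, tcp_rate / 2.0)   # multiplicative decrease
        else:
            tcp_rate += 1.0                       # additive increase
    return tcp_rate

# A fixed-rate UDP flow taking 80 of 100 units squeezes the TCP flow down,
# however much either user values its traffic.
tcp_share = simulate(capacity=100.0, tcp_rate=50.0, udp_rate=80.0)
```

However the numbers are chosen, the backing-off flow ends up oscillating in whatever residual capacity the non-responsive flow leaves it.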

The "economics" tool enters in as a way to "use the users." "Dumb" users just offer load. MacKie-Mason's idea is to treat users as the smart devices that they are, in keeping with the philosophy of moving intelligence out to the periphery of the network (articulated earlier in the workshop by David Clark). Keep the network simple: let it give feedback, and expect users to adapt their send rates.

A good feedback mechanism should respond to heterogeneous/relative valuations (vs. e.g. TCP "slow start" which is anonymous and inflexible). It should also be "incentive compatible"—people won't be "nice" or "truthful" when there are a lot of them and they are anonymous.

Congestion pricing is one such approach, and MacKie-Mason and Varian's "Smart Markets" is one implementation. It is flexible, maximizes economically efficient use of the existing network, and gives efficient signals for network expansion. Compared to blindly charging for usage, congestion pricing is superior since it only crowds out usage in favor of more valuable usage. Its goal is "not to transfer wealth, but to communicate the information needed for efficient allocation." This is, however, only part of the answer—it only provides cost recovery for incremental investment, not already-existing investment.

6.4. Nick McKeown, Stanford University

McKeown described himself as a technologist, despite wearing a tie. He presented work done while at UC Berkeley, which aimed to answer the hypothetical question: what if Berkeley's link to the rest of the world were so expensive that it were worth recovering its cost? (This situation does not hold now, since Berkeley's current connection to the rest of the world consists of 2 T1 lines shared by 40,000 users, and is therefore very inexpensive.) Also, who are the users?

Some goals of the billing system:

  • Make no changes to existing Internet protocols or applications;
  • Explicitly involve the user in the decision to consume network resources. This is crucial if users are to change their behavior in response to a pricing/billing system, and involving and authenticating the user makes the system credible. The explicit involvement of the user is what distinguishes this billing system from previous systems, which at best only identify the end machine.
  • Somehow continue to support sharing of information, e.g. by making the beneficiary of information pay for it (instead of the provider).

The basic architecture adopted for billing for usage of the external link consists of a "billing gateway" which intercepts requests for connections outside Berkeley and forwards them to an access controller, which asks the originating machine for user identification (provided by a "uid daemon" on that machine). Each user has a purchasing agent that can either automatically make decisions (e.g. according to a set of configured rules) or ask the user (e.g. "do you want to pay X for this connection?"). Then, traffic is metered over the connection.

Complications: first, notice that this process lengthens connection setup latency. Also, non-TCP traffic is not billed, although the scheme can be extended to UDP. Finally, how do two sites cooperate and decide who the beneficiary is (e.g. in an ftp)? Whoever is willing to pay for it?

In September of 1994, McKeown and others performed a feasibility study on the Berkeley campus, comprising 40,000 users on 22,000 computers. They found that TCP traffic accounted for 94% of the bytes transferred. Also, they measured the current connection setup latency to determine what increase in setup times would be acceptable to users. From a distribution of actual setup times, they deduced that an additional 0.1 seconds of setup latency would be acceptable to the user. McKeown argued that this is readily achieved in their system.
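The gateway/agent handshake might look roughly like the following sketch. All class and function names here are hypothetical, not from the Berkeley prototype, and the price threshold is invented for illustration.

```python
class PurchasingAgent:
    """Hypothetical per-user agent: auto-approves cheap connections by rule,
    otherwise defers to the user (modeled here as always refusing)."""

    def __init__(self, max_auto_rate=0.10):
        self.max_auto_rate = max_auto_rate   # $/byte threshold (illustrative)

    def authorize(self, destination, rate):
        if rate <= self.max_auto_rate:
            return True                      # automatic, rule-based decision
        return self.ask_user(destination, rate)

    def ask_user(self, destination, rate):
        # A real agent would prompt: "do you want to pay X for this connection?"
        return False


def billing_gateway(agent, destination, rate, payload_bytes):
    """Intercept an outbound connection, obtain the user's decision via the
    purchasing agent, then meter the traffic; returns (admitted, amount)."""
    if not agent.authorize(destination, rate):
        return False, 0.0
    return True, rate * payload_bytes        # metering over the connection


agent = PurchasingAgent()
ok, billed = billing_gateway(agent, "ftp.example.com", rate=0.05,
                             payload_bytes=1000)
```

The extra round trip to the agent is the source of the added connection-setup latency discussed above, which the study bounded at about 0.1 seconds.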
McKeown stated his belief that router technology can only develop with computer technology because of the shared microprocessors and other computer chips. Because of this, routers are under pressure to handle the traffic generated by computers which are just as fast. If routers could develop faster and independent of computers, overprovisioning would be possible. This is not the case, however, and router technology advances will parallel computer advances.

Their study of user traffic showed enormous variation: a graph of the traffic from the 65 most active users in the EE department is a complete scatter plot. They found this to be true over one day and also over longer periods. Additionally, they found considerable resistance to the idea of identifying a particular user's traffic; users preferred to remain anonymous.

Finally, McKeown considered the benefit that could be gained from spreading delay-tolerant traffic over the whole day (considering that email and netnews make up 35% of the traffic he measured). He did not propose a mechanism, only stated that if this were done, it could reduce traffic above the mean by 30%, a big deal in a heavily utilized network. Also, short-term traffic shaping, such as 1-second traffic buffering, would result in a 50% reduction in peaks.

McKeown concluded with an invitation to others to propose interesting things to study from the vast amount of data they have collected.

6.5. Van Jacobson, Lawrence Berkeley Labs, Respondent

Jacobson responded to Kelly by noting the 180 degree shift between telephone and Internet thinking—partly caused by the telephone network's use of bigger buffers. Jacobson believes the user is far more likely to know his mean utilization needs than his peak, since the access line bandwidth is known. Can the effective bandwidth model work if the free parameter is exchanged?

Jacobson made the general observation that experience with the MBONE, which was designed to get some experience with user quality of service, suggests two different types of traffic:

  • Computer to computer traffic, for which delay is totally irrelevant. It is silly to talk about modulating delay for those applications, since there are very wide bounds around the acceptable time frame. However, they usually want high bandwidth.
  • The opposite pattern is shown by human to human traffic—low bandwidth requirements and high delay sensitivity.

This implies two service classes (editor's note: real-time videoconferencing between people appears to break this taxonomy).

Jacobson observed that New Zealand's (partial) success with usage-based charging seemed to be unique. Usage sensitive charging was a disaster in Chile; in the U.S. Department of Defense community it drove users to private networks with fixed access charges, and put the usage-based charging network out of business. Jacobson wondered what was different that made it work in New Zealand.

Finally, Jacobson observed that who gets billed for usage has a big effect. The telephone model of charging the sender discourages publishing. Has Internet expansion been driven by the value of information to its receivers, or by the providers of information?

6.6. Questions and Comments

Questions and comments from this panel were postponed until after the next panel.

7. Internet Pricing, Interconnection, Settlements and Resale; Priscilla Huston, moderator

7.1. Padmanabhan Srinagesh, Bellcore, "Internet Costs and Interconnection Agreements"

Padmanabhan started by stating that he agreed with most of the other comments made today. However, people focus heavily on bandwidth when they talk about the Internet, and other things are important too. He wanted to talk about the Internet as a service as it is perceived by customers. ISPs (Internet Service Providers) offer hardware and software, customer support, IP transport, information, and access to the Internet. Access is somewhat hard to define because you don't know how many people you can reach over the Internet, but you can reach all of them from any ISP.

There are many costs associated with offering all of these things:

  1. Layer 2 transport (private lines, SMDS, Frame Relay, ATM)—ISPs used to be more open about these costs, and Padmanabhan was able to estimate them for various ISPs; they represent approximately 30-40% of total costs. These would be the bandwidth costs discussed earlier.
  2. Network Management (address assignment, routing, etc.)
  3. Customer support (e.g. marketing, etc.)
  4. Information acquisition—AOL, Prodigy, and other ISPs acquire information and resell it; that costs money.
  5. General management.

If you try to determine the incremental costs, you get different answers for each of the above items.

Incremental cost of layer 2 transport is 0 if there is no congestion. If there is congestion, the incremental cost is the delay that you place on other people.

Network Management is mostly considered a sunk cost but can vary depending on the situation. For example, a reseller with a PPP link to a national ISP has less management (and lower costs) than a reseller who peers with a national ISP using BGP. So, the management costs may vary depending on the routing technology that the reseller uses.

Customer support costs can vary. ISPs may try to seek out certain types of customers to limit costs, i.e. those who know what they're doing so they won't be constantly calling the ISP's support services. The ISP's marketing strategy determines the types of customers that it will have.

What does Internet access mean? How do my packets get transferred from my provider to another? There are interconnection arrangements such as:

  • CIX—multilateral peering without settlement
  • MAE-East—bilateral peering without settlements
  • Private bilateral peering agreements—FIXes, SWAB, MAE-East+, NAPs...

These interconnection arrangements are getting more complex, but they somehow seem to work—as Dave Clark pointed out, the Internet isn't falling apart.

What are some of the economic incentives?

  • Incentives to arbitrage/resell
  • Effect of new "cloud" technologies
  • Incentives for vertical integration and vertical partnerships

1. Incentives to arbitrage/resell

ISPs that have a large backbone have sunk costs. They either own facilities or have signed multi-year contracts (with AT & T, Sprint, or MCI, for example) for the leased lines that constitute their backbone. All ISPs, including the ones with leased facilities, likely have sunk costs that need to be recovered. Prices may need to be in excess of incremental costs for cost recovery in the long run.

For example, some ISPs charge approximately $2000 per month for a customer with a T1 attachment. It has been claimed that a T1 can support up to 3000 dialup lines. Resellers who purchase a T1 to a national ISP and resell 3000 dialup accounts at $40 per month can undersell some national ISPs and make a tidy profit.
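The arbitrage arithmetic implied by these figures can be written out directly (the dollar amounts are the ones quoted in the talk; support, modem pools, and all other reseller costs are ignored):

```python
# Figures quoted in the talk (1995 dollars).
t1_cost_per_month = 2000     # T1 attachment to a national ISP
accounts_per_t1 = 3000       # claimed number of dialup lines one T1 can support
price_per_account = 40       # monthly price per dialup account

revenue = accounts_per_t1 * price_per_account   # monthly revenue from resale
margin = revenue - t1_cost_per_month            # margin over transport alone

print(f"monthly revenue: ${revenue:,}")   # $120,000
print(f"margin over transport: ${margin:,}")   # $118,000
```

Even allowing for substantial support and equipment costs, the gap explains why resale was attractive.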

Padmanabhan talked about the application of a particular economic framework to the issue of resale. The economic literature on interconnection has developed the notion of the ECPR (efficient component pricing rule, from Baumol and Willig). It was originally based on an analysis of competing long distance carriers that wanted access to the local exchange. The theory argues that if long distance rates subsidize the local network, a competing carrier that wants to interconnect should not only pay the incremental cost of the connection but should also make up the contribution that the long distance calls were previously making to the local network. If that subsidy is calculated correctly, then entrants will only come in when their incremental costs are lower than the incumbent's incremental costs.

Similarly, on the Internet, sunk costs are in the large national backbones, and end users pay for access to the ISP's POP (point of presence). When a reseller comes in, he is basically leveraging the sunk cost that the incumbent has already put into the backbone. If you believe in the ECPR, then resellers should pick up part of the costs that the incumbent has put into the backbone, and resellers should be charged more than end users. Padmanabhan, however, said that he would not discuss whether the ECPR is correct or not.
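A toy numeric illustration of the ECPR logic may help (all numbers are invented, not from the talk): the entrant pays the incumbent's incremental cost of the access component plus the forgone contribution, so entry is profitable exactly when the entrant is more efficient than the incumbent on the competitive component.

```python
# Hypothetical per-call numbers for an incumbent with two components.
retail_price = 10.0     # incumbent's end-to-end price
cost_access = 2.0       # incumbent's incremental cost of the access (local) component
cost_longdist = 3.0     # incumbent's incremental cost of the long-distance component
contribution = retail_price - cost_access - cost_longdist   # 5.0 subsidy to the local network

# ECPR: access charge = incremental cost of access + forgone contribution.
access_charge = cost_access + contribution   # 7.0

def entry_profitable(entrant_longdist_cost):
    """Entrant buys access at the ECPR charge and competes on long distance."""
    return retail_price - access_charge > entrant_longdist_cost

print(entry_profitable(2.5))   # entrant cheaper than the incumbent's 3.0
print(entry_profitable(3.5))   # entrant more expensive than 3.0
```

Under this rule, entry pays off only for entrants whose long-distance cost is below the incumbent's 3.0, which is exactly the efficiency property the ECPR claims.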

2. Effect of new "cloud" technologies, and 3. Incentives for vertical integration and vertical partnerships

Some underlying physical network providers are starting to provide such things as wide-area ATM services. So, there is a somewhat vertically dis-integrated industry structure; some providers sell higher layers of service, some sell lower layers of service.

There may be an incentive for vertical integration and vertical partnerships to emerge in this environment. The providers of the lower layers have put in the large national backbones and have substantial sunk costs. However, Bertrand competition may drive the price to the incremental cost (which is 0 for fiber). Sunk costs cannot be recovered. One way of generating a revenue stream that permits cost recovery is to offer customers long-term contracts.

Initially, in the long distance markets, these contracts were offered to large customers, but as competition continued, these contracts are being offered to a wider range of customers—down to the single user signing up for a 6 or 12 month period. Long term contracts among different network providers at various layers in the hierarchy bear a resemblance to vertical integration.

Concerning the Internet, Padmanabhan feels that long term contracts are in current use, and there appears to be a growing trend toward long term contracts and vertical integration/partnerships. ISPs will increasingly be concerned with the marketing aspects of the business. They will want to target customers who will have low costs (i.e., those who don't call the help desk all the time), and they will want to reduce routing costs by having interconnect agreements that are easy to manage.

7.2. Joe Bailey, MIT, "Issues of Economics of Interoperability/Interconnection"

Bailey said that there are multiple models of interconnection:

  1. Bilateral agreements—ISPs agree to interconnect for economic reasons
  2. 3rd party administrator—provides interconnection between a number of hosts
  3. Cooperative agreement—e.g., certain governmental agencies (FNC, NSF, DOE, NASA, etc.) have formed the FIXes. They interconnect with each other but do not try to make money off of each other.

He continued by saying that there are many resale policies which are important at the interconnection point. For example, the two methods of discouraging resale at the interconnection point are pricing and policing. Either method requires accounting, which in turn raises security concerns (checking for packets which violate the resale policy, or collecting data to meter usage). The resale issue will be at the heart of interconnection problems.

Bailey voiced concern about the three models of interconnection, stating that there are vested interests (which may be modeled economically by objective functions, for example) which change the behavior of the interconnection agreements. For example, with the 3rd party administrator model, if the administrator makes money simply by connecting others, it will actively try to prevent resale. We have seen this with the Commercial Internet eXchange (CIX).

The driving economic forces behind interconnection, Bailey said, are positive network externalities. This is clear with email, but could there be instances where the network externalities are negative? Electronic commerce on the Internet is one example: if a provider and its competitors are interconnected and both offer electronic commerce services, the local provider might limit its customers' access to the other provider's competing services. Wenling Hsu gave another example with intelligent agents, which were denied access to certain servers for competitive reasons. Bailey also mentioned the possibility of regulation and interconnection policy. For example, should we require common carriage of service providers, and should the government step in and regulate? If we do not want this, we need to craft interconnection agreements that prevent denial of access.

He continued by saying that interconnection agreements are application driven. Bailey thought that the Internet was application driven and people aim for interoperability. The users benefit from common applications, not just the transfer of packets from one place to another. So, when designing protocols, think of the applications.

Bailey posed the open-ended question: how do we take new applications and map them down to lower layers at which we will interconnect, and how will that affect the interconnection agreements?

He mentioned that heterogeneity (of users, computers, networking environments) was previously discussed. This heterogeneity sometimes leads to non-interoperable systems. However, this depends on how one defines interoperability, and that definition can be subjective. Because different people can view interoperability differently, establishing interconnection agreements can be difficult.

If we take an application view of interoperability, then it might be easier for economists, who have traditionally looked at applications and their effects on productivity, organization, etc., to study the Internet and its effects. A caveat is that the applications on the Internet are never fixed; new ones are always being developed.

Where do we go from here? Bailey proposes one action item and one research question:

  • Action item: if we are concerned with interconnection and congestion, then we need to collect network data. The more interesting data would be at the application layer, as opposed to the network layer. He would like these types of data, which the federal networking community has accumulated, to be made more readily available.
  • Research item: technical people and economists should work more with the application layer to address their concerns and issues (and he hopes that this workshop is addressing such issues).

7.3. Bob Collet, Commercial Internet eXchange (CIX)

He is here wearing two hats: President of the Commercial Internet eXchange, and Director of sales engineering for the government systems division of Sprint.

The CIX has about 122 members and an exchange point in Santa Clara, CA.

Bob compared X.25 prices to IP prices and found an order-of-magnitude difference, one that continues to grow.

In order to look at pricing and interconnect issues, you need to consider some basic numbers. One measures a business today by operating margin: gross revenue - access costs - operational costs - general and administrative costs = operating margin. This operating margin is used to measure the profitability of the business.
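Collet's margin formula can be written out as a one-liner; a sketch with hypothetical monthly figures (none of the numbers are from the talk):

```python
def operating_margin(gross_revenue, access_costs, operational_costs, ga_costs):
    """Operating margin as Collet defined it: revenue minus the three cost buckets."""
    return gross_revenue - access_costs - operational_costs - ga_costs

# Hypothetical monthly figures for a small ISP (invented for illustration).
margin = operating_margin(gross_revenue=500_000,
                          access_costs=150_000,
                          operational_costs=200_000,
                          ga_costs=75_000)
print(f"operating margin: ${margin:,}")   # $75,000
```

His point is that this model is service-neutral: the same calculation applies whether the product is IP, ATM, frame relay, or X.25.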

There is nothing unique about the Internet (when compared to ATM, frame relay, X.25, etc.) that would prohibit it from fitting into this model. Thus, one can determine a price to charge.

Bob said that unit costs are declining across the board, both circuit costs and router costs. The only costs that are rising are router debugging costs: as more features, such as multicasting, are added to the network, the costs of implementing and supporting them rise.

Other costs include staffing and the development of back office systems.

He found that customers wanted service as available and reliable as telephony.

When measuring capacity of the network (# of routers, links, etc.), one must look at link utilization, router CPU utilization and router free memory. The combination of these 3 things yields the quality of service that one is providing a customer over a dedicated access connection (as opposed to a dial-up connection).

The present network configuration is: T3 links into Cisco 7000 routers or their equivalent; within the nodes, routers are interconnected with FDDI wiring hubs (100 Mbps). As the net grows, this won't suffice.

Bob sees a fairly solid growth path, at least for the next few years, which is the planning horizon. Within the nodes, options include the DEC GIGAswitch and the Cisco LightStream switch, so he feels that the intranode requirements can be met with technology already on the horizon.

For the internode connections, the next step would be parallel DS3 links, which can be done today. Next would be point-to-point SONET (also feasible today). As for the routers, we just add more routers. As for all of the discussion about costs: when orders come in, we just put in the additional bandwidth and charge for it. There is no real magic to it.

However, some people want guaranteed service (best effort does not suffice); we use ATM to the NAP for these customers. So, the proper combination of IP routing, ATM, frame relay, can be mixed and matched to provide the optimal service for each customer.

Concerning the question of how to differentiate between a reseller and an end user: Sprint doesn't differentiate (at this time). Bob proposed dropping the term "reseller" in favor of "peer network". He sees these peer networks as an opportunity to have the niche requirements of the network handled close to the end users.

With regard to settlements, across the industry there are none. One could argue, though, that there is a settlement when an ISP connects to an NSP: the ISP pays all of the costs of interconnection. That model works domestically as well as internationally. It may cause problems in the future as traffic becomes more bi-directional and as overseas requirements become greater, but right now it seems to work.

Some issues arise, such as how to price multicasting service. Some argue that this should be a giveaway since this actually takes load off of the network.

Bob claims that bandwidth is not free, a cross-country DS3 costs about $100,000/month. So, it is not free and he doesn't anticipate it to become so for quite some time.

Bob thinks that the future lies in transaction-based billing, such as Marvin Sirbu (CMU) will talk about later. Discussions about usage-sensitive billing tend to break down when one talks about FTP, but the CMU billing server provides some hope for the future.

Concerning the telecommunications policy discussion in D.C., he feels that LECs have the potential for the same type of market power that the Bell Operating Companies have, and that the regulations applied to the BOCs should also be extended to other facilities-based carriers. The two key regulations are CEI (comparably efficient interconnection) and ONA (open network architecture). Between these two, ISPs, who mainly work at the logical level, should be able to get equitable access to the features they need to provide service to users within any particular territory.

7.4. David Clark, MIT

David attempted to sum up what he had learned and realized that he had heard different, conflicting points. For example, issues of interconnection are not very clear because of forces toward openness; there is potential for both positive and negative externalities; the economics are very unclear. He said there could well be a hypothesis that bandwidth costs will not dominate the costs of the interconnect, but he doesn't know whether that is true at the peer level.

David asks, what are the real forces of economics? Padmanabhan adds that pure competition needs no rules, but it needs constant returns to scale.

David finds this to be a cloudy, murky area. He doesn't know if it will be dictated by competition or by rules. He wonders if building an open system has an opportunity cost, that is, if more money could be made by building a closed system.

7.5. Questions and Comments

Q: (to Bob Collet) From a Sprint point of view, do you see the adoption of a usage-based pricing system? R: Such a system is difficult to manage, especially as you go to higher bandwidths. How do you do anything in an environment that will scale to hundreds of thousands of sites?

Q: Why does X.25 cost so much more than Internet service? R: X.25 prices are stable, with no R & D activities as in the IETF. Also, he heard that the world is inherently connectionless, and that may result in lower operating costs.

Q: What about hostile reselling? R: (Collet) There's so much demand now, it's not an issue—plenty of niche opportunity.

Mitch Kapor, of Kapor Enterprises Incorporated, identified himself as former president of the Commercial Internet eXchange (CIX). Because he was a former CIX president, he said he could speak more openly about some internal issues than Bob Collet, the current CIX president. Mitch said that the CIX is a transitional structure that has served its purpose. There is an incentive for an ISP to come into the CIX from behind one of the 6 major providers; that way, the ISP doesn't have to pay the membership charge.

Clark: We need a strategy to recover fixed costs. Bailey: If usage goes up but the number of users doesn't, you can use usage-based costs. We must realize that the present number of users is small compared to the potential number of users—but not if you provide service to a fixed number of users, as in the NASA Science Internet (NSI).

Kapor: now there is plenty of potential demand; in the future, there'll be plenty of serious competition. The only way out is with the introduction of new applications and services.

Clark thinks that this is the decade of friendly competition: we have one decade to buy up the cable companies, then the government will see what competition looks like and we'll be re-regulated (according to an LEC colleague of Clark's).

Kapor: a reason the net has done so well is its culture, not necessarily its economics. So the question for the future is how we can keep these cultural practices alive against commercial pressure when designing new protocols and services: is a given design open, neutral, or not?

The following are various comments, authors unknown.

It would help to establish a set of cultural goals and draw up a framework, so let's produce a set of goals from this workshop. An example would be to share information that people would be interested in (i.e., don't flood mailing lists). Have the leadership of the communities identify what the goals should be; the people who created this culture should define its future.

If we have a usage pricing scheme, it should only matter when there is a bottleneck, otherwise, there should be no usage fee.

It's more complicated than usage—different people create different value; different bits have different value and a scheme that simply counts bits will not do.

8. Broadband and Shifting Paradigms in Telecommunications; Richard Solomon

Richard Solomon talked about how telecommunications technologies, although they are meant to make life easier, actually complicate things. He demonstrated this by taking out his portable computer and all its peripherals. When he was finished, he had a stack of cords, cables, battery chargers, etc. stacked on top of a table. In this stack were 4 different battery chargers which were not interchangeable since they all conformed to different standards. He proclaimed that this mess was not portable at all if he planned on traveling around the U.S., and if he wanted to travel in Europe he had to bring another large duffel bag full of converters and cables. In this bag were many telephone jack converters, some of them necessary just to communicate from a particular hotel that didn't follow the country's standard. His point was well made: standards in telecommunications may try to make things easier, but often make things more complicated since people don't necessarily follow them.

Solomon then proceeded to talk about the Internet and its economic development in a historical context. The history, he said, dates back to anti-trust regulation which began shortly after the Magna Carta in the 13th century. Since then, regulation has applied to many industries and technologies, including telecommunications. However, telecommunications regulation was taken almost word-for-word from the railroad regulation that preceded it. Richard then recounted many anecdotal stories which led to a few conclusions. Among his many stories, he spoke of the corruptness of AT & T and of a federal judge who helped AT & T receive monopoly status in the U.S. while receiving many shares of AT & T stock. AT & T also promised to provide telephone service to the U.S. military free of charge. He spoke of the problems of starvation in this country because grain producers raised prices by limiting supply; this was a source of regulation as well. Solomon also talked about the private armies that telegraph, telephone and railroad companies kept to protect their investments.

There are a few conclusions that stem from this talk.

  1. People are greedy, and we need to "follow the money" if we are going to understand the economic motivation for providing service, whether it is Internet provision or telephony service.
  2. Regulation is not necessarily a bad thing; many regulations, including the anti-trust regulations in place in this country, protect consumers from otherwise greedy people.
  3. Standards are difficult to agree on in a competitive environment, and it is usually a lack of standards that typifies the telecommunications industry.

9. Commerce and Information Security on the Internet; Stephen L. Squires, moderator

Stephen L. Squires started off the panel discussion by introducing the topic of commerce and information security, stressing the importance of security as a building block. Without information security, he argued, many of the economic solutions proposed by the papers in the proceedings would not be possible. Certainly, without security, information technologies cannot realize their full potential, he said. We need to address these information security concerns by identifying: 1) the role of the government, 2) the role of the Internet's users, and 3) the role the Internet has in a global economy. We need to cut the "Gordian knot" which binds the complex issues of information security and information technologies.

9.1. Marvin Sirbu, CMU, "NetBill"

Dr. Sirbu focused his remarks on "NetBill" which he described as an electronic credit card to enable network based commerce by facilitating the exchange of information between the consumer and the merchant.

The system requires the consumer to establish an account with NetBill, which then allows the user to conduct electronic transactions. This simplifies electronic commerce for the consumer because only one network account is needed. NetBill also makes it easier to become a merchant.

Security and privacy issues must be addressed to ensure the integrity of electronic commerce systems. NetBill provides for digital signatures, encrypted goods delivery, and pseudonyms to support anonymity.

NetBill utilizes an open protocol that facilitates a wide variety of functions. The system works in the following way:

  1. client requests price quote
  2. service provider makes an offer
  3. client accepts offer
  4. goods delivered encrypted
  5. receipt acknowledged
  6. transaction submitted
  7. transaction approved
  8. key delivered
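The eight steps above can be sketched as a toy walk-through; everything here is invented for illustration (real NetBill uses proper cryptography, not the XOR stand-in below), but it shows the key property that encrypted goods arrive before the key does:

```python
# Toy walk-through of the NetBill exchange; encryption is faked with a
# one-byte XOR so the "goods before key" property is visible.
def xor_crypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

goods = b"the article text"
key = 42

# Steps 1-3: price quote requested, offer made, offer accepted (values invented).
offer = {"item": "article", "price": 0.10}
accepted = True

# Step 4: goods are delivered encrypted -- unreadable without the key.
delivered = xor_crypt(goods, key)
assert delivered != goods

# Steps 5-7: receipt acknowledged, transaction submitted and approved.
receipt_ok = True
approved = offer["price"] <= 1.00   # stand-in for the billing server's check

# Step 8: only after approval is the key delivered, unlocking the goods.
if approved:
    plaintext = xor_crypt(delivered, key)
    print(plaintext.decode())   # the article text
```

Withholding the key until the transaction clears is what lets the protocol aggregate many small payments safely.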

NetBill aggregates transactions and uses a systems level perspective to automate transactions.

In conclusion, Dr. Sirbu noted the possibility of integrating NetBill libraries with Web browsers and servers and claimed that NetBill will facilitate an open market for electronic merchants. He anticipates a pre-commercial trial in 1995 that will include various university libraries and publishers.

9.2. Clifford Neuman, ISI, "NetCheque System"

Many services envisioned for the NII will only be realized if service providers are compensated for the services they wish to offer. NetCheque is a system that supports payment for access to NII services.

An electronic payment service must have a secure, reliable, and efficient system design. Specifically, the system must be flexible to support many different payment mechanisms, scalable to support multiple independent accounting services, efficient to avoid bottlenecks, and unobtrusive so that users will not be constantly interrupted.

There are several forms of payments that must be considered including secure presentation, electronic currency, and credit-debit instruments.

Dr. Neuman presented an overview of the ISI design that included a discussion of how funds may be cleared through multiple servers and an example of how a personal check would be cleared.

In summary he noted that the NetCheque release was available December 1, and ISI will grant NetCheque accounts. The accounting server software has yet to be released. For more information contact

9.3. Dan Schutzer, Citibank, "Overview of Electronic Commerce"

In his presentation, Mr. Schutzer gave a broad overview of the NII and electronic commerce.

The Internet presently has over 30 million users and is growing at a rate of 1 million new users per month. The Internet is international in scope, and has a user base that is comparable in size to a country. Some distinguishing attributes of the NII include its rapid rate of change and the fact that senders and receivers do not see each other face to face.

The key needs of electronic commerce include security and privacy, and intellectual property protection. The architecture for electronic commerce should be open and provide for a competitive market. It would be desirable to be able to reach everyone without requiring merchants to vie for electronic storefront space. Moreover, the Internet should be an attractive place for users.

He concluded by noting that telcos may have a competitive edge over cable companies because telephony service reliability tends to be better than the reliability of cable.

9.4. Shoshana Loeb, Bellcore, "Real-Time Billing for Internet Services"

Dr. Loeb noted that traditional billing services are not flexible enough to accommodate real-time customer management and billing. Real-time billing is important because telecommunications must accommodate evolving services, and present systems are inadequate. Using phone billing systems for Internet services is hopeless.

There is a market demand for real-time billing because it is increasingly difficult to differentiate new services. Billing services may provide a competitive advantage to businesses.

There are two trials of real-time Internet billing services that Dr. Loeb is currently participating in: ValuDataNet and BillKeeper.

Dr. Loeb, noting that the market is fragmented now, asserted that multiple businesses may work together to provide a single service to the customer, and that an "open market scenario" could evolve.

9.5. Roger Callahan, NSA, Respondent

Dr. Callahan was the respondent for this session. He made several general comments about previous presentations, including Linda Garcia's and Joe Bailey's. He noted that many schemes for electronic commerce seem complex from the user's perspective, and there is a need to develop an open interface standard. Congestion is also an important issue. Furthermore, he noted that Internet technology may not be suited to real-time applications and perhaps the technology is being misapplied.

In conclusion, Dr. Callahan remarked that there is a need to build consensus among stakeholders on the issue.

9.6. Questions and Comments

Questions and comments for this panel were delayed until later.

10. Future Directions for Internet Economics; David Reed, moderator

10.1. Ketil Danielsen, University of Pittsburgh, "User Control Modes and IP Allocation"

Danielsen comes from a computer science background. He presented some ideas about how users can obtain some control over expenditures given that they face a usage-based, dynamic packet price, and about how this control problem might appear in other types of resource allocation methods. In general, Danielsen is interested in how we can make computational markets viable in the sense that users are involved in real-time allocation. He talked about administrative and incentive-based allocation, IP pricing and demand processes, user control of network usage, and his proposal for an Expenditure Control Interface (ECI). The ECI design does require some changes to the existing protocol, but he does not think the changes are too substantial.

First, allocation in IP is traditionally handled by administrative procedures pre-programmed in software (e.g., FIFO, adaptive routing, and window regulation), and it is impossible to involve the user in this process. However, there are various points where the user is involved and where the externality and allocation effects are affected by what the user decides. For example, the user can choose which server to connect to, what information to request, possibly the quality of the information, when to make a request, and whether or not to cancel a request.

Usage feedback has been proposed by several others. He argued that usage feedback in terms of IP packets or priorities is not a comprehensible commodity to the user, so that feedback has to be transformed into something the user understands. Therefore, he proposed a modification of this feedback in order to improve what users can do.

For IP pricing, he assumed that the IP packet is the basic service unit. The pricing process can be viewed in two ways. First, for regular IP pricing, you observe the load, then compute and distribute the price. In this case, the price is equal to the marginal value of usage and changes at some pricing period interval. Second, for Smart Market IP pricing, the price is equal to the highest bid among unserved packets and changes every packet service time. The regular IP user has to decide the packet usage volume, while the Smart Market IP user has to decide both the per-packet bid and the usage volume.
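The Smart Market clearing rule described here is easy to state in code; a sketch with invented bids and capacity:

```python
# Smart Market: each packet carries a bid; the router serves the highest
# bids up to capacity, and the clearing price is the highest unserved bid.
def smart_market_price(bids, capacity):
    ranked = sorted(bids, reverse=True)
    served, unserved = ranked[:capacity], ranked[capacity:]
    price = unserved[0] if unserved else 0.0   # highest bid left unserved
    return served, price

bids = [0.9, 0.1, 0.5, 0.7, 0.3]
served, price = smart_market_price(bids, capacity=3)
print(served)   # [0.9, 0.7, 0.5]
print(price)    # 0.3
```

Note that every served packet pays the same clearing price, not its own bid, which is what gives users an incentive to bid their true valuations.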

Users of dynamically-priced Internet resources need some way to bound their expenditures, and the network will probably benefit from efficient user behavior. He proposed a real-time control interface that provides the user with actions to influence usage volume and bid levels (in a Smart Market network). Finally, the usage feedback has to be transformed into a comprehensible format because the user doesn't necessarily understand the action-reward relationships (i.e., the user needs relevant feedback).

What he proposed is an Expenditure Control Interface (ECI). The user specifies three parameters for periodic IP pricing: the budget period length (e), the budget unit (m), and the budget multiplier (u). The length e is the length of the period over which the ECI has to limit expenditures. The budget unit m is a monetary unit which is combined with the budget multiplier u to produce a budget for a particular period (i.e., bound P = mu). He expects the user to control the budget multiplier u. In the binary case, the user would let u be zero or one, so expenditures would either be allowed or not. This could be implemented via a keyboard-entry or window-based system. Danielsen expects m and e to be specified very infrequently, so u is the real-time control for the system. The ECI, which is a piece of software at the transport layer, has to inform TCP of the consumption limit for each period of e time units. This consumption or usage limit x is equal to (P + a)/p, where a is the surplus from the previous period (if carrying surplus over is allowed). We have to assume that the ECI knows the network price p via some channel.
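Danielsen's budget arithmetic can be written out in a few lines (variable names follow the talk; the example numbers are invented and the price channel is stubbed out):

```python
def eci_usage_limit(m, u, a, p):
    """Packet consumption limit the ECI hands to TCP for the coming period.

    m: budget unit (monetary), u: budget multiplier set by the user,
    a: surplus carried over from the previous period (if allowed),
    p: current network price per packet (assumed known to the ECI).
    """
    P = m * u            # expenditure bound for the period: P = mu
    return (P + a) / p   # usage limit x = (P + a) / p

# Example: budget unit 1.0, multiplier 5, surplus 2.0, price 0.5 per packet.
limit = eci_usage_limit(m=1.0, u=5, a=2.0, p=0.5)
print(limit)   # 14.0 packets this period
```

In the binary case (u in {0, 1}) the same formula either shuts spending off entirely or allows one budget unit plus any carried-over surplus.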

We also need to establish some interprocess communication between TCP and ECI so that TCP can inform ECI how much was actually consumed. ECI then uses what it knows about the price and the current usage level to compute the feedback for the user. Finally, the user is informed of the expenditure level, because he does not really want to know about usage levels in terms of IP packets.

If you are dealing with a Smart Market subnetwork, the user specifies the bid pbar asynchronously to the ECI. The user does not know what x is; ECI learns it from TCP while instructing TCP of pbar. The user specifies pbar through some interface like a sliding bar in a window. The user does not know exactly what pbar means, but through trial and error, along with feedback, he can correlate what he does with the bar in the window with the effect this has on his expenditure and quality of service.
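The trial-and-error loop the user experiences can be shown with a toy sketch, under a strong simplifying assumption (a single market-clearing cutoff price per interval, with winners paying the cutoff rather than their bid); the function and its parameters are illustrative only:

```python
# Toy sketch of the Smart Market feedback loop: the user moves a "bid"
# slider (pbar); packets are served when the bid clears the current
# cutoff, and the ECI reports expenditure rather than packet counts.
def smart_market_round(pbar, packets_offered, cutoff_price):
    """Return (packets_served, expenditure) for one pricing interval."""
    if pbar >= cutoff_price:
        served = packets_offered
        # Winners pay the market-clearing price, not their own bid.
        expenditure = served * cutoff_price
    else:
        served = 0
        expenditure = 0.0
    return served, expenditure
```

Raising the slider above the cutoff restores service; the reported expenditure is the feedback that lets the user correlate slider position with cost and quality.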

Therefore, the ECI system gives the user real-time control of expenditure along with relevant feedback to improve his selections. Danielsen calls this transparent control because the user does not have to know about the underlying network service interface. While he talked only about how ECI could be implemented with TCP, there are multiple channels for sending an IP packet, including raw IP and UDP. If ECI could be integrated with all of these possible channels, application independence could be achieved. A similar approach is applicable to other computational markets. He currently has an ECI and TCP/IP simulation model in development.

10.2. Wei Deng, MCI, "ATM Pricing Issues"

Deng shared some thoughts on ATM pricing issues. These thoughts are his own and do not represent MCI's position. There are two reasons he chose to look at ATM pricing. First, he assumes that ATM will be the primary network platform in the future. Second, if this is the case, then ATM pricing will be one of the major issues. There are many good examples in industry showing that a commercial service cannot really take off until the provider prices it correctly: the price needs to be low enough that customers will accept it, but high enough that service providers see a business opportunity. A good example in the telecommunications industry is the frame-relay product, which struggled for its first two years until customers found out that it was cheaper than the corresponding private-line service.

Deng took the viewpoint of a service provider for the purposes of his discussion. He agrees with Dr. Marvin Sirbu that even flat-rate pricing is still usage-sensitive pricing because the flat-rate depends on the access speed you require. The first pricing scheme a service provider uses is flat-rate (i.e., non-metered) pricing. One benefit of such a scheme is that customers are happy with it because it is simple and easy to understand, and they are not inhibited from using the system. Another benefit is that it is easy to implement billing systems for flat-rate schemes. One drawback is that there are no incentives for customers to make efficient use of bandwidth. Also, since everyone pays the same flat fee, the service provider cannot assign a higher priority to those customers who are willing to pay more. Interconnection settlement is also an issue because people are not sure how to share fees between interconnected providers. Finally, there is an issue of equity in that the small volume users actually subsidize the big volume users of the network.

The other option for a service provider is usage-sensitive pricing. There is an argument that says "bits are bits" or, in our case, ATM cells are ATM cells. With this type of pricing, the service provider bills based upon an ATM cell count. This leads to a situation where either a video movie is unaffordable or telephone calls are essentially free. For example, consider a consumer watching a two-hour MPEG-2 video movie that contains 12,000 Mcells. If the consumer pays $3.50 for the movie, each Mcell costs $3.50/12,000 Mcells, or about $0.000292 per Mcell. Now consider a two-hour phone call, which uses only about 5 Mcells. At the previous price, this phone call will cost $0.000292/Mcell x 5 Mcells, or about $0.0015. Conversely, if you want to charge $3.50 for the two-hour phone call, the same per-cell rate would make the video movie cost more than $1,000.
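Deng's arithmetic is easy to reproduce. The sketch below assumes the phone call uses about 5 Mcells, the figure consistent with the quoted costs (voice needs vastly fewer cells than compressed video); the numbers are his illustrative example, not real tariffs:

```python
# Reproduce the "bits are bits" arithmetic: one price per Mcell cannot
# fit both a video movie and a phone call.
movie_mcells = 12_000   # two-hour MPEG-2 movie (Deng's figure)
call_mcells = 5         # two-hour phone call (assumed; voice uses far fewer cells)

price_per_mcell = 3.50 / movie_mcells          # movie priced at $3.50
call_cost = call_mcells * price_per_mcell      # the call is nearly free

price_per_mcell_2 = 3.50 / call_mcells         # call priced at $3.50
movie_cost = movie_mcells * price_per_mcell_2  # the movie is unaffordable

print(f"call at the movie's per-cell rate:  ${call_cost:.4f}")
print(f"movie at the call's per-cell rate: ${movie_cost:,.2f}")
```

Either the two-hour call costs a fraction of a cent, or the movie costs thousands of dollars; no single per-cell rate prices both sensibly.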

Another issue is quality of service. Under cell-counting-based billing, services with different quality requirements impose totally different costs on the network for the same number of cells, yet the bill does not reflect this cost.

With all of this in mind, Deng says that we have some idea of what characteristics a desirable pricing scheme should have. It should be simple to understand and implement. At the same time, the scheme should be sophisticated enough so that the users have incentives to act in the most efficient manner. This prevents users from reserving bandwidth that they are not going to use. Finally, we want the pricing structure to be sensitive to quality of service parameters.

His approach to an ATM pricing structure is a matrix he created whose dimensions are time delay, cell loss rate, and mileage band (for distance sensitivity). Everyone agrees that if this pricing scheme were put into practice directly, it would be a disaster, because users do not understand these parameters, and it would be a very big change for engineers to implement. Nevertheless, Deng calls this scheme his dream scenario.

However, suppose that we group the combinations into service classes (I, II, III, and IV) which customers can choose based upon cost/service quality tradeoffs. One question is how to translate customers' requirements into these quality parameters. Another question is how to translate the parameters into engineering solutions.
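A hypothetical sketch of how such classes might be encoded: the delay bounds, loss rates, and price multipliers below are invented for illustration and are not Deng's actual matrix.

```python
# Hypothetical service-class table in the spirit of Deng's matrix: each
# class bundles a delay bound, a cell-loss rate, and a per-Mcell price
# multiplier, so customers pick a class instead of raw QoS parameters.
SERVICE_CLASSES = {
    "I":   {"max_delay_ms": 10,   "max_cell_loss": 1e-9, "price_multiplier": 4.0},  # e.g. voice/video
    "II":  {"max_delay_ms": 50,   "max_cell_loss": 1e-7, "price_multiplier": 2.0},  # interactive data
    "III": {"max_delay_ms": 500,  "max_cell_loss": 1e-5, "price_multiplier": 1.0},  # bulk transfer
    "IV":  {"max_delay_ms": None, "max_cell_loss": 1e-3, "price_multiplier": 0.5},  # best effort
}

def charge(service_class, mcells, base_rate_per_mcell, mileage_band_factor=1.0):
    """Cell count times base rate, scaled by class and (optionally) distance."""
    mult = SERVICE_CLASSES[service_class]["price_multiplier"]
    return mcells * base_rate_per_mcell * mult * mileage_band_factor
```

The engineering question Deng raises remains: someone still has to map each class's delay and loss bounds onto switch and network configurations.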

10.3. Louis Fernandez, NSF

Fernandez began by reminding the participants that NSF continues to support the development of the Internet and its study even though direct subsidies for the operation of the backbone are being phased out. For example, NSF has funded the creation of six Digital Libraries whose purpose is to create new collections of data for Internet access. In addition, the Economics Program has funded research on the economics of the Internet (such as the work by Varian and MacKie-Mason reported in this workshop) and welcomes grant proposals for further research in this area.

He then discussed the question of whether Hal Varian's proposal to price off-peak usage of the Internet at its marginal cost of zero is consistent with the financial survival of all Internet providers. Some Internet providers would remain even in the extreme case where there is no flat access fee but only congestion-based charges: at the market equilibrium, the demand for Internet services is sufficiently large relative to network capacity that the resulting congestion produces enough revenue to cover the cost of supplying the capacity. Hal Varian's two-part tariff—an access fee plus a congestion fee—is probably superior to charging just one of these fees. But it is not clear whether such a pricing policy by private Internet providers is consistent with a competitive equilibrium; an equilibrium with everyone charging the same access fee seems vulnerable to competitive price cutting.

Returning to the discussion the previous day about the differences between highways and communications networks—such as the ease with which a single individual can overload the second but not the first—Fernandez pointed out that a more important difference is the cost of bypassing a segment of the network. When you send bits over the Internet, the length of the path traveled is usually inconsequential; as a result, the cost of bypassing a node on the network is essentially zero. In contrast, when transporting goods over a road system, the distance traveled matters greatly, so bypassing a road link may be very costly—even if many different paths can be taken between any two fixed points on the grid. An implication is that if you break up a single communications network into small, highly interconnected sub-networks, the sub-networks will have virtually no monopoly power: if a provider charges more than the equilibrium market price for transmitting data, everyone can route their traffic around that provider at essentially zero cost. In contrast, if the interstate highway system were broken up into a collection of toll roads, each of these roads would have a local monopoly by virtue of the high cost of bypassing it when carrying goods between points for which that link is the shortest route. As we move from public to private provision of Internet communication services, it is not obvious a priori whether the emerging market structure (barring government intervention and regulation) will resemble monopoly, oligopoly, or near-perfect competition.

10.4. Sandy Merola, Lawrence Berkeley Laboratories

The Federal Agencies have been responsible and forward-looking in their funding of network research and production. This has benefited the entire Internet by providing the needed focus not just on network capacity users, but also on network capability users (those who can and would use the full capability of the net). Continued benefits will accrue from these leading-edge network users.

The Internet community is currently a select subset of the potential community, and we should move deliberately in developing both a concept and an implementation of a future economic model, using the traditional rapid-prototyping approach.

We must always be vigilant against excesses of government control and against corporate desires to overcapitalize on public goods.

10.5. Questions and Comments

David Reed from CableLabs started the discussion by making a few comments. First, the notion of adding new services to the Internet needs some economic justification. Important questions need to be answered: Are there economies of scope? Is it a competitive environment? Does it make economic sense to add features and functionality to the Internet? Regarding the comment that off-peak usage should be free, it is not necessarily a given that the Internet is unique in its cost characteristics; if it is true that off-peak usage should be free (i.e., all costs are fixed), then he expects that long-distance telephony should work the same way. He found Wei's comments on ATM pricing interesting. Recently CableLabs put out an RFP on providing telephony and data services over cable systems, and the question really boils down to whether ATM equipment is available for deployment. If plants are upgraded before ATM is available, an integrated system won't be available. However, he thinks integration of the bit-stream is important.

comment: Cost recovery through usage only is a bad idea because there is a big fixed cost for the Internet, then no cost for usage except for the occasional congestion.

remark: Zero marginal cost does not always mean something should be free. With software piracy, people say "these companies are charging me a lot of money," but no one sees the high development costs. In a network situation with zero congestion, there are still costs associated with service.

remark: People like something they can understand and which is simple, and quality of service is important. Users will eventually be able to determine what quality of service is worth to them (e.g., for Web service). Also, the commercial world (at home) already pays flat fees for local phone service, so users might also understand service-level fees for Internet service.

remark: The Internet architecture expressly allows dynamic routing. There are problems keeping it stable, though, because the Internet is a heterogeneous network. (It is easier with the phone network because calls are all of the same bandwidth.) I disagree with Hal because I think he is underestimating the cost of usage-sensitive billing. For example, billing accounts for more than half of the telcos' costs, and in the telco structure, usage-sensitive billing is thoroughly integrated into the system. Integrating usage-sensitive billing into the Internet means bashing against the foundation of the system; it would do harm to the very Internet you are trying to promote.

Louis: It seems that usage billing is not worth doing now, but it may be in the future if more commerce is carried on the Internet. (In this sense, it is similar to the highway system.) We should start thinking about these issues now.

Hal Varian thinks there is a consensus that we need to understand these costs better. I don't like the centralized telco accounting model either.

Dave Clark said he was struggling with Louis' suggestion that people withdraw until equilibrium is reached. You have to compare user satisfaction in both situations. Marginal costs don't necessarily rise with increasing bandwidth.

remark: Regarding the dynamic routing in a telco network, there are real-time prices that are used that are essential for control of the long distance network. In order to control dynamic routing, you need real-time mechanisms like these.

remark: The phone network is built to accommodate all demand during the busiest hour of the year. Also, new features require lots of software verification (i.e., there is a lot of overhead for new phone features). There is a tradeoff for the service provider between accommodating customers and metering usage. I don't think congestion is a major problem because technology will take care of it in time; a distributed caching scheme solves many of the congestion problems, and we can push all accounting to the periphery. However, is it possible to create a protocol for usage-sensitive pricing without counting all of the bits?

remark: It seems that there are assumptions by engineers about economic ideas. Economic proposals have many implementations for engineers to consider. For example, the telephone system is not the only way of doing usage sensitive pricing. I think you can keep track of the periphery only. Also, I think engineers should see new, exciting opportunity for implementations. However, we as engineers don't want to do it unless it will make things better for the users.

remark: There is a lot of complexity with these issues of usage-based pricing. For example, toll ways can still be crowded even though you pay. Thus, usage based pricing doesn't always work.

remark: Regarding the MCI proposal for service classes, Wei's matrix needs to take into account the elasticities in the applications. I think this is an important quality of service issue.

11. Democracy and the Internet Economy; Mitch Kapor

When economic and technological issues come to the table, it is also important to look at cultural and political issues. Recently a phenomenon called "domainism" has shown up on the Internet. Large numbers of new immigrants have arrived on the Internet recently (from places like America Online), and there has been a large amount of disparagement of them by those already on the Internet. Simply because someone has AOL in their signature, people assume certain characteristics about them (e.g., that they are stupid or ignorant). Your domain is now the equivalent of your skin color (something you cannot hide). Domainism devalues people based upon their domain, much as racism devalues people based on race. When people on the net stereotype, it sets an example for those who would otherwise act differently.

What are some characteristics of the way the Internet works? We should look at the level of governance and coordination. The Internet has the following characteristics:

  • largely non-hierarchical
  • egalitarian (people don't have a larger voice in virtue of their position)
  • highly informal (evident in mutual interconnection agreements)
  • life on Internet is motivated by a research agenda; also, there are people wanting to create and operate in a virtual environment that they enjoy and want to live in
  • participatory feeling about the net; people are encouraged to go out and do something (If you don't like things the way they are, you are encouraged to go out and improve it by writing new software. For example, Phil Zimmerman wrote PGP in response to a lack of security for common users.)

These add up to a kind of life on the frontier, characteristic of the development of the physical infrastructure. It has an anarchic quality, but things get done through the old-boy network. It is highly decentralized, with emphasis on individual liberty; control is pushed out to the periphery more than in any previous infrastructure (like telephony). All of these qualities together show the spirit of the net, and there is a link between this and the spirit of Jeffersonian democracy. Jefferson once said that "if we had to depend on Washington to tell us when to plant our crops, we'd all starve to death." He was very skeptical of central authority, and his philosophy emphasized the local community (which was the context of individual liberty). In order to enable freedom and liberty, it is necessary to avoid centralization of control. Thus, it is important to have a system of self-governance.

How likely is this spirit to survive, or will it be overrun by other forces? Jefferson lost to Alexander Hamilton's economic views of centralized authority. What will the various pressures toward centralization of control on the Internet be, and how strong will they be? If you take his view that decentralization is a good thing, how do you keep the balance? Kapor believes you cannot NOT take a stand on this issue: all decisions favor one point of view or another. Architecture is politics. Every architecture says something about control and where the power lies. For example, asymmetric bandwidth (e.g., in video-on-demand cable systems) is a political statement because it says that some people have a privileged voice in the system and others don't. When you regard people as consumers rather than citizens, it makes a difference. Everyone should be able to be a publisher as well as a receiver of information. Kapor advises everyone to be conscious of this factor and to consider its implications. However, he knows that attempting to broaden the field of discourse often means swimming upstream; he has many scars from these experiences.

What types of risks are there of centralization? As the Internet becomes more successful, it will run the risk of greater types of regulation. For example, the Communications Decency Act of 1995 by Sen. Exon would impose self-censorship by providers. These regulatory proposals are not accidents. The opposition petition now has more than 100,000 signatures, which shows the power of mobilizing effort through the net. At some point, large commercial interests will try to take over the system as it gets more economically important. This is why continued development of tools and environments which empower users and small groups is important. For example, Usenet news "interprets censorship as damage and routes around it." Give consideration to openness (cultural and political) as much as to economic and technical efficiencies.

Q: You mentioned 400 channels to home and that it's bad to call it the NII and be done with it. How can we combine the two?

R: People were expecting that interactive TV would be developing more rapidly and the Internet more slowly, but it's been the opposite. We should evolve Internet as rapidly as possible so that at the point when cable and telco operators become operational (5-10 yrs.), robust capabilities will already exist on the Internet. This will lessen the preemption risk by a large company. For example, Internet mail has triumphed through the network externality. I believe the day of reckoning between the Internet and preemption risks will come 5-10 yrs. out, and that it will probably have something to do with Microsoft.

Q: Soon, we'll see the next great sexual marketplace. Any comments?

R: As with VCRs, the Internet will carry it too. It is important to retain First Amendment values. Give choice to those on the periphery (end users) instead of imposing centralized censorship.

Q: Please comment on intellectual property on the Internet.

R: You now cannot get a copyright on a menu structure, and I think this is a good thing. Look and feel is dead. (It was overturned by the US court of appeals today.) However, there need to be economic incentives for users to create content, but statutes don't exist to do that now. For the technical community, now is the time to define technical requirements to allow people to get compensated for content.

Q: How does the Internet relate to competitiveness in the global marketplace?

R: It's very difficult to keep packets from crossing the border. There is the increasingly real notion that you can have data havens that offer an amenable environment for data. Also, watch India's silicon valley. However, we don't understand the global impact of the Internet yet.

Q: Are there any downsides to the Internet?

R: Yes, things can get out of control. Tools of technology are readily available to criminals. I tend not to focus on it because other people seem to do such a good job.

12. Setting an Agenda for Action; Lee McKnight

Lee McKnight opens the session by reviewing some of the major points which were touched upon in the previous two days of discussion.

One of the common sentiments of the two days of discussion was, "If it ain't broke, don't fix it." The Internet is fine, and any interference only damages the Net. Yet congestion is evidence that current incentives do not provide the correct environment, and this hinders the widespread use of real-time, large-bandwidth applications. A possible solution is to build prototype models and technology so as to learn and experiment with these new models of network usage and control.

Some specific points concerning the key players were discussed:

Government: "Don't touch too much."

  • The recommendation for government was "watchful waiting" (regulatory forbearance). The government shouldn't do anything rash, especially if the technical and economic mechanisms are not fully understood.
  • The government could also support Open Data Network R & D as suggested by the National Research Council, including pricing.
  • The government should allow industry and academia to experiment on their own nets so a variety of possibilities may properly develop.

Business: "Follow the Money"

  • Business was advised to "follow the money, but respect the culture." That is, business should recognize that what attracts business to the net is the net culture, which makes it an attractive place for people (in business's eyes, potential customers) to gather and interact. But if the culture is not preserved, it won't be a fun place to be, so businesses could kill the market if they are not careful to respect net norms.
  • If businesses can enlarge the pie and take a piece, that's fine, but capturing the slice consumers already have could be dangerous.

Academician: "Further research is required."


  • Preserve the Net culture and ethos. Educate new users so as to minimize culture shock and friction.
  • Old timers must be considerate and open to the newcomers.
  • Internet Engineering, Economics and Culture 101—Understand the importance of research and statistical sharing.
  • Has the ethos of cooperation and sharing on the Internet been "accidental", or is it a fundamental aspect of on-line culture?

Economics of the Internet

Lee McKnight reviews some of the questions that arose during the workshop. What is the proper way of pricing in this market? What types of valuation, metrics, and culture awareness must be employed? What are the management, marketing, and overhead costs associated with on-line commerce?

Some argue that even if we do not know all these answers immediately, there is plenty of profit in the market; still, one must not be rash with unrealistic projections.

Hal Varian questions the assumption underlying the above questions and asks, "what if there is no congestion?" Then there is no need for usage-sensitive pricing. Marvin Sirbu disagrees with this, referring to the charges one must pay to have a T1 line, but others argue this is an "access" charge more than a content usage fee.

If network congestion is of such great concern, then what are some other methods of alleviating it? It is agreed that some sort of "signaling" system is required to minimize congestion. Currently, the signal is that bits are thrown away and communication becomes slow and difficult. This signal reduces social welfare and is sub-optimal.

Lee McKnight then asks which infrastructure will dominate the NII? Will the TCP/IP protocol continue to evolve and be useful, or will real time telephone interests gain the upper hand? Some question the need for telcos, preferring to keep the Net as it has been.

Lee McKnight continues the discussion on the perceptions gained from the conference. He mentions a sense of foreboding among economists and others: a fear that an avalanche of technology and people will overrun the field as the Internet continues to expand at a phenomenal rate, so that a frantic effort will be needed to avoid being trampled or passed by. To avoid this, one must realize the importance of doing research and keep an eye toward the future.

The issue of usage-based pricing is raised, and the discussion turns toward the New Zealand case as an example of successful usage-based charging. New Zealand is a group of islands southeast of Australia that had only one link to the Internet, which was controlled by a central authority. It is proof that usage-based pricing can work if people agree to share and a limited monopoly agrees to work with all interests fairly. This scheme may regulate traffic congestion and generate revenue, but it may also lead to the loss of information production by smaller users. So how does one charge without discouraging small information producers? One solution could be to have different rates for consuming and producing information. Or, if one can rely on robust security mechanisms which guarantee a return of revenue to the authors, they will recoup the "network costs" with royalties.
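One reading of the "different rates" suggestion, as a hypothetical sketch (the rates are invented for illustration; nothing here reflects the actual New Zealand tariff):

```python
# Hypothetical asymmetric usage charge: consumed (inbound) traffic is
# billed at a higher rate than produced (outbound) traffic, so that
# small information producers are not priced off the net.
def usage_charge(mb_consumed, mb_produced,
                 consume_rate=0.10, produce_rate=0.01):
    """Monthly charge under separate consume/produce rates (rates assumed)."""
    return mb_consumed * consume_rate + mb_produced * produce_rate
```

Under these assumed rates, a small publisher who sends 500 MB and fetches 50 MB pays $10, versus $55 if all 550 MB were billed at a single $0.10/MB rate.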

Specific Suggestions

What are some specific suggestions one can make toward answering some of the above questions? Perhaps a more in-depth, external study or case study could be conducted on New Zealand; for example, one could do a study from the Internet point of view. (Supposedly there is a master's thesis from the University of Canterbury that attempts to do this.) Nick McKeown of Stanford has data sets for such studies but requires a model on which to run the data. Walter Wiebe proposes that the group come to agreement on the data and rationale for the study; then someone can set about doing it.

Open Comments

As Joe Bailey and David Brown discovered at NASA, it is hard to ascertain user behavior without using real money in attempts to study behavior patterns. Unfortunately, a real test with NSI did not seem feasible because of conflicts with NASA's mission goals. David Clark agrees with this assessment: users do not really behave as they think they would in relation to network congestion, money, and other attributes. Follow-ups to the New Zealand conversations are made, and some charge that the Net culture was injured under this scheme. As to whether usage actually declined, someone mentions that traffic growth for New Zealand is the same, but it is unknown whether this refers to inbound or outbound traffic, and unclear whether it is the growth rate or the absolute traffic rate that is unchanged.

Another comment returning to the importance of agreeing on the data sets is made concerning the work by Jim Keller and Brian Kahin who have been collecting data on service providers. Unfortunately, commercial providers have not been willing to participate.

Further questions about billing are raised. Since we are trying to measure quality per dollar, how does one measure quality in an objective manner? What are the true billing and accounting costs? What is the linear term in front of the speed in doing these analyses? (Perhaps it would be useful just to chart some bounds: if bandwidth/speed increases by a factor of 5, how much will the cost rise?) If a rational market and the culture are misaligned, what does one do?

A suggestion that people define terms and frame the debate is offered.

Someone agrees that the incentives within the network model are broken but notes that some of these issues are not new to commerce. Lee McKnight mentions that Marvin Sirbu's security concerns with open network transactions are a new aspect of the debate that did not exist on such a scale previously.

One also must not ignore the strategic components of interconnection. For instance, Mitch Kapor had stated earlier that this is the golden age of cooperation and digital plenty. But what happens when the market stabilizes and we reach the top of the S-curve and people must fight to keep control of their turf?

Someone argues that flat, hidden billing has led to the Internet culture and a variety of other good things. For instance, MBONE would never have happened in any other environment.

Someone argues that the Internet may not be the model for the future: the Internet (ARPANET) architecture is about twenty years old and may not be appropriate for the mass of personal computers that will inhabit cyberspace. An argument between this speaker and David Clark starts over whether the Internet is a "managed" entity; some argue that the IETF does manage the Internet rather strongly. Clark mentions the example of the 30,000 private Bulletin Board Systems (BBSs), which shows there is support for a broad range of billing, service, and usage schemes.

Some conference members return to the broad goals of the consensus section and propose an enumeration of additional key players in the NII:

  • Service Providers—will be trend setters
  • Hardware/Software Vendors—will have the important role of setting some pivotal standards.

These two institutions will be very important in the evolution of the information economy, as seen from Mitch Kapor's speech. Those who determine the protocols and standards also affect user behavior; this is an important power. David Clark adds that businesses are actively scheming about how to take over IETF meetings.

Joe Bailey returns to the question of modeling, stating that modeling should be conducted, which would require data on traffic flow. (A reference is made to Hans-Werner Braun and Kim Claffy's work at the San Diego Supercomputer Center.)

Wenling Hsu mentions some pertinent statistics on usage behavior: Boardwatch wrote that there are 500,000 heavy users, who can be characterized as owning $6,000 in computer-related equipment, and about 8 million casual users. A point concerning such users is that they are currently willing to pay both phone and service-provider charges.

Someone challenges long-distance companies to provide flat fees for weekend or late-night hours. During this time, the bandwidth is largely being wasted, and there is a greater range of demand elasticities than is apparent in the current market-differentiation scheme. Louis Fernandez gives an example concerning on-line access from rural areas (his daughter is a heavy user) as evidence that the above late-night/flat-rate scheme could be successful. Someone mentions that resellers are now capable of renting lines at flat rates, but this saving is never passed on to the consumer.

A scheme is offered in which billing discriminates between a commercial entity making profits with the bandwidth and a private consumer. Yet one must be careful with one's pricing policy, for it can lead to market distortions.

Hal Varian brings up the Rutgers study of inner-city residents, who were more likely to give up phone service than cable. One reason for the loss of phone access is the uncertainty in long distance and other charges. This uncertainty was compounded by an externality: if some people got rid of their phones, they would then go to the people who still had a phone to make calls. In turn, those people would run up bills that were too high and would cancel service as well. The question is raised as to whether one can block long distance access from a telephone. Richard Solomon responds that it is possible, but the phone companies don't want you to know that, because they make money on long distance calls. Someone mentions that certain areas in DC do have restricted phone service.

The discussion returns to usage fees, and someone mentions a study showing that 70% of people under a flat-rate calling plan would have done better with a usage-based plan. Hal Varian says flat rates are so appealing because people are willing to pay to have uncertainty and constraints removed. David Clark mentions that businesses often refer to a Bellcore study showing that businesses are able to accurately optimize their usage rates and phone plans. This does not seem possible with the Internet. Someone supports this lack of applicability by stating that the Internet is unpredictable.
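
The flat-rate bias described here can be sketched with a toy calculation (all rates below are invented for illustration): many subscribers' usage falls under the break-even point, so metered billing would have cost them less, yet they choose the flat rate to avoid uncertainty.

```python
# Hypothetical rates, purely illustrative -- not from any study cited above.
flat_fee = 20.00    # assumed monthly flat rate, in dollars
per_minute = 0.10   # assumed metered rate, dollars per minute

def cheaper_plan(minutes):
    """Return which plan would have cost less for this month's usage."""
    metered = per_minute * minutes
    return "metered" if metered < flat_fee else "flat"

# Usage below this point means the metered plan would have been cheaper.
break_even = flat_fee / per_minute   # about 200 minutes

print(cheaper_plan(150))  # light user: metered would be cheaper
print(cheaper_plan(300))  # heavy user: flat rate wins
```

The study's point is that most subscribers sit on the "metered would be cheaper" side of the break-even line yet still pay the flat fee, which is consistent with Varian's remark that certainty itself has value.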

The conversation turns to the earlier question of how to address congestion and usage. Some of the problems encountered are ideological, whereas others are economic, and it is difficult to figure out where the line between the two spheres lies. Someone mentions that a driving motivation for much of the phenomena is a fundamental fear of spectrum scarcity. Lee McKnight says that David Clark's statistical sharing proposal has great potential for understanding bandwidth allocation. Joe Bailey likens statistical sharing to multiplexing with "soft boundaries". For example, say X and Y share a bit-pipe, X paying for 25% of it and Y for 75%. Then the router on each end allocates 25% of the bandwidth to X and 75% to Y. However, if Y is not using part of her 75% and X needs some, X can use it. Of course, Y has priority.
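
The "soft boundary" behavior in Bailey's example can be sketched as a two-pass allocator (a minimal illustration of the idea, not Clark's actual proposal; the function and its numbers are invented):

```python
def allocate(capacity, shares, demands):
    """Share a link with soft boundaries.

    shares:  {user: fraction of the link paid for}
    demands: {user: bandwidth currently requested}
    Returns {user: bandwidth granted}.
    """
    # Pass 1: everyone is guaranteed up to the share they paid for,
    # so an owner always has priority over her own fraction.
    grant = {u: min(demands[u], shares[u] * capacity) for u in shares}
    spare = capacity - sum(grant.values())
    # Pass 2: idle capacity is lent to users who still want more.
    for u in shares:
        take = min(demands[u] - grant[u], spare)
        grant[u] += take
        spare -= take
    return grant

# X pays 25%, Y pays 75% of a 100-unit pipe.  Y only wants 40 units,
# so X may borrow from Y's idle share.
print(allocate(100, {"X": 0.25, "Y": 0.75}, {"X": 60, "Y": 40}))
```

If Y later returns with full demand, pass 1 restores her guaranteed 75 units before anything is lent out, which is exactly the "Y has priority" property in the example above.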

David Clark mentions that another motivation is people's fear of worst-case behavior. Someone mentions that congestion is not an inherent evil; rather, it is an incentive to further optimize technology and compression techniques.

The issue of cultural identity on the Net is raised. Someone mentions that this discussion has been US-centric: one should look into other nations' conceptions of business, taxes, convergence, and culture. MB Sarkar worked on producing a TV commercial in the Middle East that never succeeded because it showed a female hand with painted fingernails; the commercial was vehemently opposed based on conservative cultural taboos. Furthermore, it is interesting to note that the royalty could access the Internet, but the rest of the population couldn't for fear of persecution.

Joe Bailey stresses that it is very important to understand the Net as it is here and now.

Someone mentions that if you are engineering the GII, then you must engineer it globally.

Someone asks Lee to summarize the subpoints. McKnight responds that this can be done on the mailing list, and he hopes to have input from everyone.

Hal Varian's homepage is given as a resource: [Ed. note: That site is no longer active. See http://www.sims/]

David Crawford said that content on the Internet is currently provided to users for free: it's either shareware, freeware, or pirated. Providers who want to charge for content currently must form a coalition with bandwidth providers. Examples are CompuServe and Lexis/Nexis—they bundle the bandwidth with the content and sell documents over their private networks. He mentioned that he is encouraged that implementing Marvin Sirbu's and Clifford Neuman's transaction billing models will allow more specialization, as the content business could be separated from the bandwidth business.

13. Post Workshop Comments

13.1. Lixia Zhang, Xerox PARC

Two significant events have happened over the last couple of years. First, we started enhancing Internet services from datagram delivery to integrated services, i.e. from a single-level best-effort service to multiple levels of Quality of Service (QoS). Second, the Internet entered a rapid transition from a research and educational infrastructure to a public facility. These two events brought a previously under-addressed issue to the center of attention: how should Internet services be charged for to make the net an economically viable system? Zhang is an engineer by training and has no background in economics; she therefore listened to most of the workshop with great interest.

Zhang sees one of the most significant advances made during the workshop as the recognition of an "Internet culture" that distinguishes the Internet from all previously existing telecommunication systems. "Openness" and "decentralization" were mentioned among the basic attributes of this "Internet culture". She adds to these basic attributes the encouragement of usage and sharing. The very existence of an Internet connection, whether "free" or for a fixed fee, has (1) invited creative minds to unconstrained invention of new network functions, such as IP multicast, and new applications, such as the World-Wide-Web; and (2) accelerated the penetration of new network functions and applications—people are eager to pick up the latest inventions and try them out, providing timely feedback to the designers to further improve the inventions, which then become even more attractive to new users.

She considers the second point above a centerpiece of the Internet culture, i.e. the sharing of ideas, data, information, and applications. This sharing creates a positive feedback loop: more and more people find the Internet attractive and useful and join in with the potential of making their own contributions, leading to a larger, and more attractive, pool of sharing.

As a personal view, Zhang believes that the WWW merely heralded a new era of network applications. The Internet has only just grown into a sizable infrastructure (reaching a large enough audience) that opens up the potential for a new innovation age of network applications. The more people get onto the net, the larger the pool of creative minds we have for new application inventions. Therefore, for any proposed charging model, she would like to ask the question: would the model encourage, or constrain, the potential for these inventions?

As a side note, she hypothesizes that, yes, we probably will continue to have some special network testbeds, either government- or industry-sponsored, for new inventions, but only a very limited number of people will have access to them.

Many people at the workshop had indicated a desire to "preserve the Internet culture". To do so means continued encouragement of usage and sharing. What kind of charging model can serve this goal well? None of the usage-sensitive billing proposals addressed this issue.

More network usage and more information sharing require more bandwidth, so a fundamental question is where to find the money to fund continued capacity growth. Without invoking economics 101, her intuition says that more income should come from more usage. That is, people who use the network more frequently, or find Internet access more valuable, may be willing to pay an increased flat fee. Such an increase in charges (resulting from an increase in usage), she feels, has a very different impact on people's behavior than usage-based charging, which she fears can easily scare people away from making more use of the service.

When the system gets overloaded (and Zhang agrees that overload will occur from time to time), pricing can be used as an effective congestion control tool; for example, telephone systems have long used pricing to shift load to off-peak hours. As Deering described, however, current practice in the Internet relies on collaboration to resolve resource conflicts. For example, with limited bandwidth available for MBONE traffic, people follow the "sharing" culture and negotiate to resolve scheduling conflicts among different events. Zhang believes that ultimate congestion control rests in human users' hands. As several people pointed out at the workshop, bits are not all equal in value. If end users are willing to collaborate in resolving congestion, that would be a more productive way to solve the problem than a charging tool, which resolves it according to the depth of one's pockets.

The above discussion leads to a fundamental assumption about whether average Internet users are good citizens or potentially dangerous enemies. The "Internet culture" has relied on the former, and as Van Jacobson and Steve Deering pointed out, collaboration is likely to lead to better overall results. Admittedly there will probably always be misbehaving or malicious users around, as we have all witnessed in frequent network break-ins. Nevertheless, a system will be designed very differently depending on whether the criminals are the norm or the exception. However, as more commercial institutions find business value in Internet service, will the assumption of "good citizens" break down completely? Is it economically viable to retain the Internet culture?

She observed two different concerns in the workshop discussion. The first is focused mostly on exactly how we should charge for the service ("what's the best way to compute the bill?"), while the other is focused mostly on how we can make the best use of available resources, putting charging in second place. To be an economically viable system, the Internet service must make enough money for both cost recovery and profit. However, it is unclear to her whether an exclusive focus on billing is the best means to that end.

13.2. Alok Gupta, Dale Stahl, Andrew Whinston, University of Texas at Austin

At the Center for Information Systems Management (CISM) at the University of Texas at Austin, Gupta, Stahl, and Whinston have been involved in National Science Foundation (NSF) supported research on economic aspects of Internet and electronic commerce. Some of their working papers and conference proceedings of "Making Money on The Internet" held in May 1994 can be accessed from a WWW server ([formerly]).

Their main concern, as for many others in the workshop, has been the development of a pricing mechanism for electronic commerce. They summarize their research here because several unanswered questions raised in the workshop have been addressed by their work. Since the marginal cost of providing services is essentially zero, marginal cost pricing is not going to work—as appropriately pointed out by David Clark and Frank Kelly. Further, they agree with the anonymous suggestion that usage-based pricing should matter only when there is a bottleneck or congestion, i.e., charges should differ between peak and off-peak periods. They applaud David Clark for presenting the idea of expected performance, and point out that their research has focused on exactly that aspect, e.g., "An Economic Approach to Networked Computing with Priority Classes" and "Pricing of Services on The Internet." These papers are available on a WWW site ([formerly]). One of the published papers outlining their theoretical results is "A General Economic Equilibrium Model of Distributed Computing," in New Directions in Computational Economics, eds. W. W. Cooper and A. B. Whinston, Kluwer Academic Publishers, 1994.

Gupta, Stahl, and Whinston work within the framework of general equilibrium theory to achieve a "stochastic equilibrium," characterized by a flow rate into the system that meets customer expectations on average. What are customer expectations? They believe that customers pay both in "money" and "delay" (unlike Clark's "either"), and that a user's expectations are in terms of the total cost they are going to suffer. Some advantages of this approach are:

  • It reduces excess load from the customer end (if service is too expensive, say during peak load)—this is a desirable property, as highlighted by Wei Deng, and should not be interpreted as reducing access.
  • It takes the future arrivals into account.
  • It is coarser than packet level pricing and thus easier and less-costly to implement.
  • Prices go down as the load decreases and increase as the load increases.
  • It can be implemented in a completely decentralized manner thus not requiring any central governing body.
  • It results in an effective load management.
  • Multiple priorities ensure that customers pay in the form they prefer, i.e., in terms of delay or money, as, for example, David Clark suggested.
  • Multiple priorities can also be used to discriminate between real-time applications and the ones which can suffer delay without loss of value.
  • Multiple priorities can be used to make the pricing system incentive compatible.
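
The self-selection behind these last three bullets can be sketched as follows (an illustrative toy, not the authors' actual model; the class names, prices, and delays are invented). Under the money-plus-delay view, each user picks the priority class minimizing total expected cost, price + value_of_time × expected_delay:

```python
# Invented priority classes: price in dollars, expected delay in seconds.
classes = {
    "high":   {"price": 5.0, "delay": 0.1},   # expensive, fast
    "normal": {"price": 1.0, "delay": 1.0},
    "low":    {"price": 0.1, "delay": 5.0},   # cheap, slow
}

def best_class(value_of_time):
    """Return the class minimizing a user's total expected cost."""
    return min(classes,
               key=lambda c: classes[c]["price"]
                             + value_of_time * classes[c]["delay"])

print(best_class(100.0))  # real-time traffic -> "high"
print(best_class(0.01))   # background transfer -> "low"
```

Because a real-time user (high value of time) self-selects into the fast, expensive class while a delay-tolerant user self-selects into the cheap, slow one, truthfully choosing one's own class is each user's best move, which is the sense in which the multi-priority scheme is incentive compatible.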

They have performed extensive simulation experiments showing that the prices needed to support stochastic equilibrium can be computed in real time. The key element in these computational experiments is the "goodness" of prediction, i.e., how well customer expectations are formed. Through extensive tests they have gathered valuable insights into the optimization process for real-time systems, which they believe will be of utmost importance in any usage-based pricing approach. Their simulation platform can be used to test other pricing schemes as well as the effects of different market structures and of regulation. One of the interesting problems from their perspective is to establish the tradeoff between the theoretical results and the computational costs and limitations imposed by the real-time nature of the problem.

Indeed, there is a danger of abusing the system, as pointed out by one of the questions: the system can be artificially overloaded to extract higher prices. However, the approach resists this abuse, since a provider that does so is going to lose customers; a similar argument can be made against not providing enough bandwidth when there is excess demand. One point no one seems to realize is that competition for services will occur not only on the network but also on alternative channels. However, they do believe that competitive markets should be encouraged to exploit the full potential of electronic commerce.

Gupta, Stahl, and Whinston believe more concrete work has to be done at the conceptual level as well as in evaluating which economic frameworks can work. They have done extensive work showing the validity and implementability of their approach and would like to see similar studies using other suggested pricing mechanisms. They strongly recommend that quantitative testing of the different approaches be presented in the future to evaluate their relative strengths and weaknesses. Without a concrete basis for comparing different approaches in this untested environment, little insight can be gained.

13.3. David Kristol, AT & T

David Kristol suggested an analogy to clarify how difficult it will be to get users to adopt usage-based pricing as a possible pricing policy. While he isn't fully happy with the analogy, it has drawn a favorable response from Hal Varian (see comments below).

Perhaps, David hypothesizes, the evolution of the Internet resembles a small town or settlement. Initially a few people coalesce on a spot. They are mutually dependent, they trust each other (for the most part), and they cooperate. They don't need to lock their doors—it would be pretty obvious who broke in, because who else is around?

Eventually the town gets larger. As the numbers increase, people no longer know one another directly; after a while they don't know everyone in town. They start to rely on second-hand recommendations or the opinions of others. The spirit of cooperation declines—anonymity has a way of reducing how much people care about one another. It's time to lock the doors, but the doors don't need heavy locks—we're just trying to discourage people with errant thoughts.

The larger town starts to need rules (laws) to regulate everyone because, when relative anonymity sets in, moral suasion becomes less effective. The growing community applies its sanctions not at a personal level but at a bureaucratic one. Admonishments from a stranger have less effect than the same words from a neighbor. So we call on a "higher authority", a sheriff, to punish transgressors.

When the community becomes city-sized, anonymity is much more common. Few people know more than a small fraction of the people in their community. The community's culture, its rules, become codified, and there's a police force to (attempt to) enforce them and punish transgressors.

David then uses this story to demonstrate what is happening to the Internet. The recent growth in the Internet community has taken it well past the small town stage. Where once it was sufficient to gently correct transgressors of the Internet culture, because most people on the Internet understood the culture, now we find ourselves in a community where fewer people are aware of the old rules. In some cases they won't accept those rules, and we have no way to kick them out of the community (not that he suggests this remedy). It's a rough-and-tumble immigrant community. Welcome to the Internet's Lower East Side!

Kristol thinks we can no longer depend on cooperative use of the Internet to apportion bandwidth fairly. The Internet has reached, or will soon reach, the point where we can no longer educate all new users to obey "our rules". We will reach the point where, when Van Jacobson informs someone that they're hogging the MBONE, for example, they'll ask who he thinks he is to be scolding them!

We can mourn the passing of the original Internet culture, but we must acknowledge that the Internet has changed, and move on. We must find a way to build incentives into the network such that they encourage wise use of the resources.

Hal Varian agreed with some of David's comments. He commented that anthropologists have studied how various "primitive" tribes manage common property. A standard procedure is that elders of the tribe (in the case of Internet, this may be the IETF or Van Jacobson, perhaps) apportion the commons among competing uses with the (implicit) threat of exclusion if the rules are not followed. This is pretty much the way the Internet tribe used to do things. But now the tribe has grown awfully big and the threat of exclusion is no longer enforceable, so the old system is likely to break down.

Hal suggests that a price system of some sort is a likely replacement. The debate seems to center around what kind of system makes sense: centralized or decentralized, spot markets or reservation in advance, etc. The big weakness of bit counting schemes is that someone actually has to count the bits and this accounting overhead can be substantial. On the other hand there is accounting overhead attached to reservation-in-advance schemes too. Right now no one seems to have a good handle on the cost of the accounting, which is why there is room for lots of disagreement.

Appendix A. Workshop Agenda

Research Program on Communications Policy

Massachusetts Institute of Technology

March 9 & 10 1995 AGENDA

MIT Campus

Marlar Lounge 37-252

70 Vassar Street

Cambridge, MA

March 9, 1995
8:00am Registration
9:00am Welcome by Jack Ruina, RPCP, MIT
9:10am A Framework for Internet Technology & Economics; David Clark, MIT

Information Technologies, their Interoperability and Productivity

Lee McKnight, MIT

Erik Brynjolfsson, MIT

10:30am Break

Panel: Internet Engineering and Economics

Marjory Blumenthal, NAS—moderator

Scott Shenker, Xerox

Hal Varian, University of Michigan

Steve Deering, Xerox—respondent

12:00pm Lunch with speaker Linda Garcia, OTA, "An Inquiry Into the Nature and Wealth of Networks"

Panel: Internet Resource Allocation: Congestion and Accounting

Hal Varian, University of Michigan—moderator

Frank Kelly, University of Cambridge

Nick McKeown, Stanford

Jeffrey MacKie-Mason, University of Michigan

Nevil Brownlee, University of Auckland

Van Jacobson, LBL—Respondent

3:00pm Break

Panel: Internet Pricing: Interconnection, Settlements, and Resale

Priscilla Huston, NSF—moderator

Padmanabhan Srinagesh, Bellcore

Joe Bailey, MIT

Bob Collet, CIX

David Clark, MIT—respondent

5:15pm Open Discussion: Implementation Directions
6:00pm Cocktails
6:30pm Dinner with speaker Richard Solomon, MIT, "Broadband and Shifting Paradigms in Telecommunications"
March 10, 1995
8:30am Coffee & donuts

Commerce and Information Security on the Internet

Steve Squires, ARPA—moderator

Marvin Sirbu, CMU

Clifford Neuman, ISI

Shoshana Loeb, Bellcore

Dan Schutzer, Citibank

Roger Callahan, NSA—respondent

10:30am Open Discussion
11:00am Break

Panel: Future Directions for Internet Economics

David Reed, CableLabs—moderator

Ketil Danielsen, University of Pittsburgh

Wei Deng, MCI

Luis Fernandez, NSF

Sandy Merola, LBL—respondent

1:00pm Lunch with speaker Mitch Kapor, "Democracy and the Internet Economy"

Setting an Agenda for Action

  1. What is our direction? (reach consensus)
  2. Recommendations for industry and federal government action
  3. Discussion email list creation
  4. Identifying areas of further research
3:15pm Break
3:45pm Consensus reached & draft statement prepared (summarized by Lee McKnight); concluding discussion
4:30pm End

Appendix B. Workshop Participants

Workshop Chairmen
Joseph Bailey MIT Research Program on Communications Policy
Lee McKnight MIT Research Program on Communications Policy
Organizing Committee
David Clark Laboratory for Computer Science, MIT
Deborah Estrin University of Southern California
Jeffrey MacKie-Mason University of Michigan
Hal Varian University of Michigan
Scott Behnke DynCorp ATS
Marjory Blumenthal National Academy of Sciences
David Brown Sterling Software
J. Nevil Brownlee University of Auckland
Erik Brynjolfsson MIT Sloan School of Management
Roger Callahan National Security Agency
Bob Collet CIX Association
David Crawford University of Arizona
Ketil Danielsen University of Pittsburgh
Steve Deering Xerox Palo Alto Research Center
Wei Deng MCI
Tice DeYoung ARPA, Electronic Systems Technology Office
Maria Farnon Tufts University
Luis Fernandez National Science Foundation
Bob Frankston Microsoft
Jiong Gang Bellcore
Linda Garcia Office of Technology Assessment
Wenling Hsu AT&T Bell Labs
Farooq Hussain MCI
Priscilla Huston National Science Foundation
Van Jacobson Lawrence Berkeley Laboratory
Mitch Kapor Kapor Enterprises, Inc.
Raph Kasper The Alfred P. Sloan Foundation
James Keller Harvard University
Frank Kelly University of Cambridge
David M. Kristol AT&T Bell Laboratories
Chris Lefelhocz MIT
Shoshana Loeb Bellcore Communications Research
Sandy Merola Lawrence Berkeley Laboratory
Nick McKeown Stanford University
Liam Murphy Auburn University
Clifford Neuman University of Southern California, ISI
David Reed Cable Television Laboratories
Paul Resnick MIT Sloan School of Management
Greg Ruth GTE Labs
Mitrabarun Sarkar Michigan State University
Daniel Schutzer Citibank
Scott Shenker Xerox Palo Alto Research Center
Marvin Sirbu Carnegie Mellon University
Ann Sochi MIT Press
Stephen Squires ARPA
Padmanabhan Srinagesh Bellcore
Marshall Van Alstyne MIT Sloan School of Management
Qiong Wang Carnegie Mellon University
Coralee Whitcomb Bentley College
Walter Wiebe Federal Networking Council
Jim Williams FARNET
John Wroclawski Laboratory for Computer Science, MIT
Jinhong Xie University of Rochester