Throughput in Computer Networks

In addition to delay and packet loss, another critical performance measure in computer networks is end-to-end throughput. To define throughput, consider transferring a large file from Host A to Host B across a computer network. This transfer might be, for example, a large video clip from one peer to another in a P2P file sharing system.

The instantaneous throughput at any instant of time is the rate (in bits/sec) at which Host B is receiving the file. (Many applications, including many P2P file sharing systems, display the instantaneous throughput during downloads in the user interface – perhaps you have observed this before!) If the file consists of F bits and the transfer takes T seconds for Host B to receive all F bits, then the average throughput of the file transfer is F/T bits/sec. For some applications, such as Internet telephony, it is desirable to have a low delay and an instantaneous throughput consistently above some threshold (for example, over 24 kbps for Internet telephony applications and over 256 kbps for some real-time video applications). For other applications, including those involving file transfers, delay is not critical, but it is desirable to have the highest possible throughput.
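
To make the average-throughput calculation concrete, here is a minimal Python sketch; the file size and transfer time below are made-up illustrative values, not figures from the text.

    # Average throughput = F / T, where F is the file size in bits and
    # T is the time for Host B to receive all F bits.
    F_bits = 32 * 10**6 * 8   # hypothetical 32 MB file, expressed in bits
    T_seconds = 64.0          # hypothetical transfer time in seconds

    average_throughput_bps = F_bits / T_seconds
    print(f"Average throughput: {average_throughput_bps / 1e6:.1f} Mbps")  # 4.0 Mbps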

To gain further insight into the important concept of throughput, let’s consider a few examples. Figure 1.19(a) shows two end systems, a server and a client, connected by two communication links and a router. Consider the throughput for a file transfer from the server to the client. Let RS denote the rate of the link between the server and the router, and RC denote the rate of the link between the router and the client. We now ask: in this ideal scenario, what is the server-to-client throughput? To answer this question, we may think of bits as fluid and communication links as pipes. Clearly, the server cannot pump bits through its link at a rate faster than RS bps, and the router cannot forward bits at a rate faster than RC bps. If RS < RC, then the bits pumped by the server will “flow” right through the router and arrive at the client at a rate of RS bps, giving a throughput of RS bps. If, on the other hand, RC < RS, then the router will not be able to forward bits as quickly as it receives them. In this case, bits will leave the router only at rate RC, giving an end-to-end throughput of RC. (Note also that if bits continue to arrive at the router at rate RS and continue to leave the router at rate RC, the backlog of bits at the router waiting for transmission to the client will grow and grow – a most undesirable situation!)

Thus, for this simple two-link network, the throughput is min{RS, RC}, that is, the transmission rate of the bottleneck link. More generally, consider a network with N links between the server and the client, with the transmission rates of the N links being R1, R2, …, RN. Applying the same analysis as for the two-link network, we find that the throughput for a file transfer from server to client is min{R1, R2, …, RN}, which is once again the transmission rate of the bottleneck link along the path between server and client.
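
This bottleneck rule is easy to express in code. The following Python sketch simply takes the minimum rate along the path; the link rates used here are assumed for illustration only.

    def bottleneck_throughput(link_rates_bps):
        """Throughput of a path with no competing traffic is limited by its slowest link."""
        return min(link_rates_bps)

    # Two-link example: server-to-router link at RS, router-to-client link at RC
    RS, RC = 2_000_000, 1_000_000                       # 2 Mbps and 1 Mbps
    print(bottleneck_throughput([RS, RC]))              # 1000000 -> client link is the bottleneck

    # N-link example with illustrative rates
    print(bottleneck_throughput([10e6, 5e6, 100e6, 8e6]))  # 5000000.0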

Now consider another example motivated by today’s Internet. Figure 1.20(a) shows two end systems, a server and a client, connected to a computer network. Consider the throughput for a file transfer from the server to the client. The server is connected to the network with an access link of rate RS and the client is connected to the network with an access link of rate RC. Now suppose that all the links in the core of the communication network have very high transmission rates, much higher than RS and RC. Indeed, today, the core of the Internet is over-provisioned with high-speed links that experience little congestion. Also suppose that the only bits being sent in the entire network are those from the server to the client. Because the core of the computer network is like a wide pipe in this example, the rate at which bits can flow from source to destination is again the minimum of RS and RC, that is, throughput = min{RS, RC}. Therefore, the constraining factor for throughput in today’s Internet is typically the access network.

For a final example, consider Figure 1.20(b), in which there are 10 servers and 10 clients connected to the core of the computer network. In this example, there are 10 simultaneous downloads taking place, involving 10 client-server pairs. Suppose that these 10 downloads are the only traffic in the network at the current time. As shown in the figure, there is a link in the core that is traversed by all 10 downloads. Denote the transmission rate of this common link by R. Let’s suppose that all server access links have the same rate RS, all client access links have the same rate RC, and the transmission rates of all the links in the core – except the one common link of rate R – are much larger than RS, RC, and R.

Now we ask, what are the throughputs of the downloads? Clearly, if the rate of the common link, R, is large – say a hundred times larger than both RS and RC – then the throughput for each download will once again be min{RS, RC}. But what if the rate of the common link is of the same order as RS and RC? What will be the throughput in this case? Let’s take a look at a specific example.

Suppose RS = 2 Mbps, RC = 1 Mbps, R = 5 Mbps, and the common link divides its transmission rate equally among the 10 downloads. Then the bottleneck for each download is no longer in the access network, but is instead the shared link in the core, which provides each download with only 500 kbps of throughput. Thus the end-to-end throughput for each download is now reduced to 500 kbps.
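
As a quick check of this arithmetic, here is a short Python sketch of the shared-link calculation, assuming, as above, that the common link splits its rate equally among the 10 downloads.

    RS = 2_000_000    # each server access link: 2 Mbps
    RC = 1_000_000    # each client access link: 1 Mbps
    R  = 5_000_000    # common core link shared by all downloads: 5 Mbps
    num_downloads = 10

    # Each download is limited by its access links and by its equal share of R.
    per_download_throughput = min(RS, RC, R / num_downloads)
    print(per_download_throughput)  # 500000.0 bps, i.e. 500 kbps per download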

The examples in Figure 1.19 and Figure 1.20(a) show that throughput depends on the transmission rates of the links over which the data flows. We saw that when there is no other intervening traffic, the throughput can simply be approximated as the minimum transmission rate along the path between source and destination. The example in Figure 1.20(b) shows that, more generally, the throughput depends not only on the transmission rates of the links along the path, but also on the intervening traffic. In particular, a link with a high transmission rate may nonetheless be the bottleneck for a file transfer if many other data flows are also passing through that link. We will examine throughput in computer networks more closely in the homework problems and in the subsequent chapters.