Non-Persistent and Persistent Connections

In many internet applications, the client and server communicate for an extended period of time, with the client making a series of requests and the server responding to each of them. Depending on the application and on how it is being used, the series of requests may be made back-to-back, periodically at regular intervals, or intermittently. When this client-server interaction takes place over TCP, the application developer needs to make an important decision: should each request/response pair be sent over a separate TCP connection, or should all of the requests and their corresponding responses be sent over the same TCP connection? In the former approach, the application is said to use non-persistent connections; in the latter, persistent connections. To gain a deep understanding of this design issue, let's examine the advantages and disadvantages of persistent connections in the context of a specific application, namely HTTP, which can use both non-persistent and persistent connections. Although HTTP uses persistent connections in its default mode, HTTP clients and servers can be configured to use non-persistent connections instead.

HTTP with Non-Persistent Connections

Let’s walk through the steps of transferring a web page from server to client for the case of non-persistent connections. Let’s suppose the page consists of a base HTML file and 10 JPEG images, and that all 11 of these objects reside on the same server. Further suppose the URL for the HTML file is

http://www.someSchool.edu/someDepartment/home.index

Here is what happens:

  1. The HTTP client process initiates a TCP connection to the server www.someSchool.edu on port number 80, which is the default port number for HTTP. Associated with the TCP connection, there will be a socket at the client and a socket at the server.
  2. The HTTP client sends an HTTP request message to the server via its socket. The request message includes the path name /someDepartment/home.index
  3. The HTTP server process receives the request message via its socket, retrieves the object /someDepartment/home.index from its storage (RAM or disk), encapsulates the object in an HTTP response message, and sends the response message to the client via its socket.
  4. The HTTP server process tells TCP to close the TCP connection. (But TCP doesn’t actually terminate the connection until it knows for sure that the client has received the response message intact.)
  5. The HTTP client receives the response message. The TCP connection terminates. The message indicates that the encapsulated object is an HTML file. The client extracts the file from the response message, examines the HTML file, and finds references to the 10 JPEG objects.
  6. The first four steps are then repeated for each of the referenced JPEG objects.
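The steps above can be sketched in a few lines of Python. This is a minimal illustration, not a real browser: it spins up a trivial local test server so the example is self-contained, and the host, port, and object paths are stand-ins for www.someSchool.edu. The key point is that every object is fetched over a fresh TCP connection, which the server closes after sending its response.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Trivial stand-in for the web server at www.someSchool.edu."""
    def do_GET(self):
        body = f"contents of {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)   # HTTP/1.0: server closes after the response
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

def fetch(path):
    """One non-persistent transaction: connect, request, read, close."""
    with socket.create_connection((host, port)) as s:   # step 1: TCP connect
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())  # step 2
        chunks = []
        while True:                  # steps 3-5: read until the server
            data = s.recv(4096)      # closes the connection (step 4)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# One connection for the base file, then one per referenced object (step 6):
paths = ["/someDepartment/home.index"] + [f"/img{i}.jpg" for i in range(10)]
responses = [fetch(p) for p in paths]   # 11 separate TCP connections in all
```

Because each call to fetch opens its own socket, requesting the page costs 11 TCP connections, matching the count in the text.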

As the browser receives the web page, it displays the page to the user. Different browsers may interpret (that is, display to the user) a web page in somewhat different ways. HTTP has nothing to do with how a web page is interpreted by a client. The HTTP specifications ([RFC 1945] and [RFC 2616]) define only the communication protocol between the client HTTP program and the server HTTP program.

The steps above illustrate the use of non-persistent connections, where each TCP connection is closed after the server sends the object – the connection does not persist for other objects. Note that each TCP connection transports exactly one request message and one response message. Thus, in this example, when a user requests the web page, 11 TCP connections are generated.

In the steps described above, we were intentionally vague about whether the client obtains the 10 JPEGs over 10 serial TCP connections, or whether some of the JPEGs are obtained over parallel TCP connections. Indeed, users can configure modern browsers to control the degree of parallelism. In their default modes, most browsers open 5 to 10 parallel TCP connections, and each of these connections handles one request-response transaction. If the user prefers, the maximum number of parallel connections can be set to one, in which case the 10 connections are established serially. As we'll see in the next module, the use of parallel connections shortens the response time.
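A toy model makes the effect of parallelism concrete. The sketch below simulates each fetch as a fixed delay (the 50 ms figure is an assumption, not a measured HTTP time) and compares fetching 10 objects one at a time against fetching them over 5 concurrent workers, mimicking a browser's default degree of parallelism.

```python
import time
from concurrent.futures import ThreadPoolExecutor

FETCH_DELAY = 0.05  # assumed cost of one connection + transfer (50 ms)

def fetch(obj):
    time.sleep(FETCH_DELAY)   # stand-in for one request-response transaction
    return f"{obj} done"

objects = [f"img{i}.jpg" for i in range(10)]

start = time.monotonic()
serial = [fetch(o) for o in objects]            # one connection at a time
serial_time = time.monotonic() - start          # roughly 10 * FETCH_DELAY

start = time.monotonic()
with ThreadPoolExecutor(max_workers=5) as pool:  # browsers default to ~5-10
    parallel = list(pool.map(fetch, objects))
parallel_time = time.monotonic() - start         # roughly 2 * FETCH_DELAY
```

With 5 workers the 10 fetches complete in about two "waves" instead of ten, which is why parallel connections shorten the response time.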

Before continuing, let's do a back-of-the-envelope calculation to estimate the amount of time that elapses from when a client requests the base HTML file until the entire file is received by the client. To this end, we define the round-trip time (RTT), which is the time it takes for a small packet to travel from client to server and then back to the client. The RTT includes packet-propagation delays, packet-queuing delays in intermediate routers and switches, and packet-processing delays. Now consider what happens when a user clicks on a hyperlink. As shown in Figure 2.7, this causes the browser to initiate a TCP connection between the browser and the web server; this involves a "three-way handshake": the client sends a small TCP segment to the server, the server acknowledges and responds with a small TCP segment, and, finally, the client acknowledges back to the server. The first two parts of the three-way handshake take one RTT. After completing the first two parts of the handshake, the client sends the HTTP request message combined with the third part of the three-way handshake (the acknowledgment) into the TCP connection. Once the request message arrives at the server, the server sends the HTML file into the TCP connection. This HTTP request/response eats up another RTT. Thus, roughly, the total response time is two RTTs plus the transmission time at the server of the HTML file.
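The estimate can be worked through with concrete (assumed) numbers: an RTT of 100 ms, a 100,000-bit HTML file, and a 1 Mbps bottleneck link. None of these figures come from the text; they are chosen only to make the arithmetic visible.

```python
RTT = 0.100            # seconds, round-trip time client <-> server (assumed)
file_size = 100_000    # bits, size of the base HTML file (assumed)
rate = 1_000_000       # bits/second, transmission rate at the server (assumed)

transmission_time = file_size / rate          # time to push the file onto the link
response_time = 2 * RTT + transmission_time   # 1 RTT handshake + 1 RTT request/response
print(round(response_time, 3))                # 0.3 s: 0.2 s of RTTs + 0.1 s transmission
```

Note that the two-RTT floor is paid per object under non-persistent connections, which motivates the persistent connections discussed next.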

HTTP with Persistent Connections

Non-persistent connections have some shortcomings. First, a brand-new connection must be established and maintained for each requested object. For each of these connections, TCP buffers must be allocated and TCP variables must be kept in both the client and server. This can place a significant burden on the web server, which may be serving requests from hundreds of different clients simultaneously. Second, as we just described, each object suffers a delivery delay of two RTTs: one RTT to establish the TCP connection and one RTT to request and receive an object.

With persistent connections, the server leaves the TCP connection open after sending a response. Subsequent requests and responses between the same client and server can be sent over the same connection. In particular, an entire web page (in the example above, the base HTML file and 10 images) can be sent over a single persistent TCP connection. Moreover, multiple web pages residing on the same server can be sent from the server to the same client over a single persistent TCP connection. These requests for objects can be made back-to-back, without waiting for replies to pending requests (pipelining). When the server receives the back-to-back requests, it sends the objects back-to-back. Typically, the HTTP server closes a connection when it isn't used for a certain time (a configurable timeout interval). The default mode of HTTP uses persistent connections with pipelining.
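The reuse of a single connection can be sketched with Python's standard library. As before, a trivial local server stands in for the real one, and the paths are illustrative. With HTTP/1.1 keep-alive, http.client sends all 11 request/response pairs over one TCP connection; note that it issues the requests serially rather than pipelining them, so this illustrates connection reuse only.

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"        # HTTP/1.1: keep-alive by default
    def do_GET(self):
        body = f"contents of {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # needed for keep-alive
        self.end_headers()
        self.wfile.write(body)           # connection stays open afterward
    def log_message(self, *args):        # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

conn = HTTPConnection(host, port)        # one TCP connection...
bodies = []
for path in ["/someDepartment/home.index"] + [f"/img{i}.jpg" for i in range(10)]:
    conn.request("GET", path)            # ...carries all 11 request/response pairs
    resp = conn.getresponse()
    bodies.append(resp.read())
conn.close()
```

Contrast this with the non-persistent example earlier: the same 11 objects now cost one connection establishment instead of 11, eliminating ten of the handshake RTTs.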