TCP – Connection Oriented Protocol
TCP is said to be connection oriented because before one application process can begin to send data to another, the two processes must first “handshake” with each other – that is, they must send some preliminary segments to each other to establish the parameters of the ensuing data transfer.
As a part of TCP connection establishment, both sides of the connection will initialize many TCP state variables associated with the TCP connection.
The TCP “connection” is not an end-to-end TDM or FDM circuit as in a circuit-switched network. Nor is it a virtual circuit, as the connection state resides entirely in the two end systems. Because the TCP protocol runs only in the end systems and not in the intermediate network elements (routers and link-layer switches), the intermediate network elements do not maintain TCP connection state.
In fact, the intermediate routers are completely oblivious of TCP connections; they see datagrams, not connections.
A TCP connection provides a full-duplex service: If there is a TCP connection between Process A on one host and Process B on another host, then application-layer data can flow from Process A to Process B at the same time as application-layer data flows from Process B to Process A. A TCP connection is also always point-to-point, that is, between a single sender and a single receiver. So-called “multicasting” – the transfer of data from one sender to many receivers in a single send operation – is not possible with TCP. With TCP, two hosts are company and three are a crowd!
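The full-duplex behavior can be sketched in Python. Here `socket.socketpair` (a local connected socket pair, used purely as a stand-in for a real TCP connection between two hosts) gives two endpoints that can each send and receive:

```python
import socket

# Sketch of full-duplex data flow: socket.socketpair() returns two connected
# endpoints, standing in here for the two ends of a TCP connection.
a, b = socket.socketpair()

a.sendall(b"from A")       # data flows A -> B ...
b.sendall(b"from B")       # ... while data also flows B -> A

msg_at_b = b.recv(1024)    # B reads what A sent
msg_at_a = a.recv(1024)    # A reads what B sent

a.close()
b.close()
```

Each endpoint both sent and received, independently of the other direction, which is exactly the full-duplex property described above.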
Let’s now take a look at how a TCP connection is established. Suppose a process running in one host wants to initiate a connection with another process in another host. Recall that the process that is initiating the connection is called the client process, while the other process is called the server process. The client application process first informs the client transport layer that it wants to establish a connection to a process in the server.
A Python client program does this by issuing the command:
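The command itself does not appear in the text; in Python's socket API it is presumably a `connect` call on a TCP socket, using the `serverName` and `serverPort` names mentioned below. A runnable sketch (the local listening socket is an assumption here, standing in for a remote server process):

```python
import socket

# A local listening socket stands in for the server process; serverName and
# serverPort play the roles described in the surrounding text.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
listener.listen(1)
serverName, serverPort = listener.getsockname()

clientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
clientSocket.connect((serverName, serverPort))   # establishes the TCP connection

peer = clientSocket.getpeername()      # the connection is now established
clientSocket.close()
listener.close()
```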
where serverName is the name of the server and serverPort identifies the process on the server. TCP in the client then proceeds to establish a TCP connection with TCP in the server.
The client first sends a special TCP segment; the server responds with a second special TCP segment; and finally the client responds with a third special segment. The first two segments carry no payload, that is, no application-layer data; the third of these segments may carry a payload. Because three segments are sent between the two hosts, this connection-establishment procedure is often referred to as a three-way handshake.
Once a TCP connection is established, the two application processes can send data to each other. Let’s consider the sending of data from the client process to the server process. The client process passes a stream of data through the socket (the door of the process). Once the data passes through the door, the data is in the hands of TCP running in the client. As shown in the figure below (3.28), TCP directs this data to the connection’s send buffer, which is one of the buffers that is set aside during the initial three-way handshake.
From time to time, TCP will grab chunks of data from the send buffer and pass the data to the network layer.
Interestingly, the TCP specification [RFC 793] is very laid back about specifying when TCP should actually send buffered data, stating that TCP should “send that data in segments at its own convenience.” The maximum amount of data that can be grabbed and placed in a segment is limited by the maximum segment size (MSS). The MSS is typically set by first determining the length of the largest link-layer frame that can be sent by the local sending host (the so-called maximum transmission unit, MTU) and then choosing the MSS to ensure that a TCP segment (when encapsulated in an IP datagram), plus the TCP/IP header length (typically 40 bytes), will fit into a single link-layer frame. Both Ethernet and PPP link-layer protocols have an MTU of 1,500 bytes, so a typical value of the MSS is 1,460 bytes. Approaches have also been proposed for discovering the path MTU – the largest link-layer frame that can be sent on all links from source to destination [RFC 1191] – and setting the MSS based on the path MTU value. Note that the MSS is the maximum amount of application-layer data in the segment, not the maximum size of the TCP segment including headers.
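The arithmetic above can be made concrete: with a 1,500-byte Ethernet MTU and 20 bytes each for the TCP and IPv4 headers (without options), the MSS works out to 1,460 bytes of application data per segment.

```python
# Typical MSS computation: the MSS is chosen so that one TCP segment,
# plus its TCP/IP headers, fits into a single link-layer frame.
MTU = 1500          # Ethernet maximum transmission unit, in bytes
TCP_HEADER = 20     # TCP header without options, in bytes
IP_HEADER = 20      # IPv4 header without options, in bytes

MSS = MTU - TCP_HEADER - IP_HEADER   # 1460 bytes of application data
```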
TCP pairs each chunk of client data with a TCP header, thereby forming TCP segments. The segments are passed down to the network layer, where they are separately encapsulated within network-layer IP datagrams. The IP datagrams are then sent into the network. When TCP receives a segment at the other end, the segment’s data is placed in the TCP connection’s receive buffer, as shown in the figure above.
The application reads the stream of data from this buffer. Each side of the connection has its own send buffer and its own receive buffer.
We see that a TCP connection consists of buffers, variables, and a socket connection to a process in one host, and another set of buffers, variables, and a socket connection to a process in another host. As mentioned earlier, no buffers or variables are allocated to the connection in the network elements (routers, switches, and repeaters) between the hosts.