UDP Connectionless Protocol
UDP does not involve handshaking between the sending and receiving transport-layer entities before a segment is sent. Hence the name – connectionless.
To understand the concept more clearly, suppose you are interested in designing a no-frills, bare-bones transport protocol. How might you go about this? You might first consider taking messages from the application process and passing them directly to the network layer; on the receiving side, you might consider taking messages arriving from the network layer and passing them directly to the application process. But as we learned earlier, we have to do a little more than nothing!
At the very least, the transport layer has to provide a multiplexing/demultiplexing service in order to pass data between the network layer and the correct application-level process.
UDP, defined in [RFC 768], does just about as little as a transport protocol can do. Aside from the multiplexing/demultiplexing function and some light error checking, it adds nothing to IP. In fact, if the application developer chooses UDP instead of TCP, then the application is talking almost directly with IP. UDP takes messages from the application process, attaches source and destination port number fields for the multiplexing/demultiplexing service, adds two other small fields (length and checksum), and passes the resulting segment to the network layer. The network layer encapsulates the transport-layer segment into an IP datagram and then makes a best-effort attempt to deliver the segment to the receiving host. If the segment arrives at the receiving host, UDP uses the destination port number to deliver the segment’s data to the correct application process. Note that with UDP there is no handshaking between the sending and receiving transport-layer entities before a segment is sent. For this reason, UDP is said to be connectionless.
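To make the "two other small fields" concrete, here is a minimal sketch (in Python) of how the 8-byte UDP header defined in RFC 768 could be assembled: source port, destination port, length, and checksum, each 16 bits. The checksum is left at zero, which RFC 768 allows for IPv4 to mean "no checksum computed"; a real protocol stack computes it over a pseudo-header plus the segment.

```python
import struct

def build_udp_segment(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Sketch of the 8-byte UDP header from RFC 768: four 16-bit fields."""
    length = 8 + len(payload)     # header length plus data length, in bytes
    checksum = 0                  # placeholder; a real stack computes this
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + payload

segment = build_udp_segment(5353, 53, b"example query")
print(len(segment))               # 8-byte header + 13-byte payload = 21
```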
DNS is an example of an application-layer protocol that typically uses UDP. When the DNS application in a host wants to make a query, it constructs a DNS query message and passes the message to UDP. Without performing any handshaking with the UDP entity running on the destination end system, the host-side UDP adds header fields to the message and passes the resulting segment to the network layer. The network layer encapsulates the UDP segment into a datagram and sends the datagram to a name server. The DNS application at the querying host then waits for a reply to its query. If it doesn’t receive a reply (possibly because the underlying network lost the query or the reply), it either tries sending the query to another name server or informs the invoking application that it can’t get a reply.
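This retry-or-give-up behaviour is easy to sketch with a UDP socket. The snippet below is illustrative rather than a real resolver: the server addresses come from a documentation address range, the timeout is an arbitrary choice, and the payload is assumed to be an already-encoded DNS query message.

```python
import socket

# Hypothetical name-server addresses; 192.0.2.0/24 is reserved for documentation.
NAME_SERVERS = [("192.0.2.1", 53), ("192.0.2.2", 53)]

def query_over_udp(payload: bytes, timeout: float = 2.0):
    """Send the payload over UDP and wait for a reply, trying each server in turn.

    No handshake takes place: sendto() hands the datagram to the network layer
    immediately. If no reply arrives before the timeout, we move on to the next
    server; if every server stays silent, we report failure by returning None.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        for server in NAME_SERVERS:
            sock.sendto(payload, server)
            try:
                reply, _addr = sock.recvfrom(512)   # classic DNS-over-UDP size limit
                return reply
            except socket.timeout:
                continue                            # query or reply was lost; try next
    return None
```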
Now you might be wondering why an application developer would ever choose to build an application over UDP rather than over TCP. Isn’t TCP always preferable, since TCP provides a reliable data transfer service, while UDP does not?
The answer is no, as many applications are better suited for UDP for the following reasons:
- Finer application-level control over what data is sent and when
- No connection establishment
- No connection state
- Small packet header overhead
Finer application-level control over what data is sent and when
Under UDP, as soon as an application process passes data to UDP, UDP packages the data inside a UDP segment and immediately passes the segment to the network layer.
TCP, on the other hand, has a congestion-control mechanism that throttles the transport-layer TCP sender when one or more links between the source and destination hosts become excessively congested. TCP will also continue to resend a segment until the receipt of the segment has been acknowledged by the destination, regardless of how long reliable delivery takes. Since real-time applications often require a minimum sending rate, do not want to overly delay segment transmission, and can tolerate some data loss, TCP’s service model is not particularly well matched to these applications’ needs. As discussed below, these applications can use UDP and implement, as part of the application, any additional functionality that is needed beyond UDP’s no-frills segment-delivery service.
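As a rough sketch of what this finer control means in practice, the hypothetical sender below paces its own transmissions and never retransmits, so a lost chunk is simply skipped. The destination address, chunk size, and 20 ms interval are illustrative assumptions, not anything mandated by UDP.

```python
import socket
import time

# Hypothetical destination; 198.51.100.0/24 is reserved for documentation.
DEST = ("198.51.100.7", 9999)
CHUNK = b"\x00" * 160          # e.g. 20 ms of 64 kbps audio
SEND_INTERVAL = 0.020          # seconds between chunks, chosen by the application

def stream(chunks, dest=DEST):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for seq, chunk in enumerate(chunks):
            # Prepend a small sequence number so the receiver can detect loss
            # and reordering, but the sender never waits for acknowledgements.
            sock.sendto(seq.to_bytes(4, "big") + chunk, dest)
            time.sleep(SEND_INTERVAL)

stream([CHUNK] * 50)           # 50 chunks at 20 ms each = one second of audio
```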
No connection establishment
TCP uses a three-way handshake before it starts to transfer data. UDP just blasts away without any formal preliminaries. Thus UDP does not introduce any delay to establish a connection. This is probably the principal reason why DNS runs over UDP rather than TCP – DNS would be much slower if it ran over TCP. HTTP uses TCP rather than UDP, since reliability is critical for web pages with text. However, the TCP connection-establishment delay in HTTP is an important contributor to the delay associated with downloading web documents.
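The difference shows up directly in the socket API. In the sketch below (the address is hypothetical), the UDP socket’s first packet already carries application data, while the TCP socket’s connect() call blocks for the three-way handshake, costing at least one round-trip time before any data can be sent.

```python
import socket

DEST = ("203.0.113.5", 7)      # hypothetical echo server (documentation address)

# UDP: no handshake -- the first packet on the wire already carries data.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", DEST)
udp.close()

# TCP: connect() triggers the three-way handshake (SYN, SYN-ACK, ACK) and
# blocks until it completes, adding at least one round-trip time before any
# application data can be sent.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(DEST)
tcp.sendall(b"hello")
tcp.close()
```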
No connection state
TCP maintains connection state in the end systems. This connection state includes receive and send buffers, congestion-control parameters, and sequence and acknowledgement number parameters. This state information is needed to implement TCP’s reliable data transfer service and to provide congestion control. UDP, on the other hand, does not maintain connection state and does not track any of these parameters. For this reason, a server devoted to a particular application can typically support many more active clients when the application runs over UDP rather than TCP.
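One way to picture the absence of connection state is a single-socket UDP server: because each arriving datagram carries the sender’s address, one socket and no per-client bookkeeping are enough to serve any number of clients. The port number below is an arbitrary choice for illustration.

```python
import socket

# Sketch of a UDP echo server: one socket and no per-client state.  Each
# datagram delivers the client's address, so many clients can be served
# without per-connection buffers, sequence numbers, or connection tracking.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("0.0.0.0", 9999))

while True:
    data, client_addr = server.recvfrom(2048)   # any client, any time
    server.sendto(data, client_addr)            # reply to whoever sent it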
Small packet header overhead
Every TCP segment carries 20 bytes of header overhead, whereas every UDP segment carries only 8 bytes.
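A quick back-of-the-envelope comparison (assuming a hypothetical 50-byte payload and a TCP header without options, and ignoring the IP header that both protocols pay) shows how those header sizes translate into relative overhead:

```python
# Header overhead for a small, 50-byte payload.
payload = 50
udp_total = payload + 8        # 8-byte UDP header
tcp_total = payload + 20       # 20-byte TCP header (no options)

print(f"UDP overhead: {8 / udp_total:.1%}")    # ~13.8%
print(f"TCP overhead: {20 / tcp_total:.1%}")   # ~28.6%
```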
Figure 3.6 below lists popular Internet applications and the transport protocols that they use. As we would expect, e-mail, remote terminal access, the Web, and file transfer run over TCP – all these applications need the reliable data transfer service of TCP. Nevertheless, many important applications run over UDP rather than TCP.
UDP is used to carry RIP routing table updates. Since RIP updates are sent periodically (typically every five minutes), lost updates are replaced by more recent updates, making retransmission of a lost, out-of-date update pointless. UDP is also used to carry network management (SNMP) data. UDP is preferred to TCP in this case, since network management applications must often run when the network is in a stressed state – precisely when reliable, congestion-controlled data transfer is difficult to achieve. Also, as we mentioned earlier, DNS runs over UDP, thereby avoiding TCP’s connection-establishment delays.
As shown in the figure, both UDP and TCP are used today by multimedia applications, such as Internet phone, real-time video conferencing, and streaming of stored audio and video. These applications can tolerate a small amount of packet loss, so reliable data transfer is not absolutely critical for their success. Furthermore, real-time applications such as Internet phone and video conferencing react very poorly to TCP’s congestion control. For these reasons, developers of multimedia applications may choose to run their applications over UDP instead of TCP. However, TCP is increasingly used for streaming media transport; according to one study, nearly 75% of on-demand and live streaming traffic used TCP. When packet loss rates are low, and with some organizations blocking UDP traffic for security reasons, TCP becomes an increasingly attractive protocol for streaming media transport.
Although commonly done today, running multimedia applications over UDP is controversial. As we mentioned above, UDP has no congestion control. But congestion control is needed to prevent the network from entering a congested state in which very little useful work is done. If everyone were to start streaming high-bit-rate video without using any congestion control, there would be so much packet overflow at routers that very few UDP packets would successfully traverse the source-to-destination path. Moreover, the high loss rates induced by the uncontrolled UDP senders would cause the TCP senders to dramatically decrease their rates.
Thus the lack of congestion control in UDP can result in high loss rates between a UDP sender and receiver, and in the crowding out of TCP sessions – a serious problem. Many researchers have proposed new mechanisms to force all sources, including UDP sources, to perform adaptive congestion control.
Before finishing this tutorial, we would like to mention that it is possible for an application to have reliable data transfer when using UDP. This can be done if reliability is built into the application itself (for example, by adding acknowledgement and retransmission mechanisms). But this is a nontrivial task that would keep an application developer busy debugging for a long time. Nevertheless, building reliability directly into the application allows it to “have its cake and eat it too”: application processes can communicate reliably without being subject to the transmission-rate constraints imposed by TCP’s congestion-control mechanism.
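As a taste of what building reliability into the application might look like, here is a minimal stop-and-wait sketch over UDP: each datagram carries a one-byte alternating sequence number, and the sender retransmits until the receiver echoes that number back. The destination address, timeout, and retry limit are illustrative assumptions, and a real design would add pipelining and flow control.

```python
import socket

DEST = ("198.51.100.7", 9999)   # hypothetical receiver (documentation address)
TIMEOUT = 0.5                   # seconds to wait for an acknowledgement
MAX_RETRIES = 5                 # give up after this many unanswered attempts

def send_reliably(messages):
    """Stop-and-wait: send one message at a time, resend until it is acknowledged."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(TIMEOUT)
        seq = 0
        for msg in messages:
            packet = bytes([seq]) + msg
            for _ in range(MAX_RETRIES):
                sock.sendto(packet, DEST)
                try:
                    ack, _addr = sock.recvfrom(16)
                    if ack and ack[0] == seq:
                        break                    # acknowledged; move to next message
                except socket.timeout:
                    pass                         # packet or ACK lost: retransmit
            else:
                raise RuntimeError("receiver unreachable")
            seq = (seq + 1) % 2                  # alternating-bit sequence number
```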