What Does a Data Center Do

In this tutorial, you'll learn what a data center does.

In recent years, Internet companies such as Google, Microsoft, Facebook, and Amazon (as well as their counterparts in Asia and Europe) have built massive data centers, each housing tens of thousands of hosts and concurrently supporting many distinct applications (e.g., search, mail, social networking, and e-commerce). Each data center has its own data center network that interconnects its hosts with each other and interconnects the data center with the Internet. In this section, we provide a brief introduction to data center networking for cloud applications.

The cost of a large data center is huge, exceeding $12 million per month for a 100,000-host data center. Of these costs, about 45 percent can be attributed to the hosts themselves (which need to be replaced every 3-4 years); 25 percent to infrastructure, including transformers, uninterruptible power supply (UPS) systems, generators for long-term outages, and cooling systems; 15 percent to electric utility costs for the power draw; and 15 percent to networking, including network gear (switches, routers, and load balancers), external links, and transit traffic costs. (In these percentages, costs for equipment are amortized so that a common cost metric is applied to one-time purchases and ongoing expenses such as power.) While networking is not the largest cost, it is the key to reducing overall cost and maximizing performance.
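To see how those shares translate into dollars, here is a quick back-of-the-envelope calculation in Python, using the illustrative $12 million/month figure quoted above:

```python
# Back-of-the-envelope breakdown of the illustrative $12M/month figure
# for a ~100,000-host data center, using the cost shares from the text.
MONTHLY_COST = 12_000_000  # dollars per month

cost_shares = {
    "hosts (replaced every 3-4 years)":          0.45,
    "infrastructure (UPS, generators, cooling)": 0.25,
    "electric utility (power draw)":             0.15,
    "networking (gear, links, transit)":         0.15,
}

for item, share in cost_shares.items():
    print(f"{item:44s} ${MONTHLY_COST * share:12,.0f}")
# hosts (replaced every 3-4 years)             $   5,400,000
# infrastructure (UPS, generators, cooling)    $   3,000,000
# ...
```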

The worker bees in a data center are the hosts: They serve content (e.g., web pages and videos), store emails and documents, and collectively perform massively distributed computations (e.g., distributed index computations for search engines). The hosts in data centers, called blades and resembling pizza boxes, are generally commodity hosts that include CPU, memory, and disk storage.

The hosts are stacked in racks, with each rack typically holding 20 to 40 blades. At the top of each rack there is a switch, aptly named the Top of Rack (TOR) switch, that interconnects the hosts in the rack with each other and with other switches in the data center. Specifically, each host in the rack has a network interface card that connects to its TOR switch, and each TOR switch has additional ports that can be connected to other switches. Although today hosts typically have 1 Gbps Ethernet connections to their TOR switches, 10 Gbps connections may become the norm. Each host is also assigned its own data-center-internal IP address.
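To make the rack organization concrete, here is a minimal Python sketch that models racks of blades, each with a data-center-internal address. The Rack and Host classes and the 10.&lt;rack&gt;.0.&lt;blade&gt; addressing scheme are illustrative assumptions, not the layout of any real data center:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    ip: str  # data-center-internal IP address

@dataclass
class Rack:
    rack_id: int
    hosts: list[Host] = field(default_factory=list)

def build_rack(rack_id: int, blades_per_rack: int = 40) -> Rack:
    """Each blade's NIC connects to the rack's TOR switch; here we only
    record the addressing (10.<rack>.0.<blade> is an illustrative scheme)."""
    rack = Rack(rack_id)
    for n in range(1, blades_per_rack + 1):
        rack.hosts.append(Host(name=f"rack{rack_id}-blade{n}",
                               ip=f"10.{rack_id}.0.{n}"))
    return rack

rack = build_rack(rack_id=1, blades_per_rack=20)
print(rack.hosts[0])  # Host(name='rack1-blade1', ip='10.1.0.1')
```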

The data center network supports two types of traffic: traffic flowing between external clients and internal hosts, and traffic flowing between internal hosts. To handle flows between external clients and internal hosts, the data center network includes one or more border routers, connecting the data center network to the public Internet. The data center network therefore interconnects the racks with each other and connects the racks to the border routers. Figure 5.30 below shows an example of a data center network.

[Figure 5.30: A data center network with a hierarchical topology]

Data center network design, the art of designing the interconnection network and protocols that connect the racks with each other and with the border routers, has become an important branch of computer networking research in recent years.

Load Balancing

A cloud data center, such as a Google or Microsoft data center, concurrently provides many applications, such as search, email, and video applications. To support requests from external clients, each application is associated with a publicly visible IP address to which clients send their requests and from which they receive responses.

Inside the data center, the external requests are first directed to a load balancer, whose job is to distribute requests to the hosts, balancing the load across the hosts as a function of their current load.

A large data center will often have several load balancers, each one devoted to a set of specific cloud applications. Such a load balancer is sometimes referred to as a “layer-4 switch” since it makes decisions based on the destination port number (layer 4) as well as the destination IP address in the packet. Upon receiving a request for a particular application, the load balancer forwards it to one of the hosts that handles the application. (A host may then invoke the services of other hosts to help process the request.) When the host finishes processing the request, it sends its response back to the load balancer, which in turn relays the response back to the external client. The load balancer not only balances the workload across hosts, but also provides a NAT-like function, translating the public external IP address to the internal IP address of the appropriate host, and then translating back for packets traveling in the reverse direction to the clients. This prevents clients from contacting hosts directly, which has the security benefit of hiding the internal network structure and preventing clients from directly interacting with the hosts.
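As a rough illustration of this behavior, the sketch below models a layer-4 load balancer that keys on the public (destination IP, destination port) pair, picks the least-loaded host in that application's pool, and records a NAT-like mapping for the return path. All addresses and pool contents are made up for illustration, and the least-loaded policy is just one of several selection policies a real balancer might use:

```python
# Illustrative layer-4 load balancer: it selects a backend host based on
# the public (destination IP, destination port) pair and keeps a NAT-like
# table so responses can be rewritten on the way back to the client.

# Public (IP, port) -> pool of internal host IPs handling that application.
app_pools = {
    ("203.0.113.10", 80):  ["10.1.0.1", "10.1.0.2", "10.2.0.7"],  # web search
    ("203.0.113.11", 443): ["10.3.0.4", "10.3.0.5"],              # e-commerce
}

current_load = {ip: 0 for pool in app_pools.values() for ip in pool}
nat_table = {}  # (client_ip, client_port) -> internal host IP

def dispatch(client, public_dst):
    """Pick the least-loaded host in the pool and record the NAT mapping."""
    pool = app_pools[public_dst]
    host = min(pool, key=lambda ip: current_load[ip])
    current_load[host] += 1
    nat_table[client] = host
    return host

host = dispatch(client=("198.51.100.9", 52311),
                public_dst=("203.0.113.10", 80))
print(f"request forwarded to internal host {host}")
# On the return path, nat_table lets the balancer rewrite the internal
# source address back to the public one before relaying the response.
```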

Hierarchical Structure

For a small data center housing only a few thousand hosts, a simple network consisting of a border router, a load balancer, and a few tens of racks, all interconnected by a single Ethernet switch, could possibly suffice. But to scale to tens or hundreds of thousands of hosts, a data center often employs a hierarchy of routers and switches, as shown in the figure above.

At the top of the hierarchy, the border router connects to access routers (you can see only two in the figure, but there can be many more). Below each access router there are three tiers of switches. Each access router connects to a top-tier switch, and each top-tier switch connects to multiple second-tier switches and a load balancer. Each second-tier switch in turn connects to multiple racks via the racks’ TOR switches (third-tier switches). All links typically use Ethernet for their link-layer and physical-layer protocols, with a mix of copper and fiber cabling. With such a hierarchical design, it is possible to scale a data center to hundreds of thousands of hosts.
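The tree structure just described can be sketched as a simple adjacency table. The fan-out parameters below (two access routers, two second-tier switches per top-tier switch, four racks per second-tier switch) are illustrative assumptions; real deployments fan out far more widely to reach hundreds of thousands of hosts:

```python
def build_hierarchy(access_routers=2, tier2_per_tier1=2, racks_per_tier2=4):
    """Build the tree described above: border router -> access routers ->
    top-tier switches -> second-tier switches -> TOR switches (one per rack).
    Fan-out parameters are illustrative, not prescriptive."""
    topo = {"border-router": []}
    for a in range(access_routers):
        access, tier1 = f"access-{a}", f"tier1-sw-{a}"
        topo["border-router"].append(access)
        topo[access] = [tier1]          # one top-tier switch per access router
        topo[tier1] = [f"lb-{a}"]       # plus a load balancer at this tier
        for s in range(tier2_per_tier1):
            tier2 = f"tier2-sw-{a}.{s}"
            topo[tier1].append(tier2)
            topo[tier2] = [f"tor-{a}.{s}.{r}" for r in range(racks_per_tier2)]
    return topo

topo = build_hierarchy()
print(topo["border-router"])        # ['access-0', 'access-1']
print(topo["tier2-sw-0.0"][:2])     # ['tor-0.0.0', 'tor-0.0.1']
# With 40 blades per rack, host count = access_routers * tier2_per_tier1
# * racks_per_tier2 * 40; growing the fan-outs scales this into the
# hundreds of thousands of hosts.
```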