
Making the business case for data center-class switches

Choosing the right data center-class switch depends on your needs. This article breaks down the benefits and uses for data center-class switches in emerging data center architectures.

A new generation of switches -- classified here as data center-class switches -- is engineered to support both data and storage requirements, as well as provide the reliability enterprises need to run mission-critical applications. These switches accommodate virtualized environments and provide the underlying network infrastructure for highly complex and often distributed enterprise-class applications, databases and data storage. But they're not appropriate for every enterprise and are currently used primarily by large organizations.

The key is to know when the time is right to replace previous-generation local area network (LAN) and storage area network (SAN) technologies with newer and more versatile data center-class switches. While a series of bullet points can provide comparisons of data center-class switches to traditional LAN/SAN switches, the real-world benefits remain cloudy until you conceptualize specific instances where data center-class switches can be best used within your network.

Scenario 1: Data center-class switches and increased virtualization

Despite what you may have heard, a great number of enterprise infrastructures still rely on bare-metal servers that haven't yet been virtualized. Eventually, though, the cost savings and ease of management will win out, and virtualization will take hold. Once it does, standard LAN switches won't be able to deliver the port speeds modern virtualized data centers require. At a minimum, you need multiple 10 Gbps connections to today's blade servers and storage arrays, and often higher-capacity Ethernet ports, with speeds of 40 and 100 Gbps, are necessary.

With large numbers of 40/100 Gbps ports, this new breed of data center-class switch fills the need. Virtualization reduces the number of individual ports required in the data center while increasing per-port throughput requirements. Individual 1 Gbps ports are increasingly unnecessary and obsolete; 10, 40 and 100 Gbps connectivity is becoming the standard. If you need consolidated, high-throughput interfaces to connect your blade servers, a robust data center-class switch is the best fit because it offers the highest density of the high-bandwidth interfaces that blade servers and storage arrays now require.
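The consolidation arithmetic behind this shift can be sketched in a few lines of Python. Every figure below -- VM count, per-VM bandwidth, oversubscription ratio -- is a hypothetical assumption, used only to show how many 1, 10 or 40 Gbps ports the same aggregate load requires:

```python
# Hypothetical figures: estimate how port needs change as bare-metal
# servers are consolidated onto virtualized blades. These numbers are
# illustrative assumptions, not vendor specifications.

def uplinks_needed(num_vms, avg_vm_gbps, port_speed_gbps, oversubscription=1.0):
    """Number of ports of a given speed needed to carry the aggregate
    traffic of num_vms virtual machines."""
    aggregate = num_vms * avg_vm_gbps / oversubscription
    # Round up to whole ports.
    return int(-(-aggregate // port_speed_gbps))

# 200 VMs averaging 0.5 Gbps each: 100 Gbps aggregate.
print(uplinks_needed(200, 0.5, 1))    # 100 x 1 Gbps ports
print(uplinks_needed(200, 0.5, 10))   # 10 x 10 Gbps ports
print(uplinks_needed(200, 0.5, 40))   # 3 x 40 Gbps ports
```

The same aggregate load that would sprawl across a hundred 1 Gbps access ports collapses into a handful of high-speed interfaces, which is exactly the port profile data center-class switches are built for.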

Virtualization at the server level consolidates multiple virtual servers onto a single hardware platform, which makes added redundancy techniques, such as virtual port channels and overlay transport technologies, critical. These technologies provide an additional layer of safety in the event of a network hardware or software failure. The failure of a single blade server running dozens or hundreds of virtual machines can be catastrophic to an enterprise organization, so the additional fail-safe mechanisms often found only on data center-class switches must be in place.

Scenario 2: Converging LAN and SAN networks

In the early 2000s, the only way to deploy a storage area network was to build a separate, dedicated network purpose-built for transporting data at the block level. Fibre Channel was the preferred storage transport because of limitations inherent in Ethernet: Ethernet was designed as a best-effort protocol, which is not sufficient for storage traffic that cannot tolerate loss. While operating a dedicated storage network independent of the LAN technically works, it roughly doubles administration costs, since you are managing two separate network infrastructures.

The advent of data center bridging (DCB), also called Converged Enhanced Ethernet, provided a more elegant solution by eliminating the potential for packet loss across Ethernet networks. With that weakness removed, a DCB-enabled data center can collapse separate LAN and SAN networks into a single, unified hybrid network. These hybrid networks require data center-class switches, however, because they rely on storage-over-Ethernet protocols such as Fibre Channel over Ethernet (FCoE) and Internet Small Computer System Interface (iSCSI). Consolidating two networks into one dramatically decreases capital expense by reducing the physical hardware needed to operate a LAN and SAN, and it lowers operational expense because the enterprise manages a single, consolidated network rather than two independent ones.
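The consolidation savings can be made concrete with a back-of-the-envelope total-cost comparison. Every price, switch count and opex figure below is a made-up assumption for the sake of the arithmetic, not real vendor pricing:

```python
# Illustrative cost comparison between separate LAN and SAN fabrics and
# a single converged (DCB/FCoE) fabric. All figures are hypothetical.

def fabric_cost(switches, cost_per_switch, annual_opex_per_switch, years=5):
    """Total cost of ownership for one fabric over a given period."""
    return switches * (cost_per_switch + annual_opex_per_switch * years)

# Separate fabrics: 8 LAN switches plus 8 Fibre Channel SAN switches.
separate = (fabric_cost(8, 30_000, 5_000) +
            fabric_cost(8, 45_000, 6_000))

# Converged fabric: 10 data center-class switches carrying both roles.
converged = fabric_cost(10, 60_000, 7_000)

print(separate, converged, separate - converged)
```

Even with pricier converged hardware, dropping one entire fabric -- its chassis, optics, cabling and the staff hours to manage it -- is where the savings come from.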

Scenario 3: Increasing east-west data center traffic

The dramatic increase of east-west traffic -- traffic that runs between and among data center servers -- catches many IT network administrators off guard.

Most previous-generation data centers are fully capable of handling a substantial amount of north-south traffic -- that is, client-server communications that come in and out of the data center. But because of advances like virtualization and big data, east-west traffic that stays within the data center is growing at a rapid pace. More than 75% of all data center IP traffic is now east-west, according to Cisco's global cloud forecast, and is switched among components within the data center itself. To handle such large transfers of information, an infrastructure may need data center-class switches; only these high-capacity, multipurpose switches offer a backplane that can absorb such a massive increase in east-west traffic.

As your legacy applications continue to age and are replaced with modern enterprise applications, keep an eye on your current switch backplane capacity; it may be reaching its limit. When east-west traffic begins to overtake north-south traffic in the data center, it's time to move toward a more comprehensive data center switch architecture.
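The east-west versus north-south check described above can be sketched as a simple flow classifier: a flow whose source and destination both fall inside the data center's subnets is east-west; anything else is north-south. The subnets and byte counts below are hypothetical:

```python
# Minimal sketch: classify flows as east-west (both endpoints inside
# the data center) or north-south, then compute the east-west share of
# total bytes. Subnets and flow records are illustrative assumptions.

import ipaddress

DC_SUBNETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12")]

def is_internal(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DC_SUBNETS)

def east_west_share(flows):
    """flows: iterable of (src, dst, bytes). Returns east-west fraction."""
    ew = total = 0
    for src, dst, nbytes in flows:
        total += nbytes
        if is_internal(src) and is_internal(dst):
            ew += nbytes
    return ew / total if total else 0.0

flows = [
    ("10.0.1.5", "10.0.2.9", 800),     # server-to-server: east-west
    ("203.0.113.7", "10.0.1.5", 200),  # client-to-server: north-south
]
print(east_west_share(flows))  # 0.8 -> east-west dominates
```

Run against real flow exports, a ratio creeping past 0.5 is the signal the article describes: east-west traffic overtaking north-south, and backplane capacity becoming the constraint.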

Scenario 4: Top-of-rack switching with centralized management

Managing multiple network switches is a huge task for large data centers. Traditional data centers that operate using standard LAN switches adopt a physical end-of-row switching architecture. This essentially means that autonomous switches are placed at the end of each row of racks, and all servers within that row are cabled back to that switch for connectivity. This design requires routing a considerable amount of copper and fiber cabling between servers and switches, and each switch must be independently configured and managed. That, in turn, often causes configuration mismatches between switches, which result in suboptimal switching for the attached servers.

A data center-class switch, on the other hand, can be deployed in a physical top-of-rack design, where small 1 rack unit (1U) switches are installed in each individual rack and cabled back to an aggregation switch via high-speed uplinks. Cable runs from the servers racked below extend only a few inches or feet before reaching the network, which eliminates a considerable amount of cabling expense and burden by distributing switch hardware more evenly throughout the data center. Just as important, these smaller, distributed top-of-rack switches are merely extensions of the upstream data center aggregation switch, which offers centralized management.
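The cabling savings are easy to quantify. The rack counts and run lengths below are illustrative assumptions, but the shape of the comparison holds for most floor plans:

```python
# Rough cable-length comparison between end-of-row (EoR) and
# top-of-rack (ToR) designs. All counts and distances are hypothetical.

def total_cable_m(servers_per_rack, racks, run_per_server_m,
                  uplinks_per_rack=0, uplink_run_m=0):
    """Total metres of cabling for one row design."""
    return racks * (servers_per_rack * run_per_server_m
                    + uplinks_per_rack * uplink_run_m)

# End-of-row: every server cabled ~15 m back to the row-end switch.
eor = total_cable_m(40, 10, 15)

# Top-of-rack: ~2 m per server to the in-rack switch, plus two 20 m
# uplinks per rack to the aggregation switch.
tor = total_cable_m(40, 10, 2, uplinks_per_rack=2, uplink_run_m=20)

print(eor, tor)  # 6000 vs 1200 metres of cabling
```

Long copper and fiber runs are replaced by short in-rack patches plus a few high-speed uplinks per rack, which is where both the cost and the cable-management burden drop.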

Using this architecture, a massive enterprise data center can be operated using only a handful of data center-class switches managed from a central switch. Using data center-class switches this way presents a high-scalability, high-flexibility model that can't be replicated using traditional switch architectures.

Weigh these factors when considering data center-class switch purchases:

  • The amount of virtualization in your data center today and expected in the future
  • The number of 1 Gbps ports used versus 10 Gbps ports and higher
  • The increasing need for improved redundancy and fault tolerance
  • The potential cost savings found in consolidating separate LAN/SAN networks into a single unified network
  • An anticipated increase in east-west data center traffic
  • The need to eliminate large amounts of data center network cabling
  • The need to centralize data center switch management

If you can justify one or more of the above considerations, it may be worth exploring the deployment of higher-capacity data center-class switches. Even if you have no need for a data center-class switch right now, perform a similar evaluation next year. Eventually, server, storage, application and user demands will require them.

Next Steps

Need agile data center network consolidation? Data center-class switches are for you

What data center-class switches need to work for today and tomorrow

Reading the tea leaves for data center operations trends

Eight technologies to try for more efficiency in your data center

This was last published in June 2015
