- Data centers
- Distributed offices
- Remote offices and users
Each has unique requirements for bandwidth, latency and availability, and often the optimal architecture for linking all three site types incorporates a range of services.
By definition, data centers house large concentrations of computing resources that serve up large volumes of data to the rest of the organization. These sites require massive bandwidth and, generally, very low latency. Other sites that may fall into this category include contact centers and very large administrative sites.
Along with classifying sites into categories, it's important to baseline the bandwidth requirements of each category. For some organizations, a "high-bandwidth" data center may require speeds of 100 Mbps or T3/E3 connectivity (45/34 Mbps); for others, nothing less than an OC-192 (roughly 10 Gbps) or 10 Gigabit Ethernet connection will do. Similarly, assess the latency requirements for each site.
Once the network team categorizes the sites and assesses the bandwidth requirements, sites can be matched to services. In some cases, a single service type can meet the needs of multiple site types. In others, network managers must procure three distinct sets of services.
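The matching step described above can be sketched as a simple lookup: each site category carries a baseline bandwidth and latency requirement, and each candidate service is screened against that baseline. The category names, thresholds, and service figures below are illustrative assumptions, not figures from the article.

```python
# Hypothetical sketch: match site categories to candidate WAN services
# based on baseline bandwidth and latency requirements.
# All thresholds and service characteristics are illustrative only.

SITE_PROFILES = {
    # category: (minimum bandwidth in Mbps, maximum tolerable latency in ms)
    "data_center": (1000, 10),
    "distributed_office": (45, 50),
    "remote_user": (5, 100),
}

SERVICES = [
    # (service name, available bandwidth in Mbps, typical latency in ms)
    ("carrier_ethernet", 10_000, 5),
    ("mpls", 1_000, 40),
    ("internet_vpn_dsl", 20, 80),
]

def candidate_services(category: str) -> list[str]:
    """Return the services that satisfy a category's bandwidth/latency baseline."""
    min_bw, max_lat = SITE_PROFILES[category]
    return [name for name, bw, lat in SERVICES
            if bw >= min_bw and lat <= max_lat]

for category in SITE_PROFILES:
    print(category, "->", candidate_services(category))
```

With these assumed numbers, only carrier Ethernet clears the data-center baseline, while distributed offices can choose between carrier Ethernet and MPLS, illustrating the article's point that one service type may cover several site types while others need distinct services.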
Data center consolidation
Most organizations are in the process of consolidating data centers, typically leaving four or fewer data centers to interconnect. As they consolidate, their bandwidth requirements skyrocket, and applications increasingly reside far away from their users, making latency a critical issue.
A good compromise is carrier Ethernet, which offers bandwidths from 10 Mbps through 10 Gbps. Carrier Ethernet generally is simpler to implement than MPLS, and though it lags behind on inherent support for quality of service (QoS) and class of service (CoS), that capability is less relevant in scenarios with very high bandwidth.
Regardless of the service type selected, backup services are critical for data centers. The best practice is to work with carriers to ensure physically diverse circuits; even circuits purchased from different carriers are often provisioned across a common physical cable run, so diversity must be verified rather than assumed.
Distributed computing strategies
The "core" WAN is the network that connects the dozens to thousands of sites that house computing resources. These sites typically require lower bandwidth than data centers. Given the wide geographic spread of the typical WAN, service availability is usually the most critical requirement.
For these reasons, 76% of organizations use MPLS services for the core WAN. They cite any-to-any connectivity, CoS capability and cost as the top justifications for deploying MPLS.
There's also growing interest in deploying carrier Ethernet for core WAN services. A big reason is simplicity: Companies are more likely to report that carrier Ethernet deployments are "extremely successful" than MPLS deployments (79% for carrier Ethernet vs. 67% for MPLS). But the overwhelming driver for considering carrier Ethernet is cost. As with data-center sites, backup for distributed sites is critical. Companies rely on multiple local loops provided by the same carrier, use diverse carrier services, or use other fixed-line services.
Remote access demands
Remote-access services can constitute anywhere from 25% to 40% of the typical organization's WAN budget, and upwards of 60% of troubleshooting and support costs. And there's no single recommendation for supporting these remote sites. Instead, there are several options, including wired Internet-based VPNs (typically over DSL or cable modem services), MPLS and broadband wireless Internet.
The best approach when evaluating WAN services, then, is to consider the full range of options, from optical circuits to MPLS, Ethernet and wireless services, and to solicit proposals from multiple providers.
About the author: Katherine Trost is a research analyst with Nemertes Research. She has expertise in hosted, managed and professional IT services, as well as channel strategies.