The incredible upward trajectory of enterprise demand for fast and cost-effective metro network services to connect data centers, branch offices and remote workers is good news for telecom service providers looking to increase revenue. All signs point to Carrier Ethernet as the changing enterprise WAN landscape pressures providers to migrate away from legacy SONET/SDH infrastructure and deploy carrier-grade Ethernet services to meet the demand for high-bandwidth services with guaranteed performance levels at lower price points.
In this Telecom Insights guide, Nemertes Research analyzes the "why" behind the increasing enterprise demand for next-generation Ethernet networks, as well as the standards and technology needed to architect them. Beyond the research, the guide looks at how carriers can position Ethernet and VPLS services effectively to speak to business needs.
Metro Ethernet service deployment eased by Carrier Ethernet standards choices
by Irwin Lazar
Ethernet service deployment is skyrocketing, and specifically, metro Ethernet services for enterprises are in highest demand. To prove the point, 73% of Nemertes Research benchmark participants are deploying Ethernet-based services, up from just 52% a year ago. Of the various types of Ethernet services, metro Ethernet is the most popular: 63% of research participants deploy it, typically linking large sites and data centers.
Why the specific metro Ethernet interest? Two main reasons. First, bandwidth: Ethernet services typically deliver some of the highest-available bandwidth in the WAN. Second, simplicity: Ethernet services are often plug-and-play.
The single biggest drawback to Ethernet services? Lack of availability. "I'd use more of it if I could get it," is the common refrain. One reason providers are slow to deploy Carrier Ethernet (relative to its popularity) is that for carriers, it often represents a radical departure from their existing architectures. Service providers continue to depend on traditional SONET/SDH-based access, metro and transport technologies, even as they watch demand increase for IP and Ethernet. That means they're managing separate transport hardware and provisioning systems to handle both their legacy networks and the new generation of packet-transport protocols that include Ethernet.
The current crop of Ethernet services is defined to run over Multiprotocol Label Switching (MPLS). But from the carrier perspective, turning up new customers on MPLS-based services like Ethernet requires a complex set of steps involving multiple operational support systems (OSS).
Carrier Ethernet standards designed to create single control plane
To streamline deployment and management of these new services, carriers are seeking a way to merge their Layer 1, 2 and 3 operations and management infrastructures so they can operate a single control plane for provisioning. Two emerging specifications seek to do exactly that:
- MPLS Transport Profile (MPLS-TP) is an approach that started life as T-MPLS (Transport MPLS) within the International Telecommunication Union (ITU). Due to concerns about interoperability between the ITU's T-MPLS proposal and existing MPLS standards, the ITU turned over T-MPLS development to the Internet Engineering Task Force (IETF).
- Provider Backbone Bridge Traffic Engineering (PBB-TE) is the main competitor to MPLS-TP. This approach leverages existing IEEE 802.1 standards to enable carriers to deploy Ethernet services natively, using existing Ethernet technologies.
The benefits of MPLS-TP. MPLS-TP is essentially a transport-focused profile of MPLS that extends MPLS resiliency and provisioning mechanisms to Ethernet. With MPLS-TP, carriers can mix and match circuit- or packet-based services in the same network, using a single control plane and OSS for service provisioning.
Perhaps MPLS-TP's most important quality is that it applies circuit-switching-like functionality to MPLS, treating MPLS label switch paths as dedicated circuits. This approach enables operators to define bidirectional paths (same path forward and backward) and eliminates the LSP (label switch path) merging capability of MPLS, whereby packets going to the same destination can be merged into a single LSP. By eliminating LSP merging, MPLS-TP enables providers to isolate customer traffic into separate end-to-end virtual circuits.
In addition, MPLS-TP eliminates the need for IP at the end of the LSP by extending the label all the way out to the end device in a path. This allows service providers to eliminate the need to configure IP services on edge devices, instead allowing MPLS-based provisioning of lower-layer services such as Ethernet connections at Layer 2.
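The circuit-like behavior described above can be sketched in a toy model. This is purely illustrative: the function and node names are invented for the sketch and do not come from any real router implementation.

```python
# Toy model of MPLS-TP-style transport LSPs (illustrative only; names and
# structures are invented for this sketch, not taken from a router stack).
# Each customer gets its own bidirectional label-switched path: the reverse
# direction traverses the same nodes, and paths toward the same egress are
# never merged into a shared LSP.

def build_lsp(customer, hops):
    """Return a bidirectional LSP: same node sequence forward and back."""
    return {
        "customer": customer,
        "forward": hops,                  # e.g. ["PE1", "P1", "P2", "PE2"]
        "reverse": list(reversed(hops)),  # co-routed reverse direction
    }

# Two customers whose traffic exits at the same egress node PE2.
lsp_a = build_lsp("cust-A", ["PE1", "P1", "P2", "PE2"])
lsp_b = build_lsp("cust-B", ["PE3", "P1", "P2", "PE2"])

# With classic MPLS LSP merging, both flows could share one label toward
# PE2. In the MPLS-TP model, each customer keeps a distinct end-to-end
# circuit, so the two LSPs remain separate even with a common egress:
assert lsp_a is not lsp_b and lsp_a["forward"][-1] == lsp_b["forward"][-1]

# Bidirectionality: the return path traverses exactly the same nodes.
assert lsp_a["reverse"] == ["PE2", "P2", "P1", "PE1"]
```

The point of the model is the data structure, not the forwarding: because every customer's path is a first-class object rather than a merged label, per-customer isolation and deterministic reverse routing fall out for free.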
Proponents of MPLS-TP also tout the following benefits for carriers and service providers that already have MPLS cores:
- MPLS-TP leverages existing MPLS standards, providing a smooth migration path, adding only extensions for Layer 2 forwarding, provisioning and management. That means a more seamless deployment for service providers already using MPLS.
- MPLS-TP can support a mix of traditional circuit-switched Layer 1 and 2 services (like SONET/SDH or WDM), as well as packet-based services like Ethernet, enabling service providers to protect existing customer revenues while modernizing their own transport infrastructures.
- Re-use of existing MPLS services means additional services can be provisioned using existing OSS with minimal modifications.
PBB-TE advantages. A competing proposal is PBB-TE, based originally on Nortel's proprietary Provider Backbone Transport (PBT) but now undergoing standardization as IEEE 802.1Qay. Unlike MPLS-TP, PBB-TE supports only Ethernet, meaning that other Layer 2 services must be tunneled within MPLS or converted via a gateway. Proponents of PBB-TE tout the following benefits:
- Simplified infrastructure based on existing 802.1 protocols such as VLAN tunneling (also known as double-tagging or Q-in-Q), which enables a provider to tunnel customer VLANs within provider VLANs.
- Elimination of Ethernet inefficiencies by replacing MAC learning and spanning tree with Provider Link State Bridging (PLSB), a link-state protocol that uses the common IS-IS routing protocol to calculate optimal and redundant paths.
- Re-use of existing resiliency and OAM standards, such as resilient packet ring (RPR) and IEEE 802.1ag carrier OAM.
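The Q-in-Q double-tagging mentioned above can be made concrete by hand-packing the header bytes. This is a hedged sketch: the MAC addresses and VLAN IDs are made-up example values, and real equipment builds these frames in silicon, not in Python.

```python
import struct

# Sketch of an IEEE 802.1ad "Q-in-Q" Ethernet header: a provider S-tag
# wraps the customer's C-tag, so provider switches forward on the outer
# tag without inspecting the customer's VLAN numbering.

S_TPID = 0x88A8  # 802.1ad service (provider) tag protocol identifier
C_TPID = 0x8100  # 802.1Q customer tag protocol identifier

def qinq_header(dst, src, s_vid, c_vid, ethertype=0x0800):
    """Build header bytes: dst MAC, src MAC, S-tag, C-tag, payload type."""
    def tci(vid, pcp=0):               # tag control info: PCP(3)|DEI(1)|VID(12)
        return (pcp << 13) | (vid & 0x0FFF)
    return (dst + src
            + struct.pack("!HH", S_TPID, tci(s_vid))   # outer: provider VLAN
            + struct.pack("!HH", C_TPID, tci(c_vid))   # inner: customer VLAN
            + struct.pack("!H", ethertype))

hdr = qinq_header(b"\x00" * 6, b"\x11" * 6, s_vid=500, c_vid=42)

# The provider tag (0x88A8) sits immediately after the MAC addresses, so
# the provider network switches on S-VID 500; customer VLAN 42 rides inside.
assert hdr[12:14] == b"\x88\xa8" and hdr[16:18] == b"\x81\x00"
```

Because the outer and inner tags use distinct TPIDs, provider and customer VLAN spaces never collide, which is what lets a carrier carry thousands of overlapping customer VLAN plans over one bridged core.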
It's too soon to say which approach will ultimately win out, but the existence of both specs is good news for users. Both standards offer carriers the opportunity to lower operating costs while improving service delivery. And that means the Carrier Ethernet services users are growing to love will be more widely available than ever in the coming years.
Carrier Ethernet demand rises as enterprise WAN landscape changes
by Johna Till Johnson
The enterprise appetite for Carrier Ethernet services appears to be insatiable. After a slow start from 2004 to 2006, use of various flavors of Ethernet services has skyrocketed to a whopping 73% across all industries (see chart below). Usage of Ethernet services is up even more dramatically in some sectors, including higher education and state and local government.
Source: Nemertes 2009 Benchmark (Ethernet Deployment Trends)
What's driving the demand? In a nutshell, telecom service providers understand that enterprises perceive Carrier Ethernet to be a "cheap and cheerful" service: low-cost, high-bandwidth, easy to manage, and the ultimate in flexibility.
Specifically, enterprises continue to grapple with flat or declining IT budgets. Eighty percent of companies benchmarked by Nemertes Research in the spring of 2009 (see chart below) say their IT budgets are flat or declining, and 70% say Ethernet's cost justified its deployment.
Source: Nemertes 2009 Benchmark (Ethernet Deployment Trends)
Yet at the same time, bandwidth requirements continue to rise (an average of 34% year-over-year in 2009). What's driving the growth? Increased reliance on collaborative applications across a distributed user base, for one -- meaning that users increasingly share data across the WAN. These are often multimedia applications, including video conferencing and video streaming, and thus require extremely high bandwidth and low latency.
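That 34% year-over-year figure compounds quickly, which is worth making explicit with a little arithmetic:

```python
# Back-of-the-envelope check on the 34% year-over-year growth figure:
# at that rate, bandwidth demand roughly doubles in under 2.5 years.

growth = 1.34     # 34% annual growth factor
demand = 1.0      # normalize today's demand to 1.0
years = 0
while demand < 2.0:
    demand *= growth
    years += 1

# 1.34^2 ≈ 1.80 and 1.34^3 ≈ 2.41, so demand crosses 2x in the third year.
assert years == 3
```

In other words, a WAN sized comfortably today is undersized within three budget cycles, which is exactly the squeeze (flat budgets, compounding demand) that makes low-cost Ethernet bandwidth attractive.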
Data center consolidation driving Ethernet services
Another Ethernet services driver is data center consolidation. As companies continue to consolidate data centers from a dozen to two or three, servers that used to be down the hall from users are now across the WAN, which increases requirements for both bandwidth and performance.
Another consequence of data center consolidation is an increased need for reliability and redundancy. As the Mark Twain quote goes, "Put all your eggs into one basket -- then watch that basket." As enterprises increasingly move servers and applications into consolidated data centers, they need to ensure that those data centers are adequately backed up and available, which drives a need for high-bandwidth, low-latency links between data centers.
So one clear driver for Carrier Ethernet services is the need for low-cost, high-quality bandwidth, both to link data centers together and to link larger branch offices to the data centers.
Metro Ethernet resources
Carrier Ethernet: Big picture issues for carrier deployment
MPLS and Carrier Ethernet: Playing together to ensure quality of service
Metro Ethernet services: Switching and service attributes
Carrier Ethernet planning: Two distinct dimensions
Carrier Ethernet: PBT and beyond
Carrier Ethernet offers enterprises QoS shortcut
Another driver is perceived simplicity. One of the benefits of most flavors of Carrier Ethernet is that they don't require enterprises to share routing information with the carrier. In fact, some enterprises don't even bother with routing across the WAN: They hook up switches directly to the Ethernet pipe, although this approach doesn't scale particularly well.
It's also worth noting that enterprises sometimes seek to ensure quality of service (QoS) by simply throwing bandwidth at the problem. Carrier Ethernet makes that easy by providing high-bandwidth connectivity. Fully half of all Multiprotocol Label Switching (MPLS) users don't use any QoS whatsoever -- despite the fact that the ability to ensure QoS is part of MPLS's raison d'être. Why do folks avoid using QoS? Quite simply, it's complex. Most of the companies we work with require multiple attempts before they get the correct mappings among applications, users, sites and QoS. So short-circuiting the challenge of QoS complexity is yet another advantage of Carrier Ethernet.
The Carrier Ethernet services opportunity
Here's how carriers can capitalize on the growing interest in Ethernet services.
- Marketing services based on applications, not geographies or architectures. Too often, carriers offer a "metro Ethernet service" or a "virtual private LAN service" (VPLS). Both definitions are meaningless to users. At what point does a metro Ethernet service become a WAN Ethernet service? And what is the significance of VPLS? In short, these are inside-baseball terms that impart no real meaning to end users. Instead, carriers should consider offering "data center interconnect" services or "high-speed distributed office interconnect" services -- offerings that focus on user requirements rather than arcane carrier definitions.
- Peering with other providers. One of the biggest concerns users have about Carrier Ethernet services, particularly internationally, is that they simply can't get them everywhere. If carriers lack physical plant in a particular geography, they should structure peering arrangements with in-country providers, enabling them to offer end-to-end service-level agreements (SLAs) -- which brings us to the next point.
- Offering end-to-end performance guarantees. One reason Carrier Ethernet has been slow to take off (until recently) has been the lack of service provider guarantees. There's a perception that all traffic is "best-effort" -- despite the fact that it's traversing high-speed links, often across a carrier's MPLS backbone. Carriers should stress their ability to offer end-to-end performance management and guarantee that high-bandwidth, latency-sensitive traffic can make it across the Net. SLAs that stress end-to-end performance guarantees can become a go-forward differentiator.
In summary, carriers should capitalize on the thirst for Ethernet services by making their offerings even easier to use and more widely available.
Carrier Ethernet meets new enterprise metro data center needs
by Johna Till Johnson
Metro area networks have come a long way since the leased lines and SONET rings of yore. True, they are still widely deployed and extremely versatile technologies, but as user applications increasingly feature voice and data convergence and high-bandwidth/low-latency requirements, carriers are changing their metro area networks to support these applications. To understand how metro area networks are evolving, it makes sense to examine enterprise network architectures and the applications they need to support.
Enterprise WANs connect three distinct types of sites, according to Nemertes analyst Katherine Trost:
- Tier 1: Data centers
- Tier 2: Distributed offices
- Tier 3: Remote offices and users
Metro area networks are most commonly used to connect sites at the "top tier" of the WAN, which includes data centers, contact centers, administrative headquarters and some (but not all) distributed offices. These tier 1 sites are typically geographically close, and the applications located there generally move massive volumes of data. As a result, they need very low latency and very high reliability. Tier 1 WAN sites are perfect for metro area networks based on technologies including dedicated fiber, dense wave-division multiplexing (DWDM) and, more and more often, Carrier Ethernet.
Source: Nemertes Research
Carrier Ethernet suited for data center replication and call centers
Data center storage replication is one of the most common applications at this tier of the WAN. Enterprises continue to consolidate multiple data centers down to a handful, then use data center replication between two or three data centers to ensure reliability and redundancy. Often, two data centers will replicate synchronously over the metro area, typically using Fibre Channel as the core communications protocol. The theoretical maximum length of a synchronous Fibre Channel connection is on the order of 120 miles (depending on the bandwidth of the link). But latency is typically the primary gating factor, and the maximum practical distance for synchronous replication is roughly 30 miles.
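The ~30-mile practical ceiling follows directly from propagation delay, which a quick calculation makes clear. The figures here are rough physical constants (light travels through fiber at about two-thirds of c, roughly 200,000 km/s), not vendor specifications.

```python
# Rough propagation-delay arithmetic behind the ~30-mile practical limit
# for synchronous replication. Each synchronous write must wait for at
# least one full round trip before it can be acknowledged.

KM_PER_MILE = 1.609
FIBER_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

def round_trip_ms(miles):
    """One-way distance in miles -> round-trip propagation delay in ms."""
    return 2 * miles * KM_PER_MILE / FIBER_KM_PER_MS

# At 30 miles, the fiber alone adds ~0.5 ms to every write, before any
# switching or protocol overhead:
assert 0.4 < round_trip_ms(30) < 0.6

# At the ~120-mile theoretical maximum, the round trip alone is ~2 ms,
# which synchronous replication products generally treat as too much
# added write latency:
assert round_trip_ms(120) > 1.5
```

Since storage protocols typically need more than one round trip per committed write, the effective latency penalty is a multiple of these numbers, which is why the practical distance is a fraction of the theoretical one.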
Options for synchronous connection include Fibre Channel over SONET, Fibre Channel over DWDM and, potentially, Fibre Channel over Ethernet (FCoE). But future deployment of FCoE across the WAN will depend largely on the degree to which FCoE achieves acceptance within the data center, and here a large question mark remains. We found near-zero adoption of FCoE within data centers, with low planned usage for the next 24 months.
Another common metro-area application is connectivity into contact centers (call centers), which may handle hundreds of thousands of phone calls simultaneously. As enterprises move toward a converged voice and data architecture, incoming calls are carried across the WAN rather than across dedicated voice private lines, as they were previously. But ordinary WAN services such as Multi-protocol Label Switching (MPLS) may suffer from route-convergence problems: If a logical link fails, it may require multiple seconds for the network to re-establish connectivity. For most traffic, this isn't a problem, but an outage of several seconds is long enough to cause callers to a contact center to hang up in frustration.
With Carrier Ethernet, however, the connections can be engineered at layer 2, avoiding the route-convergence problem altogether. So service providers are increasingly turning to Carrier Ethernet and other technologies that support both voice and data and also can provide real-time redundancy.
The growth of high-bandwidth applications in the metro
Administrative headquarters often require metro-area connectivity, particularly in organizations (such as higher education institutions and state and local government) where many offices are in close proximity in a campus environment. Here, too, we see a greater-than-typical use of Carrier Ethernet (virtually all of the organizations Nemertes works with in both verticals have some degree of Ethernet in use in their metro-area networks). The use of Carrier Ethernet is likely to grow as high-bandwidth applications like video conferencing, telepresence, streaming video and distance learning increase.
The use of these applications is rising steeply today, driven by several related (but not identical) trends:
- Increasing acceptance of the "virtual workplace." Nearly 90% of organizations consider themselves "virtual workplaces" -- meaning that they actively encourage collaboration among employees or workers who are geographically separated.
- Travel restrictions (79% of organizations say travel restrictions have increased the use of video conferencing).
- Increased deployment of streaming video, particularly for training applications and distance learning.
The common theme across all these applications and WAN architectures is Ethernet. As voice and data converge, and as Ethernet becomes as widely deployed within data centers as Fibre Channel, Carrier Ethernet becomes the logical way to achieve high-bandwidth, low-latency links across the metro area. And enterprises that deploy it love it: Seventy-nine percent say they're "extremely happy" with their Ethernet deployments, and the vast majority say they expect to deploy more Carrier Ethernet in the near future (see graphic).
The bottom line is that we have seen the future of metro-area networks, and it's increasingly Carrier Ethernet. Low-latency, high-bandwidth, "cheap and cheerful" Ethernet services meet the needs of those tier 1 WAN sites that are close enough together to be served by a metro-area network.
About the authors:
Irwin Lazar is the vice president for Communications Research at Nemertes Research, where he develops and manages research projects, develops cost models, conducts strategic seminars and advises clients. His background is in network operations, network engineering, voice-data convergence and IP telephony. He is responsible for benchmarking the adoption and use of emerging technologies in the enterprise in areas including VoIP, unified communications, Web 2.0 initiatives, social networking, and collaboration.
Johna Till Johnson is the president and senior founding partner of Nemertes Research. She has decades of experience in IT structure, processes and organizations, and has worked closely with senior IT executives at leading organizations across a broad range of industries. A highly regarded expert, Ms. Johnson regularly speaks at trade shows, conferences and seminars.