Rethinking data center network connectivity for new architectures

Liberty Mutual networking head Jeffrey Gray found that rethinking data center network connectivity was the key to a successful data center network redesign.

When Jeffrey Gray, Liberty Mutual’s director of network and communications operations, set out to interconnect three disparate data centers and begin provisioning applications on demand from the storage area network (SAN) to remote users on the wide area network (WAN), he began his redesign with data center network connectivity, starting with the outside connections coming into each data center.

While other IT managers in Gray’s position might start by grappling with server virtualization and the centralization of applications, or by converging storage and data center networks, Gray decided that to make his private cloud work, he would first need to ensure adequate speeds and feeds into the data centers and make sure the facilities were interconnected and functioning as one. To achieve that connectivity, his team first standardized access into each data center and interconnected the three facilities over a nationwide carrier-provided Ethernet network on which capacity can be spun up and down on demand. The team also standardized the internal design of each data center so that all of them could be mapped and managed the same way for performance optimization.

With that taken care of, the team will now expand its virtualization and private cloud strategy and converge storage into the data center network so that applications can flow freely end to end, with visibility along each leg of the journey. Gray sat down with SearchNetworking.com to explain the transition of his data center and storage networks.

How have you transitioned Liberty Mutual’s data center network so far?

Jeffrey Gray: Prior to getting to this data-center-as-one strategy, Liberty Mutual had grown through acquisitions, and we picked up data centers and network infrastructure along the way. So we worked diligently to maintain the processing capabilities … and essentially go through the infrastructure and normalize it to our computing standards. For instance, from a connectivity perspective, a number of years back we made the transition from a traditional frame relay hub-and-spoke network to a fully meshed MPLS network. In addition, from our remote office locations we looked to standardize connectivity from a speeds-and-feeds perspective. Then from a data center perspective [the back-end access network], we standardized access into the data centers. Prior to that we had multiple access methods into the data centers, everything from traditional OC-48/OC-192 technology to what we operate now using DWDM (dense wavelength-division multiplexing). So in the last four or five years we’ve drastically transformed our infrastructure into a world-class network.

How did you standardize access into the data centers?

Gray: There are different vendor offerings based on what part of the country you’re in, but essentially, DWDM technology allows for high-capacity, high-speed switching into our data centers. It can scale, and that’s the key component: the ability to scale in short order. In a traditional carrier build for access to a data center, you have a huge computing environment, and adding incremental capacity sometimes takes as long as building from scratch. Standardizing on DWDM technology gives you that scalability in a much shorter period of time.

With that comes the next tier, which is looking at what computing requirements you need to push data from data center to data center, and then standardizing that connectivity. Over the past 18 to 24 months we’ve had success standardizing connectivity across three data centers using optical mesh technology, which gives you what the industry calls bandwidth on demand.

Can you explain optical mesh?

Gray: Based upon where you’re located geographically, you have the capability to take advantage of high-speed Ethernet connectivity [leveraging multiple carriers], in some cases across the country where it exists. Most people in the networking industry grew up on Ethernet, so they understand its failure conditions. It can be monitored, it’s very predictable, and the cost is right as well.

At Liberty Mutual, why is it necessary to be able to provision bandwidth on demand across the data center network?

Gray: For instance, if Liberty Mutual were to run an ad campaign during the Super Bowl and we were expecting additional demand on the network from our call centers, or simply higher transaction peaks, we would now have the capability to provision bandwidth on demand across the data centers to handle that increased load.
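
In practice, carriers expose this kind of on-demand provisioning through a portal or an API. As a rough sketch only (the endpoint URL, credentials and field names below are hypothetical placeholders, not any carrier's actual interface), a scripted bandwidth change might look like this:

```python
# Hypothetical sketch of scripted bandwidth-on-demand provisioning.
# The endpoint URL, token and JSON fields are illustrative placeholders,
# not a real carrier API.
import requests

CARRIER_API = "https://api.example-carrier.net/v1/evc"  # hypothetical endpoint
TOKEN = "..."  # credentials issued by the carrier

def scale_circuit(circuit_id: str, mbps: int) -> None:
    """Request a new committed rate on an existing Ethernet virtual circuit."""
    resp = requests.patch(
        f"{CARRIER_API}/{circuit_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"committed_rate_mbps": mbps},
        timeout=30,
    )
    resp.raise_for_status()

# Ahead of an expected traffic peak (say, a Super Bowl ad campaign),
# burst an inter-data-center circuit from 1 Gbps to 5 Gbps:
scale_circuit("dc-northeast-to-midwest", 5000)
```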

What kind of hardware and software do you need for this type of provisioning?

Gray: If I had to put a wrapper on it, it’s in the cloud. We’re leveraging service provider technology. It’s an offering that AT&T and Verizon and others have today. But in some cases we’ve taken a different approach than other enterprises. Where historically network managers looked at techniques like WAN optimization [to deliver application and data performance], we’ve built out data-center-to-data-center infrastructure and, in parallel, looked at the remote offices’ capability to access the data centers. If you centralize and virtualize those applications but haven’t covered the connectivity into the data center to reach them, you have a gap.

We also continue to invest in internal cloud infrastructure that is more central to our data centers, and that’s the real sweet spot for us. Having uniformity across the data centers makes our computing environment simpler, and the more complexity we take out of it, the more efficiency we gain from a processing perspective.

So we can have clones of the data center in the Northeast looking like one in the Midwest, looking like one on the West Coast. We put the hardware in the same rack on the floor so we know what the configuration looks like; we know what the speeds are from data center to data center since they are uniform. It’s pretty powerful for positioning ourselves for the future.

How is the data center LAN architecture changing?

Gray: We have a traditional multi-tiered processing environment. We are in the process now of trying to look at how we want to evolve the core LAN from a lifecycle perspective. [We are considering] what the true speeds and feeds and requirements are. Also, as we continue to evolve, we’re looking at server virtualization, reducing server footprint and reclaiming cooling and power in some of the footprint.

The [physical layer] is the biggest challenge. [We’re considering] how to transform the environment so that instead of having 1,000 servers on the floor, we’ll go down to 300 or even 100. It’s a two-part effort: working closely with business partners and rewriting their applications to rationalize and consolidate the infrastructure, while giving them on-demand computing capability across multiple environments -- not just production but test, development and performance.

The dynamic virtualized environment has made it difficult to manage the network. How do you approach that?

Gray: Traffic management is right up there with the effort to increase utilization of virtual servers. I say traffic management as opposed to network management because the adaptive network we’ve created comes with a tight partnership with our application developers around how they develop their applications to leverage all the benefits of QoS and classes of service on the network.
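
To make that partnership concrete: on the application side, leveraging classes of service usually comes down to marking traffic so the network can classify it. A minimal sketch using Python's standard socket API (the host name and the mapping of applications to DSCP values are illustrative assumptions, not details from the interview):

```python
# Minimal sketch: an application marking its own traffic with DSCP so
# the network's classes of service can act on it. Uses only the standard
# socket API; the class-to-DSCP mapping here is illustrative.
import socket

DSCP_EF = 46    # Expedited Forwarding, conventionally used for VoIP
DSCP_AF21 = 18  # Assured Forwarding 21, e.g. interactive business data
TOS_SHIFT = 2   # DSCP occupies the upper 6 bits of the ToS byte

def marked_socket(dscp: int) -> socket.socket:
    """Return a TCP socket whose outgoing packets carry the given DSCP."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << TOS_SHIFT)
    return s

# A collaborative app (e.g., a SharePoint sync job) might mark itself as
# assured-forwarding data rather than contend with voice traffic:
sock = marked_socket(DSCP_AF21)
sock.connect(("sharepoint.example.internal", 443))  # hypothetical host
```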

So for instance, you have VoIP and video and SharePoint on the network. When a lot more collaborative applications are deployed on the network, it comes down to traffic management, and that to me means looking end-to-end and having visibility into the transactions taking place on the network -- not the traditional network management approach of taking five-minute samples of what’s happening. In order to protect the user’s experience, network management applications need to evolve and be capable of one-minute-or-less samples in order to capture the traffic.
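
The sampling point is easy to make concrete: interface utilization is computed from octet-counter deltas, so a burst shorter than the polling interval gets averaged away. A 10-second line-rate burst shows up as roughly 33% utilization in a 30-second sample but under 4% in a five-minute one. A runnable sketch (read_if_octets is a stand-in for a real SNMP ifHCInOctets or streaming-telemetry fetch; here it simulates a counter so the sketch runs):

```python
# Sketch of sub-minute utilization sampling from interface octet counters.
import random
import time

INTERVAL_S = 30                # sampling interval, well under 5 minutes
LINK_BPS = 10_000_000_000      # 10 Gbps link

_counter = 0

def read_if_octets(device: str, ifindex: int) -> int:
    """Placeholder for an SNMP ifHCInOctets or telemetry fetch; simulates
    a counter advancing at a random fraction of line rate."""
    global _counter
    _counter += int(random.uniform(0, LINK_BPS / 8) * INTERVAL_S)
    return _counter

def watch(device: str, ifindex: int, samples: int = 5) -> None:
    prev = read_if_octets(device, ifindex)
    for _ in range(samples):
        time.sleep(INTERVAL_S)
        cur = read_if_octets(device, ifindex)
        util = (cur - prev) * 8 / INTERVAL_S / LINK_BPS  # octet delta -> load
        print(f"{device} if{ifindex}: {util:.1%} utilized over {INTERVAL_S}s")
        prev = cur

watch("dc-core-1", 1)
```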

How does storage play into all of this? Are your networks converged?

Gray: We have a hybrid solution where we have some network attached storage (NAS) and some storage area networks (SANs) deployed, and our roadmap is taking us to more convergence and more centralization of storage because of what I described earlier about the data center network and the flexibility that it gives us to essentially [use] our own storage network across multiple data centers. So, backup and recovery can ride on its own uncontested network. We are positioning ourselves to drastically reduce recovery times.

Is that storage network Fibre Channel?

Gray: We are leveraging the MPLS network. We have premier sites that have increased needs from a storage perspective. So how do you get that data from premier sites housing 200-plus people that have a particular need back to the data centers? We have some SAN [with some Fibre Channel and some traditional SAN switches] and some NAS, so from a convergence perspective, we see huge value in investing in both NAS and SAN technology. We also have host bus adapters (HBAs). We’ve got a little bit of everything. We even have some traditional tape hanging around.

I think you have to look at data usage patterns to see what transactions look like end-to-end, and then classify your data to make sure you are applying the right tier of storage to the right business requirement.
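
That classification step can start as simply as bucketing datasets by how often they are read and how recently they were written. A toy sketch (the tier names and thresholds are illustrative assumptions, not Liberty Mutual's actual policy):

```python
# Toy sketch of tiering data by usage pattern. Tier names and thresholds
# are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    reads_per_day: float
    days_since_last_write: int

def assign_tier(d: Dataset) -> str:
    """Map observed usage to a storage tier."""
    if d.reads_per_day > 1000:
        return "tier-1 SAN (Fibre Channel)"
    if d.days_since_last_write < 90:
        return "tier-2 NAS"
    return "tier-3 archive (tape)"

for ds in [
    Dataset("claims-db", 50000, 0),
    Dataset("hr-fileshare", 200, 12),
    Dataset("2009-audit-logs", 0.1, 900),
]:
    print(f"{ds.name}: {assign_tier(ds)}")
```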

If you’re working toward convergence and you already have Fibre, will you eventually opt for an overall FCoE strategy?

Gray: The long-term plan would be to look at Ethernet as the solution. In looking at how to get data from those remote locations back to the data center over one cable, it’s that Ethernet connectivity, and there’s a lot of efficiency to be gained by having that same cable carry the data from a premier office onto the Fibre Channel infrastructure. So I think that’s the way to go.

As you implement convergence, are you finding a disconnect or cultural difference between networking, storage and server teams?

Gray: There’s no disconnect. It’s actually somewhat unifying, because everyone now has the benefit and the opportunity to share information in a common language. Also, from an individual perspective, there’s career growth and progression; and I say this about the convergence of network and security, voice and data, storage and networking. There is a unique skill set to being a solid network professional, and there is more collaboration than ever before.
