Building a private cloud computing infrastructure gives enterprises more control than they would have with a public cloud and allows them to get a greater return on investment from their server virtualization efforts. But moving resources away from users into a private cloud computing facility also attracts an enemy familiar to WAN managers who have consolidated data centers -- latency.
As it continues to expand its private cloud computing infrastructure beyond its main facility in Jasper, Ind., MasterBrand Cabinets Inc., a large furniture manufacturer, has deployed Riverbed Technology's Steelhead WAN optimization appliances at its 21 sites around the world, along with 30 Steelhead Mobile software licenses, to minimize latency and improve performance between its sites and its private cloud.
An internal monthly report on network performance showed that MasterBrand's headquarters and branch offices saw on average a 675% increase in network speeds in May with the Steelheads in place, according to Jadd Miller, senior network engineer at MasterBrand. Mobile clients yielded a threefold speed increase on average, he said.
An enterprise resource planning (ERP) application that once took eight hours to run at a remote site now takes an hour, Miller said. But not every private cloud-based application hits light speed. Databases, file transfers and email do "extremely well," he said. Backups don't see gains on that scale, but the appliances still improve on the backup system's built-in compression algorithms.
"If I were to replace this raw bandwidth at each of my locations, that would cost me over a $100,000 a month in connection fees," Miller said. "The financial guys have monthly close meetings with the different teams that are scattered throughout the country, and [applications from the private cloud have] run faster at a remote site with a 3-megabit bonded T1 than they do sitting at a PC here on the Gigabit network. And the executives here don't understand [that WAN optimization is the reason]."
Building and optimizing private cloud computing infrastructure
After a period of growth in the 1990s driven by mergers and acquisitions, the networking team "ended up having some very disparate systems -- disparate IT, disparate resources, disparate ERP systems," Miller said.
Shortly after joining the company three years ago, MasterBrand's CIO pressed the IT organization to consolidate with server virtualization. The company's main data center, which once housed 70 physical servers, 20 blade servers and 100 virtual machines for noncritical applications, now supports five physical servers in what will eventually be a completely virtualized environment, Miller said.
He plans to augment the company's private cloud computing infrastructure by leasing space from several colocation data center providers so that his central facility can "spawn off into redundant data centers across a Gigabit network connection [to keep] the resources scattered throughout the world," he said.
"The underlying infrastructure and philosophy is most of the way done," Miller said. "I think it would be overly optimistic of me to say that we'll ever be done. We've got to move and evolve with it."
Just as building a private cloud computing infrastructure required a shift in thinking, accelerating that traffic across the WAN called for its own cultural change, Miller said.
"Application performance in our organization has historically been viewed from a bandwidth perspective -- if the application is not performing, then throw bandwidth at it," he said. "That's not particularly scalable or cost-effective and sometimes masked what the actual underlying problem was."
After speaking to Riverbed in late 2007, Miller was "pretty skeptical about their claims" but agreed to a 30-day trial of the Steelhead appliances. At the end of the pilot, he was unsure whether users had noticed any difference or whether the appliances were worth the investment.
"As we were discussing it with the fellow at the remote site, we said, 'Let's shut it off and see if the users notice,'" he said. "We expected to leave it off for approximately a half-hour to see if anybody noticed. It wasn't 10 minutes when we started getting angry phone calls: 'What did you do to break the WAN?' We said, 'Oh, sorry, didn't mean to. Just a glitch,' and turned it back on."
Let us know what you think about the story; email: Jessica Scarpati, News Writer