Discover why bandwidth management, comprehensive monitoring and service-oriented architecture (SOA) management are critical to ensuring your wide area network (WAN) functions optimally and at its fullest capacity, in part five of the e-Book, The Ultimate Guide to Gaining Control of the WAN. Navigate the table of contents to read other sections of this e-Book, or skip below to learn how to use WAN optimization for bandwidth management and monitoring.
Table of contents
Part 1: Save WAN costs with branch office server consolidation
Part 2: How to accelerate encrypted traffic using WAN optimization
Part 3: Virtual desktop infrastructure problems solved by WAN optimization
Part 4: Using the WAN for telepresence, video conferencing
Part 5: Using WAN optimization for bandwidth management and monitoring
Part 6: Update network security architecture during server consolidation
Part 7: Wide area network optimization: Do it in-house, or use a WAN service provider?
Bandwidth needs continue to increase exponentially as enterprises add more applications to their WAN traffic. And while bandwidth is not as expensive as it once was, adding more of it should be a last resort when budgets are tight. In many cases, additional bandwidth will not make applications perform better, so proactive management of the WAN, and of the applications traveling on it, is a much better option. Careful planning and prioritization during a server consolidation will ensure that the WAN functions optimally and that its capacity is used to the fullest.
In part four, we discussed the addition of real-time video over the WAN. Although WAN optimization controllers (WOCs) may not actively compress or accelerate video traffic in this case, WAN optimization can play a critical role in making sure video does not adversely affect other traffic. It can also ensure that voice traffic meets its latency requirements and protect critical traffic from non-business traffic. To do this, WOCs use a combination of bandwidth management and quality of service (QoS).
Few businesses want to prohibit employees outright from accessing recreational sites such as YouTube or from using peer-to-peer protocols. If bandwidth utilization is low when an employee visits these sites, the best policy may be to allow it. Bandwidth management lets WAN managers set this as a realistic policy. When bandwidth is tight, bandwidth management can throttle recreational traffic or block it entirely, preventing it from adversely affecting business traffic. Combined with QoS, policies can guarantee that even when recreational traffic is allowed, it receives the lowest class of service.
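The utilization-dependent policy described above can be sketched as a simple decision rule. The traffic-class names and the 60%/85% thresholds below are illustrative assumptions, not settings from any particular WOC:

```python
# Sketch of a bandwidth-management decision: allow recreational traffic
# when the link is idle, throttle or block it when the link is busy.
# Class names and the 0.60/0.85 thresholds are illustrative assumptions.

def treatment(traffic_class, link_utilization):
    """Return how the WOC should treat a flow right now."""
    if traffic_class == "business":
        return "forward"                # business traffic always passes
    # Recreational traffic: the decision depends on current link load.
    if link_utilization < 0.60:
        return "forward"                # plenty of headroom: allow it
    if link_utilization < 0.85:
        return "throttle"               # busy: rate-limit recreational flows
    return "block"                      # congested: protect business traffic

print(treatment("recreational", 0.30))  # forward
print(treatment("recreational", 0.70))  # throttle
print(treatment("recreational", 0.90))  # block
```

In a real WOC this decision would be re-evaluated continuously as link utilization changes, which is what makes "allow it when the link is idle" a workable policy rather than an all-or-nothing ban.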
Effective bandwidth management raises several issues. The first is that the WOC must be able to classify traffic at the application level. Business and consumer Web traffic all use the same ports, and some non-business applications constantly change the port they use to hide themselves. Bandwidth management must therefore look beyond the port to understand the application. This requires efficient deep-packet inspection (DPI) combined with the ability to process the results and determine what the application actually is. This capability must also be kept up to date: new applications appear constantly, especially in the consumer arena, and if the WOC is not updated, it will not know how to limit or block a new bandwidth-hungry non-business application.
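The idea of looking beyond the port can be illustrated with a toy classifier that inspects the first bytes of a payload for application signatures. The signature strings here are simplified illustrations, not real DPI rules:

```python
# Sketch of port-independent classification: the port alone says only
# "port 80", so inspect the start of the payload for application
# signatures. These signature strings are simplified illustrations.

SIGNATURES = {
    b"BitTorrent protocol": "bittorrent",
    b"GET /videoplayback":  "streaming-video",
}

def classify(port, payload):
    for sig, app in SIGNATURES.items():
        if sig in payload[:256]:        # inspect only the first bytes
            return app
    # Fall back to the (much less informative) port-level label.
    return {80: "http-unclassified", 443: "tls-unclassified"}.get(port, "unknown")

print(classify(80, b"\x13BitTorrent protocol..."))  # bittorrent, despite port 80
print(classify(80, b"GET /index.html HTTP/1.1"))    # http-unclassified
```

The second print shows why the signature table must be kept current: any application the table does not recognize collapses back into an uninformative port-level bucket.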
Classification without policies is meaningless. Knowing what the application is doesn’t help if the WOC does not know how to treat the traffic. WAN managers should look for an optimization solution that provides a good set of common rules for managing traffic. The ability to easily customize these rules and policies is also important.
Managing a WAN is always important, but more traffic and smaller budgets make it even more so. It is critical that managers understand which applications are using the WAN; without that knowledge, they cannot apply the right bandwidth management or QoS rules. WAN monitoring must evolve from simply reporting on traffic at the port level to reporting which applications are using a port. Knowing how much traffic is flowing over port 80 (HTTP) does not tell the manager very much. Is the traffic business or non-business? Which business applications are involved? Even knowing that it is SAP or Oracle traffic is not enough. Managers need to know the specific SAP application and, in many cases, the type of transactions within the application in order to ensure that the important ones get priority. Existing monitoring and reporting schemes must be upgraded to meet this challenge.
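The difference between port-level and application-level reporting can be shown with a few lines of aggregation. The flow records and application labels below are invented for illustration:

```python
from collections import defaultdict

# Sketch of application-level reporting: instead of summing bytes per
# port, sum them per classified application. Flow records are made up.

flows = [
    {"port": 80, "app": "sap-order-entry", "bytes": 120_000},
    {"port": 80, "app": "youtube",         "bytes": 900_000},
    {"port": 80, "app": "sap-reporting",   "bytes": 40_000},
]

by_port = defaultdict(int)
by_app = defaultdict(int)
for f in flows:
    by_port[f["port"]] += f["bytes"]
    by_app[f["app"]] += f["bytes"]

print(dict(by_port))  # {80: 1060000} -- tells the manager almost nothing
print(dict(by_app))   # per-application totals show what dominates port 80
```

The port-level view lumps business and recreational traffic into one number; the application-level view makes it obvious that most of the port 80 traffic in this example is non-business.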
Application focus is not the only change WAN managers should make to their monitoring. Service-oriented architecture (SOA) and the increase in the number of gateways and security appliances are complicating the existing monitoring architecture. The change is best demonstrated by looking at response-time monitoring.
In the past, understanding response time was conceptually easy. A monitor was placed at the WAN router. When a response-time problem occurred, the WAN manager determined which side of the router it was on. If it was on the WAN side, the problem was on the WAN; if it was on the server side, it was time to call the application group. While this approach was not 100% accurate, it identified the main problem area in the majority of cases.
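The router-side attribution described above amounts to comparing how much of the total response time was spent on the WAN side of the monitor. The timings and the 70% threshold below are illustrative assumptions:

```python
# Sketch of the classic router-side attribution: compare the delay
# measured on the WAN side of the router with the total response time.
# The 0.7 threshold is an illustrative assumption.

def likely_culprit(total_ms, wan_side_ms):
    """Attribute a slow response to the WAN or the server side."""
    wan_share = wan_side_ms / total_ms
    return "WAN" if wan_share > 0.7 else "server/application"

print(likely_culprit(total_ms=900, wan_side_ms=800))  # WAN
print(likely_culprit(total_ms=900, wan_side_ms=150))  # server/application
```

As the article notes, this split is not 100% accurate, but it tells the manager which team to call first in most cases.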
Today, there are complications upsetting this scheme. The first is the increasing number of appliances and gateways. The list of security services and appliances that data must pass through is growing: a Web firewall, application firewall, XML firewall, data loss prevention appliance, antivirus device and intrusion prevention system (IPS). Add to that application gateways for XML, email and other traffic, plus the networking group's WOC. All of these devices can affect how applications perform, and many are managed by different groups than the servers and applications they protect. Determining where a response-time problem lies now means first establishing whether one of these devices is the culprit, which often requires the application, security and networking teams to communicate about performance issues to pinpoint the problem.
Further complicating monitoring life are applications written using an SOA framework. One of the important ideas behind SOA is that a single application does not have to perform all the processes required to generate a response. An example best illustrates the concept. A user generates a request to the application server. This master application was written to call on three other applications that respond with information that the master application combines.
With SOA, the master application does not have to know where the other applications run; it just issues the call. The other applications can be on the same server, on another server in the same data center, or on a server in a data center on the other side of the world. For this example, assume that one of the applications is on the same machine, but the other two are on servers in data centers reached across the WAN. When the master application issues the call, SOA generates messages to the other two servers. When they respond, the master application puts together a response.
The user in the example calls Operations and complains about response time. Operations can check the WAN link and the server in the data center where the master application is running and resolve any issues that are found. If nothing shows up, the problem could be in the secondary servers or the WAN links between those servers and the master application.
Because the master application could not respond to the user until it got back all the information, the slow response from either the WAN between the data centers or the servers would cause the entire transaction to be slow. The problem is that Operations is unaware of the secondary transactions. Solving this problem requires greater coordination between network operations and the application groups, plus monitoring tools that can put it all together.
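The example above can be sketched in a few lines: the master application cannot answer until every sub-call returns, so the user-visible response time is driven by the slowest dependency. The service names and delays are invented for illustration:

```python
import time

# Sketch of why a slow secondary service stalls the whole SOA
# transaction: the master application must wait for every sub-call.
# Service names and delays are invented for illustration.

def call_service(name, delay_s):
    time.sleep(delay_s)                 # stand-in for a service round trip
    return name, delay_s

def master_request():
    start = time.monotonic()
    results = [
        call_service("local-pricing", 0.01),
        call_service("remote-inventory", 0.20),  # across the WAN
        call_service("remote-credit", 0.05),     # across the WAN
    ]
    total = time.monotonic() - start
    return results, total

results, total = master_request()
slowest = max(results, key=lambda r: r[1])
print(f"total {total:.2f}s, dominated by {slowest[0]}")
```

Operations sees only the total; without visibility into the secondary transactions, the 0.20-second remote call is invisible, which is exactly why network operations and the application groups need shared monitoring.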
Continue reading part six of this e-Book to learn why wide area network security architecture must change during server consolidation projects.
About the author:
Robin Layland is President of Layland Consulting. As an industry analyst and consultant, Robin has covered all aspects of networking from both the business and technical side, and has published over 100 articles in leading trade journals including NetworkWorld, Business Communication Review, Network Magazine and Data Communications. Prior to his current role, Robin spent a combined fifteen years at American Express and Travelers Insurance in a wide range of jobs including network architect, technical support, management, programming, performance analysis and capacity planning.