Three trends are changing the way organizations must think about application traffic on the network:
- The growing virtual workplace.
- The interconnectivity of IT systems.
- The rise in server virtualization and service-oriented architectures (SOAs).
However, this spread of users and work groups across physical space has not had a significant impact on user expectations for application performance. Regardless of location or how they're connected, geographically dispersed users have all come to expect the same performance and reliability they would get sitting at a well-connected headquarters. With money tight, keeping applications functioning well over the WAN not only helps keep costs down, but provides the collaborative benefit of fully supporting a virtual workplace as an alternative to expensive and time-consuming travel.
It's also wise to pay attention to the gradual federation and interpenetration of IT systems between an organization and its partners, customers, and suppliers. The "perimeter" is coming to be more Swiss cheese than Swiss bank vault. The more one organization depends on another as a part of data and transaction flows, the more important it is that performance on those flows be fast and reliable.
As performance demands increase across a WAN, a lot of attention is being given to server virtualization and SOAs. Server virtualization, in addition to its money-saving and environmental benefits, creates dynamism in the back-end systems that lets the components of an application move around, even from data center to data center, over very short time frames.
SOA applications may reach across the WAN or Internet for services hosted by partners, suppliers, customers, or software-as-a-service (SaaS) providers. However, both virtualization and SOA can make it a lot harder to reliably optimize the performance of an application because they make it harder to find the traffic in the first place.
Enterprise application traffic, both end-user-facing and generated as part of applications' internal workings, is increasingly flowing across the WAN and the Internet. In the face of these changes, the network must become more intelligent and more active in ensuring application performance.
Application-sensitive performance management and optimization must therefore extend beyond traditional port/address/protocol awareness and actions to embrace the dynamic, complex, and dispersed infrastructure that is taking shape now.
In a virtualized data center, performance monitoring can feed directly into active performance management in new ways. Orchestration tools, alerted to increasing loads and decreasing performance, can start up new application or Web servers to improve performance again. Citrix and VMware are among the companies now providing functionality along these lines.
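The feedback loop described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the function name, thresholds, and metrics (`avg_response_ms`, `slow_ms`, `fast_ms`) are assumptions chosen to show how monitoring data might drive an orchestration decision.

```python
# Hypothetical monitoring-to-orchestration loop. All names and
# thresholds here are illustrative assumptions, not a real product API.

def scale_decision(avg_response_ms, active_servers,
                   max_servers=10, slow_ms=500.0, fast_ms=100.0):
    """Return how many servers to add (positive) or retire (negative)."""
    if avg_response_ms > slow_ms and active_servers < max_servers:
        return 1    # performance degrading: spin up another instance
    if avg_response_ms < fast_ms and active_servers > 1:
        return -1   # comfortably fast: reclaim idle capacity
    return 0        # within the acceptable band: no change
```

An orchestration tool would evaluate something like this on each monitoring interval and translate the result into virtual-machine starts or shutdowns.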
This kind of performance enhancement, just like security, will have to extend into all parts of the network. It could be global, with route optimization for traffic traveling across a complex international WAN; or it could be local, by prioritizing traffic based on what application it is associated with right at the edge switch port. It can take action at the lowest levels of the network stack, with traditional address and port-based sensitivity; or it can pick apart packets and work at the highest level, prioritizing not just an application's traffic but specific subsets of that traffic based on content, making performance management sensitive to specific actions taken within applications.
One can easily imagine, for example, that for an application allowing both entry of financial transactions and reporting on those transactions, the enterprise might want to prioritize traffic associated with doing transactions over traffic created by reporting on them.
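A content-aware policy like the transaction-versus-reporting one might be expressed as a classifier that maps request content to a DiffServ code point (DSCP). The URL prefixes below are invented for this example; the DSCP values (46 for Expedited Forwarding, 18 for AF21, 0 for best effort) are standard, but the mapping itself is an assumption.

```python
# Illustrative content-aware classifier. The URL prefixes are assumed
# application paths; DSCP values follow standard DiffServ conventions.
DSCP_EF = 46     # Expedited Forwarding: latency-sensitive transactions
DSCP_AF21 = 18   # Assured Forwarding 21: lower-priority reporting
DSCP_BE = 0      # best effort: everything else

def classify(url_path):
    """Return the DSCP marking for a request, based on its content."""
    if url_path.startswith("/transactions/"):
        return DSCP_EF
    if url_path.startswith("/reports/"):
        return DSCP_AF21
    return DSCP_BE
```

In practice this inspection would happen in a network device or proxy, but the decision logic is the same: traffic for entering transactions is marked for priority treatment, while reporting traffic on the same application rides a lower class.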
At the moment, most organizations start with the pieces that are readily available and that can have the greatest impact on performance with the least impact on IT. These include WAN-optimization tools that compress and accelerate traffic, wide-area file services to replicate non-centralized content, and quality-of-service tools to prioritize critical traffic and mitigate the impact of non-critical traffic.
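The payoff from the compression side of WAN optimization is easy to demonstrate: repetitive application chatter shrinks dramatically. A minimal sketch using Python's standard `zlib` module, standing in for the far more sophisticated dictionary and deduplication techniques real appliances use:

```python
import zlib

def compressed_ratio(payload: bytes) -> float:
    """Fraction of the original size after compression (lower = more savings)."""
    return len(zlib.compress(payload)) / len(payload)

# Repetitive protocol chatter, typical of chatty enterprise applications,
# compresses to a small fraction of its wire size.
chatter = b"GET /status HTTP/1.1\r\nHost: app.example\r\n\r\n" * 100
```

Here `compressed_ratio(chatter)` comes out well under 20 percent of the original size, which is the kind of reduction that makes a congested WAN link feel wider without an upgrade.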
Inevitably, as the pool of performance vendors grows, and as IT shops come to need and deploy various tools, they will be confronted with a whole new layer of policy coordination and management integration. IT will need to coordinate silos of optimization across data centers, WAN, and remote offices. (And there will of course be the standard consolidation among vendors and evolution of standards for interoperability.)
But the finer-grained, more application-fluent, dynamic, and pervasive performance management these technologies provide will ultimately be necessary for IT to deliver the services users require.
About the author: John Burke is a principal research analyst with Nemertes Research, where he focuses on service-oriented architectures and management issues.