Guiding principles behind application delivery optimization solutions

When evaluating application delivery optimization solutions, it is important to keep a few principles in mind to guide you through a potential deployment.

In the fifth part of this series, we explore the guiding principles behind undertaking an ADO deployment, including optimizing where optimal, knowing what kinds of change you can effect, and coordinating deployment and management appropriately and effectively.

Principle 1: Optimize where optimal

Given all of the technological and delivery options available, the first principle of making an application delivery optimization (ADO) acquisition is to optimize where optimal. IT can architect or intervene to improve delivery in at least five major areas. Here's how:

Application development: This is not part of ADO proper, but the architecture and implementation of applications play a vital role in defining the parameters for application performance because this is where many performance problems are born. For example, client-server communications can be frequent or infrequent, terse or "wordy" and can use highly optimizable (TCP) or slightly optimizable (UDP) protocols. Applications can be made caching friendly or not.
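
To make the round-trip point concrete, here is a minimal sketch in Python using the requests library; the /items endpoint and its "ids" parameter are hypothetical, not taken from any particular application. The pattern, not the names, is the point.

    # Sketch: the same data fetched "chattily" (one round trip per item) versus
    # in a single batched request. The endpoint and its "ids" parameter are
    # hypothetical placeholders.
    import requests

    BASE = "https://app.example.com/api"

    def fetch_chatty(item_ids):
        # One HTTP round trip per item: delay grows linearly with the item count.
        return [requests.get(f"{BASE}/items/{i}").json() for i in item_ids]

    def fetch_batched(item_ids):
        # One round trip total: the server returns all requested items at once.
        resp = requests.get(f"{BASE}/items",
                            params={"ids": ",".join(map(str, item_ids))})
        return resp.json()

The same kind of choice applies to caching: a server that sends Cache-Control and ETag headers lets downstream caches and ADCs absorb repeat requests, while one that does not forces every request back to the origin.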

Server platform choice and provisioning: Again, this is not a part of ADO. IT's choices can be limited here. Often, developers or application vendors determine platforms in part or completely. Platforms with highly performant TCP/IP stacks, however, can make a significant difference in application delivery. Striking the balance between performance and cost can push IT to load physical servers as heavily as possible without harming performance. This makes the choice of platform (hypervisor, OS or application servers) a key component of overall performance. Similarly, changes to hypervisor environments can significantly improve I/O performance, again boosting overall application performance.

Data center network: IT has far more control over the technology deployed around the server, so it can generally add appliances to provide services such as load balancing, caching and server offload for processor-intensive tasks, including session management, storage deduplication and encryption.
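
The sort of work such an appliance takes over is easy to picture in miniature. The sketch below is illustrative only (the backend addresses and the fetch_from() stand-in are hypothetical): round-robin distribution across a server pool plus a simple response cache that spares the servers repeat work.

    # Sketch: round-robin backend selection plus a small response cache, the kind
    # of work a load-balancing/caching appliance offloads from servers.
    # Backend addresses and fetch_from() are hypothetical placeholders.
    from itertools import cycle

    backends = cycle(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
    response_cache = {}

    def fetch_from(server, path):
        # Stand-in for the real origin request (e.g., an HTTP GET to that server).
        return f"response for {path} from {server}"

    def handle(path):
        if path in response_cache:       # cache hit: no server is touched at all
            return response_cache[path]
        server = next(backends)          # spread cache misses evenly across the pool
        body = fetch_from(server, path)
        response_cache[path] = body
        return body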

Wide area network (WAN): Again, IT has a great deal of control over platform, provisioning and optimization. It can shift from point-to-point connectivity to an MPLS cloud to allow fully meshed connectivity among data centers and branches, or even use an Internet link for WAN access. IT can add QoS to prioritize some traffic, or layer on full optimization via appliances or cloud services for finer-grained prioritization, acceleration, compression and traffic conditioning.
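
To illustrate just the compression piece, the short sketch below (the payload is invented and deliberately repetitive, the sort of thing chatty applications often send) shows how much a WAN optimizer can shave off the bytes actually crossing the link.

    # Sketch: how much a repetitive payload shrinks under compression, the kind of
    # reduction a WAN optimizer applies transparently in-path. Payload is made up.
    import zlib

    payload = b'{"status": "OK", "region": "us-east", "detail": null}\n' * 500
    compressed = zlib.compress(payload, 6)

    print(f"raw bytes:        {len(payload)}")
    print(f"compressed bytes: {len(compressed)}")
    print(f"reduction:        {100 * (1 - len(compressed) / len(payload)):.1f}%")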

Desktop/endpoint: IT can adjust the networking setup on user computers, for example by adding OS-level prioritization of applications or an optimizer soft client.
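
On the endpoint, prioritization usually comes down to marking traffic so the network can act on it. The sketch below sets a DSCP value on a socket; the value shown (Expedited Forwarding) and the commented-out destination are illustrative, the socket option is Linux-oriented, and the marking helps only if the switches and routers along the path honor it.

    # Sketch: an endpoint "soft client" marking its traffic for priority handling.
    # 0xB8 is DSCP EF (46) shifted into the IP TOS byte; illustrative only.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)
    # sock.connect(("gateway.example.com", 5060))  # hypothetical destination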

IT needs to balance where it is best to optimize something (in the development process for each application, for example) against where it is easiest and most practical (in the network, to help applications over whose development it has no say).

Principle 2: Know thyself

In making an ADO decision, principle two has two practical implications: know what's going on, and know what kinds of change you can effect.

To know what's going on, IT has to implement and integrate management and monitoring across the infrastructure, Layers 1 through 7. This means integrating in-house management and monitoring systems with carrier and provider management data (e.g., from a SaaS provider) wherever possible, to build a comprehensive picture of how each piece of the infrastructure, from server software through the networks to client software, contributes to performance. It is also critical that IT staffs watch for patterns in user complaints. Which application delivery optimization solutions to test and deploy will depend on which application or group of users is suffering.
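
One practical way to combine those views is to line up the same transaction across sources and attribute the time. The toy sketch below uses invented numbers and field names: the client-side total comes from in-house monitoring, the server-side figure from the provider, and the difference points at the network and everything else in between.

    # Sketch: attributing end-to-end response time by combining an in-house,
    # client-side measurement with the provider's server-side figure.
    # All numbers and field names are invented for illustration.
    samples = [
        {"txn": "login",  "client_total_ms": 1450, "provider_server_ms": 220},
        {"txn": "search", "client_total_ms": 3900, "provider_server_ms": 3100},
    ]

    for s in samples:
        elsewhere_ms = s["client_total_ms"] - s["provider_server_ms"]
        print(f'{s["txn"]}: server {s["provider_server_ms"]} ms, '
              f'network/other {elsewhere_ms} ms')

In this made-up data, "login" points at the network path while "search" points back at the application or server, and the two call for different remedies.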

No amount of WAN optimization will fix a performance problem rooted in traffic between an application server and the database it talks to in the next rack. No amount of application delivery controller magic will fix a problem caused by too many round trips across a high-latency network link. Knowing where a problem lies lets IT pick the right solution from among those that can address it. If the actual problem is not identified, the wrong solution may be deployed, leaving the problem unfixed or even making it worse.
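
The round-trip point is simple arithmetic, shown below with made-up numbers: a transaction that needs 40 serialized round trips over an 80 ms path cannot finish in less than about 3.2 seconds, no matter how much bandwidth or ADC capacity is thrown at it.

    # Sketch: the latency floor for a transaction requiring many serialized round
    # trips; note that bandwidth never appears in the formula. Numbers are made up.
    round_trips = 40      # application turns per transaction
    rtt_ms = 80           # WAN round-trip time

    floor_s = round_trips * rtt_ms / 1000
    print(f"latency floor: {floor_s:.1f} s")   # 3.2 s before any server time at all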

IT staffs should cast a wide net as they consider what they can and should do to fix a performance problem. The choices range from in-house, appliance-based solutions to carrier network solutions and carrier-managed, premises-based solutions using physical or virtual appliances. Managed solutions offload the burden of running the appliances to someone else and have recently become more popular across many different technologies. Cloud-based solutions eliminate appliances altogether. Whatever the delivery method, evaluate application delivery optimization solutions from several vendors and, if possible, test more than one to see what each can do beyond your immediate optimization needs.

Principle 3: Coordinate, don't duplicate

Wherever possible, IT should seek to solve many problems at once with a single tool. Where a targeted solution is needed, its deployment and management should be coordinated with other ADO measures. ADO sits at the convergence of network, security, desktop and data center planning, so it is critical that decisions made about optimization do not adversely affect other areas. Depending on the situation, one solution may be more desirable than another; cloud solutions, for example, look more attractive if optimization is only needed for the short term. Similarly, implementing an ADC to offload encryption and decryption work from servers affects the security architecture, which may need to be updated as a result.
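
As a small illustration of that last point: once an ADC terminates TLS, the application server typically sees plain HTTP and can only trust what the ADC asserts about the original connection. The sketch below uses the common X-Forwarded-Proto convention; it illustrates the architectural shift rather than describing any particular product.

    # Sketch: after TLS offload, the server no longer checks its own TLS socket;
    # it relies on a header the ADC adds, which is why trust boundaries, internal
    # encryption and header-spoofing controls need review.
    def request_was_secure(headers):
        return headers.get("X-Forwarded-Proto") == "https"

    print(request_was_secure({"X-Forwarded-Proto": "https"}))  # True, trusting the ADC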

About the author
John Burke is a principal research analyst with Nemertes Research, where he advises key enterprise and vendor clients, conducts and analyzes primary research, and writes thought-leadership pieces across a wide variety of topics. John leads research on virtual enterprise, focusing primarily on the virtual and mobile desktop, Application Delivery Optimization (ADO), and management and orchestration tools for the virtualized data center and the cloud. He also covers the security aspects of all of these topics, as well as server and storage virtualization, network and application performance management and monitoring, branch office IT and SOA.

This was first published in April 2013
