
The future of application delivery appliances

The Application Delivery 2.0 track at Interop Las Vegas will explore how technologies like virtualization and cloud computing are putting pressure on the wide area network and forcing enterprises to rethink the future of WAN optimization controllers and application delivery controllers. This Q&A with track chair Jim Metzler offers a preview of what's in store for attendees.

You are chairing an Interop Las Vegas track titled Application Delivery 2.0. What exactly is Application Delivery 2.0? Also, what was Application Delivery 1.0?

Jim Metzler: I'm almost embarrassed because people put 2.0 behind everything. But I really think this is justified. Let's start with Application Delivery 1.0. In my mind, application delivery has always been important. For years, at every close of quarter, an application wouldn't perform well. We'd all run around trying to figure out why. And usually we would throw more bandwidth at it and the problem might go away. Many times it wouldn't.

Around four or five years ago, we began to see more of an effort to deal with chatty protocols. That's the issue that really brought Riverbed to life. We also wanted to get some sense of how applications were performing end-to-end, not just as a stovepipe. We began to focus on those types of problems with the first application performance management tools and the first physical appliances for ADCs [application delivery controllers] and WOCs [WAN optimization controllers]. In the 1.0 era, with a four-tier application (a Web browser, Web server, application server and database server), degrading performance was almost always detected by the end user first. And when IT did find out about it, in many cases they had no idea what was causing it because the environment is so complex. That's Application Delivery 1.0. Those problems haven't gone away; I'm not saying everyone has solved them.

With Application Delivery 2.0, virtualization and cloud computing have changed a whole lot. They put more pressure on existing Application Delivery 1.0 [products]. With cloud computing, more people are going to be accessing resources over the wide area network. The WAN comes into play more. You've got more heavily loaded servers, not just running one application but 10 to 20 VMs [virtual machines].

At the same time, you get some new challenges. Virtual desktops are one example of something we didn't have before. And you have a new generation of solutions: virtualized [application delivery controllers] and [WAN optimization controllers] that didn't exist before. In Application Delivery 1.0, we moved people out to branch offices, and that drove up WAN traffic. In Application Delivery 2.0, those workers are now mobile. And with their smartphones, they're not just doing email and surfing the Web; they're also doing business applications and putting more pressure on the WAN.

In the new environment, say a hybrid cloud environment, we're going to have end users in branch offices on their cell phones coming over a variety of WANs, maybe the Internet and maybe MPLS. The Web server is going to be hosted by some third party, and application and database servers are going to be in corporate headquarters because they want to keep that data confidential. We now have to gather management data over three or four different organizational domains—and within each one of them, over multiple technology domains—and piece that together to do troubleshooting on a per-VM basis. That's an order of magnitude more challenging than where we are today. That is Application Delivery 2.0. 
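To make that cross-domain stitching problem concrete, here is a minimal sketch, assuming each party (enterprise, network provider, cloud provider) can export simple latency records keyed by a shared VM identifier. The record names, segments and numbers below are hypothetical, chosen only to illustrate the per-VM troubleshooting Metzler describes.

```python
# Illustrative sketch only -- not a real management API. It assumes each
# organizational domain can export latency records keyed by a shared VM id,
# which is exactly the agreement that is hard to get in practice.
from collections import defaultdict

# Hypothetical per-domain measurements: (vm_id, segment, latency_ms)
enterprise_records = [("vm-42", "lan", 2.1), ("vm-42", "app-server", 35.0)]
network_provider_records = [("vm-42", "mpls", 48.5)]
cloud_provider_records = [("vm-42", "web-tier", 120.3)]

def merge_by_vm(*sources):
    """Stitch per-domain records into one per-VM view for troubleshooting."""
    per_vm = defaultdict(dict)
    for source in sources:
        for vm_id, segment, latency_ms in source:
            per_vm[vm_id][segment] = latency_ms
    return per_vm

view = merge_by_vm(enterprise_records, network_provider_records, cloud_provider_records)
for vm_id, segments in view.items():
    worst = max(segments, key=segments.get)
    print(f"{vm_id}: total {sum(segments.values()):.1f} ms, worst segment: {worst}")
```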

In that transition to Application Delivery 2.0, what sort of products and technologies are going to emerge to meet these new challenges? Will a new generation of WAN optimization controllers and application delivery controllers emerge? Will these be discussed at Interop Las Vegas?

Metzler: I'm really jazzed by the emergence of virtualized appliances. I have a session on virtualized application delivery appliances. Most of IT is still physical and static. You can fire up a VM quickly and you can move it around, but everything else is static. One of the reasons I like these virtualized appliances [is that] if you're shipping a VM to a different data center and that site doesn't have a WOC or ADC, you can ship a virtual one along with the VM in roughly the same amount of time.

From a management perspective, you're going to need some kind of metadata schema that is going to describe the management data that you're going to gather from the enterprise and the network service provider and the cloud computing providers—some kind of agreement on that. Even in some enterprises, you're going to have laptops, wired and wireless LANs, two or three different types of smartphones. The application and database servers are virtualized, with maybe one on VMware and one on Hyper-V. Getting that information just from within the enterprise is difficult, but then you have to get that info from other service providers. 
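As a rough illustration of what such an agreed metadata schema might look like, here is a minimal sketch in Python. The field names and sample values are assumptions made for the example, not any actual standard or vendor format.

```python
# A minimal sketch of a shared management-data schema that the enterprise,
# the network service provider and the cloud provider could all agree to
# populate. Field names here are assumptions for illustration only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ManagementRecord:
    source_org: str         # "enterprise", "network-provider", "cloud-provider"
    technology_domain: str  # e.g. "wired-lan", "wlan", "vmware", "hyper-v"
    vm_id: str              # identifier all parties agree to report
    metric: str             # e.g. "response_time_ms", "packet_loss_pct"
    value: float
    collected_at: datetime

# Each party exports records in the agreed shape so they can be correlated later.
sample = ManagementRecord(
    source_org="cloud-provider",
    technology_domain="hyper-v",
    vm_id="vm-42",
    metric="response_time_ms",
    value=118.0,
    collected_at=datetime(2011, 5, 9, 10, 30),
)
print(sample)
```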

Why are virtualized application delivery controllers and WAN optimization controllers helpful in managing all this complexity? And is there any downside to taking them out of the physical appliance?

Metzler: A lot of vendors say these are just as fast as appliances. Traditionally in our industry, conventional wisdom was that if you want it really fast, you do it in hardware. If you're less concerned with speed, you can do it in software. One of the challenges here is just how fast these virtual appliances can be. That's still to be played out because they're still somewhat new.

Having said that, in most branch offices you're not terribly worried about real high-end performance, and I could also argue that Moore's Law doubles the performance of processing every 18 months. You might say, "What are you concerned with?" A VM running on a screaming server gives these WAN optimization controllers quite a bit of power in the branch. But that's still one of the big question marks out there.
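As a back-of-the-envelope check on that Moore's Law argument, a doubling every 18 months works out to roughly a fourfold gain over three years; the three-year horizon below is just an example.

```python
# Quick arithmetic behind the Moore's Law point: if processing performance
# doubles every 18 months, how much headroom does a branch server gain over
# an example 36-month horizon?
months = 36
doublings = months / 18
relative_performance = 2 ** doublings
print(f"After {months} months: {relative_performance:.0f}x today's processing power")
```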

There are certain areas in the data center where you really need a screaming hardware appliance with lots of I/O. But other places, deep into that four-tier architecture, maybe a virtual one makes sense. Or maybe you want to put a virtual one inside a server where you've got 15 VMs, 12 of them running applications. You might want to have an ADC inside the server on another VM just to do load balancing across the VMs in that device.
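To illustrate the load-balancing role that in-server ADC VM would play, here is a toy round-robin sketch. The backend addresses and the round-robin policy are assumptions chosen for the example, not how any particular ADC product works.

```python
# Toy illustration of an ADC VM spreading requests across application VMs
# that live on the same physical server. Addresses are made up.
import itertools

app_vm_backends = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
next_backend = itertools.cycle(app_vm_backends)  # simple round-robin policy

def route_request(request_id: int) -> str:
    """Pick the next application VM for this request."""
    backend = next(next_backend)
    print(f"request {request_id} -> {backend}")
    return backend

for i in range(6):
    route_request(i)
```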

The architecture of how this plays out is still to be determined, but that doesn't concern me because the architecture of the data center in general is still to be determined. Everything is in play right now.

You have two sessions at Interop Las Vegas specifically aimed at cloud computing: one on optimizing performance and one on how to manage a public cloud computing environment. I think these topics are very relevant to WAN managers. What do you want them to come away from these sessions having learned?

Metzler: When you think of cloud, obviously you think of public and private. I'm gearing those sessions toward the public cloud. There certainly are challenges in the private cloud, too. At least initially, the first generation of virtual switches was incredibly simplistic in terms of management capability. You suddenly lost any insight into that inter-VM traffic from a management, optimization and security perspective. Well, there are ways to get around that now. You can put a virtual probe in there. You can use the Cisco Nexus 1000V.
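As a rough sketch of what a virtual probe sitting on a hypervisor's virtual switch could surface (this is not the Nexus 1000V or any vendor's actual tooling), here is a minimal per-flow byte counter over made-up inter-VM traffic.

```python
# Sketch of the visibility a virtual probe restores: per-flow byte counts for
# traffic between VMs on the same host, which never touches the physical
# network. The packet tuples below are invented for illustration.
from collections import Counter

# (src_vm, dst_vm, bytes) as a probe on the virtual switch might observe them
observed_packets = [
    ("web-vm", "app-vm", 1500),
    ("app-vm", "db-vm", 900),
    ("web-vm", "app-vm", 1500),
    ("app-vm", "db-vm", 400),
]

inter_vm_bytes = Counter()
for src, dst, size in observed_packets:
    inter_vm_bytes[(src, dst)] += size

for (src, dst), total in inter_vm_bytes.items():
    print(f"{src} -> {dst}: {total} bytes (invisible to physical-network tools)")
```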

In the public cloud, I really want to raise awareness of the challenges of a hybrid cloud solution, and I want to see [WAN managers] begin to drive the conversation with their vendors, whether it's their management vendors or cloud vendors. They should say, "Hey, I'm looking at a cloud solution, and I realize there are some issues there. Let me do more in-depth due diligence and ask questions [of] cloud providers and network providers." I want to equip the network person to know what to look for in these cloud solutions from a management and performance perspective.

Finally, you have an Interop Las Vegas session entitled "What does virtualization mean to the branch office?" What will that session focus on?

Metzler: Ever since Application Delivery 1.0 started three to five years ago, we've been talking about getting resources out of branch offices. OK, but you've got to have something there. So what is the design, configuration and physical equipment for the branch? What do you leave in the branch office in terms of functionality—DNS and DHCP? What kind of device do you leave there—a switch, router, WAN optimization controller, branch office box—and why? You can have Cisco in there saying it's all about routing and we have the ISR with VMs in it. Riverbed is saying WAN optimization is the way to go. And others say we have this screaming server we can put in there. We can put VMs with a virtualized router from Vyatta, virtualized WAN optimization controllers and best-of-breed virtualized products on it for a relatively low cost, and that's good enough for 95% of our branches. So this session will be a conversation on which functionality you want to leave there, what type of form factor, and why.
