What are the drawbacks of using protocol analysis?
One of the big issues is that, if you pull encrypted information out of the network, it needs to be decrypted, analyzed, re-encrypted and sent on its way. That could create latency.
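That decrypt, analyze, re-encrypt round trip is easy to see in miniature. The sketch below is a toy illustration only: a trivial byte-shift stands in for a real cipher, and the analyzer is a no-op. It shows the shape of the pipeline and why every intercepted packet pays a latency cost, not how any real product does it.

```python
import time

def process_encrypted(payload, decrypt, analyze, encrypt):
    """Decrypt, analyze, re-encrypt: each step adds to per-packet latency."""
    start = time.perf_counter()
    clear = decrypt(payload)      # pull the traffic into the clear
    analyze(clear)                # inspect it
    out = encrypt(clear)          # re-encrypt and send it on its way
    latency = time.perf_counter() - start
    return out, latency

# Toy stand-ins for a real cipher (hypothetical; real tools use actual crypto):
rot = lambda b: bytes((x + 13) % 256 for x in b)
unrot = lambda b: bytes((x - 13) % 256 for x in b)

out, latency = process_encrypted(rot(b"GET /index.html"), unrot, lambda c: None, rot)
print(f"round trip added {latency * 1e6:.0f} microseconds")
```

Even with a no-op analyzer the cost is nonzero per packet, which is the latency concern at scale.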
Also, if you only use the protocol approach, you might not see what happens within applications. That becomes important when dealing with J2ME or .NET. The more detail you want, the less likely it is that protocol analysis alone will give you everything you need.

Some management tools already have business process containers that network managers can use to relate infrastructure to business processes. How is this different?
You have to put those items in the container manually. Within a week, it could be wrong. It doesn't take into account the dynamics of the network. [Protocol analysis] can give you a real-world view of the network.

Will this be cost-effective?
I think so. Today, if you want an accurate view of the network, you have to hire a bunch of admins whose sole job is to keep the maps updated. That can quickly get expensive. Those companies will want to put in a solution.

How are applications changing networks?
The way applications use network hardware has changed dramatically. It is crucial that application traffic perform well. For a lot of businesses, critical applications are not just on a server, but are also distributed across the network and connected to business partners.
It is becoming critical that you manage how traffic flows across the network from a business perspective. You need to know more than whether the network is up or down. The question is not that simple anymore. If an e-business application is not performing well, is the network, the database or the application server [to blame]? How do you figure that out? Does the IT department really understand how the different pieces of infrastructure and the network are related?

How are today's network management tools inadequate?
With a traditional network management solution, as soon as you create a map of the network, something pops up that changes that map. Network management vendors struggle with how to deliver an accurate understanding of the network in near-real time. They don't see or monitor how traffic is flowing between different servers and databases. They need to deduce those relationships in a more automatic way.

What are the evolving criteria for network management?
Three or four years ago, I needed a tool that told me when a router crossed a certain threshold and delivered that information to a console, where I could grab all that data and know which router was having trouble. That is nice, but it doesn't give you a sense of how things work together to deliver the service. When an application is slow, how do you fix it when it depends on five servers and two subnets?
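The console-centric threshold model the speaker describes can be sketched in a few lines. Router names and the utilization threshold here are made up for illustration; this is the "alert on a number" pattern, not any particular vendor's tool.

```python
THRESHOLD = 0.80  # hypothetical utilization threshold

def check_routers(utilization_by_router, threshold=THRESHOLD):
    """Return console alert messages for routers over threshold.

    This is the classic per-device model: it can tell you WHICH router
    is in trouble, but nothing about the service that depends on it.
    """
    alerts = []
    for router, util in utilization_by_router.items():
        if util > threshold:
            alerts.append(f"{router}: utilization {util:.0%} exceeds {threshold:.0%}")
    return alerts

print(check_routers({"router-a": 0.45, "router-b": 0.92}))
```

Note that the output is a flat list of device alerts; relating "router-b is hot" to "the billing application is slow" is exactly the gap the interview is pointing at.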
There is a lot of finger-pointing going on. The question is, 'How do we make that finger-pointing disappear?' How can we make service-level management feasible? We have tons of data but don't really have anything that does analysis, or that relates these things from a business service perspective.

How are business trends affecting networks?
The economy is chugging toward a recovery, but the pressure is still on IT to keep budgets in line [and] to reduce costs wherever they can. But you can't just stop on a dime and focus on cost. IT also has to focus on making the business more flexible and making sure the infrastructure supports moneymaking. Those are the pressures.

You have pointed to protocol analysis tools as one approach that may be able to provide an answer. What are protocol analysis tools?
Basically, they are sniffers on steroids. They go out and try to understand how traffic flows across the network, then pull that information back. They use statistical analysis to figure out how the systems on the network relate to each other.

How are they traditionally used?
Say there is a problem on the network. You pull out your sniffer and monitor the network for a while. It really is designed as a troubleshooting tool. It is not designed to be on the network full time, nor is it designed to discover the network.

What companies are involved in rethinking how to use these tools, and what are some of their approaches?
Companies like Hewlett-Packard Co., IBM Tivoli and Computer Associates International Inc. are working to take this technology and turn it into something that can be connected [to the network] all the time, or a good chunk of the time. They need [additional] software to put on top [of packet analysis tools] that does the analytics. This will require using the technology in a very different way than it is being used now.
Companies have experimented with tagging traffic, inserting some sort of ID card into all the traffic packets. For example, traffic going to one location is tagged red, another yellow, another green, and so on. Then you can take that and build a map from those tagged transactions.
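A minimal sketch of that tagging idea, with hypothetical service names standing in for the red/yellow/green labels and simple dicts standing in for packets:

```python
def tag_packet(packet, business_service):
    """Attach a service tag to a packet record -- the 'ID card' idea."""
    packet["tag"] = business_service
    return packet

def map_from_tags(tagged_packets):
    """Group the observed hops by tag to recover each service's actual path."""
    paths = {}
    for p in tagged_packets:
        paths.setdefault(p["tag"], set()).add((p["src"], p["dst"]))
    return paths

# Hypothetical hosts and services, for illustration only:
packets = [
    tag_packet({"src": "web1", "dst": "app1"}, "billing"),   # 'red' traffic
    tag_packet({"src": "app1", "dst": "db1"}, "billing"),
    tag_packet({"src": "web1", "dst": "cache1"}, "catalog"), # 'yellow' traffic
]
print(map_from_tags(packets))
```

The payoff is that the resulting map is keyed by business service, not by device, which is the infrastructure-to-business-process link the interview keeps returning to.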
Instead of tagging, another way to approach it is to take sample transactions and just look at those. Either way, instead of making a map of the network and then assuming the traffic follows the map, you are following the traffic and then developing the map based on where the traffic actually goes. It will be a year or so before we start seeing case studies on these various approaches to see what works best.