The next suite of applications is used in network engineering and analysis activities. These tools are used when a high-level approach is needed for understanding how the network operates as a whole, both within each individual device and between the internal network and any externally connected networks. They provide the ability to mock up and analyze potential network configurations, which enables the administrator to identify where performance bottlenecks and poor designs could impact the network before any purchases are made.
Any network design activity involves some measure of on-paper engineering to verify that the design provides the level of connectivity needed to support its hosted applications and users. These tools also assist the designer in validating the correctness of their designs. The three tools we'll look at in this section are protocol analyzers, traffic generators, and network simulation tools.
Most systems and the applications they run are "black boxes": they perform their functions internally while offering only limited visibility into their inner workings. This opacity makes applications difficult to troubleshoot when they stop working; the administrator must rely on the status messages sent to the system for information on the health of the application.
One way some applications reveal a little about their inner workings is in how their individual servers communicate with each other and between server and client. A savvy network administrator can often learn a lot about an application by watching the packet-by-packet traffic flowing in and out of the application's host server. A protocol analyzer is the tool that enables this capability.
Protocol analyzers are configured to use network interface cards (NICs) in "promiscuous mode" to watch all the traffic along a particular link. Typical NICs only process the data that is addressed to them, but a NIC in "promiscuous mode" will process all data no matter which device it is addressed to. In this manner, the administrator can watch all the traffic coming out of the problematic server and get a good understanding of the inner workings of the failed application.
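To illustrate what an analyzer does with each captured frame, here is a minimal Python sketch. It decodes the Ethernet II header that prefixes every packet on the wire; a hand-built sample frame stands in for a live promiscuous-mode capture, which would require a raw socket and elevated privileges:

```python
import struct

def parse_ethernet_frame(frame: bytes):
    """Parse the 14-byte Ethernet II header from a raw captured frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), hex(ethertype), frame[14:]

# A hand-built frame: broadcast destination, EtherType 0x0800 (IPv4)
frame = bytes.fromhex("ffffffffffff" "0a1b2c3d4e5f" "0800") + b"payload"
dst, src, ethertype, payload = parse_ethernet_frame(frame)
print(dst, src, ethertype)  # ff:ff:ff:ff:ff:ff 0a:1b:2c:3d:4e:5f 0x800
```

A real analyzer repeats this kind of decode for every layer of the stack, which is why the display filters discussed below matter so much.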
Figure 4.4: In a switched network, for a protocol analyzer to monitor traffic across a link, it is often necessary to mirror that link's traffic to the protocol analyzer.
Protocol analyzers are also useful for finding problems between network devices. When network devices are interconnected, they must communicate with each other to maintain network routing tables (a process called "convergence") and nearest-neighbor information. By using a protocol analyzer to monitor this device-to-device communication, a trained network administrator can track down problems between network devices.
The problem with protocol analyzers is that they produce huge amounts of data, and parsing that data for useful information is a difficult task. A good protocol analyzer will be designed in such a way to categorize, group, and isolate that traffic into flows that are useful for the administrator. Good protocol analyzers also include display filters that convert the binary packet data into human-readable information. Some features of an effective protocol analyzer are:
- Color coding
- Display filters for common protocols/applications
- Traffic graphing and tree mode
- Flow, packet, and protocol analysis
- Low system resource use
- Capture save and replay
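As a rough sketch of what a display filter does, the following Python snippet narrows a capture down to one protocol and one host. The in-memory packet representation here is invented for illustration and is not any real analyzer's API:

```python
def display_filter(packets, protocol=None, host=None):
    """Return only the packets matching the given protocol and/or host,
    mimicking a protocol analyzer's display filter."""
    def matches(pkt):
        if protocol and pkt["protocol"] != protocol:
            return False
        if host and host not in (pkt["src"], pkt["dst"]):
            return False
        return True
    return [p for p in packets if matches(p)]

# A toy three-packet capture; addresses are placeholders
capture = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "protocol": "HTTP"},
    {"src": "10.0.0.7", "dst": "10.0.0.9", "protocol": "DNS"},
    {"src": "10.0.0.5", "dst": "10.0.0.2", "protocol": "HTTP"},
]
filtered = display_filter(capture, protocol="HTTP", host="10.0.0.9")
print(filtered)  # only the first packet survives both conditions
```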
There are two big gotchas with protocol analyzers and the process of capturing a packet stream. First, setting a NIC into promiscuous mode and completing a capture is extremely resource-intensive for the machine doing the capture. Most protocol analyzers will drop packets when the processor cannot keep up with the flow of incoming data, which can invalidate a capture because of the missing packets. Thus, it is a good idea to limit a capture to just the hosts and protocols for which you need data; gathering more than that also adds unnecessary "noise" to the useful data you're trying to collect.
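Command-line analyzers such as tcpdump express this kind of restriction as a capture filter, which discards unwanted packets before they consume capture resources. The interface name and address below are placeholders for your own environment:

```shell
# Capture only traffic to or from one server on the HTTP port,
# writing raw packets to a file for later analysis.
# eth0 and 10.0.0.5 are placeholders for your environment.
tcpdump -i eth0 -w app-capture.pcap 'host 10.0.0.5 and tcp port 80'
```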
Second, most modern networks are switched, which means that frames are forwarded by the switching infrastructure only to the port of their destination and not to every host on the switch. If you're in a switched network and you notice you're not seeing any data, you're experiencing this feature. To get the correct data to the protocol analyzer, you may need to mirror the network port in question to the port where the protocol analyzer resides. Port mirroring should be a feature of your network hardware.
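On Cisco IOS switches, for example, port mirroring is configured as a SPAN session; the session number and interface names below are placeholders for your environment:

```
! Mirror all traffic (both directions) from the server's switch port
! to the port where the protocol analyzer is attached.
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/24
```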
Traffic generators are the logical opposite of protocol analyzers: they push out volumes of traffic rather than gathering it. The intent with a traffic generator is to simulate load on a network link so that performance metrics can be obtained during periods of known load. Traffic spikes can also be simulated to give the administrator a perspective on network and link behavior during periods of high use. These tools are handy for testing applications that will be used over high-latency links, such as those that span continents or ride satellite connections.
Good traffic generators let you configure the amount of traffic to send across the connection and the type of traffic to send, while concurrently measuring the latency of the connection during the test. Network conditions such as jitter, latency, and packet loss can be simulated by configuring them in the generator. An effective traffic generator will include some of the following features:
- Dynamic load adjustment
- Estimated circuit bandwidth entry
- Graphical interface
- Adjustable load percentages
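The core idea can be sketched in a few lines of Python: send a fixed-rate stream of UDP packets and report the offered bit rate. The destination, packet rate, and payload size below are illustrative values, not recommendations:

```python
import socket
import time

def generate_udp_load(dest, packet_count, packets_per_second, payload_size=512):
    """Send a fixed-rate burst of UDP packets and return the offered
    load in bits per second. dest is a (host, port) tuple."""
    payload = b"\x00" * payload_size
    interval = 1.0 / packets_per_second
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.monotonic()
    for _ in range(packet_count):
        sock.sendto(payload, dest)
        time.sleep(interval)  # crude pacing; real tools schedule precisely
    elapsed = time.monotonic() - start
    sock.close()
    return packet_count * payload_size * 8 / elapsed  # bits per second

# Push a small, short burst toward a loopback port (no listener needed)
rate_bps = generate_udp_load(("127.0.0.1", 9999), packet_count=50,
                             packets_per_second=100, payload_size=512)
print(f"offered load: {rate_bps:.0f} bit/s")
```

A production traffic generator layers the features above on top of this loop: it adjusts the pacing dynamically, mixes traffic types, and measures latency on a return channel at the same time.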
Network simulation tools
Network simulation tools allow the administrator to build a mock-up of potential network configurations for purposes of functional and data flow diagramming, pre-purchase functionality engineering, and logical-to-geographical mapping. Some network simulation tools can map existing network connections and devices onto administrator-defined geographical maps. This functionality lets the administrator see at a glance green and red indicators that show which locations in the extended network are experiencing problems.
This is especially handy in larger networks that span multiple sites. By converting device hostnames and/or IPs into geographical representations, it is easier for the network administrator to triage events as they occur. Network simulation tools typically include some of the following feature sets:
- Green/red indicators
- Administrator-configurable mapping
- Web page support
- Real-time NMS updates
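The roll-up behind those green/red indicators can be sketched in a few lines of Python; the hostnames, sites, and health flags below are invented for illustration:

```python
def site_status(device_status, device_site):
    """Roll per-device health up to a per-site green/red indicator.

    device_status maps hostname -> True (healthy) / False (faulted);
    device_site maps hostname -> geographical site name.
    """
    site_ok = {}
    for host, healthy in device_status.items():
        site = device_site[host]
        # A site shows red if any device there is faulted
        site_ok[site] = site_ok.get(site, True) and healthy
    return {site: "green" if ok else "red" for site, ok in site_ok.items()}

devices = {"rtr-denver-1": True, "rtr-denver-2": False, "rtr-boston-1": True}
sites = {"rtr-denver-1": "Denver", "rtr-denver-2": "Denver",
         "rtr-boston-1": "Boston"}
print(site_status(devices, sites))  # {'Denver': 'red', 'Boston': 'green'}
```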
Troubleshooting involves good technique and good tools
Throughout this guide, we've discussed a number of ways that implementing good proactive measures into an SMB or mid-market network can improve uptime, monitor fault and performance issues, and generally keep the network humming along. As you can see, good network management involves implementing good technology to keep an eye on the bits and bytes as they pass through the network. It also involves good practices by the IT department in ensuring that notifications are set up correctly, devices are configured and updated according to policy, and performance is watched carefully. It is of critical importance that you develop your own skills to take the data you receive from this technology and turn it into something usable and useful for your network.
NETWORK TROUBLESHOOTING AND DIAGNOSTICS
Developing good troubleshooting technique
Tool suites for identifying the problem
IP address management
Network engineering applications
About the author:
Greg Shields is a Principal Consultant with 3t Systems in Denver, Colorado - www.3tsystems.com. With more than 10 years of experience in information technology, Greg has developed extensive experience in systems administration, engineering, and architecture, specializing in Microsoft, Citrix, and VMware technologies. Greg is a Contributing Editor for both Redmond Magazine and Microsoft Certified Professional Magazine, authoring two regular columns along with numerous feature articles, webcasts, and white papers. He is known for his ability to relate highly technical concepts with a drive towards fulfilling business needs. Greg is also a highly sought-after instructor and speaker, teaching system and network troubleshooting curriculum for TechMentor Events, a twice-annual IT conference, and producing computer-based training curriculum for CBT Nuggets on numerous topics. Greg is a triple Microsoft Certified Systems Engineer (MCSE) with security specialization and a Certified Citrix Enterprise Administrator (CCEA).