Can you explain what "full-stream reassembly" means and why it is important for network-based monitoring?
If there is a single "secret ingredient" that differentiates some network-based monitoring products from others, it is full-stream reassembly capability. Some network-based monitoring tools claim to yield Layer 7 metrics, such as HTTP status codes, based on deep packet inspection (DPI) or multi-packet analysis. That kind of analysis, however, works only in a laboratory environment where every packet arrives in order and there is no packet loss or IP fragmentation.
To provide real-time transaction analysis in real-world environments, network-based monitoring tools need to be able to reassemble IP fragments, reorder out-of-order segments and compensate for packet loss on the monitoring links. These obstacles are overcome with a full-stream reassembly process. By completely reconstructing the conversations occurring in the data center, network-based monitoring systems can then analyze those conversations to extract health and performance metrics for various protocols. It is therefore very important for purchasers to ask their vendor(s) whether the proposed solution performs full-stream reassembly.
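To make the idea concrete, here is a minimal sketch of one direction of stream reassembly: segments arriving out of order are buffered by sequence number and emitted only once the bytes preceding them have arrived. The class and method names are purely illustrative and do not come from any particular product; real reassembly engines must also handle retransmissions, overlaps, timeouts and IP fragmentation.

```python
class StreamReassembler:
    """Illustrative one-direction stream reassembler (hypothetical names)."""

    def __init__(self, initial_seq: int):
        self.next_seq = initial_seq  # next byte offset we expect
        self.pending = {}            # seq -> payload, held until contiguous
        self.stream = bytearray()    # reconstructed conversation bytes

    def add_segment(self, seq: int, payload: bytes) -> None:
        # Discard segments we have already fully consumed (e.g. retransmissions).
        if seq + len(payload) <= self.next_seq:
            return
        # Trim any leading bytes that overlap data already in the stream.
        if seq < self.next_seq:
            payload = payload[self.next_seq - seq:]
            seq = self.next_seq
        self.pending[seq] = payload
        # Drain every buffered segment that is now contiguous with the stream.
        while self.next_seq in self.pending:
            data = self.pending.pop(self.next_seq)
            self.stream.extend(data)
            self.next_seq += len(data)

# Segments delivered out of order are still reconstructed correctly:
r = StreamReassembler(initial_seq=0)
r.add_segment(4, b"/ HTTP/1.1\r\n")  # arrives before the bytes preceding it
r.add_segment(0, b"GET ")
print(bytes(r.stream))  # b'GET / HTTP/1.1\r\n'
```

Only after this reconstruction step can the monitoring system parse the byte stream as a protocol conversation (here, the start of an HTTP request) and extract metrics from it.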
Purchasers should also ask their vendor(s) about scalability. Full-stream reassembly should be performed on all transactions that pass over the network, not just a sample of them. Only network-based monitoring tools that can perform this level of analysis at wire speed can provide the visibility needed to conclusively answer questions about performance. Many network-based monitoring vendors with offline analysis capabilities have added limited real-time analysis by passing a subset of traffic through their analysis engine. Because these tools were originally designed for offline analysis, you can configure them to analyze several thousand flows, but they are unable to scale up to analyze all the transactions that pass over today's 10 Gbps network links.
This was first published in January 2013