If you think throwing hardware alone at network latency will address the problem, think again.
Where latency fits in 10 Gigabit Ethernet performance
Most network engineers and administrators agree that system and application performance tuning is ultimately about reducing latency.
It could be said that regardless of the interface speed, network performance tuning on a ‘Nix server revolves around the same factors:
• Layer 2 (NIC) driver efficiency
• IP stack processing overhead
• TCP stack processing overhead and the congestion control algorithm
• Disk I/O performance
• Bus I/O architecture
But the bottom line is that system performance (and in turn application performance) is improved by minimizing latency.
In the simplest terms, latency is the delay any application or system imposes on the delivery of a service to an end user, and in practice it is almost always blamed on the network.
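To see why faster interfaces alone don't eliminate latency, it helps to break a packet's delay into components. The sketch below (a back-of-the-envelope illustration with assumed numbers, not figures from this article) compares serialization delay, which a faster link does shrink, with propagation delay, which no interface upgrade can touch:

```python
# Rough latency-budget sketch (illustrative numbers, not measurements).
# Serialization delay shrinks with link speed; propagation delay does not.

FRAME_BITS = 1500 * 8  # one full-size Ethernet payload, in bits


def serialization_delay_us(link_bps: float) -> float:
    """Time to clock one frame onto the wire, in microseconds."""
    return FRAME_BITS / link_bps * 1e6


def propagation_delay_ms(distance_km: float, v_mps: float = 2e8) -> float:
    """One-way signal travel time in fiber (~2/3 the speed of light), in ms."""
    return distance_km * 1000 / v_mps * 1e3


ser_1g = serialization_delay_us(1e9)    # 12.0 us on 1 GbE
ser_10g = serialization_delay_us(10e9)  # 1.2 us on 10 GbE
prop = propagation_delay_ms(4000)       # ~20 ms for a coast-to-coast path

print(f"1 GbE serialization : {ser_1g:.1f} us")
print(f"10 GbE serialization: {ser_10g:.1f} us")
print(f"4000 km propagation : {prop:.1f} ms")
```

Going from 1 GbE to 10 GbE saves about 11 microseconds per frame, while the 20-millisecond propagation delay of a long-haul path is untouched: three orders of magnitude larger, and immune to hardware spending.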
For years, computer engineers, end-users and management have lived with the understanding that if you want better performance, you need to upgrade. This understanding has been largely shaped and fueled by the following concepts:
Moore’s Law: First articulated by Gordon Moore, later a co-founder of Intel, in 1965: the transistor count of a microchip doubles roughly every two years, popularized as processing power doubling every 18 months. Corollary: computers become faster, and the price of a given level of computing power halves.
Gilder’s Law: Proposed by technologist George Gilder: The total bandwidth of communication systems triples every 12 months.
Metcalfe’s Law: Attributed to Robert Metcalfe, originator of Ethernet and founder of 3Com: The value of a network is proportional to the square of the number of nodes. So, as a network grows, the value of being connected to it grows while the cost per user stays the same or even falls.
If we follow these laws, latency is addressed by adding hardware: faster CPUs, faster memory, faster disks, faster buses, faster network interfaces and more network bandwidth. The great thing about this approach is that it only costs money, and the results are generally immediately apparent to end-users in the form of faster mail and web access.
So why not just upgrade the hardware? The laws of thermodynamics and the speed of light are working against us on that old approach. If you look at the direction of processor architecture and network standardization development, there are some big changes afoot. In the world of CPU design there has been a shift from designing vertical processors (ever-faster single processors) to horizontal processors (multiple cores in a single die package). On the network side, the most recent and future standards development is all geared toward massive throughput (40G/100G Ethernet) and how nodes communicate over it (802.3bd, 10GBASE-LX4). But these upgrades are about expanding capacity, not necessarily improving the performance you already have.
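The capacity-versus-performance distinction shows up clearly in TCP's bandwidth-delay product: a sender can have at most one window of data in flight per round trip, so a single flow's throughput is capped by window size divided by RTT, no matter how fast the link is. The numbers below are illustrative assumptions, not figures from this article:

```python
# Bandwidth-delay product sketch: single-flow TCP throughput is bounded by
# window_size / RTT regardless of link speed. Illustrative numbers only.


def max_tcp_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on one TCP flow's throughput for a given window and RTT."""
    return window_bytes * 8 / rtt_s


# A classic 64 KiB window (no window scaling) over a 50 ms WAN path:
cap = max_tcp_throughput_bps(64 * 1024, 0.050)
print(f"Throughput cap: {cap / 1e6:.1f} Mbit/s")  # ~10.5 Mbit/s on ANY link

# Window needed to fill a 10 Gbit/s pipe at the same RTT:
needed_bytes = 10e9 * 0.050 / 8
print(f"Window to fill 10 GbE at 50 ms RTT: {needed_bytes / 2**20:.1f} MiB")
```

With an unscaled 64 KiB window and 50 ms of round-trip latency, a flow tops out around 10.5 Mbit/s whether the link is 1 GbE or 10 GbE; filling the 10 GbE pipe at that RTT would require roughly a 60 MiB window. Tuning the stack, not buying the interface, is what closes that gap.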
The immediate system/application performance-tuning problem facing engineers and administrators is one they have created themselves: failing to properly document their application and server environments. What they need is an application dictionary that documents how each application functions, along with server/service dependencies, transactional processes, and how those processes look on the wire.
With that visibility and network mapping in hand, networking teams can see where optimization techniques will maximize what they already have. Then, where those techniques won't work, they can throw hardware at the situation.
About the author: Michael J. Martin has been working in the information technology field as a network and Unix system admin for nearly 20 years. His early experiences designing, implementing, and supporting MIS infrastructures for research and ISPs give him a unique perspective on his current pursuits in large-scale internetworking and security architecture. As a network architect, he has designed high-speed/high-availability LAN/MAN networks for companies such as ANS/AOL, Philips, and the Edgix Corporation, and has provided network consulting for a number of businesses and regional ISPs. Michael also writes and provides training on networking and security related issues. His book Understanding The Network: A Practical Guide to Computer Networking has been translated into Russian and Chinese.