
Networking on the edge

Once upon a time, a NIC was just a NIC. See what's changed.

Once upon a time, a NIC (network interface card) was just a NIC -- it put packets on the wire on a best-effort basis, sometimes as fast as a few megabits per second (Mbps) -- which was usually good enough. That was then.

Today, a NIC isn't just a NIC anymore.

The typical workstation's Fast Ethernet NIC is barely good enough for most end-user applications. And there is growing awareness that the 100 Mbps label on the packaging hardly guarantees packets will actually reach the network at that rate -- underperforming NICs are responsible for the lion's share of sub-optimal network performance.

But the real problem, as stated by Gilder's Law, is that network capacities are growing at roughly three times the rate of CPUs, roughly doubling the achievable rate every six months. By 1999, networks had already outstripped the ability of CPUs to fill them. Today, it would require a 20 GHz CPU operating flat out to saturate a 10 Gbps link -- and that is assuming everything else, particularly the bus, is up to the task.
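As a rough sanity check on the arithmetic above, here is a back-of-envelope sketch using the article's own figure of a 20 GHz CPU and a standard 1500-byte Ethernet MTU:

```python
# Back-of-envelope: per-packet CPU budget at 10 Gbps, using the
# article's hypothetical 20 GHz CPU running flat out.
LINK_BPS = 10e9          # 10 Gbps link
CPU_HZ = 20e9            # hypothetical 20 GHz CPU (article's figure)
MTU_BITS = 1500 * 8      # standard Ethernet MTU, in bits

packets_per_sec = LINK_BPS / MTU_BITS          # ~833,000 full-size packets/s
cycles_per_packet = CPU_HZ / packets_per_sec   # ~24,000 cycles per packet

print(f"{packets_per_sec:,.0f} packets/s, {cycles_per_packet:,.0f} cycles each")
```

Roughly 24,000 cycles per full-size packet sounds generous until those cycles must also cover the application, the OS, interrupts, and memory copies -- and smaller packets shrink the budget proportionally.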

For high-performance servers and clients, it becomes imperative to find ways around the growing chasm between the CPU and the network -- and the NIC has begun to evolve under considerable pressure. Several strategies have appeared recently, with varying degrees of success.

TCP Offload Engines (TOE):
In this approach, a variety of basic functions are offloaded from the operating system (OS) to an engine on the NIC itself. Effectively this is a special-purpose CPU placed on the NIC to pre-process packets before the OS itself gets to see them. The intention is to reduce the relatively high cost of processing IP and TCP, particularly when the network connection is not critical to any running applications. Functions like checksums and segmentation control can be partially or fully handled by the NIC without invoking the CPU.
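To make the offload concrete, here is the RFC 1071 Internet checksum -- the per-packet arithmetic that checksum-capable NICs and TOEs take off the CPU -- as a minimal Python sketch:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum, as used by IP, TCP and UDP.

    This per-packet arithmetic is exactly the kind of work a
    checksum-offload NIC or TOE performs so the CPU doesn't have to.
    """
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):  # sum the data as 16-bit words
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF            # ones' complement of the sum
```

Trivial per packet, but at hundreds of thousands of packets per second the checksums, segmentation, and reassembly add up -- which is why they are attractive candidates for offload.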

The TOE is effectively invisible to the network -- it doesn't require any special implementation on the network side. However, it requires special drivers, and so far integration with the OS is still immature. Adoption industry-wide has been limited. Consequently, the cost has remained high and the relative value has not yet reached a critical point.

Bottom line: While this approach isn't likely to disappear entirely, it has yet to show itself as an effective solution for general high-performance networking.

Teaming NICs:
NIC teaming involves connecting two or more NICs in parallel. As a consequence, the host can use several NICs at once for the same network connection. This allows hosts to see incrementally greater network capacity without jumping all the way to the next available technology (e.g., Fast Ethernet to Gigabit Ethernet, or GigE to 10 GigE). For example, a 2 Gbps network interface can be created for the cost of two 1 Gbps cards (and two ports on the switch) without resorting to 10 Gbps.
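One common teaming policy (used by 802.3ad-style drivers) hashes each flow's address/port tuple so that every packet of a given connection stays on one physical NIC -- preserving in-order delivery -- while different flows spread across the team. A hypothetical sketch (the function and interface names are illustrative, not any vendor's API):

```python
import zlib

def pick_nic(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
             nics: list) -> str:
    """Sketch of flow-hash load spreading across a NIC team.

    Hashing the 4-tuple keeps all packets of one TCP connection on the
    same physical NIC, while distinct flows land on different members.
    """
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return nics[zlib.crc32(key) % len(nics)]

team = ["eth0", "eth1"]
# Every packet of this flow maps to the same NIC; other flows may differ.
print(pick_nic("10.0.0.5", 40000, "10.0.0.9", 80, team))
```

Note the corollary: a single flow can never exceed the speed of one member NIC -- teaming raises aggregate capacity, not per-connection capacity.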

Again, this technique requires special drivers that treat multiple NICs as a unit. And while it offers some benefit to machines experiencing the NIC itself as the chokepoint, it doesn't help much with the chasm of Gilder's Law. Most CPUs simply won't be able to take full advantage of the additional capacity.

Bottom line: Teaming has seen some industry adoption, with sporadic reports of integration issues.

Front-End Aggregators/Accelerators:
Most commonly used on WAN connections and in front of load-balanced services, TCP aggregators are typically appliances that sit between one or more hosts and the network. They are purpose-built systems designed to handle large numbers of TCP connections efficiently. Across WAN connections, the overall flow can be optimized for the limited bandwidth using proprietary compression techniques, non-standard transport protocols, or both. In front of performance-critical servers, an aggregator can ease network bottlenecks by redirecting connections to hosts as they become available.
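The connection-redirect role can be sketched as a least-connections dispatcher. This is a toy model -- the host names are hypothetical, and real appliances layer on health checks, compression, and fail-over:

```python
class Aggregator:
    """Toy sketch of a front-end connection dispatcher.

    Each incoming TCP connection is handed to whichever back-end host
    currently carries the fewest active connections.
    """
    def __init__(self, hosts):
        self.active = {h: 0 for h in hosts}   # active-connection counts

    def assign(self) -> str:
        host = min(self.active, key=self.active.get)  # least-loaded host
        self.active[host] += 1
        return host

    def release(self, host: str) -> None:
        self.active[host] -= 1                # connection closed

agg = Aggregator(["web1", "web2", "web3"])
print([agg.assign() for _ in range(4)])  # → ['web1', 'web2', 'web3', 'web1']
```

The appeal is that none of this touches the end hosts: the aggregator absorbs the connection-handling load that would otherwise fall on each server's CPU.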

These technologies are gaining favor, particularly for expensive WAN connections supporting critical applications. A number of vendors are offering effective solutions that work well point-to-point. High performance Web services have been taking advantage of aggregators for several years, particularly for reasons of fail-over, scalability, and security.

Bottom line: These are usually effective solutions, but they are context-dependent and do not scale down to individual machines.

I/O Acceleration Technology (I/OAT):
Intel has recently announced I/OAT, offering it as an alternative to TOE for the individual network server or client. Intel is now focusing on integration with the CPU and the overall chipset to achieve optimal, coherent performance. There are open questions as to how this approach standardizes beyond Intel -- and whether it is technologically sensible to break down the well-defined separation between the CPU and the NIC as a peripheral.

I/OAT is not yet available and is anticipated in 2006.

Bottom line: It isn't clear whether this is an incremental gesture from Intel to maintain market profile or whether it represents a significant chasm-crossing advantage for the average networked host.

Meanwhile, the pressure on conventional NIC technologies continues to build as CPU capacities fall further behind. And the shape of the network interface card is likely to change rapidly under duress.


Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He also acts as Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2.
This was last published in April 2005
