IP networks have lately been under tremendous pressure to evolve. In Part 1, we reviewed some of the contributing factors that are making IP networks more challenging to manage, including the IPv4/IPv6 transition, Gigabit Ethernet, WLAN, and the impact of new approaches to TCP.
With an eye to the future, in this column we'll continue the litany of influences, including:
- Performance metrics for IP networks
- Edge controls
- NIC augmentation
- Machine-to-machine communications
Performance metrics
VoIP: The face of the new network police
However, it isn't stopping there. Both the ITU and the IETF are developing a series of standards that define IP performance and, as in the case of ITU-T Y.1541, specify performance requirements for categories of applications. These requirements provide the basis for network readiness assessments and for the QoS mechanisms employed to support specific applications. Industry groups like the Apdex Alliance are promoting their own application performance metric, one that measures user satisfaction. Each purports to define network performance in terms of end-user objectives and productivity.
Overall, this will be a good thing. However, network engineers can expect to see a flurry of new "standards," measurement and monitoring features, and QoS mechanisms appearing in network hardware and software. More clearly defined by such metrics, IP networks are likely to achieve significant improvements in manageability and performance. And yet, engaging these emerging technologies will take some trial and error -- and of course, there will be some thrash associated with each new type of application as networks are forced to jump over ever more challenging optimization bars.
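To make the user-satisfaction idea concrete, here is a minimal sketch of an Apdex-style calculation: response-time samples at or below a target threshold T count as "satisfied," samples up to 4T count half as "tolerating," and anything slower counts as zero. The threshold and sample values below are illustrative assumptions, not figures from any particular deployment.

```python
def apdex(samples_ms, t_ms=500):
    """Apdex-style score in [0, 1]: satisfied samples (<= T) count fully,
    tolerating samples (<= 4T) count half, frustrated samples count zero."""
    satisfied = sum(1 for s in samples_ms if s <= t_ms)
    tolerating = sum(1 for s in samples_ms if t_ms < s <= 4 * t_ms)
    return (satisfied + tolerating / 2) / len(samples_ms)

# Two fast responses, one sluggish, one frustrating:
score = apdex([120, 300, 800, 2500])  # (2 + 0.5) / 4 = 0.625
```

The appeal of a metric like this is exactly what the standards bodies are after: it collapses raw network-level measurements into a single number framed around the end user's experience.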
Edge controls
Fix before breaking
While the details are still sketchy, the intention seems clear: aspects of application performance, security, federation, service negotiation, and the like will be handled by these new network infrastructures. For the traditional Layer 1-3 providers, this "pushes the network stack upward" to include functionality that has typically resided in Layers 4-7 and been handled by the end hosts. A typical benefit might see the enterprise customer of a network-dependent vendor (i.e., one whose product relies on the network) connect to an internal resource on the vendor's network via an automated federation mechanism implemented at the network edges.
Another obvious benefit is end-to-end coherence of QoS mechanisms for specific applications (or per user). But as each of these "obvious benefits" comes to live in the network layers, the complexity of the configuration increases. Mid-path network devices no longer simply store-and-forward packets -- correct behavior will no longer be simply a matter of appropriate queue sizes and congestion mechanisms.
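Even the simplest piece of per-application QoS, classifying traffic so mid-path devices can queue it preferentially, illustrates where the end host and the network must cooperate. The sketch below marks a UDP socket's traffic with the Expedited Forwarding DSCP (46), the class typically used for voice; whether any router actually honors the marking depends entirely on network policy, which is precisely the coherence problem described above.

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte, so the value
# passed to IP_TOS is the DSCP shifted left by two.
DSCP_EF = 46  # Expedited Forwarding, commonly used for VoIP

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
```

Marking is the easy half; the hard half is configuring every queue between the two endpoints to treat that marking consistently.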
Mind you, firewalls have already given us an idea of what that kind of grief will be like. So maybe this will actually offer an improvement over the current level of complexity.
NIC augmentation
Networking on the edge
The typical means are TOEs and POEs (TCP Offload Engines and, more generally, Protocol Offload Engines), which reduce the work of the bus and CPU by performing Layer 3 and 4 processing on the interface itself. TOEs have also been used on 1 GigE networks but haven't caught on widely, primarily due to their cost. However, as network capacities grow, the critical servers that need access to the full capacity will need some help -- and POEs seem like the likeliest candidates for high-performance systems.
Other augmentations at the NIC level include multiple-port binding, aggregation/compression/acceleration, packet coalescence, jumbo frames (see Part 1), and other minor tricks that make a big difference in the right settings. Most often these augmentations are described by vendors as transparent; however, they regularly pose deployment challenges, in some cases resisting diagnosis when they malfunction (transparency at its worst). Port binding, for example, has yet to fully mature as a technology, and customers have reported difficulties. And jumbo frames are still waiting for a coherent solution to a mixed-MTU environment.
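A bit of back-of-envelope arithmetic shows why offload and jumbo frames matter at high line rates: the per-frame work (interrupts, header processing) scales with the frame rate, not the byte rate. The sketch below computes the frame rates a host must service at 10 Gbps line rate with standard versus jumbo frames; the 38-byte per-frame figure is the usual Ethernet overhead (header, FCS, preamble, inter-frame gap).

```python
LINK_BPS = 10_000_000_000  # 10 GigE line rate

def frames_per_second(payload_bytes, link_bps=LINK_BPS):
    """Maximum frame rate at line rate for a given MTU-sized payload.
    Per-frame Ethernet overhead: 14B header + 4B FCS + 8B preamble + 12B gap."""
    wire_bytes = payload_bytes + 38
    return link_bps / (wire_bytes * 8)

std = frames_per_second(1500)    # roughly 813,000 frames/s
jumbo = frames_per_second(9000)  # roughly 138,000 frames/s
```

Jumbo frames cut the per-frame workload by nearly a factor of six, which is exactly the kind of relief a POE or a larger MTU is meant to provide -- and why mixed-MTU paths, which force everything back down to the smallest MTU, are so frustrating.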
Security, fortunately and unfortunately, comes first. So we are all too familiar with the need to support VPNs, encrypt data payloads, secure files on network servers, and so on. There is a growing buzz surrounding phrases like "architecturally implemented security" that reflect InfraNet/AON influence. Again, though, details are sketchy.
'Nough said. Moving on.
Machine-to-machine communications
Complementary technologies such as RFID and GPS, as well as low-power sensors and sensor-network infrastructures, will provide all the necessary ingredients for the dramatic upsurge. Everyone and everything will be continuously linked, and that sounds very nice until you start to think about where all those packets are going.
And the impact on your networks? The usual -- more complexity required to manage yet another category of data traffic that competes for bandwidth and impacts your IP performance. Just keeping track of the sources of the data will be like WLAN on steroids. OK, so that may be a matter of hyperbole -- but it isn't clear how M2M is going to land or who it will affect.
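To get a feel for where those packets go, here is a hedged back-of-envelope estimate of the background load a hypothetical fleet of sensors or tags would add. Every number below (fleet size, report interval, report size) is an illustrative assumption, not a measurement.

```python
def m2m_load(devices, report_interval_s, report_bytes):
    """Aggregate (packets/s, bits/s) for a fleet of devices that each emit
    one report of report_bytes every report_interval_s seconds."""
    pps = devices / report_interval_s
    bps = pps * report_bytes * 8
    return pps, bps

# Hypothetical: 100,000 tags, each reporting 100 bytes every 10 seconds.
pps, bps = m2m_load(devices=100_000, report_interval_s=10, report_bytes=100)
# -> 10,000 packets/s and 8 Mbit/s of steady background traffic
```

The bit rate is trivial by modern standards; the packet rate and the sheer number of distinct sources are what make the monitoring and tracking problem feel like WLAN on steroids.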
Oh, for those simple days of packets and routers, store and forward, Laurel and Hardy. Note that convergence wasn't even mentioned here; that was covered in another column (Making the triple play), but it shouldn't be overlooked.
Study hard. Be smart. Keep your head up and your eyes open. It's a jungle out there.