Data center networking architecture draws on machine learning, SDN, AI
Network simplification is the meta-buzzword overtaking the networking world today. Take, for example, SD-WAN. Its value proposition is that it reduces the complexity of the wide area network by replacing MPLS and private line connectivity with services that operate over plain-Jane internet transport.
Then, there is integration in the form of preintegrated stacks of compute, storage and networking in a single rack. These racks are then controlled through a single GUI -- a form of hyper-convergence. Last, but not least, there is software-defined networking, or SDN, touted as enabling IT to remove complex distributed control planes from the network.
That's all well and good. But there are two questions network engineers should ask in response to the network simplification meta-buzzword.
Easier to use, or are there pitfalls?
First, is it really simpler? Moving to SD-WAN, for instance, certainly reduces the number of moving parts involved in deploying and managing a WAN. There are no circuits to purchase, and there is little in the way of managing circuits -- such as monitoring bandwidth usage, massaging quality of service and interacting with the provider on various routing options and settings. Many SD-WAN products replace the local router at each site with a simpler, cheaper device that has far fewer configuration settings. But is the product really simpler, or is the complexity simply hidden behind a layer of pretty GUIs and automation scripts?
Second, is this a return to the locked-in black boxes of the original days of network engineering? Before the Internet Engineering Task Force and other bodies took on the task of making standards that would allow users to mix and match equipment from different vendors, very little real engineering work could take place within the bounds of the operator's network. The information, skills and facilities simply were not available to the average operator in any meaningful sense.
Very few SD-WAN products, for instance, are open in any sense of the word. Most SDN products seem very open at first glance, but you can quickly get lost in "might as well be proprietary" open source projects. That means handing most of the architectural work over to large providers or to the community of folks who care most passionately about -- and can spend the most time on -- various approaches.
Handling the complexity problem the wrong way
Perhaps we are attacking the complexity problem the wrong way. In the case of SDN, there is a real danger we are simply replacing one complex system with another, under the assumption that the replacement will reduce network complexity in a meaningful way.
In the case of hyper-convergence and SD-WAN, we are -- sometimes, at least -- assuming that if there is a GUI or automated script driving the network, then the system is simple. We have somehow gotten into the habit of piling new features and problems onto the old, rather than taking the time to think about what problems we are really trying to solve and how best to solve them.
As a result, network engineering has moved from a lean state -- where entire network operating systems could fit into tens of thousands of lines of code -- to a complex state, where it takes tens of thousands of pages of documentation to describe the feature sets available in most network products.
An example of how far we have come appeared in the Ask Neighbors2 research done by Team Cymru. You can read about the discovery of this interesting distributed denial-of-service (DDoS) attack surface on the Team Cymru blog.
Old feature bites back in unexpected way
Essentially, this is what happened: A feature added many years ago to help troubleshoot multicast control planes -- a feature never widely deployed -- turned out to be a pretty effective point from which to launch DDoS attacks. That raises a question: Of all the millions of lines of code we have deployed across millions of networks, how many more such features are lying around unused? How many of those millions of lines of code are actually being used? How many are just another thing to test? How many bugs have been caused by these unused lines of code?
I can remember pushing a feature into an operating system I was working on and having a customer call me after a crash caused by a completely unrelated -- at least I thought it was unrelated -- feature that no one even knew was still being used.
Start thinking like engineers
Network engineers need to get in the habit of asking for fewer features, not more. We need to stop treating features like a box of bandages we can use to fix any boo-boo the business might hand us. We need to start thinking like engineers and aiming for network simplification.
Engineers know how to slim things down, as well as build them up. As the old saying goes, an optimist says the glass is half full; the pessimist says the glass is half empty; the engineer says the glass is overengineered. We have a lot of really full, overflowing glasses right now in the form of products that do everything from slicing the bread to putting the peanut butter on.
It is time to start engineering to purpose for network simplification.