Once upon a time, back in the dark ages of IT, people would sit around the table and talk about "the black box" while casting dark stares in the direction of the network team. The network, whether it was ARCNET, DECnet, Ethernet or even LANTastic, seemed inscrutable to them. They just connected their stuff to the wires and hoped for the best.
And in moments of honest reflection, we network engineers would admit that, to them, it was a black box -- an opaque network, if you will. We could see packets coming and going, identify broken routers, buffer overruns and collapsed spanning trees, but without access, others had little more to go on than gut, intuition and anecdote.
Back in those early days, conversations often went something like this:
Them: "I think the network is slow."
Me: "No, it's not."
Them: "Look, I checked the systems and they're fine. I think it's the network."
Me: "It's never the network."
Them: "Well, I still think..."
Me: "Then you'll need to prove it."
It was incredibly difficult for others to pierce this veil. In fact, if urban legends on the subject are to be believed, one reason the image of a cloud is used to signify a network is that those who didn't understand how networks worked labeled them TAMO -- for "then a miracle occurs." The cloud graphic reinforced the divine and unknowable nature of bits flowing through the wire.
But we in the network knew it wasn't a miracle, though it was still something of a black box even to us -- a closed system that took a certain type of input, ran processes inside that we could only partly monitor and then produced a certain type of output.
With time, though, the opaque network became much less of a black box to everyone. Devices, software and our general knowledge grew in sophistication, so that we have now come to expect bandwidth metrics, packet error data, NetFlow conversations, deep packet inspection results, IP SLA statistics and more to be available on demand and in near-real time.
New black boxes threaten network visibility
But recently, two new black boxes have arrived on the scene. And this time, we net admins are on the outside with almost everyone else.
The first of these, virtualization -- as well as its commoditized cousin, cloud computing -- has grown to the point where the count of physical servers in medium-to-large companies is sometimes just a tenth of the overall server count.
Ask an application owner if he knows how many other virtual machines (VMs) are running on the same host and you'll be met with a blank stare. Probe further by asking if he thinks a "noisy neighbor" -- a VM on the same host that is consuming more resources than it should -- is impacting his system. Chances are he'll look at you conspiratorially and say, "Well, I sure think there's one of those, but heck if I could prove it."
Still, we love virtual environments. We love the cost savings, the convenience and the flexibility they afford our companies. But don't fool yourself -- unless we're actually on the virtualization team, we don't really understand them one bit.
Storage represents the same challenge as virtualization, only worse
Storage is the other "new" black box. Disks are the building blocks of arrays, which are collected via a software layer into logical unit numbers (LUNs), which connect through a separate network fabric to be presented as data stores to the virtual layer or as contiguous disk resources to physical servers.
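That layering -- physical disk, array, LUN, data store -- can be sketched as a tiny model. All class and variable names here are illustrative inventions, not any vendor's API; the point is that the application owner only ever sees the top layer, while the physical disks sit several indirections below.

```python
# Illustrative model (hypothetical names) of the storage stack described above:
# physical disks -> array -> LUN -> data store presented to the virtual layer.

class Disk:
    def __init__(self, serial):
        self.serial = serial

class Array:
    """A pool of physical disks managed as one unit by the array software."""
    def __init__(self, disks):
        self.disks = disks

class LUN:
    """A logical slice of an array, presented over the SAN fabric."""
    def __init__(self, array, lun_id):
        self.array = array
        self.lun_id = lun_id

class DataStore:
    """What the hypervisor (or a physical server) actually sees."""
    def __init__(self, luns):
        self.luns = luns

    def backing_disks(self):
        # Walking the chain back down is exactly the visibility problem:
        # nothing at this layer normally exposes the disks underneath.
        return [d for lun in self.luns for d in lun.array.disks]

store = DataStore([LUN(Array([Disk("S1"), Disk("S2")]), lun_id=7)])
print([d.serial for d in store.backing_disks()])  # -> ['S1', 'S2']
```

Answering the "which grain of sand?" question amounts to implementing `backing_disks` for real -- which is precisely what most application owners cannot do.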
Ask that already paranoid application owner which actual physical disks his application is installed on, and he'll say you may as well ask him to point out a specific grain of sand on a beach.
The storage environment becomes even more challenging due to its blended nature. Virtualization, for all its complexity, is a binary choice. Your server is either virtualized or it's not. Storage isn't that clear cut -- a physical server may have a single traditional, platter-based disk for its system drive, connect to a storage area network for a separate drive where software is installed and then use a local array of solid-state disk drives to support high-performance database I/O.
OK, so what does all this have to do with the network? Well, what's most interesting about these new black boxes -- especially to us network folk -- is how they are turning networking back into a black box as well.
Think about it -- software-based "virtual" switches distribute bandwidth from VMs to multiple network connections. You jump onto an interface that exists only within the software of the virtualization host or chassis and map virtual LANs (VLANs) to their real-world cousins running on actual switches just a hop away. At the same time, you take four or eight physical interfaces on that physical chassis and assign individual ports to server blades, or you bond two or more together and then assign the combined bandwidth to a group of blades. That's networking, to be sure. But it's happening in systems that were out of bounds for network professionals just a few short years ago.
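The mapping work described above can be sketched in a few lines. This is a minimal model with made-up data structures -- not the configuration syntax of any real hypervisor -- but it captures the failure mode that stays hidden inside the host: a VM port tagged with a VLAN that no physical uplink actually carries.

```python
# Minimal sketch (hypothetical data structures) of a software switch:
# VM-facing ports tagged with VLANs, and physical NICs bonded into uplinks.

virtual_switch = {
    "ports": {            # these ports exist only in the hypervisor's software
        "vm-web": {"vlan": 10},
        "vm-db":  {"vlan": 20},
    },
    "uplinks": {          # physical interfaces on the chassis
        "bond0": {"members": ["eth0", "eth1"], "vlans": [10, 20]},
    },
}

def reachable_vlans(switch):
    """VLANs that can actually leave the host via some uplink."""
    carried = set()
    for uplink in switch["uplinks"].values():
        carried.update(uplink["vlans"])
    return carried

# A VM whose VLAN is not trunked on any uplink is silently isolated --
# the kind of misconfiguration that is invisible outside the hypervisor.
stranded = {name for name, port in virtual_switch["ports"].items()
            if port["vlan"] not in reachable_vlans(virtual_switch)}
print(stranded)  # -> set()
```

Drop VLAN 20 from `bond0` and `vm-db` lands in `stranded` -- yet from the physical switch one hop away, nothing looks wrong at all.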
Also, consider that SAN fabric is often more software than hardware. Yes, there are switches, but what really makes up the fabric of the SAN are the management devices -- and the software they run -- which present certain disks in certain arrays as a coherent collection, or LUN, for use by servers across the enterprise. In its basic essence, that's not much different from a VLAN. SAN fabric may not be networking in name, but to make it work, the SAN team has often had to fall back on the expertise of network engineers.
Advent of SDN ushers in a new era of challenges
And then there is the rise of software-defined networking (SDN), a promising new technology to be sure, but one that still needs to have some of its rough edges smoothed away. How will software define networking? Will it only be for security logic? Will SDN create on-demand subnets, VLANs and routes along with the rules-based access control lists we all read about? In this amorphous world of just-in-time networking, how will networking professionals ever keep up or prove that a network was or wasn't configured correctly an hour, day or week ago?
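One hedged answer to that last question -- proving what the network looked like an hour ago -- is to snapshot each device's rendered configuration with a hash and a timestamp. The sketch below is purely illustrative; the function names and config fields are invented for the example and do not belong to any real SDN controller's API.

```python
# Illustrative audit trail: hash and timestamp each rendered config so a
# past state can be proven after the fact. Names here are hypothetical.

import hashlib
import json
import time

history = []  # append-only log of (timestamp, device, config hash)

def snapshot(device, config):
    digest = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()).hexdigest()
    history.append({"ts": time.time(), "device": device, "sha256": digest})
    return digest

before = snapshot("leaf-1", {"vlans": [10, 20], "acl": ["permit 10.0.0.0/8"]})
after = snapshot("leaf-1", {"vlans": [10, 20, 30], "acl": ["permit 10.0.0.0/8"]})
print(before != after)  # -> True: the change is detectable after the fact
```

In a world where subnets and VLANs appear and vanish on demand, an append-only log like this is the difference between "I think it changed" and "here is when it changed."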
The good news is that, like our original, inscrutable networking from the good old days, the ongoing drive toward maturity and sophistication will crack the lid on these new black boxes.
Even now, it's possible to use the convergence of networking, virtualization and storage to connect more dots than ever before. Because of the seamless flow from the disk through the array, LUN, data store, hypervisor and on up to the actual application, we're able to show -- with a tip of the old fedora to detective Dirk Gently -- the interconnectedness of all things. With the right tools in hand, we can now show how an array that is experiencing latency affects just about anything.
Keep that up, and that paranoid application owner might even stop using his "They're out to get me" coffee mug.
About the author:
Leon Adato is a head geek and technical product marketing manager at SolarWinds, an IT management software provider based in Austin, Texas. Adato boasts more than 25 years of IT experience, including 14 years working with systems management, monitoring and automation systems for servers, networks and the Web. Adato is also a Microsoft Certified Systems Engineer, Cisco Certified Network Associate and SolarWinds Certified Professional.