Scrolling through a network news portal the other day, I zipped past an ad for an app stack management product, paying just enough attention to read the lead message, but little else. I realized I've become almost colorblind to the nuances of the word "application" when it's used to describe service delivery. The continuing theme of the virtualized business makes the idea of the classic N-tier application sitting on traditional infrastructure woefully inadequate.
So, how then do we succinctly describe all the layers of application delivery and ensure comprehensive monitoring and management in a converged environment? After all, it's not only the app itself; now it's all the app's service layers, the virtual machines (VMs) in which it runs, the extended hypervisor platform, network services, and finally storage. Storage, by the way, was the last great holdout; with it, the final "too special to converge" technology is now firmly on the cart as well.
Hands-on, hybrid cloud refresher course
Although I'm working with plenty of customers running hybrid cloud networks, I stepped away from a day-to-day admin role before hybrid designs went mainstream. Last month changed that, though, when I was asked to build out an elastic training infrastructure to support a few hundred student VMs. I needed a back end with dozens of 24/7 source applications for sample data, but I also needed a flexible platform to ramp up and down as many student servers as required. What I ended up with was a complex stack, combining lots of infrastructure, applications and critical network links, and with plenty of opportunity for breakage.
The dark secret of cloud is what makes me believe we'll never be pure cloud: Being 100% cloud can be expensive for many enterprises. Cloud providers, or at least Infrastructure as a Service (IaaS) providers, set pricing by calculating typical enterprise costs for setting up data centers, plus the costs of required related support systems and staff, and then undercutting that slightly. But if you already have a data center -- and especially if, due to regulatory, privacy or other factors, you must maintain some on-premises data and can't move fully to cloud -- then you already have the sunk cost of the physical infrastructure. Thus, adding a few dozen Capex'd machines to the racks is less expensive than paying $2,000 per VM per year in almost every case. Declining cloud prices, it must be noted, may change this.
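The break-even arithmetic behind that argument can be sketched in a few lines. This is a minimal illustration, not vendor pricing: the server cost, VM density, lifespan and support figures below are hypothetical assumptions; only the $2,000-per-VM-per-year cloud figure comes from the text.

```python
import math

# Illustrative assumptions (not real pricing): a commodity server costs
# $7,000, hosts 20 VMs, lasts 4 years, and needs $1,000/year in support.
def on_prem_cost(vm_count, server_cost=7000, vms_per_server=20,
                 lifespan_years=4, annual_support_per_server=1000):
    """Annualized Capex plus support for adding servers to an
    already-sunk-cost data center (power/space assumed covered)."""
    servers = math.ceil(vm_count / vms_per_server)
    return servers * (server_cost / lifespan_years + annual_support_per_server)

def cloud_cost(vm_count, per_vm_per_year=2000):
    """IaaS cost at the article's $2,000-per-VM-per-year figure."""
    return vm_count * per_vm_per_year

for n in (10, 50, 200):
    print(f"{n:>4} VMs: on-prem ${on_prem_cost(n):>10,.0f}"
          f"  vs. cloud ${cloud_cost(n):>10,.0f}")
```

Under these assumptions the on-premises side wins by an order of magnitude at every scale, which is exactly the point: the comparison only flips if you have no existing racks to absorb the machines, or if cloud prices keep falling.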
Following a well-worn analysis path many admins have taken, I ended up building out a virtual private cloud of training VMs tied back via VPN to a handful of boss hypervisors in my data center. Using the fantastic pets versus cattle analogy, my pets are in my building with names like Exchange2012CL-01, and I tend to them carefully. My student VMs, meantime, are numbered cattle, and I dispatch them by the hundreds every day. But here's the big question: Why did this fall to me, a network guy?
Convergence up close
The walls between network, compute, storage and apps are becoming so blurred, and the tools so powerful, that almost any experienced admin may just be able to manage the whole lot. Also, as network admins, we've been managing the complete stack the hard way for years, so we tend to understand it pretty well. We've also fought budget wars and given decent presentations, at least to IT management. For this project, I actually spent more time creating a magic calculator spreadsheet than on any other single part. Side note: Charting declining cloud costs using old 2010, 2011 and 2012 pricing doesn't hurt either.
So, as the admin and platform owner for the project, I of course also need to monitor the whole pile, but strangely there's really no non-vendor-hype industry term for that. So, I'm calling it "Everything, Everywhere Monitoring," or EEM for short. Perhaps a bit grandiose, but it's at least inclusive enough to cover all the elements I'm now responsible for: apps, guest OSes, hypervisors, storage, networks, VPN, WAN, cloud instances, user service-level agreements, backchannel traffic contention and more. I'm monitoring NetFlow, IP SLA, vSphere, the cloud platform, application component monitors, LUN IOPS -- and the list goes on. It's not app stack, it's not SysOps or DevOps, and it's not infrastructure. It's a big Venn diagram of all of that, and it's nice to have a simple, non-overloaded, spin-free name for it.
So what do you think? Deep Stack? Application Infrastructure Nexus? Network Component Multiverse? Comment on the article here, tweet or email me. What term have you come up with or heard others use to encapsulate the whole app stack as we now think of it, all the way down to the bits on disk? I still like EEM.
About the author:
Patrick Hubbard is a head geek and senior technical product marketing manager at SolarWinds. With 20 years of technical expertise and IT customer perspective, his networking management experience includes work with campus, data center, storage networks, VoIP and virtualization, with a focus on application and service delivery in both Fortune 500 companies and startups in high tech, transportation, financial services and telecom industries. He can be reached at Patrick.Hubbard@solarwinds.com.