It's no surprise that cloudification is sweeping IT infrastructures. There's Software as a Service (SaaS), Platform as a Service (PaaS) and even Security as a Service. We have public, private and hybrid clouds, with organizations demanding an increasingly faster spin-up of resources and services. What does this mean for in-house IT departments with large infrastructure teams, aging hardware and often-sluggish change management processes? Will the cloud make enterprise jobs evaporate?
A network architect I had previously worked with in the academic realm recently told me that he planned to hand over daily data center networking tasks to the systems group. His reasoning was that he wanted to improve responsiveness and customer service. When we started out together over a decade ago, there were no separate teams within IT at the university where we were employed, only a group of people referred to as coordinators because HR couldn't really figure out how to categorize us. We all had root or admin access on systems and devices. This worked because we were a fairly small team, and in those days, there wasn't that much at stake for the business. The only truly critical system ran on an IBM mainframe maintained by an outside company.
Unified approach disintegrates as separate groups begin to form
Ultimately, this approach didn't scale as the department grew, and separate groups formed when more staff was hired. At first, I found my way into systems work, then network security. Other colleagues went into storage or voice. This is where the problems began. With the silos came difficulties in communication. Chaos ensued.
As the infrastructure became more complex, it became easier to break applications. Change management processes were instituted as a safeguard, but they slowed the organization's responsiveness. Eventually it felt like there was more writing and talking about work than actually doing any.
Unfortunately, this tale isn't unique. There are many stories about IT departments becoming mired in bureaucracy. Some blame the Information Technology Infrastructure Library; others blame unmanaged expectations with regard to risk. The end result: unhappy users implementing "shadow" IT services either on-site or in the cloud.
But when did the cloud shift from being IT's dirty little secret to being a legitimate competitor to in-house infrastructure? It began with cost, with one of the first casualties being email. Email is increasingly being outsourced to the cloud because of its low return on investment: it's a storage hog that requires high maintenance while delivering little perceived direct value to the business. Sure, there are security issues, which are inevitable with any cloud service, but the savings from using Google or Microsoft are simply too attractive for most organizations to pass up.
Revealing the hidden war between infrastructure and devops teams
Then there are developer environments. The hidden war between infrastructure and development teams can eat away insidiously at an organization's productivity. Developers demand speed and responsiveness, while the infrastructure team struggles to keep up with their demands without compromising standards. Often the result is a mass exodus of DevOps devotees to the public cloud, with senior management questioning whether they need an in-house infrastructure at all.
It doesn't have to be this way. There can be benefits -- in security, application resilience and even cost -- to staying in-house. But infrastructure teams will have to reinvent themselves and their processes to stay competitive with the public cloud, tapping into enhancements like automation and orchestration to meet business needs. This means redefining the traditional roles we've developed over the last decade, and understanding that the old divisions of sysadmin vs. developer vs. network engineer simply don't work anymore.