With all of this flexibility and change, how on earth can service-level and application management vendors assume that IT will continue to run well-known applications, with well-known problems, whose automated resolutions can be cast in concrete? Any solution that makes this assumption will ultimately fail: it will not be able to cope with a dynamic infrastructure stack.
Instead, what is needed is an application management solution that embeds learning in every part of the problem-resolution process. For monitoring, we need to learn performance baselines rather than set static thresholds. For analysis, we need to learn the current infrastructure dependency map rather than rely on a manual map created a month ago. For planning, we need to learn the actual configuration and capacity allocation in real time rather than rely on the original deployment specifications. For execution, we need to learn whether the applied fix provided a systemic resolution or only temporary relief.
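To put the monitoring point in concrete terms, the sketch below is a minimal Python illustration (not any vendor's actual algorithm, and using a hypothetical response-time metric) of the difference between a learned baseline and a hard-coded threshold: it tracks an exponentially weighted mean and variance of the metric and flags samples that drift outside the learned normal band.

```python
import random

class LearnedBaseline:
    """Learn a metric's normal range online instead of using a fixed threshold.

    Maintains an exponentially weighted moving average (EWMA) and variance,
    and flags samples that fall outside mean +/- k standard deviations.
    """

    def __init__(self, alpha=0.05, k=3.0):
        self.alpha = alpha   # smoothing factor: how quickly the baseline adapts
        self.k = k           # width of the "normal" band, in standard deviations
        self.mean = None
        self.var = 0.0

    def observe(self, value):
        """Update the baseline with a new sample; return True if it looks anomalous."""
        if self.mean is None:
            self.mean = value
            return False
        deviation = value - self.mean
        anomalous = self.var > 0 and abs(deviation) > self.k * (self.var ** 0.5)
        # Update the running mean and variance (EWMA form).
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation * deviation)
        return anomalous


if __name__ == "__main__":
    # Hypothetical response-time samples (ms): normal load, then a sustained shift.
    baseline = LearnedBaseline()
    static_threshold = 500.0  # a typical hard-coded limit, shown for contrast

    samples = [random.gauss(120, 10) for _ in range(200)] + \
              [random.gauss(300, 15) for _ in range(50)]

    for i, ms in enumerate(samples):
        learned_alert = baseline.observe(ms)
        static_alert = ms > static_threshold
        if learned_alert and not static_alert:
            print(f"sample {i}: {ms:.0f} ms; learned baseline alerts, static threshold stays silent")
```

Run against the simulated samples, the learned baseline catches the sustained shift from roughly 120 ms to 300 ms while the static 500 ms limit never fires; the same principle applies to the dependency-mapping, planning and fix-verification steps described above.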
When solutions can provide that level of learning out of the box, we will have truly automated management for dynamic application environments.
Large management vendors are busy working on next-generation solutions that provide this learning. Last week, Tivoli upgraded its Provisioning and Orchestration solutions to include storage devices. HP recently demonstrated beta software that links performance analysis with real-time capacity planning. BMC announced its strategic roadmap to deliver similar capabilities from acquired technologies. Smaller startups such as Vieo, Quantiva and n-Layers are betting on statistical analysis to provide the missing link between performance analysis and real-time capacity provisioning. Ptak, Noel & Associates believes all of these efforts are strong steps in the right direction. That is a good thing, because flexibility without control is worse than useless; it is downright dangerous.