Virtualization, cloud computing and big data projects are pushing distributed computing to new levels in the enterprise.
The benefits of using distributed application architecture are mounting, while the drawbacks continue to melt away.
Yet, all this progress doesn't mean there aren't areas of concern. In this article, we're going to look at the application performance benefits of distributed computing and why there's so much interest in this technology throughout enterprise IT. Additionally, we'll point out lingering problems with distributed application performance, as well as highlight how application performance tools and techniques are being used to counteract those issues.
Distributed computing and why it works so well
There are plenty of reasons why enterprise IT departments are migrating many of their applications from a traditional, centralized computing architecture to a distributed model. The most commonly referenced benefit is scalability. The architecture lets organizations simply tack on additional computing resources in a horizontal fashion when they are required. And just as easily, computing resources can be downsized, as well. For applications that may require the ability to scale in terms of numbers of active users and performance, distributed models are a great choice.
High availability is another major benefit when you choose to go with distributed application architecture. Front-end application code, databases, web services and analytics can all be dispersed across private data centers, in a hybrid scenario or as a multicloud deployment. And, if designed properly, these architectures can provide improved performance and redundancy far beyond what a centralized application deployment could ever achieve.
Lastly, a major advantage that distributed architectures hold over legacy app models has to do with geographic availability. Thanks to enterprise computing trends, such as mobile computing and remote workforces, being able to provide access to applications on a national or global scale has become a priority. And thanks to distributed computing technologies, including application delivery controllers, applications can be deployed in strategic locations and intelligently load-balanced to achieve uniform network latency performance numbers, regardless of where the end user resides.
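The core routing idea behind this geographic strategy can be sketched very simply: measure each deployment region's latency for a given user and send the request to the closest one. This is an illustrative toy, not any particular vendor's load-balancing logic; the region names and latency figures are assumed values.

```python
# Hypothetical illustration of latency-based routing: pick the
# application region with the lowest measured round-trip latency
# for a given user. Region names and numbers are made up.

REGION_LATENCY_MS = {
    "us-east": 35.0,
    "eu-west": 110.0,
    "ap-south": 210.0,
}

def route_request(latencies: dict) -> str:
    """Return the region with the lowest observed latency."""
    return min(latencies, key=latencies.get)

print(route_request(REGION_LATENCY_MS))  # -> us-east
```

Real application delivery controllers layer health checks, session persistence and failover on top of this basic decision, but the goal is the same: uniform latency regardless of where the end user resides.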
Checkpointing and APM
Some techniques developers use to speed up the DevOps process can also hinder distributed applications. Transparent checkpointing is a great example. A checkpoint is a snapshot of a running virtual machine's state that lets the DevOps team roll back to a previous state if new code changes cause unforeseen issues.
The drawback of standard checkpointing is that developers must write new checkpointing code for each application -- often, multiple times for the same application. For large, distributed applications, this can be very time-consuming.
Transparent checkpointing, by contrast, is application-agnostic and can be reused across multiple applications, which saves a great deal of time. However, because the applications and infrastructure are in a constant state of flux, teams that rely on transparent checkpointing tools to create save states can run into situations where security, storage and licensing cease to function properly on some parts of the distributed architecture -- all of which can negatively affect performance.
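To make the save-and-restore idea concrete, here is a minimal, application-agnostic sketch: any serializable state object can be checkpointed and rolled back without app-specific logic, which loosely mirrors what transparent checkpointing tools automate at the process or VM level. The function names and state fields are illustrative, not a real tool's API.

```python
import os
import pickle
import tempfile

# Minimal checkpoint sketch (illustrative names, not a real API):
# save any picklable state object, then restore it to roll back.

def save_checkpoint(state, path):
    """Serialize application state to disk as a checkpoint."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore_checkpoint(path):
    """Load a previously saved checkpoint from disk."""
    with open(path, "rb") as f:
        return pickle.load(f)

state = {"orders_processed": 1200, "last_id": "ord-1200"}
path = os.path.join(tempfile.gettempdir(), "app.ckpt")
save_checkpoint(state, path)

state["orders_processed"] = 1250   # a change that turns out to be bad
state = restore_checkpoint(path)   # roll back to the saved snapshot
print(state["orders_processed"])   # -> 1200
```

Production tools such as process-level checkpoint/restore systems operate far below this level -- capturing memory, file descriptors and network state -- which is exactly where the security, storage and licensing friction described above tends to surface.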
Still, not everything works perfectly
The application performance benefits of distributed computing -- and the application performance tools they call for -- are far better understood than the drawbacks. The reason is that problems arise largely from the inherent complexity of distributed platforms and application code compared with a centralized application deployment model.
For example, distributed models -- especially those that rely on third-party cloud service providers -- are built on disparate network infrastructure components. And despite developers' attempts to abstract the underlying infrastructure away from upper layers, there will be performance differences from one network to the next.
Designing front-end and back-end applications that work together is a continuous struggle. To maintain balance, these applications have to share data and control information across networks for execution scheduling, response-time management and data exchange.
Finally, identifying whether an observed performance problem stems from the infrastructure or the application itself becomes a major headache. Coding problems, such as database deadlocks and resource exhaustion, can cause major performance issues in many distributed applications. The problem is that distributed applications leave plenty of blind spots in end-to-end application and data flow visibility, so critical information can easily be missed during troubleshooting. This, in turn, can turn an insignificant bug into a proverbial needle in a haystack for your DevOps team to try to identify.
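One common way teams shrink those blind spots is to tag every hop a request takes with a single correlation ID, so logs emitted by different services can be stitched back into one end-to-end trace. The sketch below is a toy under that assumption; the service names and log shape are hypothetical.

```python
import uuid

# Toy correlation-ID sketch (hypothetical service names): mint one
# ID at the edge, pass it to every downstream hop, then filter the
# combined logs by that ID to reconstruct the request's full path.

LOG = []

def log(service, correlation_id, message):
    """Record one log entry tagged with the request's correlation ID."""
    LOG.append({"service": service, "id": correlation_id, "msg": message})

def handle_request():
    cid = str(uuid.uuid4())                 # minted once at the edge
    log("web", cid, "request received")
    log("orders", cid, "order looked up")   # ID passed to downstream hops
    log("db", cid, "query executed")
    return cid

cid = handle_request()
trace = [entry for entry in LOG if entry["id"] == cid]
print(len(trace))  # -> 3
```

With every entry sharing one ID, the question "was it the infrastructure or the application?" becomes answerable per hop instead of per guess.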
APM to save the day?
Distributed performance software, including application performance management (APM) tools, is becoming increasingly popular for eliminating some of the hang-ups of distributed applications. Application performance tools help identify and diagnose the types of complex performance problems you'll encounter in a distributed application, including end-to-end metrics gathering, application interdependency mapping and in-depth health metrics focused on the application itself. While APM isn't a silver bullet that automatically fixes the issues inherent in distributed applications, it does provide much-needed application-level performance visibility.
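At its core, the metrics gathering an APM agent performs amounts to timing each operation and attributing the duration to the code that ran. The toy decorator below illustrates that idea only; real APM agents instrument applications automatically and capture far richer context, and the names here are not any vendor's API.

```python
import time
from collections import defaultdict

# Illustrative sketch of APM-style metrics gathering: wrap a
# function so every call's duration is recorded against its name.
# Real agents do this automatically, with much richer context.

METRICS = defaultdict(list)  # function name -> list of durations (s)

def traced(fn):
    """Record the wall-clock duration of every call to fn."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            METRICS[fn.__name__].append(time.perf_counter() - start)
    return wrapper

@traced
def fetch_order(order_id):
    time.sleep(0.01)  # stand-in for a remote database call
    return {"id": order_id}

fetch_order("ord-1")
fetch_order("ord-2")
print(len(METRICS["fetch_order"]))  # -> 2
```

Aggregating these per-call timings across services is what turns raw durations into the end-to-end visibility and interdependency maps described above.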