Sometimes, when end users experience application trouble, response times degrade slowly. Other times, applications seem to stop dead, as if a power switch had been flipped. In many cases, the causes of end user application woes can be identified and preempted before there is any perceivable degradation; other times, they cannot. But in every case of a poor end user experience, the cause needs to be identified and remedied swiftly. That is easier said than done.

The ability to deliver a consistent, high-quality end user experience, and to fix problems when they arise, is growing both more important and more complex than ever. More important because today's workers are more sophisticated, expect a superior experience and are less patient, while dependence on IT has never been higher. More complex because the move to virtualisation, cloud computing and SOA creates far more intricate application and infrastructure dependencies.

That, in turn, makes troubleshooting much trickier. Business applications are growing more complex by the day: a seemingly simple transaction touches the end user client, web servers and application servers, databases, mainframes, message buses and, increasingly, external services such as public and private clouds.
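To make that concrete, here is a minimal sketch, in Python, of how one such "simple" transaction fans out across those tiers. The tiers, operations and timings are invented for illustration; the point is that the largest contributor to the end user's wait is rarely obvious from any single tier's own monitoring.

# Hypothetical illustration: one "simple" checkout transaction decomposed
# into the tiers it actually touches. Names and timings are invented.
from dataclasses import dataclass

@dataclass
class Span:
    tier: str          # which layer handled this step
    operation: str     # what the step did
    duration_ms: float

transaction = [
    Span("end user client", "render and submit order form", 80.0),
    Span("web server", "terminate TLS, route request", 12.0),
    Span("application server", "validate order, orchestrate calls", 45.0),
    Span("message bus", "queue payment request", 9.0),
    Span("database", "write order record", 30.0),
    Span("mainframe", "check inventory", 120.0),
    Span("external cloud service", "authorise payment", 210.0),
]

total = sum(s.duration_ms for s in transaction)
print(f"End-to-end response time: {total:.0f} ms")
# List the steps from slowest to fastest to show where the wait really lives.
for s in sorted(transaction, key=lambda s: s.duration_ms, reverse=True):
    print(f"{s.duration_ms:7.1f} ms  {s.tier:22}  {s.operation}")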

When managing application performance through the eyes of the user, IT teams need light shed on the actual trouble spots. That is the only way SLAs are met, application degradation calls to the help desk are minimised and web sites are not abandoned because of poor response times. All of this requires a solid understanding of how the health and capacity of every system and device, across all tiers of the infrastructure, supports the application, from the network through the servers and databases out to the end user. Unfortunately, the way this information is typically gathered is no longer effective or efficient for today's rapidly changing, dynamic environments.

Most organisations have relied on point products rather than gaining a view of the entire application transaction lifecycle. They monitor the end user experience, but in isolation. The same is true of database performance, network latency, and how servers, mainframes and other systems are performing. These tools provide slivers of insight, like a flashlight in a dark field, when what is needed is the sweep of a spotlight. The result is siloed information: the data necessary to fix a problem is scattered among a variety of monitoring and troubleshooting tools.

To determine the cause of a problem, members of each IT group have to gather with their respective reports, compare numbers and try to work out what went wrong. This is manual correlation, and it should have gone extinct years ago. The specialised tools themselves are necessary, and often provide deep insight into the areas they focus on, but they do not help manage the entire business-technology infrastructure in real time. When problems first reveal themselves as shifts in performance measured in milliseconds or fractions of a percentage point of CPU utilisation, and only impact the end user minutes or hours later, it becomes clear that organisations need the ability to detect emerging issues before end users ever feel them.
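A hedged sketch of what such early detection might look like: a rolling baseline for one metric, say database call latency, with an alert when a reading drifts well outside it. The window size, warm-up count and sigma threshold here are illustrative assumptions, not a prescription.

# Illustrative sketch only: flag small but sustained drifts in a metric
# (e.g. database call latency in ms) against a rolling baseline, long
# before the drift is large enough for end users to complain.
from collections import deque
from statistics import mean, stdev

class DriftDetector:
    def __init__(self, window=100, sigma=3.0):
        self.history = deque(maxlen=window)   # recent observations
        self.sigma = sigma                    # how far from baseline is "abnormal"

    def observe(self, value):
        """Record a reading; return True if it deviates from the rolling baseline."""
        deviates = False
        if len(self.history) >= 30:           # wait for a usable baseline
            baseline, spread = mean(self.history), stdev(self.history)
            if spread > 0 and abs(value - baseline) > self.sigma * spread:
                deviates = True
        self.history.append(value)
        return deviates

detector = DriftDetector()
# Forty healthy readings around 12 ms, then one small but abnormal jump.
for latency_ms in [12.1, 11.8, 12.3, 12.0] * 10 + [14.9]:
    if detector.observe(latency_ms):
        print(f"Alert: latency {latency_ms} ms deviates from the baseline")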

That means not merely monitoring the application, but tracking the entire transaction flow and monitoring each step of every transaction: measuring response times, latencies, protocol and application errors, and all of the associated dependencies, on every tier from the end user through to the data centre.
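As a closing illustration, and without assuming any particular product, the sketch below wraps each step of a transaction so that response time and errors are captured per tier. Real transaction tracing propagates context across process and network boundaries; this only shows the per-step measurement idea within a single process, with invented step names and simulated work.

# A minimal sketch, not a vendor product: record response time and errors
# for each step of a transaction, tagged with the tier that handled it.
import time
from contextlib import contextmanager

records = []

@contextmanager
def step(tier, operation):
    start = time.perf_counter()
    error = None
    try:
        yield
    except Exception as exc:                  # record the failure, then re-raise
        error = repr(exc)
        raise
    finally:
        records.append({
            "tier": tier,
            "operation": operation,
            "duration_ms": (time.perf_counter() - start) * 1000.0,
            "error": error,
        })

# Hypothetical transaction: each tier's work is simulated with a short sleep.
with step("web server", "route request"):
    time.sleep(0.01)
with step("application server", "process order"):
    time.sleep(0.03)
with step("database", "commit order"):
    time.sleep(0.02)

for r in records:
    status = r["error"] or "ok"
    print(f'{r["tier"]:20} {r["operation"]:15} {r["duration_ms"]:6.1f} ms  {status}')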