Drive Renewals and Avoid Customer Attrition by Closing the Visibility Gap in the End User Experience

Software vendors strive to deliver key capabilities and enhancements that provide greater flexibility and value to their customers while also making their software more attractive to prospects. However, they often lack visibility into the End User Experience (EUE) and how their application is being used. This visibility gap hinders the ability to ensure customer satisfaction and “stickiness” that drives maintenance and subscription renewals.

Industry reports vary on the exact percentage (some put it as high as 74 percent); however, a staggering number of end user problems go undetected by current IT monitoring tools, and the first time IT learns of them is when a user calls the help desk. In fact, the top three reported application performance management challenges are:

  • Lack of visibility into the end user experience
  • Need for proactive problem detection before users call the help desk
  • Ability to measure the business impact of poor application performance

According to analyst firm Enterprise Management Associates, large companies report that:

 “Downtime can cost in excess of $15,000 per minute for technology-dependent organizations, as applications drive revenue, productivity, and brand value.”

Selecting end user experience management tools

The EUE Management market is heating up. EUE Management technologies ensure that customers’ needs are appropriately addressed. In order to ensure a successful purchasing decision and to expedite the implementation of the right EUE monitoring tool, it is important to understand the “ins and outs” of EUE management.

With so much at stake on the business performance front, it’s critical for software providers across multiple functional areas – business executives, professional services teams, application performance teams, and application development professionals – to understand the inherent differences of the various EUE Management technologies. Just as critical is recognizing and understanding the associated myths.

Once fully equipped with this knowledge and appropriate questions to ask, ISVs and SaaS providers can select the most appropriate tool that will provide the comprehensive visibility needed to decrease business disruptions and increase end user productivity for customers.

This article identifies and debunks the 10 EUE monitoring myths, provides 10 questions to ask when assessing EUE monitoring technologies and explains five ways to use the insights gained from EUE monitoring.

Top 10 myths about end user experience monitoring

#1: Network appliance monitoring is enough

The first myth is that most application performance problems can be isolated to the network. According to research by Enterprise Management Associates, realistically, the network is the problem less than 25 percent of the time.

Question to ask: How can an appliance determine application render time from network transaction metrics?

#2: Synthetic monitoring can validate end user experience

The second EUE/APM myth states that synthetic monitoring can validate end user experience.

In truth, synthetic robots provide “sanity-check” monitoring of application performance and availability from known system configurations. They do not monitor real end users on real systems; therefore, they can’t determine real user performance or monitor usage patterns. Nor do they reveal how resource consumption by competing processes on the desktop affects application performance.

Question to ask: How does synthetic monitoring help identify which users within an organization are experiencing the poorest performance?

#3: Web application monitoring is enough

This myth holds that monitoring Web-based applications requires a focus on Web server-based performance metrics only, and that there is no need to monitor Web applications from the desktop vantage point.

The truth is that the time it takes to complete an HTTP call bears little relation to the true render time of the Web application in the browser. This is especially true for rich Internet applications, which thoroughly debunks this myth.

Question to ask: Does the monitoring tool measure the duration of the end user request or the duration of the rendering?
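The gap between request duration and render duration is easy to see with a minimal sketch. The field names below mirror the W3C Navigation Timing model, and the millisecond values are invented for illustration:

```python
# Illustrative timing marks for a single page view, in milliseconds
# since navigation start. Field names follow the W3C Navigation Timing
# model; the values are hypothetical.
timing = {
    "request_start": 10,
    "response_end": 250,      # last byte of the HTTP response received
    "load_event_end": 2400,   # page fully rendered and interactive
}

# What server-side or appliance-based tools typically measure:
http_duration_ms = timing["response_end"] - timing["request_start"]

# What the end user actually waits for:
render_duration_ms = timing["load_event_end"] - timing["request_start"]

print(http_duration_ms, render_duration_ms)   # 240 vs. 2390
```

In this example, an appliance watching the wire would report a healthy 240 ms response while the user waited almost 2.4 seconds for the page to render.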

#4: Infrastructure agents provide end-to-end visibility

Another often repeated myth is that agents deployed on infrastructure components such as Web servers, application servers and database servers provide end-to-end application performance visibility.

Infrastructure-based agents provide visibility into application performance only after the transaction enters the data center infrastructure. Client-side performance or availability-related metrics are simply not visible at all, and that’s why transaction-mapping vendors add a network appliance to their solution.

Question to ask: How does the infrastructure agent detect a failed application launch — since the request never entered the data center?

#5: EUE of custom applications can be monitored with zero configuration

The next myth, normally described vaguely in a vendor’s marketing materials, suggests that some technologies have a magic way to monitor real EUE of a custom application with zero configuration.

It’s important to understand that process resource consumption and process run-time metrics can be monitored with zero configuration. However, these metrics matter only if they can be correlated with their impact on the performance of business transactions, and there is no way to monitor custom business transactions without defining the unique start and end point of each transaction monitored.

So, be sure to understand how a particular APM technology monitors response and availability of individual business transactions.

Question to ask: How does the APM tool monitor response and availability of individual business transactions?
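To make the point concrete, here is a minimal sketch of what defining a transaction's start and end points might look like. The class, event names, and timestamps are all hypothetical, not any vendor's actual configuration format:

```python
from dataclasses import dataclass

@dataclass
class TransactionDefinition:
    """Hypothetical definition of one custom business transaction.
    As argued above, a transaction cannot be monitored until its
    unique start and end events are specified."""
    name: str
    start_event: str   # e.g. a UI control click or URL request
    end_event: str     # e.g. a confirmation dialog or response render

def measure(defn, events):
    """Return duration (ms) between the start and end events, or None
    if the transaction never completed. `events` is a list of
    (timestamp_ms, event_name) pairs captured by an agent."""
    start = end = None
    for ts, name in events:
        if name == defn.start_event and start is None:
            start = ts
        elif name == defn.end_event and start is not None:
            end = ts
            break
    return None if start is None or end is None else end - start

submit_order = TransactionDefinition(
    name="Submit Order",
    start_event="click:SubmitButton",
    end_event="render:OrderConfirmation",
)
events = [(100, "click:SubmitButton"), (1850, "render:OrderConfirmation")]
print(measure(submit_order, events))   # 1750 ms
```

Zero-configuration process metrics would never see the "Submit Order" transaction at all; only the explicit start/end definition makes it measurable.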

#6: Agent-based monitoring depletes resources

The most circulated and talked about agent myth is that agents consume a significant amount of local resources on the desktop and are time-consuming and difficult to deploy.

With CPU utilization below 0.2 percent on average, and network bandwidth consumption at less than 8 bytes/second per monitored user, the truth is that well-designed, client-side agents consume a negligible amount of resources. Installing agents on desktops is probably the simplest and least time-consuming activity of any monitoring project when a standard software distribution tool like SMS or Altiris is used. In fact, it takes less time and effort to deploy a desktop agent than any other monitoring technology.

Question to ask: With a negligible impact on resources, why rely on network sniffers or synthetic transactions when client-side, agent-based monitoring can deliver comprehensive visibility into application, desktop and user performance?
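A quick back-of-the-envelope check puts the bandwidth figure in perspective. The per-user rate is the one cited above; the 50,000-seat deployment size is an assumption for illustration:

```python
# Aggregate bandwidth cost of the cited per-user agent overhead.
# 8 bytes/second per user comes from the article; the deployment
# size of 50,000 monitored users is a hypothetical example.
bytes_per_sec_per_user = 8
users = 50_000

total_kb_per_sec = bytes_per_sec_per_user * users / 1024
print(f"{total_kb_per_sec:.0f} KB/s")   # ~391 KB/s across the whole estate
```

Even at that scale, the total monitoring traffic is a fraction of a single video stream.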

#7: Application performance can be monitored without direct access to the virtual desktop

This myth postulates that application performance can be monitored within Citrix or RDP sessions without direct access to the virtual desktop session itself.

The reality, though, is that complete visibility into the KPIs that define EUE in a server-based computing environment can be achieved only by placing agents both on the presentation server, with access to the transactional performance within the virtual session itself, and on the desktop.

Question to ask: How does the monitoring tool understand the transactional performance within virtual sessions in an SBC environment?

#8: Desktop agents cannot determine probable cause of application performance issues

The claim of the eighth myth sounds very plausible: it is impossible to determine probable cause of application performance issues from desktop agents, as there is insufficient insight into the underlying performance metrics associated with each transaction.

The truth is that well-architected desktop-based agent monitoring solutions break down each business transaction into seven key measurements specifically to aid in probable-cause determination relating to components outside of the desktop. Those metrics are:

  1. Activity response: total activity completion time
  2. Network latency: average latency of all requests initiated by the process during the activity
  3. Client time: time spent executing the activity with no open network requests
  4. Total network response time: the total duration of open network requests
  5. Total network incoming traffic: total received bytes for the activity
  6. Total outgoing traffic: total sent bytes for the activity
  7. ICA/RDP latency: session latency in virtual (Citrix/RDP) environments

Using the metrics above, an organization is empowered to rapidly identify root cause for many of the most complex application performance issues.

Question to ask: Can the APM tool break down each transaction into the seven key performance metrics above?
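The seven measurements can be sketched as a simple record plus a naive triage rule. The field names and thresholds below are our own illustrations, not any vendor's actual model:

```python
from dataclasses import dataclass

@dataclass
class ActivityMetrics:
    """The seven per-transaction measurements listed above
    (times in ms, traffic in bytes). Field names are hypothetical."""
    activity_response: float   # 1. total activity completion time
    network_latency: float     # 2. average latency of all requests
    client_time: float         # 3. time with no open network requests
    network_response: float    # 4. total duration of open requests
    incoming_bytes: int        # 5. total received bytes
    outgoing_bytes: int        # 6. total sent bytes
    ica_rdp_latency: float = 0.0   # 7. virtual-session latency

def probable_cause(m: ActivityMetrics) -> str:
    """Naive triage sketch: attribute a slow activity to the dominant
    component. Thresholds are illustrative, not vendor guidance."""
    if m.ica_rdp_latency > 200:
        return "virtual-session latency"
    if m.client_time > m.network_response:
        return "desktop/client side"
    if m.network_latency > 100:
        return "network"
    return "server/backend"

slow = ActivityMetrics(activity_response=5200, network_latency=30,
                       client_time=600, network_response=4600,
                       incoming_bytes=48_000, outgoing_bytes=2_000)
print(probable_cause(slow))   # server/backend
```

Here the low network latency and small client time rule out the desktop and the network, pointing the investigation at the back end.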

#9: EUE tools can only monitor application performance

The ninth myth states that EUE tools can only monitor application performance while infrastructure tools can also monitor availability.

The truth is that EUE availability issues are rarely, if ever, detected by infrastructure tools. Real EUE tools must be able to detect four types of availability issues:

  1. System crash (blue screen of death)
  2. Process hang (duration that a process spends in a hung state)
  3. Application crash (.DLL exceptions and pop-ups)
  4. Activity / business transaction (user cannot log on/send mail, etc.)

Question to ask: Does the monitoring tool detect all four types of end user availability issues?
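The four availability issue types map naturally onto a small classification sketch. The event schema and mapping keys below are invented for illustration:

```python
from enum import Enum

class AvailabilityIssue(Enum):
    """The four end user availability issue types listed above."""
    SYSTEM_CRASH = "system crash (blue screen)"
    PROCESS_HANG = "process hang"
    APPLICATION_CRASH = "application crash (exception/pop-up)"
    TRANSACTION_FAILURE = "business transaction failure"

def classify(event: dict):
    """Toy classifier mapping a raw agent event to one of the four
    types. The `kind` values are a hypothetical event schema."""
    mapping = {
        "bugcheck": AvailabilityIssue.SYSTEM_CRASH,
        "hang": AvailabilityIssue.PROCESS_HANG,
        "exception": AvailabilityIssue.APPLICATION_CRASH,
        "txn_failed": AvailabilityIssue.TRANSACTION_FAILURE,
    }
    return mapping.get(event.get("kind"))

print(classify({"kind": "hang"}))   # AvailabilityIssue.PROCESS_HANG
```

A tool that can populate only one or two of these enum members is, by the article's argument, not a complete EUE availability monitor.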

#10: EUE is limited to the application performance domain only

The final myth to debunk is the notion that EUE is limited to the application performance domain only.

True EUE provides an integrated portrait of all the streams that together define a user’s experience, which is a combination of application, desktop, and end user performance.

Operating system boot and log-on profiling are examples of desktop-related performance issues that impact user experience. The actual usability of a new version of an application is an example of user performance impacting end user experience.

Question to ask: How does the monitoring tool generate boot time and log-on metrics?
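Boot and log-on durations can be derived from milestone timestamps that a desktop agent records. The milestone names and values below are assumptions for illustration; real agents typically read such events from OS logs:

```python
# Sketch: deriving boot and log-on durations from milestone timestamps
# a desktop agent might capture. Names and values are hypothetical.
milestones_ms = {
    "power_on": 0,
    "os_ready": 42_000,        # kernel and services up
    "logon_start": 55_000,     # user submits credentials
    "desktop_ready": 83_000,   # shell loaded, profile applied
}

boot_time_s = (milestones_ms["os_ready"] - milestones_ms["power_on"]) / 1000
logon_time_s = (milestones_ms["desktop_ready"] - milestones_ms["logon_start"]) / 1000
print(boot_time_s, logon_time_s)   # 42.0 and 28.0 seconds
```

Trending these two numbers across a user population is what turns boot and log-on profiling into a desktop-performance signal, rather than a one-off measurement.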

Five ways to use insights from EUE management

The inability to truly understand the end user experience from the customer’s perspective has traditionally left software vendors with a serious visibility gap that can hinder positive business outcomes. Effective End User Experience Management includes the monitoring of three primary components that dynamically interact at all times to define an end user’s experience as an IT consumer: application performance, device performance and user productivity.

The resulting insights gained through the real-time aggregation, analysis and correlation of all of these metrics enable software vendors to transform the quality of delivery and adoption level of their software while maintaining and increasing their competitive edge and ultimately driving positive business results.

Once armed with user-centric monitoring and management capability, enterprise software vendors gain the ability to:

  1. Drive SLA management – ISVs can address the need for managing SLA compliance by validating performance and availability of all transactions for all customers and their end users, verifying if contractual commitments have been met, and identifying outliers.
  2. Enhance product performance – As ISVs build and test their applications, EUE software provides precise visibility into the impact that performance and scalability will have on their end users’ experience, ultimately improving product quality and reducing service delivery costs.
  3. Enhance product functionality and application “stickiness” – ISVs realize that if their software isn’t engrained as part of their customer’s business culture, return on investment (ROI) is at risk, ultimately resulting in customer attrition. ISVs with visibility into end user application adoption can fully understand how their applications are actually being used, when and by whom, enabling them to make informed decisions for future enhancements and validate feature retirement decisions.
  4. Deliver superior application supportability – ISVs will have the ability to proactively detect end user application availability and performance issues, dramatically reducing their customers’ business disruptions and increasing user productivity.
  5. Perform pre-emptive upgrade and infrastructure testing – Understanding the potential impact of upgrades on their customers is critical for ISVs, as a single vulnerability can severely impact a customer’s business continuity. The same applies to testing configuration management of infrastructure components and various operating systems. Comprehensive before-and-after comparisons validate performance and functionality prior to rolling changes out customer-wide.
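The SLA-validation use case above reduces to a simple calculation over per-transaction response times. The 2-second threshold and the sample data are assumptions for illustration:

```python
# Sketch of SLA compliance validation from per-transaction response
# times. The threshold and the sample values are hypothetical.
SLA_THRESHOLD_MS = 2000

samples_ms = [450, 1200, 800, 3500, 950, 2100, 700, 640, 1800, 5200]

breaches = [t for t in samples_ms if t > SLA_THRESHOLD_MS]
compliance_pct = 100 * (len(samples_ms) - len(breaches)) / len(samples_ms)

print(len(breaches), compliance_pct)   # 3 breaches, 70.0% compliance
```

The `breaches` list identifies the outliers worth investigating, while the compliance percentage is what gets verified against the contractual commitment.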

Donna Parent is Vice President of Marketing at Aternity, Inc., the industry’s technology leader in end user experience management solutions for Global 1000 enterprises. Aternity provides the industry’s first patented Frontline Performance Intelligence Platform designed to dramatically reduce business disruptions and significantly increase end user productivity. The Aternity FPI Platform arms IT and business executives with empirical evidence on how application performance and usage impacts end users’ productivity and how to optimize their technology to improve it.
