
Out with average handle time and in with customer effort score

Posted May 27, 2021

Organizations have long relied on average handle time (AHT) to measure customer service success. But in an age defined by customer experience and engagement, more and more brands are wondering: Is it really sufficient to measure success in seconds?

New metrics like customer effort score (CES) are growing in popularity as a way to assess how satisfied customers are across their interactions with a brand.

This shift is unsurprising, given that consumers are increasingly looking to troubleshoot and answer their own questions via self-service tools. The pandemic has contributed to these growing adoption rates, with American chatbot usage rising from 16% before the pandemic to 22% during it, according to a survey by TELUS International.

It’s now critical to find the right key performance indicators (KPIs) to measure the quality of customer experiences, especially as consumers switch between channels such as virtual assistants, live agents and social media. Contact center metrics should be chosen with the channel they measure in mind, while also aligning with the success factors that matter to your organization.

Here’s a look at how AHT and CES stack up.

Average handle time

AHT measures the average duration an agent spends “handling” a customer’s question or issue on channels like voice, live chat or social media. Handling encompasses everything from the time the customer initiates the interaction, to when the agent completes post-interaction work. This can include looking up customer data, drafting a response and making notes after the customer interaction. It can also encompass touchpoints across multiple agents if more support, or escalation, is needed for a specific issue.
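To make the math behind AHT concrete, here is a minimal sketch in Python. It assumes each interaction record separately tracks talk or chat time, hold time and post-interaction work; the field names and numbers are illustrative, not drawn from any particular contact center platform.

from statistics import mean

# Minimal sketch: each interaction record tracks the components that count
# toward "handling" -- talk/chat time, hold time and post-interaction work.
# Field names and values are illustrative, not from any specific platform.
interactions = [
    {"talk_time": 310, "hold_time": 45, "after_call_work": 60},   # seconds
    {"talk_time": 540, "hold_time": 0,  "after_call_work": 120},
    {"talk_time": 180, "hold_time": 30, "after_call_work": 40},
]

def handle_time(record):
    """Total seconds spent handling one interaction."""
    return record["talk_time"] + record["hold_time"] + record["after_call_work"]

aht_seconds = mean(handle_time(r) for r in interactions)
print(f"Average handle time: {aht_seconds / 60:.1f} minutes")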

Pros: As far as measures of success go, you can draw a clear line between a shorter AHT and reduced customer wait times. A speedy resolution is a growing expectation among customers. AHT can be used to assess organizational costs and performance, giving you insight into how adept particular agents are at handling common issues or how quickly you’re getting the customer to a representative who can answer their query.

Cons: The downside of using AHT as a key performance indicator is that it can incentivize customer service representatives to prioritize speed over quality. Naturally, a customer is going to appreciate a quick resolution, but rushed interactions can hurt engagement and show a lack of empathy. AHT also doesn’t leave much room to account for unexpected circumstances during an interaction. For example, suppose the customer who initiated the service ticket gets distracted by a knock at the door while on a call, or doesn’t see a social media response from the agent for an extended period of time. These factors can contribute to a longer-than-desirable AHT without actually impacting the quality of the customer experience.

Service interactions, in general, are getting more complex. Consider an agent trying to help a customer troubleshoot an IoT device that won’t connect to Wi-Fi. The intricacy of the subject matter can stretch out the AHT even if the customer is pleased with the overall interaction. New metrics, like CES, can make more sense for digitally driven customer engagement.

Customer effort score

CES measures the level of perceived effort a customer has to put forth in order to remedy their service issue, purchase a product or retrieve an answer to their query. It’s often determined through a follow-up survey that asks customers to rate their service experience from “very difficult” to “very easy.”
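For illustration, here is a minimal sketch of how a CES could be calculated from those survey responses, assuming a five-point scale where 1 means “very difficult” and 5 means “very easy.” The scale and sample values are assumptions; some teams use seven-point or other scales instead.

from statistics import mean

# Minimal sketch: post-interaction survey responses on a five-point scale,
# where 1 = "very difficult" and 5 = "very easy". The scale and the sample
# values below are illustrative assumptions.
responses = [5, 4, 2, 5, 3, 4, 5, 1, 4, 5]

ces = mean(responses)
print(f"Customer effort score: {ces:.2f} out of 5")

# Some teams also report the share of low-effort ("easy") responses.
easy_share = sum(1 for r in responses if r >= 4) / len(responses)
print(f"Share of low-effort interactions: {easy_share:.0%}")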

Pros: One of the key benefits of CES is that it allows you to zoom in on a specific interaction and quickly discern the degree to which customer expectations were met. From an organizational perspective, this helps identify the types of interactions that work as well as those that don’t. If, for example, customers consistently rate their interactions with a chatbot as difficult, it could flag that the self-service tool needs improvement. CES also captures how seamless the issue resolution process was, unlike AHT, which doesn’t indicate whether a customer needed to open multiple support tickets.

Cons: The ability of CES to home in on specific interactions can be enlightening, but also limiting if it’s incorrectly used as a catchall metric. It doesn’t measure loyalty or retention, and results gleaned from a CES won’t offer much insight into customer engagement. It’s also reliant on consumer participation, and there is no guarantee that every customer will respond to the post-interaction survey.

Don’t debate, diversify

Although the inclination may be to declare one metric better than another, it’s short-sighted to hitch your wagon to only one KPI. Optimizing for a single measure of success in isolation can negatively impact other important parts of the customer journey. Instead, there’s a clear benefit to tracking multiple customer service metrics.

Perhaps AHT is better suited to social media responses, which customers typically use to get quick answers. CES, meanwhile, can help track the success of live agent interactions. When these metrics are combined with other KPIs like customer satisfaction score (which measures how satisfied a customer is with an interaction) and the customer loyalty index (a survey which includes “how likely are you to recommend…”) your organization gets a more robust understanding of the customer journey.
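As a rough illustration of that per-channel approach, the sketch below pairs each channel with a primary KPI and a target. The channels, metrics and targets shown are hypothetical examples for the sake of the sketch, not recommendations.

# Illustrative sketch: pair each channel with the KPI that fits how
# customers actually use it. Channel names and targets are made up.
channel_kpis = {
    "social_media": {"primary": "AHT",  "target": "reply within 15 minutes"},
    "live_agent":   {"primary": "CES",  "target": "average of 4.0+ out of 5"},
    "chatbot":      {"primary": "CSAT", "target": "80%+ satisfied"},
}

for channel, kpi in channel_kpis.items():
    print(f"{channel}: track {kpi['primary']} ({kpi['target']})")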

After all, that’s the point of KPIs: to learn what you’re doing well and what you can do better. Understand that and you can better pivot to meet the constantly evolving demands of your customers.

