The Latest Trends in Quality Assurance


I hated receiving quality assurance (QA) evaluations when I was an agent. Even when I got a good score, the process was based on strict adherence to procedures. Everything was black and white: Did I use the customer’s name three times? Did I verify their password? Did I remember to use the proper call identification code? It was the perfect example of old-fashioned, compliance-based scoring.

How has QA changed?

I recently interviewed contact center experts Neal Dlin and Sharon Oatway for their insights regarding quality assurance. Dlin is Chief Customer Obsessed Officer for the customer and employee experience consulting firm ChorusTree, and Oatway is President & Chief Experience Officer at VereQuest Inc., a customer experience consulting and outsourced QA firm. Their responses below have been lightly edited for clarity.

MIKE AOKI: How can you ensure that your quality assurance (QA) program increases employee engagement rather than decreases it?


NEAL DLIN: Many QA forms are designed to make it easier to score from an evaluator’s perspective, reduce variance between evaluators and set clear expectations as to how agents can achieve a high score. With automation and/or product improvements eliminating simpler issues, not all customer contacts fit the same mold as they may have in the past. While a rigid form is less subjective to score and easier for agents to understand, exceptional customer service in this new world may involve “breaking from the script”—getting away from the rigidly prescribed approach that a QA form may be reinforcing. While a rigid QA form seems like it would boost engagement—since agents know exactly what is expected—agents may feel restricted from actually doing the right thing for customers. That can have a very detrimental impact on engagement.

In this new world, QA programs need to live in the gray area, allowing agents to navigate complex issues, make tailored decisions and empower them to make things right. Focus on behaviors rather than words, on resolutions over static flows, on customer emotions rather than scripted statements. Of course, having clear expectations is still critical but this can be achieved through storytelling, sharing great interactions, celebrating them, having more frequent quality calibrations and involving agents in that process, constantly evolving what “good” sounds like.

AOKI: How can speech analytics help QA analysts find good calls/chats to score?


SHARON OATWAY: Most companies tend to use QA as a way to measure individual agent performance. In that case, it is important to choose a randomized selection of interactions so as not to skew the results. However, when an agent is having challenges with a specific product, service or scenario, speech or text analytics can be great tools to locate a larger sample of those types of calls, emails or chats. These tools also can be used to identify trends, conduct root-cause analysis and offer a more robust coaching experience.

In addition, speech analytics is getting better at locating high-emotion calls where voices are raised or high-emotion words are used. Evaluating these calls can help to pinpoint specific agents for coaching and/or identify key areas for business improvement.

AOKI: What are the most important things a QA form should capture to ensure a great customer experience (CX)?


DLIN: Behaviors, emotions and resolutions need to be the most heavily weighted components versus compliance to phrases, scripts, statements and rigid flows. Think about a call flow as a wide river rather than a railroad. You still need to get from A to B. However, you may have to go outside the lines and navigate the river in a nonlinear fashion to get to the end. If your QA form keeps you stuck on the rails, you will not deliver exceptional CX or reward those who do.

AOKI: How can QA coaches equip team leaders with the right information to coach a specific call?


OATWAY: 1. There must be a common understanding of the evaluation criteria. That means being diligent about defining, explaining and providing examples of each standard followed by regular calibration sessions.

2. QA feedback must be timely. It needs to be as close to the actual interaction as possible. If feedback from a call handled early in the month does not arrive until late in the month, it is not as effective. Worse, an agent may have been demonstrating the “wrong” behavior all month.

3. QA coaches must provide specific feedback to prevent team leaders from speculating. For example, if you asked for a “welcoming standard greeting” and the agent did not deliver it, what specifically was missing? Was it the wrong greeting, incomplete greeting, monotone voice, speech too fast, etc.? The coaching offered for each scenario would be very different.

4. Context is important. If a caller was argumentative from the outset, it would naturally affect how an agent handles the call. Or if the call was about something the agent has no control over, that would be important to know. Categorizing the type of interaction and providing context helps the team leader better position their coaching.

AOKI: Why are some companies outsourcing their quality assurance function?


OATWAY: Just as companies realized the benefits of outsourcing their contact centers, so too are they seeing the value in outsourcing their quality assurance function. We find that there are five key reasons companies outsource QA:

  • To better utilize some of their most valuable resources;
  • To gain an unbiased, independent perspective;
  • To gain access to the provider’s specialized QA tools and real-time reporting;
  • To access the expertise and skill sets of QA specialists; and
  • To do it for the same or lower cost than internal QA.

Importantly, by evaluating the interaction through the eyes of an independent third party, results are more aligned with those of customers. This becomes even more important when leveraging QA for valuable insight into how to improve customer experience.

AOKI: What are some best practices for live-chat QA?


OATWAY: The criteria used to evaluate live-chat interactions should be similar to those used for calls or emails. Evaluation criteria should always align with your corporate brand and deliver a consistently great experience regardless of channel.

Given that live chat typically relies on templated responses, it is important to pay attention to how an agent initially engages the customer. That sets the stage for a great experience to follow. Is the agent using a personalized greeting, great opening question, empathy or apology early in the chat session? After that, check to ensure that every templated response aligns perfectly with the customer’s question, or that it is edited appropriately, if it does not.

When it comes to templated responses, it is not good enough to just cut-and-paste copy from your website or paragraphs from your standard operating procedures. Chat dialogue needs to be written in a more conversational manner. It should be delivered in bite-sized chunks so the response seamlessly aligns with the customer’s question and provides natural “pauses” where customers can interject with their own questions.

AOKI: How can QA capture the Voice of the Customer?


OATWAY: The best QA programs are designed around delivering great customer experiences. If the evaluation criteria used are solely compliance-oriented, you are missing a key opportunity to add value to the organization. A great QA system should be able to identify the impact of each individual behavior on the overall outcome of the interaction. Also, categorizing types of interactions—such as the customer indicating that they had tried to resolve the issue in the past, whether or not the issue was resolved or if the customer had been transferred, etc.—can provide valuable insight on customer effort and experience.

Some QA systems are able to generate a secondary, qualitative score related to the customer experience. If you do not have post-interaction customer surveys, having your QA program provide a qualitative measure of satisfaction can help you look at agent performance and the value your contact center adds to an organization in a different light.

One caution: Since this qualitative score must be answered from the customer’s perspective, many internal QA teams have challenges being objective regarding their organization and the agent. However, this customer experience evaluation puts the entire interaction into context. There will be times when an agent does not follow the criteria you have laid out and yet the customer’s experience is still a great one. It is important to acknowledge this.

AOKI: What other advice would you give contact center leaders regarding QA?


DLIN: QA was once seen as a compliance tool to score agents and “catch” them doing something wrong. It can be scary to evolve away from those black-and-white, checklist-style measures. “Living in the gray” is uncomfortable, but that is where the best growth and learning comes from. Don’t be afraid if your QA form needs constant discussion and calibration. Don’t be afraid of transitioning quality coaching from a scoring exercise into one that will continuously challenge your thoughts about how your customer is being served and what great customer experience looks like. With each challenging discussion comes new discoveries to help your operation innovate how it delivers service. Live in the gray!