Improving your support with QA

Diana Potter
February 9, 2022

It’s a new year, and that means it’s time to improve your support! With that in mind, our first few articles this year will be all about new processes, ideas, and tactics you can use to improve your service. Whether you’re a brand new support leader or a well-seasoned one, there are some tips here for you! If you missed our first article on different ways to use your support data or our second article on using support data to identify product issues, be sure to give them a read.

This time around, we’re going to talk about Quality Assurance, or QA. We generally talk about QA in the context of making sure your product, be it physical or software, is made correctly. In this case, support is your product. You can use Customer Satisfaction scores (CSAT) or Customer Effort Scores (CES) to get a feel for how your customers find your support. But QA lets you avoid relying solely on customer opinions, and helps you see the things your customers don’t bring up. It’s there so you can set your own goals and requirements, and make sure that your team is accomplishing them.

The basics of QA

With support QA, you or someone else will look at customer communications and grade them according to a scale you set up. You’ll want to review a random mix of conversations, so that you get some from all of your agents. 

Then, you’ll use those scores to 1) ensure your support is as great as it can be, and 2) provide your agents with valuable feedback and coaching.

Setting up a QA program

The setup is the second most important part of a successful QA program. The first thing I suggest is deciding whether you want to roll your program out as a manual system, or use dedicated software. 

Manual can be quicker to get up and running, and is obviously less expensive, but it can also get harder over time as it requires more manual effort. Software, on the flip side, can be a bit tougher to get started with, just because it requires a software search in addition to everything else. It can also be more difficult to get buy-in on the budget without a demonstrated use case. However, it’s set up specifically for QA and will make the day-to-day operations of your program go a lot smoother. Which you choose will ultimately come down to your specific needs and wants.

To help you get started with software options, here are some companies to check out:

  • Maestro QA
    Integrates with: Zendesk, Kustomer, Salesforce Service Cloud, Intercom, Freshdesk and more.
  • Klaus
    Integrates with: Zendesk, Kustomer, Salesforce Service Cloud, Intercom, Dixa, Freshdesk, Help Scout, Gorgias, Front and more.
  • Aprikot
    Integrates with: Zendesk and Help Scout.

If you go for a manual option, I recommend skipping right to your rubric and creating a spreadsheet for scoring.

Once you’ve figured out which route you’ll go for, the next big task is your rubric. Your rubric is what you’re scoring each conversation on — your ideal vision of what your support should be. 

An ideal vision can be tough to score on, so you want to work with your team and other stakeholders to create a list of actionable areas that you can score and coach on. In general, you’d work with your manager or leadership on your rubric, or with folks in parallel roles like education or success. You also want these to be specific areas, but not too prescriptive.

Sample rubric

The more questions you include, the more detailed your reviews will be, but the more time they’ll take. I recommend keeping it to 10 questions or fewer, and sticking to the areas that are most important to you.

The rubric above shows a little over half of the questions my team is scored on. We focus heavily on efficiency and tone, so our questions revolve around those. What your team focuses on will differ — just make sure your questions get to the root of what you care about most.

Next, you’ll want to set up your score system. A good approach when you’re just getting started is using a true/false or yes/no system. This means you don’t need to dive into the middle ground just yet — you’re strictly looking to verify the existence of the things you care about the most. This is a fairly simple approach that you’ll likely expand on in the future, but it’s a great way to get started. 

On my end, we currently use a 5-point scale: 1 equals a miss, 5 is near perfection, and everything else sits somewhere on the spectrum between the two. I’d suggest looking through your previous tickets/conversations to get an idea of what you’d consider great or lacking to “calibrate” your scale. Then, you can use those as examples of what each numeric score equals.

If you can, I’d suggest pulling examples from yourself or your team leadership for negative scores. We all make mistakes and have room to improve, and using your own work is a reminder of that.
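
To make the scoring concrete, here’s a minimal sketch (in Python) of how a simple yes/no rubric turns into a percentage score per conversation. The questions and names below are hypothetical placeholders, not our actual rubric:

    # A minimal sketch of a yes/no rubric scorer.
    # The questions below are hypothetical placeholders, not a real rubric.
    RUBRIC = [
        "Greeted the customer by name",
        "Answered every question asked",
        "Linked to relevant documentation",
        "Matched our tone guidelines",
        "Set clear expectations for next steps",
    ]

    def score_conversation(answers: dict) -> float:
        """Turn yes/no answers for one conversation into a percentage score."""
        hits = sum(1 for question in RUBRIC if answers.get(question, False))
        return round(100 * hits / len(RUBRIC), 1)

    # Example: a review that hits 4 of the 5 criteria scores 80%.
    review = {question: True for question in RUBRIC}
    review["Linked to relevant documentation"] = False
    print(score_conversation(review))  # 80.0

If you later move to a 5-point scale, the same idea applies: you’d average the numeric scores instead of counting hits.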

Once you have created your rubric, you’ll want to either add it to your chosen software, or re-create it in a spreadsheet using rows and columns. If you’re going the manual route, you’ll also want a few more columns, plus a quick way to roll the scores up (sketched just after this list):

  • Ticket/Conversation ID/URL
  • Agent
  • Reviewer (if you’re doing peer reviews or otherwise have a team reviewing)
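
Once reviews start piling up in that spreadsheet, a small script can roll the raw scores into per-agent averages for your coaching conversations. Here’s a rough sketch, assuming a CSV export with an “Agent” column and a numeric “Score” column; the file name and column names are hypothetical:

    import csv
    from collections import defaultdict

    # Rough sketch: summarize QA review scores by agent from a CSV export.
    # Assumes hypothetical columns "Agent" and "Score" (e.g. 1-5 per conversation).
    scores_by_agent = defaultdict(list)

    with open("qa_reviews.csv", newline="") as f:
        for row in csv.DictReader(f):
            scores_by_agent[row["Agent"]].append(float(row["Score"]))

    for agent, scores in sorted(scores_by_agent.items()):
        average = sum(scores) / len(scores)
        print(f"{agent}: {average:.1f} average across {len(scores)} reviews")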

After that, you’re going to move on to the most important part of the process: how you’ll approach the reviews. You want to decide how many tickets you’ll review and at what cadence you’ll review them. You also want to decide who is doing the reviews. Is this a management-only duty? Or will you take the peer review approach? If you have just one person doing reviews for a team, you’ll get consistent scoring and it’s also easier to manage. On the other hand, peer reviews can be the easiest way to get started, as the work is shared. Peer reviews are also growth opportunities for each member of the team, as they get to learn in a collaborative environment.

From my side, we do five reviews per agent on a weekly cycle, done by the support team lead. I’d love to do more, but that’s been a good balance for us between the time required and the benefits to the team. Keeping the reviewing within management helps us keep a consistent voice as team members onboard with us. Eventually, we’ll switch to a peer review method because evaluating peers is a great way to expose team members to everyone’s work and help them reflect on their own skills. However, I think it’s best to wait until the team is more established, as multiple members of my team are new.

Why is this the most important part? It’s the piece that can make or break success for your program. Too many reviews can become tedious and hard to keep up with, but too few can mean your team isn’t getting actionable feedback. I’d recommend starting small and increasing the number of reviews if/when you can, or find it necessary.
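
As a back-of-the-envelope check on the time commitment, suppose (hypothetically) you have six agents and each review takes about ten minutes: five reviews per agent per week works out to 5 × 6 × 10 = 300 minutes, or roughly five hours of reviewing each week for whoever is doing them.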

Make it a routine part of getting & giving feedback

Part of why your approach to reviews matters so much is that they need to become routine. Reviews should be easily manageable for whoever is doing them, so that they can happen on a very regular schedule. If the process is too complex or time-consuming, it won’t be a routine you can keep up with.

But even if you have the perfect cadence, regular reviews are still something you’re going to need to build up to. Set yourself calendar reminders to make sure you do the reviews weekly — or biweekly, or whatever interval you decide on. I’d also recommend doing the reviews for at least 2-3 cycles before you ever start sending results to the team. You want it to be something you know you can keep up with, rather than starting and then stopping because you got busy (guilty!).

Once you’ve got a good routine going, start working it into your feedback. You can send scorecards or just talk it over during 1:1s. Whichever way you go about it, I’d suggest giving the feedback before an actual conversation so your team members have time to digest it and come with any questions they may have. If it’s part of your 1:1s, send a scorecard the day before and make sure that the feedback session is part of your agenda.

Wrapping things up

And there you have it! Are you already doing QA reviews for your team? If so, share the way you approached the setup, or some of your rubric questions, on our LinkedIn. If you aren’t, give it a try. Having a good scoring system gives your team actionable ways to improve, and gives you well-defined metrics to share, improve on, or just shout from the rooftops about how awesome your team is!

As always, we’d love to hear what else you’d like to learn about. If you have a suggestion, leave us a comment on LinkedIn. 
