When it comes to managing a service desk, the goal is to provide service that leaves customers satisfied and confident that their requests will be resolved correctly and in a timely manner. So if we want to resolve customer requests correctly and in a timely manner, we should measure metrics for those two things, right?
In this post, I list and describe five service level agreement metrics that a high performing service desk should measure regularly. This is not necessarily a comprehensive list, but it is a foundational list of metrics all service desk managers should use. If you have others, please share them in the comments below. I would love to discuss how to best measure the success of a service desk.
One critical factor in running a high performing service desk is responsiveness. Customers want to know that you will be responsive to their requests, even if it is understood that a complete resolution will take a few days. One way to measure this is with a Time to First Response metric: the time from the moment a customer makes a request to the moment a service desk agent "picks up" the ticket, that is, opens it and begins reading it. The act of opening the ticket should move it into a workflow status equivalent to "In Progress." In other words, the service desk agent should understand that once she opens the ticket and begins reading it, the customer is notified that work has started.
Notifying the customer is part of what makes this metric a measure of responsiveness. A customer should be able to view the request and see its current status; when that status shows the ticket is "In Progress," the customer knows someone has started working on it and is immediately reassured. A high performing service desk places a high level of importance on communicating the current status of a ticket so customers know where it stands at all times.
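As a minimal sketch (the timestamps and field names here are illustrative, not pulled from any JIRA Service Desk API), Time to First Response is simply the gap between a ticket's creation and its first move to "In Progress":

```python
from datetime import datetime, timedelta

def time_to_first_response(created_at, first_in_progress_at):
    """Elapsed time between a ticket being created and an agent
    picking it up (moving it to "In Progress")."""
    return first_in_progress_at - created_at

# A hypothetical ticket created at 9:00 and picked up at 9:45:
ttfr = time_to_first_response(
    datetime(2024, 1, 8, 9, 0),
    datetime(2024, 1, 8, 9, 45),
)  # 45 minutes
```

Averaging this value over all tickets in a reporting period gives the team-level responsiveness number most managers actually track.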
Yes, it is nice to be speedy in your first response to a customer. Customers do want to know that you are responsive and that you communicate. However, what the customer really wants is for their issue to be resolved as efficiently as possible. A high performing service desk should measure Time to Resolution, the total time it takes to resolve an issue. Although this metric seems simple, there are some nuances to consider.
What happens when an issue requires further input from a customer? Do you count the time a ticket spends waiting for the customer to respond? Suppose a service desk agent asks a customer to provide more information, and the customer takes four days to respond. Should those four days count against the service desk agent?
This is not as easy to answer as it seems. On the surface, you would not want to count time spent waiting for customers against the service desk's metrics. However, some managers believe it doesn't matter who is waiting for whom: as long as a ticket is open, it is not resolved, and everything that happens between "Created" and "Resolved" affects how long resolution takes. The point is to define Time to Resolution in a way that matches your culture and to find a tool that is flexible enough to handle how you want to define and measure it.
There is no simple answer to this question. If you think there is, share why in the comments below. I would love to discuss it. JIRA Service Desk allows you to be flexible in how you define Time to Resolution.
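If you do decide to stop the clock while waiting on customers, the calculation can be sketched like this (the status names and the history representation are assumptions for illustration, not a JIRA Service Desk data model):

```python
from datetime import datetime, timedelta

# Statuses that pause the clock. Which statuses belong here is the
# policy decision discussed above; this set is an assumption.
PAUSED = {"Waiting for Customer"}

def time_to_resolution(history, resolved_at):
    """Sum elapsed time across a ticket's status history, skipping
    paused statuses. `history` is a list of (timestamp, status)
    pairs marking when the ticket entered each status."""
    transitions = history + [(resolved_at, None)]
    total = timedelta(0)
    for (start, status), (end, _) in zip(transitions, transitions[1:]):
        if status not in PAUSED:
            total += end - start
    return total

# A ticket worked for 8 hours, parked four days waiting on the
# customer, then resolved after one more hour of work:
history = [
    (datetime(2024, 1, 8, 9, 0), "In Progress"),
    (datetime(2024, 1, 8, 17, 0), "Waiting for Customer"),
    (datetime(2024, 1, 12, 17, 0), "In Progress"),
]
ttr = time_to_resolution(history, datetime(2024, 1, 12, 18, 0))  # 9 hours
```

Making `PAUSED` an empty set implements the opposing policy, where everything between "Created" and "Resolved" counts.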
It is one thing to decide whether to track (or exclude from metrics) time spent waiting for customers; it is another to track the time a ticket spends in the hands of a service desk agent. Time Waiting for Support measures exactly that: the time a ticket spends in some form of open status that is not waiting on the customer to respond.
Tickets often go back and forth between service desk agents and customers. When most of the time is spent in the hands of the service desk, a manager may want to track whether that time is reasonable. While a service desk manager may not have control over how responsive a customer is while a ticket is open, she certainly should have control over how responsive her service desk agents are and how long her team spends working on issues.
By measuring Time Waiting for Support, a service desk manager can identify whether the team or certain individuals are being responsive to customers.
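Using the same hypothetical status-history representation as above, Time Waiting for Support is the complementary sum: count only the time spent in statuses owned by the support team (the status names here are assumptions, not a fixed configuration):

```python
from datetime import datetime, timedelta

# Hypothetical names for the statuses owned by the support team.
SUPPORT_STATUSES = {"Open", "In Progress"}

def time_waiting_for_support(history, closed_at):
    """Total time a ticket spends in support-owned statuses, given a
    list of (timestamp, status_entered) transitions."""
    transitions = history + [(closed_at, None)]
    total = timedelta(0)
    for (start, status), (end, _) in zip(transitions, transitions[1:]):
        if status in SUPPORT_STATUSES:
            total += end - start
    return total

# One hour untouched in the queue, four hours of agent work, and an
# overnight wait on the customer that is not counted:
history = [
    (datetime(2024, 1, 8, 9, 0), "Open"),
    (datetime(2024, 1, 8, 10, 0), "In Progress"),
    (datetime(2024, 1, 8, 12, 0), "Waiting for Customer"),
    (datetime(2024, 1, 9, 9, 0), "In Progress"),
]
tws = time_waiting_for_support(history, datetime(2024, 1, 9, 11, 0))  # 5 hours
```

Aggregating this per agent, rather than per ticket, is what lets a manager tell whether the team as a whole or a specific individual is the bottleneck.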
There could be customer requests that require a third party (someone outside the service desk team) to review, approve, or otherwise provide input on the customer request. For example, a customer could report to the service desk that their keyboard is not working. If the resolution is to purchase a new keyboard, approval may be needed from the customer's manager, procurement department, or finance department. If this is the case in your company, what if the approval process through procurement takes 48-72 hours? Do you want that to count against your service level agreement with your customer?
Without a way to place the ticket in a "waiting for approval" status and stop the clock, some service desks may be tempted to close the issue because their part is done and then manually hand off the request to procurement. This is a bad practice for obvious reasons: the workflow is broken, tracking stops, and there is no longer transparency in the process.
The way to handle this is to have a workflow status for waiting on third parties and to stop the clock while a ticket is in that status (JIRA Service Desk, of course, allows you to define this). Not only can you then measure a true service level for your own team, you can also measure a service level for the other department and provide that data to procurement.
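One way to sketch this (the status names are illustrative, not a prescribed JIRA Service Desk configuration) is to break a ticket's lifetime down by status; the team's own service level excludes the third-party status, while the third-party bucket itself becomes the number you hand to procurement:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def time_by_status(history, closed_at):
    """Break a ticket's lifetime down by workflow status, given a
    list of (timestamp, status_entered) transitions."""
    transitions = history + [(closed_at, None)]
    totals = defaultdict(timedelta)
    for (start, status), (end, _) in zip(transitions, transitions[1:]):
        totals[status] += end - start
    return dict(totals)

# One hour of agent work split around a two-day approval wait:
history = [
    (datetime(2024, 1, 8, 9, 0), "In Progress"),
    (datetime(2024, 1, 8, 11, 0), "Waiting for Approval"),
    (datetime(2024, 1, 10, 11, 0), "In Progress"),
]
totals = time_by_status(history, datetime(2024, 1, 10, 12, 0))
team_time = sum(
    (t for s, t in totals.items() if s != "Waiting for Approval"),
    timedelta(0),
)
# team_time is 3 hours; totals["Waiting for Approval"] is the 48
# hours to report back to procurement.
```

The same breakdown supports any number of third-party statuses, one per department whose service level you want to track.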
Now this is an excellent metric. Anyone running a service desk (or any support function) would love to know whether, and by what magnitude, customers are resolving their own requests using knowledge base articles rather than submitting a request to the service desk. But how do you know that someone read the Password Reset article on your knowledge base and then did not put in a service desk ticket? Can you say with certainty that requests to the service desk fell by 25% because you added 10 new knowledge base articles? Was the 25% decrease a direct result of those new articles? At what point in time do you determine that other factors are at play: one week later? One month? One year? There is no simple answer to these questions.
It might be more useful to approach this issue in terms of providing customers with a self-service option in the interest of customer choice and preference, rather than trying to track a direct link between knowledge base articles and a specific number of deflected tickets. Intuitively, we can understand that if customers can easily find answers to their questions in the context of their work, or directly on request forms, some tickets will be deflected. It also stands to reason that customers will be more satisfied knowing there are easy-to-find articles to help them solve their own problems on the spot, rather than waiting for the service desk to get back to them.
This is an interesting topic that is worth debating. How do you measure deflected tickets? Do you measure it? Why or why not? Please share your point of view in the comments below.
By starting with the five service level agreement metrics above, you can begin to gain clarity into how well your service desk provides excellent service to your customers.
As an authorized training partner, ServiceRocket has provided official Atlassian courses to over 50,000 Atlassian product admins, users and power users. We're ready to help your team drive software adoption and the effective use of technology.
Build expert proficiency faster with ServiceRocket's Atlassian Training