Agent Metrics Overload: Scorecards, Focus Sprints, and All-Stars

The longer you run an operations team, the deeper your understanding becomes of customer needs, the types of tasks your team handles most frequently, and the most common sources of mistakes and inefficiency. On the one hand, it is important to constantly develop metrics to track discrete root causes so that as an organization, you know where the biggest opportunities for improvement lie. On the other hand, it can be overwhelming as an agent on the front lines to have dozens (or hundreds) of metrics to keep track of. In this post, we’ll discuss 3 techniques for managing “agent metrics overload” – scorecards, focus sprints, and “all-stars.”

A Monotonically Increasing Number of Metrics

Before we talk about ways to address agent metrics overload, let’s first discuss how we end up there.

When a customer support operation is small–with only a handful of people (or perhaps just one person) on the team–the team might not have any metrics at all.

Probably the first metric introduced is some sort of wait time SLA–you don’t want customers to wait more than 1 business day for an email response, for example. This might be particularly important when the people on the team are also serving in other roles.

Once the team grows large enough to require a dedicated manager and trainer for the CX ops agents, you need more metrics to ensure that training is effective, to have confidence that people are learning how to get customers to the right answers in an efficient way, and to ensure that everyone on the team is doing their fair share of work.

So, you start adding quality metrics like CSAT or NPS, or efficiency metrics like tickets per day or average handle time.

As the team grows larger, you may discover metrics that are early indicators of potential mistakes or metrics that correlate to your top level quality or efficiency goals: for example, you might learn that customers are more satisfied and your team is more efficient when cases are handled in one shot. So, you add a “first touch resolution” metric for the team and a “close rate” metric to incentivize each person to finish every time they touch a case.

When you have some cases that require multiple touches by multiple people, you start needing more nuanced ways to assign responsibility: when 3 different people work on a case and a customer complains about it, who is ultimately responsible?

Before you know it, you might have dozens of metrics that you are looking at to try to understand what is going on, and there are far too many numbers for an agent on the front lines to keep in their head while their primary focus should be helping the customer at hand.

Agent Scorecards

One of the most common ways to simplify metrics from an agent perspective is with a ‘scorecard.’ This is a stack ranked list of 3-5 metrics (many would say 5 is too many) by which agent performance is measured. A sample scorecard might be:

  1. CSAT: Goal 90% (no less than 75%)
  2. Avg. Handle Time: Goal 10mins (no more than 15mins)
  3. Close Rate: 90% (no less than 75%)

There may be other things your org cares about that the best agents will score better on, but the scorecard is the official set of performance metrics that determine things like bonuses (if you do them) or performance improvement plans (for those that do not meet thresholds).

It is important to stack rank the metrics on the scorecard, because they may trade against each other (e.g., quality of work vs. speed of work). For an agent that is failing to hit multiple metrics, coaching would focus on the most important metrics first: e.g., “Your CSAT and handle time are not yet where they need to be, but let’s focus on getting your quality to the baseline level first…”
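As a rough sketch, the stack-ranked scorecard and the “coach the highest-priority failing metric first” rule might look like this in code. The metric names, goals, and baselines come from the sample scorecard above; the structure and function names are illustrative, not a real system:

```python
# Hypothetical sketch: a stack-ranked scorecard where each metric has a
# goal, a baseline threshold, and a direction (higher- or lower-is-better).
# Listed in priority order, matching the sample scorecard above.
SCORECARD = [
    # (name, goal, baseline, higher_is_better)
    ("csat",        0.90, 0.75, True),
    ("handle_time", 10.0, 15.0, False),   # minutes
    ("close_rate",  0.90, 0.75, True),
]

def meets_baseline(value, baseline, higher_is_better):
    return value >= baseline if higher_is_better else value <= baseline

def coaching_focus(agent_metrics):
    """Return the highest-priority metric the agent is below baseline on,
    or None if every baseline threshold is met."""
    for name, goal, baseline, higher in SCORECARD:
        if not meets_baseline(agent_metrics[name], baseline, higher):
            return name
    return None

# An agent missing both CSAT and handle time is coached on CSAT first,
# because it ranks higher on the scorecard.
print(coaching_focus({"csat": 0.70, "handle_time": 18.0, "close_rate": 0.92}))
# → csat
```

Because the list is ordered, the stack rank is encoded directly in the data: coaching conversations always start at the first failing metric, no matter how many lower-priority metrics are also missed.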

Since there is limited space in the scorecard, depending on the needs of your organization, you can rotate what is on there to emphasize shifting priorities. In a quarter where you expect user growth to exceed your ability to hire and train quickly enough, you might, for example, choose to sacrifice a bit on quality and emphasize efficiency. Likewise, an e-commerce company might do this during the holiday rush.

Perhaps in the quarter leading into the holiday rush, you are increasing your staff and focused on training, in which case you might remove or relax efficiency metrics, and focus on quality metrics more heavily.

The scorecards give you (and agents) the freedom to keep around as many metrics as you want, while aligning the entire organization on which metrics are the most important at any given time.

Focus Sprints

When supporting products with rapid growth or products undergoing rapid feature changes, “training” is not a static period limited to new hire onboarding, but an ongoing process. There are new bugs popping up with new policies for directing customers to the fixes, new types of customers, new customer facing product features the team needs to understand and be able to explain, or new internal tools intended to make the CX ops team more effective.

Often, simply informing the support team about the latest tools and policies is not sufficient to make the new knowledge stick–your team will (rightfully) be focused on the performance metrics they get scored on.

One technique for driving adoption of new policies and tools is the “focus sprint,” which introduces a transitory metric that measures whatever sort of change you are rolling out.

So, for example, suppose your team just transformed the entire knowledge base for handling the top 20 types of cases into canned responses available through your CRM. Just telling people that the canned responses exist might get a few people using them, but most people will likely stick to their routine style of handling cases in the way they know best. Perhaps even your most efficient people are resistant to the change, because, after all, they know the process well and are pretty efficient at it. But, if you are confident your canned responses will save time for even the best performers and help ensure consistency across your entire org, you would want them to use the canned responses as often as possible.

So, you might introduce a “focus sprint” metric of something like Percent of Cases Handled with a Canned Response, setting some sort of target adherence threshold. You could leave this metric in place for about 2 weeks, until you are sure everyone has tried out the new tool live and understood how it works, when to use it, etc.
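A focus sprint metric like this is usually trivial to compute, which is part of its appeal. Here is a minimal sketch, assuming each case record carries a flag for whether a canned response was used (the field name and the 80% target are assumptions, not an actual CRM schema or recommendation):

```python
# Hypothetical sketch of the focus-sprint metric: percent of cases handled
# with a canned response, checked against an adherence target.
ADHERENCE_TARGET = 0.80  # assumed target; tune for your own rollout

def canned_response_rate(cases):
    """cases: list of dicts with a boolean 'used_canned_response' flag."""
    if not cases:
        return 0.0
    used = sum(1 for c in cases if c["used_canned_response"])
    return used / len(cases)

cases = [{"used_canned_response": True}] * 9 + [{"used_canned_response": False}]
rate = canned_response_rate(cases)
print(f"{rate:.0%} canned-response usage, target met: {rate >= ADHERENCE_TARGET}")
# → 90% canned-response usage, target met: True
```

Once the sprint ends and the behavior has become habit, the metric can simply be retired without touching the permanent scorecard.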

In organizations where lots of change is happening within the quarterly schedule, these kinds of sprints can be crucial for ensuring the entire team is up to speed on the latest and greatest best practices.


All-Stars

In the scorecard section, we discussed the importance of stack ranking metrics from most important to least and having your team focus on the most important metrics first.

But really, all the metrics on the scorecard (as well as a bunch that are not on the scorecard) are important, and it would be nice to have the best performers incentivized to perform at a higher bar than simply satisfying the baseline thresholds for the handful of most important metrics.

A really cool concept we had when running the ops team behind the Fin Assistant service was the “All-Star.”

Each week, at the team all-hands (as well as in a weekly email that went out to the entire company), we announced the Stars and All-Stars for the week. Stars were broken out per metric category on the scorecard, and were awarded to those who exceeded the goal number (not just the baseline threshold) for that metric.

Almost everyone on the team would hit the Star level for a particular metric in various weeks, and this was a particularly nice form of positive recognition when someone who had been struggling in a category mastered a particular area of struggle. (Michael Jordan’s eventual Defensive Player of the Year award, after receiving coaching feedback that defense was the weak spot in his game, comes to mind.)

All-Stars hit the Star level for every metric on the scorecard, as well as the focus sprint metrics for that week.

Since these metrics often traded against one another, it took a great deal of skill and judgment to achieve the All-Star score. There were only a handful of All-Stars in any given week, and they were called out one by one and celebrated by the entire company.
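The Star/All-Star calculation follows directly from the scorecard: a Star per metric for beating the goal (not just the baseline), and an All-Star only when every scorecard metric plus every active focus sprint metric hits Star level. A minimal sketch, with illustrative names and goals (not the actual Fin metrics):

```python
# Hypothetical sketch of the weekly Star / All-Star calculation.
# (name, goal, higher_is_better); focus sprint metrics are appended
# for whatever sprint is active that week.
SCORECARD_GOALS = [
    ("csat",        0.90, True),
    ("handle_time", 10.0, False),
    ("close_rate",  0.90, True),
]
FOCUS_SPRINT_GOALS = [("canned_response_rate", 0.80, True)]

def weekly_stars(agent_metrics):
    """Return the set of metrics the agent earned a Star on this week."""
    stars = set()
    for name, goal, higher in SCORECARD_GOALS + FOCUS_SPRINT_GOALS:
        value = agent_metrics[name]
        if (value >= goal) if higher else (value <= goal):
            stars.add(name)
    return stars

def is_all_star(agent_metrics):
    """All-Stars hit Star level on every scorecard and sprint metric."""
    needed = {name for name, _, _ in SCORECARD_GOALS + FOCUS_SPRINT_GOALS}
    return weekly_stars(agent_metrics) == needed
```

Since the All-Star set is the intersection of every per-metric Star set, adding a focus sprint metric automatically raises the bar for that week's All-Stars.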

Finally, those who made the All-Star list every single week of a quarter received an All-Star Quarter award. This was really challenging, and only a handful of people in the entire history of the program received it.


When balancing customer satisfaction with organizational efficiency, you’ll inevitably end up with tons of different metrics you use to understand and diagnose all sorts of problems. Often, these metrics can feel overwhelming to agents on the front lines, or seem overly focused on preventing negative outcomes. We found that using agent scorecards and focus sprint metrics helps agents deal with metrics overload, and that developing an All-Star recognition program is a great way to give positive, public recognition to the best performers on your team.


Fin Analytics gives you ‘full funnel’ insights into your team’s work. With continuous live video and action logging, you get the insights you need to provide better coaching and training, and the analytics you need to know where to focus process and engineering resources.

We are happy to share industry-specific case studies, give you a custom walkthrough of the tool, or you can review our