
Creating Metrics for Diverse Workstreams


Categorizing, measuring, and optimizing knowledge work can be challenging when the workflows are diverse or not well defined. We tackled exactly that when building Fin Assistant.


With automation on the rise, we’re seeing a shift in the way knowledge work is handled. Software allows us to create tickets and measure the time taken on each. It can generate and send automatic responses for common questions, categorize and triage work, and generally enable agents to spend less time on the rote parts of their job and more time on the ‘human’ side: navigating nuanced requests and performing work that requires judgment and empathy.

The expectation to move quickly and efficiently through multiple systems, master all methods of communication, and focus entirely on more high-touch work has increased the complexity of the customer support role.

How do you measure and optimize such a diverse set of workstreams?

It can be challenging to determine the ‘right’ metrics by which to measure your operations team. CSAT, Handle Time, and Ticket Volume offer only a glimpse of the full story, leaving employees frustrated when these limited metrics don’t accurately reflect the work they actually performed. That frustration further dilutes the efficacy of 1:1s and coaching sessions as trust in ‘management’ and ‘the system’ degrades.

This problem resonates heavily with SMBs in ‘growth mode’, whose operations teams are likely not yet large enough to specialize, so agents end up taking on more roles than they expected. Add in the challenges of training and retraining employees on ever-evolving systems and policies, and an operations team may quickly find itself under tremendous stress it’s unprepared to manage.

We faced this challenge when growing Fin Assistant. Running a personal assistant service that promised to do anything our users asked meant our operators were expected to handle a diverse range of tasks that might take anywhere from 5 minutes to months to complete, all while providing extremely high-quality results and support throughout the entire interaction. Because our customers had very high expectations for both speed and quality, we were constantly focused on improving the quality and efficiency of our service, and finding a concise set of metrics to ‘guide’ our team was key.

How did we determine the ‘right’ metrics for speed, quality, and efficiency with Fin Assistant?

  1. It was critical to know what we were measuring before we could do anything useful with our measurements. So, operators (and eventually computers) would tag every contact session with a description of the required workflow(s) based on the context of the request and work that would be required to complete it.

  2. We measured everything. We collected a rich data set for all work performed (resources used, time-on-page, time-within-fields, clicking, scrolling, typing, etc.) and then aggregated that data into clear pictures of operator performance, task performance, and even sub-task performance.

  3. With that data we were able to generate baseline expectations for every component of work being performed.

  4. This then allowed us to normalize operator performance across the types and volumes of work they performed, so we could hold everyone to the same concise set of metrics despite wildly different tasks on a given day. (A rough sketch of this pipeline follows below.)
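To make steps 1–3 concrete, here is a minimal sketch of how tagged contact sessions and their historical handle times could be rolled up into per-workflow baseline expectations. The ‘Session’ record, its field names, and the 90th-percentile cutoff are illustrative assumptions, not our production implementation.

    from dataclasses import dataclass
    from collections import defaultdict
    from statistics import quantiles

    @dataclass
    class Session:
        operator: str          # who handled the contact session
        workflow: str          # e.g. "schedule_meeting", "plan_weekend_italy"
        handle_minutes: float  # total time spent completing the session

    def baseline_expectations(history: list[Session], pct: int = 90) -> dict[str, float]:
        """Derive a per-workflow time expectation from historical handle times.

        Here the expectation is the pct-th percentile of past successful
        sessions in each workflow category -- one of many reasonable choices.
        """
        by_workflow: dict[str, list[float]] = defaultdict(list)
        for s in history:
            by_workflow[s.workflow].append(s.handle_minutes)
        # quantiles(n=100) returns the 99 percentile cut points; index pct - 1 is the pct-th
        return {wf: quantiles(times, n=100)[pct - 1] for wf, times in by_workflow.items()}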

For example, when creating a metric for speed, we threw out the one-dimensional model of ‘Tickets / Day’, instead looking to our historical handle time data (collected from thousands of successful repetitions across each category of work) to generate expectations that ‘Scheduling a meeting’ probably shouldn’t take more than 15 minutes, ‘Planning a weekend in Italy’ shouldn’t take more than 3 hours, and so on…

We were then able to measure the team based on the percentage of contact sessions that were completed within the unique time expectation associated with the appropriate workflow, allowing us to implement a single unifying ‘speed’ metric that could be fairly applied to any agent: % Contacts Done Under Time Estimate.
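A similarly hedged sketch of the unified metric itself: each completed session is judged against the expectation for its own tagged workflow, and an operator’s score is the fraction of their sessions that came in under the estimate. It reuses the hypothetical Session records and baselines from the sketch above.

    def pct_done_under_estimate(sessions: list[Session],
                                baselines: dict[str, float]) -> dict[str, float]:
        """Per-operator '% Contacts Done Under Time Estimate'.

        Every session is compared to its own workflow's expectation, so scores
        stay comparable no matter what mix of work each operator handled.
        """
        hits: dict[str, int] = defaultdict(int)
        totals: dict[str, int] = defaultdict(int)
        for s in sessions:
            if s.workflow not in baselines:
                continue  # no expectation established for this workflow yet
            totals[s.operator] += 1
            if s.handle_minutes <= baselines[s.workflow]:
                hits[s.operator] += 1
        return {op: hits[op] / totals[op] for op in totals}

    # Hypothetical usage:
    # baselines = baseline_expectations(history)              # {"schedule_meeting": 14.2, ...}
    # scores = pct_done_under_estimate(this_week, baselines)  # {"ana": 0.87, "ben": 0.91}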

This top-level metric still holds people accountable for operating with efficiency while remaining cognizant of quality and flexible to fluctuations in workload, and it replaces the need for dozens of categorically dependent ‘speed’ metrics. (Those metrics still exist, and are useful for debugging low performance at the top level, but they’re not useful to show to every agent on a daily basis.)

Once you understand the flow of work within your systems and across your operators, you unlock the potential to create much more realistic and engaging metrics to guide your team and decision making.

Start gathering the data you need

Fin Analytics provides the data and visibility operations leaders need to fully understand the work their teams are doing and to generate agreeable, effective performance metrics. To get started with a free trial of Fin Analytics, sign up here.


GET STARTED WITH FIN ANALYTICS

Fin Analytics gives you ‘full funnel’ insight into your team’s work. With continuous live video and action logging, you get the insights you need to provide better coaching and training, and the analytics you need to know where to focus process and engineering resources.

We are happy to share industry-specific case studies and give you a custom walkthrough of the tool, or you can review our