Customer Support Is More Than High Scores
Even great data cannot guarantee good decision-making; without context, data is meaningless.
This is true for many metrics within a company, and equally true for the behavior of its employees. Specifically, your support department starts to look a lot like a decrepit '80s arcade if "high scores" become the measure of success.
That is the underlying problem with customer satisfaction ratings and the gamification of employee behavior as a whole—the goal is no longer happy customers, but a better happiness report. The goal isn’t the goal; the metric is the goal.
We all know that closed tickets, average response times, and total answered emails don’t necessarily correspond to superb customer support.
What’s often forgotten is that satisfaction ratings suffer from a similar ability to be “optimized”—not for the customer but for the employee. See an easy ticket? If you quickly solve the problem, it’s likely you’ll get a positive rating—the same positive rating that another rep might earn through a 45-minute ordeal, solving a massive problem for a customer.
Even worse, the second rep may go through all that effort only to receive a poor rating, not because their execution was faulty, but because the problem was more severe, resulting in a frustrated customer. Sabotaged before they even began, so to speak.
Experienced support reps are aware of this. Jack Welch once said, “In the end, you get the behaviors you reward.” If satisfaction ratings are disproportionately rewarded, why not go for the easy win?
Soon, the whole department is scrolling through the queue trying to hunt down the quick win, tee-ball tickets first.
When the batting average reigns supreme, don’t be surprised if people stop swinging for the fences.
The Story Metrics Can’t Tell
As a customer champion over at Wistia, Dave Cole understands how happiness metrics can fall flat when used incorrectly.
He argues that above all else, a cutthroat ranking system based on false proxies can end up creating incentives for questionable behavior:
The pushback I have about ranking team members on their customer happiness stats alone is that doing so could incentivize people to work towards the numbers in an unhealthy way.
If people want to make sure they only get “happy” ratings, they might only reply to soft emails they see in the inbox — ones that they know will be super easy to handle. See a customer with a really challenging question? Or asking for a feature that we definitely won’t build? Perhaps typing in all-caps and freaking out? No thanks! Onto the next one.
It’s very much like the police example from your Growth Hacking blog post. If cops are measured on the number of arrests they get, they’ll find ways to start arresting more people. It doesn’t make the community safer. Incentivizing support reps to get “happy” ratings from individual customers doesn’t necessarily make the whole customer base happier, either.
The big problem with ratings is that they discourage reps from chasing "big problems." As with the police example Dave mentions, an arrest is an arrest, but picking up jaywalkers isn't the same as busting the ringleader of a gang (an exaggeration, but it makes the point).
Tough tickets are often tough because a severely unhappy customer is involved. Successfully addressing such an issue takes much more time and effort, and has a lower chance of earning the desired payoff: the "thumbs up" rating in the impending follow-up email.
Dave continues, saying that leadership often needs a reminder that statistics are only a jumping-off point:
I was super impressed with how the folks over at Slack approach interpreting the numbers in the team manager's stats view. Below the data about team members' activity, Slack presents this message:
“A note on team stats: we are giving you some facts about your team’s usage of Slack. We try to carefully avoid implying any judgement of value or give a “meaning” for any particular number.
For example, someone might be not using Slack for several hours during the day because they are goofing off. On the other hand, they might be away from Slack because they are concentrating very hard on the work that you want them to do. We have no way of distinguishing those cases and we advise that anyone viewing these stats be careful not to infer anything which is not in the data.”
I think something similar could be said about the happiness report in Help Scout. The team member with the lowest happiness rating might be god awful at their job… or they might be the most willing to take on difficult cases and angry customers. Ultimately, the numbers don't have all of the context, and the dangers of misrepresenting a team member's contributions and value are very real.
That last line cuts to the heart of the issue.
Numbers can paint an unfair picture of overall contribution to the team when they are taken out of context.
Stellar reps with intuitive people skills could be successfully handling elaborate demos and tackling 800-word tickets all day long. Yet the "gamifiers" would cast an undeserved shadow on them if their satisfaction ratings took a hit due to a willingness to work on the hard problems.
Finding a “Happy” Middle Ground
As Dave mentioned, there are productivity and satisfaction metrics built right into Help Scout that any team can access. In light of this post, you might be wondering why they are included at all.
They exist because, with careful interpretation, they help identify potential problems. Dave shares how they would ideally be used: to get a high-level view of your team's performance:
Don’t get me wrong though, I’m strongly in favor of happiness ratings. My belief is just that like any raw data, they need to be interpreted and used appropriately.
We use happiness ratings at Wistia not to understand who’s better than whom, but to learn from negative customer experiences so we can improve.
If an employee has very low happiness ratings, something is wrong, but the data serves only to initiate the investigation. It doesn't necessarily mean that the support rep is falling behind, just that there is a problem that needs to be fixed.
Perhaps they are exclusively tackling the toughest conversations. In that case, a one-on-one could easily decipher why and determine what might be done to create a better balance. In situations where customers left “thumbs down” feedback, where did things go wrong? Is the cited problem the interaction, the product, or something else?
In this way, the metrics serve to bring red flags to attention quickly and consistently. They help identify what is wrong, not what should be done next.
When metrics aren’t serving as a yardstick for performance, team members make better decisions. They’ll know that the recourse is to address the underlying issues, not try to climb their way back up the ladder by gaming the stats.
All of this recalls a Stanford talk featuring a panel of entrepreneurs discussing product strategy. One of my favorite quotes from that session:
Every time that I’ve seen a product marketing or management person worry about the product before the customer, I think they’ve failed.
An addendum: every time I’ve seen a support department worry about the stats before the customer, I think they’ve failed.