None of us likes being judged unfairly, and that’s especially true in the workplace.
When it happens at work, we are instantly transported back to the indignation we might have felt in the school playground. Usually, a 'stolen' stapler magically reappears, a 'missing' report turns up, or the culprit who emptied the biscuit barrel owns up to the heinous crime.
How should we respond when the unfair judgment becomes an inescapable part of how our performance is measured?
Ronald Reagan, 40th President of the United States, kept a constant reminder on the Resolute Desk: "There is no limit to what a man can do or where he can go if he does not mind who gets the credit." Even so, we have to assume Nancy Reagan, as First Lady, needed no plaque as a reminder!
As customer service professionals, our legacy differs from that of the leader of the free world. However, we do have inescapable metrics and mathematical indicators that judge our efforts.
Data-driven measures such as "time to respond" to customer tickets are hard to dispute, provided the underlying factors relevant to your business are accounted for. Can the same be said for responses that are subjective and emotional, such as customer satisfaction (CSAT), perception of effort (CES), and the likelihood of a customer recommending you (NPS)?
What are CX scores, and why should you care?
The increasingly digital ways businesses engage with their customers offer great opportunities for scale, but they also bring distance. 'Customer-facing' teams have vanishingly little time actually facing customers, let alone time to ask for and record their thoughts and feedback. This has driven the rise of digital customer experience measurement tools.
Customer experience measurement (CXM) has developed a great deal over the past two decades into a broad and exciting sector covering the strategy, technologies, and measurements used to gauge how a customer feels and what could be done to have them buy more and recommend your business to others.
Within the CXM discipline, single numbers and a scientific approach have become popular for indicating how satisfied a customer feels (Customer Satisfaction – CSAT), how easy or difficult their engagement was (Customer Effort Score – CES), and their likelihood to recommend (Net Promoter Score – NPS). However, this reduces nuanced ideas to common, comparable scores that take up a few lines on a monthly business report.
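To make the three headline metrics concrete, here is a minimal sketch of how they are commonly computed from raw survey responses. The scale conventions (1–5 top-box CSAT, 0–10 NPS with 9–10 promoters and 0–6 detractors, mean of a 1–7 effort scale for CES) are widespread conventions, not definitions taken from this article.

```python
def csat(ratings):
    """CSAT: percentage of 'satisfied' responses (4 or 5 on a 1-5 scale)."""
    return 100 * sum(r >= 4 for r in ratings) / len(ratings)

def nps(scores):
    """NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

def ces(ratings):
    """CES: often reported simply as the mean of a 1-7 'ease' scale."""
    return sum(ratings) / len(ratings)

print(csat([5, 4, 3, 5, 2]))   # 60.0
print(nps([10, 9, 8, 6, 3]))   # 0.0
print(ces([6, 7, 5, 4]))       # 5.5
```

Note how much information each function throws away: a month of individual opinions collapses into one number, which is exactly the reduction the article describes.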
So why should you care?
For larger businesses, these customer experience measures have become so integral to their organization that they drive staff bonuses and frequently form the basis for company valuations as indicators for revenue growth.
For other organizations, whether serving consumers directly or engaging with other businesses, these measures provide a baseline for monitoring overall customer satisfaction and, ideally, indicate where improvements can be made. This insight is invaluable: what is being done well should be replicated to build customer loyalty and enthusiasm to recommend, while what is being poorly received should be stopped immediately. These insights show what makes customers buy again and convert them into ambassadors for your brand.
Satisfied customers may tell a few other people about their experience. However, deeply dissatisfied customers now have the tools to tell thousands of people. A bad review or social media post may ricochet around the online echo chamber for years to come, which is certainly something to care about.
With this in mind, the question is: are the scores you see a fair and representative reflection of hardworking staff who are always striving for higher ratings?
How can metrics be unfair? A score is a score, right?
When taking over a customer experience program for a new client, my team and I had been warned about discrepancies across their existing program. The leadership team had struggled to identify what their customer-facing service teams should focus on to improve overall customer satisfaction.
Since this was before the rise of artificial intelligence in reading feedback, without building any complex data models and training, we decided to read the customer comments ourselves.
We started with the top and bottom ratings that drove the largest part of the scores. After many hours of eye-rubbing and spreadsheet updates, we failed to identify the root cause, so we turned to the mid-range scores. The trend rapidly became clear: customers were rating how satisfied they were with the service they received, but were reducing their scores because of claims being made by a competitor.
When the scores were adjusted for this discrepancy, it turned out the customer service teams were being unfairly scored by between 10% and 15%. The key takeaway was the need to take immediate action and address the unfounded customer concerns.
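The adjustment in that story can be sketched very simply: flag responses whose comments mention the external issue, then compare the overall average against the unaffected subset. Everything here (the sample responses, the keyword list, the substring matching) is invented for illustration; the original analysis was done by humans reading comments, not by this kind of filter.

```python
# Hypothetical data: survey scores plus free-text comments.
responses = [
    {"score": 9, "comment": "Great service, very quick."},
    {"score": 4, "comment": "Fine, but the rival claims you overcharge."},
    {"score": 8, "comment": "Helpful agent, solved my issue."},
    {"score": 3, "comment": "OK service, but the rival's ad put me off."},
]

# Assumption: a hand-picked keyword marks 'externally influenced' responses.
EXTERNAL_KEYWORDS = {"rival"}

def affected(resp):
    """True if the comment mentions the external issue."""
    text = resp["comment"].lower()
    return any(k in text for k in EXTERNAL_KEYWORDS)

raw = sum(r["score"] for r in responses) / len(responses)
clean = [r["score"] for r in responses if not affected(r)]
adjusted = sum(clean) / len(clean)

print(f"raw average:      {raw:.1f}")       # 6.0
print(f"adjusted average: {adjusted:.1f}")  # 8.5
```

The gap between the raw and adjusted averages is the "unfair" portion of the score: it reflects something outside the service team's control.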
This example shows how reducing customer experience to simple scores was not a true representation of the customer's voice.
How to fix the problem of unfair CX scores?
Three simple adjustments will make a world of difference.
- Work hard on the questions
- Account for cultural differences
- Ask for comments, and read them
#1 Work hard on the questions
Anyone who works alongside a sales team will be aware of how powerful the right questions can be. ‘How do we get this deal done?’ will garner a very different response to ‘What else can I do for you?’. This principle is even more true in surveying for feedback as it relies on the written word.
The usual response to this challenge is to ask more questions. A few years ago, I worked with a household-name hotel and pub chain whose surveying had spiraled so far out of control that there were 174 questions across three internal and external questionnaires. Every department had its 'must ask' questions, and after a decade of this, response rates dropped off and, unsurprisingly, the surveys produced the same results month in, month out.
Your business likely has a legacy of customer feedback questions and metrics. Be brutally honest and ask yourself how valuable each question is in helping you improve next month. Here are three tips for framing great survey questions –
- Ask open questions – A simple 'How did we do?' will always garner more responses than 'Tell us what you thought of the cleanliness in the bar area.'
- Ask one question at a time – A simple 'How could [our company] improve?' works better than 'What did you think was good about your experience today and where could we do better?'
- Ask fewer questions – Focus on what the customer cares about. How satisfied were they, would they recommend you, and how much effort was it to buy from you are a pretty good place to start.
#2 Account for cultural differences
Age, culture, nationality, and background all contribute to how we approach questionnaires. For example, I generally complete questionnaires when I am happy with the service and contact the company directly when dissatisfied. However, younger members of my team would expect to use a survey response as an opportunity to lodge a complaint and find the telephone and in-person confrontation quite challenging.
Culture also has a major impact. Japanese, Korean, Australian, and Dutch respondents have been shown to score NPS low, sometimes even negative. In contrast, South American and Indian respondents sit at the top of the NPS charts for the same service or product. Being aware of and accounting for these cultural differences is crucial.
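One way to account for this, sketched here as an assumption rather than a method the article prescribes, is to benchmark each market against its own baseline instead of a single global target. The baseline figures below are invented for illustration, not real benchmarks.

```python
# Hypothetical per-market NPS baselines (invented numbers).
REGIONAL_BASELINE = {"JP": -20, "NL": 5, "IN": 55, "BR": 50}

def relative_nps(raw_nps, region):
    """Express a score as points above or below the region's own norm."""
    return raw_nps - REGIONAL_BASELINE[region]

# A raw -5 in Japan beats the local norm; a raw 40 in India falls short.
print(relative_nps(-5, "JP"))  # 15
print(relative_nps(40, "IN"))  # -15
```

Comparing teams on the relative figure rather than the raw one stops a Dutch team from being "punished" for serving a tough-scoring market.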
#3 Ask for comments, and read them!
As customer service and customer experience professionals, we use the phrase Voice of the Customer (VoC) a lot. However, with so many digital channels selected and controlled by the customer, it is tough to monitor them all.
Customer comments in response to open questions are hugely valuable because they tell you what matters to customers. The challenge is reading them all. Artificial intelligence can be used to read, analyze, and report on customer feedback in real time, making it practical to cover every comment.
To remove any bias in the AI scores, we can use the Comparative Linguistic Analysis Score (CLAS) to analyze comments and feedback. This provides a more impartial view of the customer experience score based on how the customer “says” they feel and better represents the true Voice of the Customer.
Customers include complaints, updates, and other information in survey responses because they expect them to be read. So don't ignore them!
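The automated reading described above can be illustrated, very crudely, with a lexicon-based pass that tags each comment with recurring themes so the highest-volume topics surface for human review. This is a toy sketch only; the theme names and keywords are invented, and it bears no relation to Opinyin's CLAS method mentioned in this section.

```python
from collections import Counter

# Assumption: hand-picked keywords per theme (invented for illustration).
THEMES = {
    "speed":   ["slow", "quick", "wait"],
    "staff":   ["rude", "helpful", "friendly"],
    "pricing": ["expensive", "price", "cheap"],
}

def tag_comment(comment):
    """Return the list of themes whose keywords appear in the comment."""
    text = comment.lower()
    return [theme for theme, words in THEMES.items()
            if any(w in text for w in words)]

comments = [
    "Helpful agent but a long wait",
    "Too expensive for what you get",
    "Quick and friendly service",
]

# Tally theme mentions across all comments.
tally = Counter(t for c in comments for t in tag_comment(c))
print(tally.most_common())
```

Real tooling replaces the keyword lists with trained language models, but the output is the same shape: a ranked list of what customers are actually talking about.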
Know the real ‘why’ behind CX scores
It's important to know the true sentiment echoed in every piece of feedback you receive before acting on the CX scores. Opinyin, an easy CX plugin for feedback analytics, integrates with Freshdesk to give deeper insights into customer feedback.
Easy-to-use templates with NPS®, CSAT, and CES measurements are embedded directly into the Freshdesk platform for everyday use. Integrating Freshdesk and Opinyin gives you an advanced survey app that automatically reads, analyzes, and reports on how your customers are truly feeling.
Customer experience metrics have become a crucial tool for hearing and reacting to what matters to your customers. Yet programs easily morph into data collection projects that provide little valuable insight and an inaccurate assessment of staff performance. Making a conscious effort to accurately hear and act on the voice of your customers ensures both success and fairness to your team.