Measuring content health and knowledge satisfaction: Expert tips

In our recent webinar, The 5 Biggest KM Challenges You’ll Face, our presenters, Melissa Burch, Knowledge Strategist at Irrevo, and Brian Bombard, Senior Director of Proactive Support Delivery for Oracle Applications, agreed that measuring success and content health are both critical to a knowledge management strategy. During our Audience Q&A, they elaborated further:

Q: How do you measure content health?

Brian Bombard:

We have an initiative where we’re targeting the top 50 documents. Believe it or not, those top 50 articles represented about 8% of the total use by customers and engineers, so it was an area where we could start to dig in. To make sure that content stays healthy, we went back and got creative in a KM session.
We were able to pull the SRs where each of those documents was viewed. We would look to see if there was anything that piece of content needed. Often it might be missing a symptom, or maybe the title isn’t as representative as it could be. So we would work on those findability aspects, and then any time an engineer told us that a particular article or document solved the SR, we would go back to the SR and look at why the customer wasn’t able to find that document on their own.
There’s a lot of work we’re doing around the top docs, but from a content health perspective we also look at the whole process. Are we hitting our targets for creating content? Are we getting content to customers? Are we processing customer comments and feedback to make sure we’re addressing them within the first seven days? We also pay a lot of attention to currency. We want to make sure the content we’re giving our user base is as fresh as possible. When you look at all of these together, you can assess your content health and get a sense of the state of the program.
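
To make those process checks concrete, here’s a minimal sketch of how content health flags like these could be tracked. The field names, the seven-day feedback window, and the one-year freshness threshold are illustrative assumptions, not Oracle’s actual implementation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative thresholds -- assumptions, not Oracle's actual targets.
FEEDBACK_SLA_DAYS = 7        # address customer feedback within the first seven days
FRESHNESS_LIMIT_DAYS = 365   # treat content not reviewed in a year as stale

@dataclass
class Article:
    title: str
    last_reviewed: date
    open_feedback_ages_days: list  # age in days of each unaddressed comment

def health_flags(article: Article, today: date) -> dict:
    """Return simple pass/fail flags for one article's content health."""
    return {
        "feedback_within_sla": all(age <= FEEDBACK_SLA_DAYS
                                   for age in article.open_feedback_ages_days),
        "content_is_current": (today - article.last_reviewed)
                              <= timedelta(days=FRESHNESS_LIMIT_DAYS),
    }

# Example: one article with a comment that has been waiting ten days.
doc = Article("Resetting a password", date(2015, 3, 1), [2, 10])
print(health_flags(doc, date(2016, 1, 15)))
# -> {'feedback_within_sla': False, 'content_is_current': True}
```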

Q: How do you measure knowledge satisfaction?

Brian Bombard, Oracle:

We use surveys. One type that we use is called a transactional survey. After a customer’s service request is closed, they receive a survey that asks questions about KB effectiveness and also the findability of the content. We’ve also done quarterly surveys to help us track satisfaction. Our knowledge base also lets customers give us a thumbs up/thumbs down response to the content. I’ll be honest, we probably get that type of feedback only 0.01% of the time, so generally we rely on the other two methods.

Melissa Burch, Irrevo:

Adding to what Brian said, to measure satisfaction with the content you will more than likely need a multi-pronged approach. Thumbs up/thumbs down alone probably won’t give you enough data points to be effective, but you certainly wouldn’t want to eliminate it as an option. You’ll need to identify other avenues for gathering data so that you have a data set robust enough to act on.
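
To illustrate that multi-pronged approach, the sketch below blends the three feedback channels Brian described into a single knowledge satisfaction figure. The weights, the 4-or-5 “satisfied” cutoff, and the function names are assumptions for demonstration, not a formula either speaker prescribed.

```python
def knowledge_satisfaction(transactional_scores, quarterly_scores, thumbs_up, thumbs_down):
    """Blend several feedback channels into one satisfaction percentage.

    transactional_scores / quarterly_scores: survey ratings on a 1-5 scale.
    thumbs_up / thumbs_down: raw counts of in-article votes.
    Weights are illustrative; channels with more responses could be weighted higher.
    """
    def pct_satisfied(scores):
        # Treat a 4 or 5 as "satisfied" -- an assumed cutoff.
        return 100 * sum(s >= 4 for s in scores) / len(scores) if scores else None

    signals = {
        "transactional_survey": (pct_satisfied(transactional_scores), 0.5),
        "quarterly_survey": (pct_satisfied(quarterly_scores), 0.4),
        "thumbs": (100 * thumbs_up / (thumbs_up + thumbs_down)
                   if (thumbs_up + thumbs_down) else None, 0.1),
    }
    # Ignore channels with no data and renormalize the remaining weights.
    available = {k: v for k, v in signals.items() if v[0] is not None}
    total_weight = sum(w for _, w in available.values())
    return sum(score * w for score, w in available.values()) / total_weight

print(round(knowledge_satisfaction([5, 4, 3, 5], [4, 2, 5], 7, 3), 1))  # ~71.2
```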

Further Insight

If you missed the live broadcast of The 5 Biggest KM Challenges You’ll Face, you can watch our presentation on-demand.

Metrics that Matter: Going Beyond CSAT and NPS

If you’re behind the Customer Service Operations curtain in any capacity, you’ve got strong feelings about which metrics tell the most comprehensive, actionable story of your customer experience. It’s easy to fall into a routine of looking at only a few of the heavy-hitting metrics, but to get a clear picture of your organization’s performance, it’s important to ask yourself what stories you might be missing out on by overlooking some of the alternatives to perennial favorites like CSAT and NPS.

We asked a few experts in the customer experience industry to tell us a little about their favorite metrics:


Melissa Burch, Senior Knowledge Strategist, Irrevo

I am a huge proponent of using a balanced scorecard approach to measurement, with a small number of critical data points to monitor. If you talk to me long enough, you will realize that I don’t recommend narrowing your measurement scope to just one data point. However, one measurement that is often left off a balanced scorecard is the self-service success score. Our friends at Oracle recently shared their approach to this. I’ve summarized it here and encourage you to learn it and then apply the measurement approach that works for you.

To do this, you need two data points to derive your success score. The first data point is the number of times customers visit your support site and view at least one piece of information there. Don’t forget to include content, community forums, and all other self-service options provided. This is the total number of times your customers attempted self-service during the period of your calculation.

The second piece of information is the self-service success rate. This is calculated from data captured in your customer support site survey, which asks users whether they were successful in finding the information they were looking for. The responses to this survey question give you your success rate.

Once you have these two data points, multiply the number of self-service attempts by the success rate to get your overall self-service success score. For example, if your customers attempt self-service 1,000 times per month and you know they are successful 50% of the time, then you have a self-service success score of 500. This means your self-service support offerings gave customers what they needed 500 times during that month.
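
Here is that same arithmetic as a small sketch, using the numbers from the example above; the function name is just illustrative.

```python
def self_service_success_score(attempts: int, success_rate: float) -> float:
    """Multiply self-service attempts by the survey-reported success rate."""
    return attempts * success_rate

# Example from above: 1,000 attempts per month at a 50% success rate.
print(self_service_success_score(1_000, 0.50))  # 500.0
```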

After establishing your baseline self-service success score, identify improvements to your self-service offerings that drive greater value to your customers. Use the self-service success score to monitor your impact.

Tola Begbaaji, Discrete ERP Customer Solutions, Aptean

I don’t know that there is one metric. For me it is a triad: a combination of customer satisfaction, time to resolve, and time to respond. Put another way, customers want Q2R: a quick, quality resolution. If a team is resolving issues quickly with quality, then typically their customers are satisfied. Each element is important. A resolution is not just an answer to the ticket; it is something that resolves the customer’s problem. It’s not enough to be quick, because speed is irrelevant when the solution is not high quality. Quality of course speaks for itself, but the term implies completeness and thoroughness.

These three metrics help you evaluate whether a support team is providing quick, quality resolutions on a regular basis. If issues are being resolved quickly with quality, then customer satisfaction will tend to be high. You can review the trends of time to respond and time to resolve to see whether they are holding steady, increasing, or decreasing. For example, if time to resolve and time to respond are increasing, you would expect to see a corresponding decline in CSat. If you don’t see that CSat decline, it is a warning sign that the team may have a large backlog and is primarily closing only the newest tickets.
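
As a rough illustration of that warning sign, the sketch below checks the direction of the three trends together. The inputs and the simple greater-than-zero thresholds are assumptions for demonstration, not a metric Aptean publishes.

```python
def backlog_warning(resolve_trend: float, respond_trend: float, csat_trend: float) -> bool:
    """Flag the pattern described above: resolution and response times are
    climbing, but CSat is not falling -- which may mean the team is only
    closing the newest tickets while a backlog grows.

    Each *_trend is the period-over-period change (positive = increasing).
    """
    times_worsening = resolve_trend > 0 and respond_trend > 0
    csat_holding = csat_trend >= 0
    return times_worsening and csat_holding

# Example: handle times up 15% and 10%, CSat flat.
print(backlog_warning(0.15, 0.10, 0.0))  # True
```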

So, in order to make the triad into one metric, I might call it something like CSaTimeToResolveRespond.  That would make a nice hashtag, don’t you think?  #csatimetoresolverespond

Laurel Poertner, Knowledge Strategist, Irrevo

A tried-and-true metric that gets a lot of attention from C-level executives is Customer Satisfaction. While I think this metric has its place in the corporate world, I believe it is short-sighted; customer loyalty depicts the customer landscape for an organization much more clearly. Customer loyalty, defined as the customer’s intention to keep doing business with the company, increase the amount they spend, or spread positive word of mouth, can be measured using the Customer Effort Score (CES). CES asks the question, “How much effort did you personally have to put forth to handle your request?” The reason this is such an important metric is its power to predict your customers’ intentions. The more you can see what is coming, the more time you have to prepare your organization to head off future issues. CES can also spur companies to implement new systems and processes, making improvements that directly impact the customer experience and ultimately drive higher customer loyalty.
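
As a minimal sketch, assuming CES responses are collected on a 1-5 scale where 1 means very little effort, the score is simply the average response:

```python
def customer_effort_score(responses):
    """Average the survey responses to get a CES.

    Assumes a 1-5 scale where 1 = very little effort and 5 = a great deal
    of effort, so a lower average is better.
    """
    return sum(responses) / len(responses)

print(customer_effort_score([2, 1, 3, 2, 4]))  # 2.4
```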

Elias Parker, Managing Partner, OneReach

There are a lot of call center metrics out there that measure customer satisfaction—CSAT, Net Promoter Score (NPS), even First Call Resolution (FCR). The way they measure satisfaction is different, but the end result is the same—to make sure the customer is happy. But what if I told you there was a call center metric that could do nothing but measure a customer’s happiness, one that tracked their satisfaction through every interaction? There is, and it’s called the Smiley Face Index (SFI).

The Smiley Face Index isn’t meant to displace other, more complex call center metrics. Rather, it’s just a simpler way of looking at customer satisfaction that we sometimes need. It literally measures the number of smiley faces exchanged by a customer and agent during a web-chat or text message interaction. Simply put, more smiley faces = more happy customers.
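
Since the metric is literally a count of smiley faces, a minimal sketch might look like this; the emoticon list and the per-conversation averaging are assumptions, not OneReach’s exact definition.

```python
import re

# Emoticons and emoji to count -- an assumed, non-exhaustive list.
SMILEYS = re.compile(r"(:\)|:-\)|:D|🙂|😀|😊)")

def smiley_face_index(chat_transcripts):
    """Average number of smiley faces exchanged per chat or text conversation."""
    counts = [len(SMILEYS.findall(transcript)) for transcript in chat_transcripts]
    return sum(counts) / len(counts) if counts else 0.0

chats = [
    "Customer: thanks so much! :)  Agent: happy to help 🙂",
    "Customer: still broken :(  Agent: let me take another look",
]
print(smiley_face_index(chats))  # 1.0
```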

Learn more about the Smiley Face Index.

Matt Berger, Content Strategist, MindTouch

The one metric that support teams can’t ignore (but so many do) is organic web traffic – the measure of how many people find your help content through Google. On the surface, this seems to fall into the marketing realm, but organic web traffic will help turn support from a cost center into a revenue generator. Support teams are always looking to lower support costs by reducing customer effort and increasing ticket deflection. Our customers see a distinct correlation between increased organic web traffic and ticket deflection: more web traffic means fewer tickets, which in turn means lower support costs. By analyzing the amount of traffic, as well as the demographic, geographic, and behavioral data around that traffic, support teams can continue to fine-tune the self-service experience.
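
If you want to check that relationship against your own data, a quick sketch like the one below compares monthly organic visits with ticket volume; the figures are hypothetical, not MindTouch customer data.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical monthly figures -- not real customer data.
organic_visits = [12_000, 15_500, 18_200, 21_000, 24_300, 27_900]
tickets_opened = [980, 940, 890, 860, 820, 790]

# A strongly negative coefficient (here, about -0.99) is consistent with
# the deflection story: as organic traffic rises, ticket volume falls.
print(round(correlation(organic_visits, tickets_opened), 2))
```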

Our customers have also noticed that opening help content to the online public has the added benefit of bringing buyers to this information. Web traffic analytics can help inform sales and marketing teams about buyer behavior. Those little FAQs you thought you had to produce are the key to understanding how to better serve both buyers and customers. Through online self-service, support teams aren’t just supporting current customers, they’re supporting the buyer’s presale experience as well.

Learn more about how organic web traffic supports your business goals.