Our financial services clients continually struggle with how to measure success on their online service sites, as opposed to their prospecting sites. Online service sites often lack the definitive success criteria that a prospecting or e-commerce site has.
On an e-commerce site, selling a product is a success. On a prospecting site, getting a prospect to sign up or enroll is a success. However, on online service sites, where customers do tasks such as managing their investments, bank accounts, or credit cards, it’s different. Is it better if customers come to the site more often or less often? It depends. Is it better if customers process more transactions? It depends.
What really matters is how satisfied customers are with the online service. One way to judge satisfaction is by customer loyalty: do your customers continue to do their online processing over time? The time frame necessary to tell depends on the service. Depending on the purpose of a financial services account, a loyal customer may log on daily, weekly, monthly, or just twice a year. In some cases, account balance is an important success criterion, but it requires merging online with offline information, which in many cases is difficult to do.
Another method of measuring customer satisfaction is the direct method: just ask them. We have a large contingent of financial services clients who use online satisfaction surveys to see how well they’re doing with their online services, and it makes good sense. Surveys are inexpensive and flexible, and they have rapid turnaround. However, they can be misleading for several reasons.
First, there is the sample composition issue. Most online satisfaction surveys sample visitors at random, so the responses reflect the composition of online visitors. That sounds good, but it can mislead. Consider an online service site where 20% of the visitors generate 80% of the revenue, which means the other 80% of the visitors generate only 20% of the revenue. In many cases these two segments have very different online requirements. If they do, it makes the most sense to target site enhancements at the 20% of customers who generate 80% of the revenue. If you do that and the enhancements are successful, it’s very possible that your online satisfaction rating will go down. In fact, that’s what happened with one of our clients.
They made significant improvements for their most profitable customers, and their satisfaction ratings dropped. Why? Because the changes actually made the site more difficult to use for their more numerous but less profitable customers. Fortunately, we had enough information on survey respondents that we were able to segment the survey responses by customer profitability.
Segmented by profitability, the results showed that satisfaction increased for the most profitable customers even though it decreased overall. We are now developing a measurement system that weights each individual response by profitability, so the website owner can determine whether the increase in satisfaction among the most profitable customers is enough to offset the decrease among the least profitable segment.
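To make the arithmetic concrete, here is a minimal sketch of that kind of weighting, assuming each survey response can be tied to the revenue its customer generates. The scores, revenue figures, and segment cut-off below are all hypothetical.

```python
# Sketch: segmenting survey scores by customer profitability and
# computing a profitability-weighted overall satisfaction score.
# Scores, revenue figures, and the segment cut-off are hypothetical.

from statistics import mean

# Each response: (satisfaction score on a 1-10 scale, annual revenue)
responses = [
    (9, 5000), (8, 7200), (9, 6100),          # high-profit customers
    (5, 300), (4, 450), (6, 250), (5, 380),   # low-profit customers
]

CUTOFF = 1000  # hypothetical revenue threshold between the two segments
high = [score for score, rev in responses if rev >= CUTOFF]
low = [score for score, rev in responses if rev < CUTOFF]

print(f"High-profit segment mean: {mean(high):.2f}")
print(f"Low-profit segment mean:  {mean(low):.2f}")
print(f"Unweighted overall mean:  {mean(s for s, _ in responses):.2f}")

# Revenue-weighted mean: each response counts in proportion to the
# revenue its customer generates, so a gain among profitable customers
# can be judged against a loss among the rest.
total_revenue = sum(rev for _, rev in responses)
weighted = sum(score * rev for score, rev in responses) / total_revenue
print(f"Revenue-weighted mean:    {weighted:.2f}")
```

On this toy data the unweighted mean sits near the low-profit segment because those customers dominate the sample, while the revenue-weighted mean tracks the high-profit segment.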
Segmentation, segmentation, segmentation. The more we work with satisfaction survey results, the more we realize the importance of meaningful segmentation. In many cases, length of usage is an important segmentation variable for understanding customer satisfaction with online services. Long-time site users often react differently than new users to site changes. This raises the question of what to do.
It’s a tough question to answer. Long-time customers may have larger account balances and may appear to be more profitable. However, new customers may be “checking out” services and have considerable long-term potential and a higher lifetime value. If satisfaction survey results differ by length of usage, we recommend developing a “first-time user experience” that is distinct from the long-term user experience and addresses first-time users’ concerns. Other segments we have found useful in evaluating satisfaction survey results include frequency of usage and usage of other similar services.
Another important factor in satisfaction surveys is outages. It sounds obvious: the more a site is down, the lower the satisfaction ratings. However, it’s not always so clear cut. We have a client who looked at satisfaction survey results on the two or three days following a site outage and could find no significant correlation between site outages and user satisfaction. Did users really not care whether the site was up or down? To find the answer, we looked at outages and satisfaction over a six-month period and made a few adjustments.
Since this site had very low weekend usage, we discounted weekend outages. Next, we allocated outages that occurred at the end of a month between the month in which they occurred and the following month, since a late-month outage also colors responses collected early in the next month. Finally, we took into account the cumulative number of outages, on the theory that users remember earlier downtime. With these few adjustments, we found an extremely high correlation between site outages and user satisfaction. In this particular case, the best use of web resources was not site redesign but improving site availability.
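Here is a rough sketch of those adjustments, assuming monthly satisfaction scores and a timestamped outage log; all dates, durations, and scores below are invented for illustration.

```python
# Sketch: adjusting an outage log before correlating it with monthly
# satisfaction scores. All dates, durations, and scores are invented.

from datetime import date
from statistics import correlation  # requires Python 3.10+

outages = [                      # (outage date, hours down)
    (date(2024, 1, 10), 4), (date(2024, 1, 31), 6),
    (date(2024, 2, 14), 2), (date(2024, 3, 29), 5),
    (date(2024, 4, 6), 8), (date(2024, 4, 8), 3),
    (date(2024, 5, 20), 7),
]
satisfaction = {1: 7.9, 2: 7.6, 3: 7.8, 4: 7.1, 5: 6.8, 6: 7.0}

adjusted = {month: 0.0 for month in satisfaction}
for day, hours in outages:
    if day.weekday() >= 5:       # discount weekend outages (low usage)
        continue
    if day.day >= 28 and day.month + 1 in adjusted:
        # allocate an end-of-month outage between the month it
        # occurred in and the following month
        adjusted[day.month] += hours / 2
        adjusted[day.month + 1] += hours / 2
    else:
        adjusted[day.month] += hours

# Cumulative outage hours, on the theory that users remember downtime
months = sorted(adjusted)
cumulative, running = [], 0.0
for month in months:
    running += adjusted[month]
    cumulative.append(running)

scores = [satisfaction[month] for month in months]
print("correlation:", correlation(cumulative, scores))
```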
What’s the next step in improving the value of online satisfaction surveys? Well, I’m segmentation prejudiced, so I recommend to all of our clients that use or want to use online satisfaction surveys that we work with them to integrate their surveys into their measurement systems, so they can segment survey results by meaningful, actionable segments. One thing to be careful about as you start looking at different segments is sample size. Are the segmented samples big enough for meaningful conclusions? If not, you may want to consider a larger total sample, or serving the survey to random samples of specified segments.
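As a quick check, the standard margin-of-error arithmetic for a proportion (for example, percent satisfied) gives a feel for whether a segment’s sample is big enough. The segment names and counts here are hypothetical.

```python
# Sketch: flagging segments whose samples may be too small for
# meaningful conclusions. Segment names and counts are hypothetical.

import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

segments = {"high-profit": 60, "new users": 140, "long-time users": 800}

for name, n in segments.items():
    moe = margin_of_error(n)
    flag = "  <-- consider a larger sample" if moe > 0.05 else ""
    print(f"{name:16s} n={n:4d}  +/-{moe:.1%}{flag}")
```

A segment of 60 respondents carries a margin of error over twelve percentage points, which is usually too wide to act on.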
When all is said and done, customer satisfaction can be measured. But as with all measurement, some upfront planning, attention to detail, and an understanding of your customers make the difference between meaningful and misleading results.