Customer Satisfaction Pilot Studies and Analysis
Basic Descriptive Statistics
Providing a basic description of the results can mean reporting averages, frequencies, percentages, or other descriptive numbers. Among the most common approaches is to create an average score. An average summarizes an entire set of numbers with a single value. For example, the average ACSISAT index score of 72.08 for State E is found by adding together the scores of the 456 people who responded and dividing by the number of respondents (see Table 1).
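The calculation described above is simply a sum divided by a count. As a minimal sketch, using a handful of hypothetical scores (the actual 456 State E responses are not reproduced here):

```python
# Hypothetical scores standing in for individual survey responses;
# the real State E data are not listed in this report.
scores = [68, 74, 79, 67]

# The average: add the scores together, divide by the number of respondents.
average = sum(scores) / len(scores)  # 72.0
```

With the full set of 456 responses, the same two steps yield the reported 72.08.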
An average can be misunderstood. Although we speak of averages constantly (e.g., the average American, the average customer), we may not be following the strict definition of a calculated value. This loose use of the term can dilute its meaning and lead to confusion: the average score may bear no relation to the attitude of the average customer.
An average does not tell us the range of responses. For example, the average of the four scores 6, 6, 6, and 6 is, of course, 6. In this case, the average perfectly reflects the attitude of the average customer. But the average of a second set of four scores, 2, 2, 10, and 10, is also 6. While each set averages 6, the responses themselves are very different, and in the second set the average reflects no one's attitude.
In the first set, all the responses were the same: everyone moderately agreed with the statement that they were treated with respect. In the second set, the responses were sharply divided: two people strongly disagreed with the statement while two people strongly agreed. In the first set, everyone appears to hold the same view of how they were treated; in the second, they do not. This lack of agreement raises the question: are some customer groups treated differently than others? The average alone (6) suggests the two sets are very similar, when they are actually quite different and call for two very different management responses.
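The two sets above can be distinguished by reporting a measure of spread alongside the average. A minimal sketch, using the sample standard deviation from Python's standard library:

```python
from statistics import mean, stdev

# The two illustrative response sets from the text:
uniform = [6, 6, 6, 6]      # everyone moderately agreed
polarized = [2, 2, 10, 10]  # two strongly disagreed, two strongly agreed

same_average = mean(uniform) == mean(polarized)  # both averages are 6
spread_uniform = stdev(uniform)      # 0.0 -- complete agreement
spread_polarized = stdev(polarized)  # about 4.62 -- sharply divided
```

A zero (or near-zero) spread signals consensus; a large spread flags exactly the kind of divided response pattern that the average alone conceals.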
In the same way, an average can be distorted by a single extreme response. The average of the four scores 6, 6, 6, and 42 is 15. The average of 15 suggests a high pattern of responses, when in fact three of the four responses were only moderate. The average portrays a set very different from reality.
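Reporting the median alongside the average makes this distortion visible, because the median is unaffected by a single extreme value. A minimal sketch with the scores from the text:

```python
from statistics import mean, median

# The extreme-response illustration from the text:
scores = [6, 6, 6, 42]

avg = mean(scores)  # 15 -- pulled upward by the single extreme value
mid = median(scores)  # 6 -- matches the typical response
```

When the average and the median disagree this sharply, the data warrant a closer look before any conclusion is drawn from the average.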
An average cannot always be reliably compared to other averages. The State E ACSISAT average cited above was 72.08 (see Table 1). State C's ACSISAT average is 71.89. A common mistake is to look at the difference between the two State averages and draw conclusions (i.e., State E is doing better than State C, or the two States are almost identical) when an average alone does not provide enough information to support either conclusion. Simply put, an average does not tell the reader whether the difference of 0.19 has any statistical or practical significance. That is, does a difference of 0.19 call for a program modification or additional staff training? If not, what would be the threshold for action? A difference of 2.5? 5.0? 10? An average, by itself, does not provide the level of information needed for responsible decisions about corrective action or reward.
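One way to judge whether a gap between two averages is meaningful is to compare it against a margin of error computed from the individual responses. The sketch below uses hypothetical scores (the real State E and State C responses are not reproduced in this report) and a normal approximation, which is adequate for large samples:

```python
from statistics import mean, stdev
from math import sqrt

def mean_diff_with_margin(a, b, z=1.96):
    """Difference of two sample means and an approximate 95% margin of
    error for that difference (normal approximation)."""
    diff = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return diff, z * se

# Hypothetical individual scores for illustration only:
state_e = [71, 73, 72, 74]
state_c = [70, 72, 71, 73]
diff, margin = mean_diff_with_margin(state_e, state_c)
# If abs(diff) < margin, the observed gap could easily be sampling noise.
```

A difference smaller than its margin of error, like the 0.19 above, does not by itself justify a conclusion that one State is doing better than the other.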
Table 1 Comments
Primary Audience. Despite its limitations, an average is appropriate for all audiences with a minimal amount of explanation. It is best used to give stakeholders and senior managers an overview of the system's relationship with its customers. Adding comments and anecdotes from customers often provides an appropriate level of detail for these audiences, whose interest is broad and general. A graphic representation of the distribution of scores that produced the average can also be extremely valuable (see Appendices A and B).