Customer Satisfaction Pilot Studies and Analysis
Response rates provide important context for interpreting the results
of any survey. A customer survey tries to capture the true range of all
customers' experiences, but we rarely have the resources to survey every
customer, so a sample is randomly drawn
from the total customer population. Because everyone has an equal chance
of being chosen for inclusion in the sample, the survey results are assumed
to be similar to the results obtained if the total population had been surveyed.
Response rates strongly affect the basic assumption of randomness and
therefore the assumption that the sample's satisfaction is similar to
the population's satisfaction. There are two possible situations in which
an individual does not respond to the survey. First, those who do not
respond are like those who do respond in every respect, except that they
happen to be unavailable or unwilling to respond. Second, those who do not
respond differ systematically from those who do respond, and those differences
are related to their unavailability or unwillingness to respond.
It is the second situation that is far more likely and of great concern.
Numerous studies have shown that low response rates do alter the results
of surveys, presenting an inaccurate picture of the population. The sample
goes from being random to one made up of individuals who self-select into
or out of the survey, and those who self-select into the survey are no
longer representative of the population.
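As a hypothetical illustration (the numbers below are assumptions, not drawn from the pilot data), a short simulation shows how self-selection distorts a survey estimate when willingness to respond is related to satisfaction:

```python
import random

random.seed(1)

# Hypothetical population of satisfaction scores on a 1-7 scale.
population = [random.choice([1, 2, 3, 4, 5, 6, 7]) for _ in range(10_000)]

# Assumption: dissatisfied customers are less willing to respond,
# so response probability rises with the satisfaction score.
def responds(score):
    return random.random() < 0.2 + 0.1 * score

sample = random.sample(population, 1_000)          # random draw from the population
respondents = [s for s in sample if responds(s)]   # self-selected subset

true_mean = sum(population) / len(population)
observed_mean = sum(respondents) / len(respondents)
print(f"population mean: {true_mean:.2f}")
print(f"respondent mean: {observed_mean:.2f}")
```

Because satisfied customers respond more often, the respondent mean overstates the population mean; this is the sense in which the self-selected sample is no longer representative.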
All of the pilot states were asked to achieve a minimum 50 percent response
rate to guard against the problem of nonrepresentativeness. Nearly all
of the states achieved rates just below or above 50 percent for participants.
For example, in State A, which had both JTPA and Job Service participants,
the response rate was 46% for JTPA respondents but only 26% for Job Service
respondents. The response rate for those with valid phone numbers was
58.5%. This second number gives a more useful sense of the degree of self-selection.
Other states had similar customer response rates. Employer response rates
were somewhat lower in State A, which had 44.3% of employers with valid
phone numbers responding.
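The two State A figures use different denominators. Using illustrative counts (the report gives only percentages, so the raw numbers below are assumptions chosen to approximate the reported rates), the calculation looks like this:

```python
# Illustrative counts only; the report gives percentages, not raw numbers.
sampled = 1000       # customers drawn into the sample
valid_phone = 445    # of those, reachable at a valid phone number
completed = 260      # completed surveys

overall_rate = completed / sampled        # denominator: everyone sampled
reachable_rate = completed / valid_phone  # denominator: reachable customers only

print(f"overall response rate:         {overall_rate:.1%}")
print(f"rate among valid phone numbers: {reachable_rate:.1%}")
```

The rate among those with valid phone numbers isolates refusal from unreachability, which is why it gives a better sense of self-selection among those actually offered the survey.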
The response rates from the pilots may not accurately represent response
rates for WIA, however. First, the respondents were drawn from a different
program population (JTPA and in some cases Wagner-Peyser). Second, the
respondents in these studies were contacted longer after exit (90 days
and often much longer) than they would be under WIA (within 60 days).
However, these results do point to some important considerations. Staff
in One-Stop centers must obtain good contact information and alternative
phone numbers if possible. This is particularly important for employers,
since response rates will suffer if the individual in the employer's organization
who actually received the services being addressed in the survey cannot
be contacted. Staff can also help with response rates by explaining that
customer surveys are regularly conducted and that the participant or employer
may be contacted. They can further explain that surveys are conducted
because the One-Stop center is concerned with the quality of its services
and the satisfaction of its customers. Setting up the survey in this way during the final
contact with the customer is an important means of reminding both staff
and customers that customer satisfaction is important to the One-Stop center.