Sunday, December 13, 2009

Survey Says…

Surveys and the resulting statistics have always fascinated me. I loved my probability and statistics class in high school. My part-time job as a telephone market research interviewer in my senior year of high school introduced me to survey design, including the concept of closed-ended and open-ended questions. We learned to prompt (“any other reasons you like this product?”) and probe (“how does it ‘work well’?”).

While my college education included classes in statistics and social research, I never took it any further…until this past October when I administered my very own survey.

Tell Me What You Think

I’ve been thinking about this for a few years now. My class in Communication Theory and Social Research prompted me to take the plunge. It was exciting to finally go through with it.

For the past 18 years, I’ve been involved in (some years, like this year, consumed by) running a fund raising event for a local animal shelter. We call it a “Bucket Auction” but it’s also known as a “Chinese Auction”, “Tricky Tray”, or “Penny Auction.” The event does reasonably well but we’ve hit a plateau in how much we raise. Even with gradual increases in admission and charging for previously free refreshments, there’s been no real increase in the proceeds.

Overall, it’s an impressive event. There are hundreds of prizes and we regularly attract participants from over an hour away from the event venue. But I’ve wondered what our participants think. Is there something we can do better? What do they think we do well already?

So, I developed a very short survey to assess how people feel. All of the information appeared on the front and back of a 4 by 6 index card. To encourage people to participate, we offered a drawing for a $25 gift card. The following images are the front and back of the card that was inserted in each participant’s program.

Front of Survey Card

Back of Survey Card

I attempted to assess the following:
  • How the participant found out about the event. This will help us in the future to focus our efforts on the most effective way to promote the event.
  • Whether or not the participant knew about the event website/blog and Facebook site. These were new efforts this year to create some excitement about the event.
  • What participants thought of the prices charged for admission, different ticket levels, and refreshments. I used a Likert-type scale to determine if the prices were too low, just right, or too high. (Remember, I had limited space.)
  • Whether or not they intend to return next year.
  • The most and least popular prizes. (A quick visual survey gave me some indication but it’s always good to ask.)
  • What improvements participants would like to see in the prize selection.
Of course, we also took the opportunity to get names and addresses to add the individuals to our mailing (paper and electronic) lists.

Our Exit Polls Show…

Based on a count of the cards compared to the tickets sold at the door, nearly 100% of participants submitted a card. However, not everyone responded to the questions. Some just wrote their names and put the cards in the box. Unfortunately, I’ve not had time to evaluate the results. (Manually inputting over 250 cards of information is quite time consuming.) This is on my list as a winter break project. But I can discuss a few issues.

A cursory review of the results (flipping through the cards) shows that we are generally on-track with these participants. They think the prices are reasonable. A few people made suggestions for improvements, but not a lot.

What’s Going On?

So, why aren’t we making more money, even after raising prices, if nearly everyone is happy? Well, this year there is the big “E” word – the ECONOMY. It could be that people have their limits when they walk in the door and they stick with them.

Another observation from the event itself is the perceived socio-economic status of our participants. I did not request income or other demographic information in the questionnaire, but general appearances would have me believe that most of the participants are from lower-middle-class households.

What would be interesting is to survey people who did not attend, particularly those with more disposable income. Why don’t they attend? Is it the venue – a high school cafeteria with hard benches attached to the tables? Is it the quality or value of the prizes? Is it a perception of the event as “lower class” or not socially acceptable? How can we get the bigger spenders to come out? Will that cause us to lose our “regulars”?

So What Does That Mean?

When you hear or read about reports from surveys, there are often words or phrases they throw around that typically don’t mean much to most people. The three most common are standard deviation, margin of error, and statistically significant.

The standard deviation is the average distance of a set of scores from the “mean,” or average. When the standard deviation is a smaller number, most of the scores are close to the average. There’s very little diversity or variability in the responses. When the standard deviation is a larger number, the scores are fairly well spread out. This represents more diversity or variability in the responses.
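To make that concrete, here’s a quick sketch in Python using made-up ratings (not my actual survey data): two sets of responses on a 1-to-5 scale with the same average but very different spreads.

```python
import statistics

# Two hypothetical sets of survey ratings on a 1-5 scale.
# Both average out to 3.0, but the spreads differ.
tight = [3, 3, 2, 3, 4, 3]   # responses cluster near the average
spread = [1, 5, 1, 5, 1, 5]  # responses are all over the scale

print(statistics.mean(tight), statistics.pstdev(tight))    # mean 3.0, small std dev
print(statistics.mean(spread), statistics.pstdev(spread))  # mean 3.0, large std dev
```

If you only looked at the averages, the two groups would appear identical; the standard deviation is what reveals that the second group disagrees with itself far more than the first.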

Because surveys generally do not include 100% of the population being studied, researchers usually attempt to acquire results from a representative sample of that population. Researchers typically report results at a 95% confidence level – they can be 95% confident that the interval around the sample result captures the value in the entire population. The margin of error tells us how far from the reported percentage the true population value is likely to be. So, if the margin of error is 3 points and our result is 45, the result in the total population is likely between 42 and 48. The more variable the responses (and the smaller the sample), the wider the margin of error.
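The standard formula behind that kind of statement is easy to sketch. Assuming a hypothetical poll where 45% of 1,000 respondents gave a particular answer, the margin of error at the 95% confidence level works out to about 3 points:

```python
import math

# Hypothetical poll: 45% of n = 1000 respondents chose some answer.
p = 0.45
n = 1000

# At the 95% confidence level, the critical value from the
# normal distribution is about 1.96.
z = 1.96
margin = z * math.sqrt(p * (1 - p) / n)

low, high = p - margin, p + margin
print(f"{p:.0%} +/- {margin:.1%} -> likely between {low:.1%} and {high:.1%}")
```

Note that the sample size `n` sits under a square root, which is why halving the margin of error requires roughly quadrupling the number of respondents.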

Results are statistically significant if there is less than a 5% chance that results at least that extreme would have occurred by chance alone.
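A classic illustration (not from my survey – just a textbook-style example) is testing whether a coin is fair. Getting 60 heads in 100 flips sounds lopsided, but the two-sided probability of a result at least that extreme from a fair coin is a bit above 5%, so it would narrowly fail the usual significance threshold:

```python
import math

# Hypothetical example: 60 heads in 100 flips of a supposedly fair coin.
n, k = 100, 60

def binom_tail(n, k):
    """P(X >= k) for X ~ Binomial(n, 0.5): sum the upper-tail probabilities."""
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n

# Two-sided p-value: count results at least this extreme in either direction.
p_value = 2 * binom_tail(n, k)
print(p_value)  # roughly 0.057 -- just above the 0.05 cutoff
```

So even a noticeably lopsided-looking result can fall short of statistical significance, which is exactly why the threshold exists.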

So, stay tuned for the results from my survey. Let’s hope we have statistically significant results with small margins of error and standard deviations.
