Business people generally believe that satisfying customers is a good thing, but they don’t necessarily understand the link between satisfaction and profits. This is partly because much of the original work was done so long ago that contradictory cases and nuances have allowed confusion to build up. Additionally, some companies have appeared successful for a time despite poor satisfaction, generally in industries where there is limited or no competition such as airlines.
I just came across an interesting issue with validation in an online survey using a Van Westendorp pricing model. Van Westendorp is one of the common ways to test pricing by directly questioning prospective purchasers. This post isn’t about Van Westendorp, also known as the Price Sensitivity Meter (you can find plenty of references online, including a starting point on Wikipedia) but you need to know a little to understand the issue. Survey respondents are asked a series of questions about price perceptions, as follows:
- At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would still consider buying it? (Expensive/High Side)
- At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)
- At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? (Too cheap)
- At what price would you consider the product to be a bargain—a great buy for the money? (Cheap/Good Value)
There is some debate about the order of questions, but in this example they were asked in the order shown, with wording that differed slightly from the versions above. Researchers are sometimes concerned about whether respondents understand the questions correctly, especially since the wording is so similar (the Expensive, Cheap, etc. designations are usually not included in the question as seen by a survey taker). One way to address this concern is to highlight the differences. Or you might point out that the questions are slightly different and encourage the respondent to read carefully.
The other approach is to apply validation that tests the numerical relationship. Correctly entered numbers should satisfy Too Cheap < Good Value < Expensive < Too Expensive. (We usually ask these questions on separate pages to get thoughts from respondents that are as independent as possible, rather than letting them see the questions as a group and make their answers consistent or nicely spaced.)
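The correct ordering check is simple to express in code. Here is a minimal sketch of what such validation logic might look like (the function and argument names are my own, assuming the four answers arrive as numbers):

```python
def valid_price_order(too_cheap, good_value, expensive, too_expensive):
    """True when the four Van Westendorp answers are in ascending order:
    Too Cheap < Good Value < Expensive < Too Expensive."""
    prices = [too_cheap, good_value, expensive, too_expensive]
    # Each answer must be strictly greater than the one before it.
    return all(a < b for a, b in zip(prices, prices[1:]))
```

For example, `valid_price_order(5, 10, 20, 30)` passes, while `valid_price_order(5, 10, 30, 20)` is rejected. A check written against the wrong question order, as apparently happened here, inverts one of these comparisons and rejects exactly the answers it should accept.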
In this case, the research vendor chose to validate, but messed up big-time. When I entered a value for ‘Too Expensive’ that was higher than the value for ‘Expensive’, I was told to ‘make sure your answer is smaller than or equal to the previous answer.’ Yes, they forced me to provide an invalid response! I hope they caught the problem before the survey had gathered all the completes, but maybe they didn’t, given how fast online surveys often fill. They probably had to field the survey again, because the pricing questions were integral to the research objectives.
Why did this happen, and how can you prevent a similar problem in your surveys?
My guess is that the underlying cause was that debate about question order that I mentioned earlier. The vendor probably had the questions switched when the validation was tested, and then changed the order before the survey was launched.
But the real message is that proper testing could have caught the issue in time to prevent a very expensive error. There is no excuse for what happened. This doesn’t even fall into the class of problems that a pilot or soft launch would be needed to catch.
So, test, test, and test again. In particular, test using people who aren’t research professionals or experienced survey takers.
If you are creating your own surveys, don’t let this kind of problem stop you. You can do just as good a job of testing as the big companies, and big companies aren’t immune. This survey was delivered by one of the top 10 U.S. market research firms. I won’t publish the company name here, but I’ll probably tell you if you catch me at one of my workshops (coming soon).
The Pew Research Center has tracked broadband adoption for several years; the most recent study shows that the rate of adoption growth has slowed. As of April 2008, 55% of adults in the U.S. have broadband access at home, with just 10% using dial-up connections.
As you might imagine, broadband usage is unevenly distributed. People living in rural areas are less likely to have a high-speed connection, as are lower-income households and African Americans (Hispanic broadband access is similar to the overall population). Notably, broadband adoption is now growing fastest among older age groups, including those 65 and above.
All very interesting you might say, but what’s the point for me as a researcher or marketer? When I dug deeper into the report, I found some nuggets about why broadband isn’t being used that lead to implications about research and product. Here are a couple of points to ponder:
- Some people say they don’t want broadband. Of course, availability and cost are issues for some, but 19% say that nothing would convince them to get broadband. I’m sure that some of the naysayers would actually become subscribers if they were to try broadband (most marketing still focuses on speed, ignoring the benefits of an always-on connection), but there will still be some who won’t make the move of their own volition. Slowing adoption rates confirm that these people aren’t just late adopters; they are laggards, and they will probably only convert when forced by suppliers. As we know from the Technology Adoption Life Cycle model, the stages correlate to different psychographic profiles. These people are different!
- Many of those who want broadband do not have access (particularly in rural areas), or cannot afford it. This has implications for the design and implementation of market research.
- Beyond the broadband versus dial-up split, 35% of adult Americans do not have any form of Internet access at home. The most significant demographic differences shown in the Pew report summary are age, income, and education – truly a digital divide.
Lessons for marketers and researchers
- You still need to consider bandwidth capabilities for online surveys. Perhaps your research topic is such that you don’t care if you deter dial-up users, but often you should be concerned about non-response bias, and in any case the things that improve surveys for lower bandwidth are good practice for all. In particular:
- Combine pages when it makes sense. We’ve all seen surveys that put every question on a new page for no good reason. Every page load takes more time for a dial-up user. Sometimes your logic requires a new page (but be careful when choosing a tool that you aren’t forced into poor practices just because of the tool), but it is usually possible to group a few questions together, with the result looking better to the respondent. Demographics and related satisfaction questions are good candidates, but try especially hard to make the front of the survey load well over a slower connection. Note that the advice to combine pages doesn’t just mean putting everything on a single page. Remember, you are trying to engage the respondent. Think of the survey as a conversation. A long single-page online survey can be very daunting, and almost as frustrating as endless clicks.
- Make your graphics small files. There’s nothing wrong with some graphical elements for branding or just to make the pages more interesting, but be sure to keep the files small. That great picture of the product was probably taken with a multi-megapixel camera, meaning the file is hundreds of kilobytes. But it doesn’t need to be very high resolution on screen, regardless of the speed of the connection; 72 dpi is probably plenty.
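The arithmetic behind this advice is worth seeing once. A back-of-the-envelope sketch comparing transfer times (the file size and link speeds are illustrative assumptions, ignoring protocol overhead):

```python
def transfer_seconds(file_kilobytes, link_kilobits_per_sec):
    """Approximate transfer time: kilobytes -> kilobits, divided by link speed."""
    return file_kilobytes * 8 / link_kilobits_per_sec

# A 300 KB product photo (a plausible size for an unoptimized image):
dialup_secs = transfer_seconds(300, 56)       # 56k modem: about 43 seconds
broadband_secs = transfer_seconds(300, 3000)  # 3 Mbps line: under a second
```

Forty-plus seconds per image is more than enough to make a dial-up respondent abandon the survey.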
- Avoid gratuitous or physically large graphics, for the same reason as the previous recommendation. Your respondents are doing you a favor, so don’t show them images just because you think they make the survey more interesting.
- Is an online survey really the best approach? Usually online surveys work very well, but don’t be blind to other techniques. Is your target market online? If your product is aimed at one of the demographics that are underrepresented online you may be increasing the potential for bias. Weighting and oversampling might be helpful, but could increase costs and you may miss out on insights from some of the key targets. Even if you are surveying existing customers (rather than using a panel) be aware of the potential for problems, especially if your email coverage isn’t representative. Perhaps telephone or mail data collection would allow you to better reach the full range of psychographic profiles.
- Match your marketing to your targets. No surprise here, but remember a couple of things. Your customers may be looking at your messages online, just not at home, by using the Internet in the library or at work. [Side note – speedy access is one reason why online shopping at work is popular at lunchtime.] Don’t alienate the less technically attuned consumers. Differentiated advertising through different media is probably a good idea.
As with many other aspects of research and marketing in general, the real message here is to think, not assume. Try to put yourself in the mind of your respondents, prospects, and customers.
I asked myself this question the first time I saw a survey invitation with the following warning:
The invitation continued with instructions to copy and paste the link into an Internet Explorer window if Firefox is my default browser.
Let’s look at this in more detail. To dispose of the title question first, the only obvious logical reason why someone fielding online surveys wouldn’t provide support for Firefox users would be if they were surveying people who don’t use it. Perhaps even that isn’t exactly logical, but at least it’s a reasonable excuse. If you are creating something that requires significant development effort, and you are screening for Internet Explorer users, why bother with Firefox?
Unfortunately, that theory doesn’t fit the situation. I’ve seen invitations with this warning for over a year, covering Consumer Packaged Goods and Retail Stores. I have yet to come up with a good reason, and the research company hasn’t offered me one.
But why is it such a bad idea?
First, Sample Bias. Systematically excluding a segment of the overall population you want to survey is generally a bad practice. It is easy to gather results that are biased, for reasons that may be obvious or less so.
Remember the days of telephone surveys? (I know, we are still collecting data via the telephone, but many people are only familiar with online surveys.) Best practices include calling at random times of the day and night, and letting the phone ring for quite a while. Why? To increase the chances of reaching respondents who work, and to improve coverage of people who might be elderly or infirm – and who might take longer to reach the phone. Without these measures, you might end up with a disproportionately large number of fit, stay-at-home respondents. Some corrections could be done with weighting, but that adds unnecessary complexity versus simply improving the representativeness of the sample in the first place.
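The weighting correction mentioned above scales each respondent’s answer by (population share ÷ sample share) for their group. A minimal sketch with invented numbers, assuming stay-at-home respondents ended up over-represented:

```python
def weighted_mean(values, weights):
    """Mean of responses after applying per-respondent weights."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical: stay-at-home respondents are 70% of the sample but only
# 40% of the population (workers: 30% of the sample vs 60%). Scores are made up.
w_home = 0.40 / 0.70   # down-weight the over-represented group
w_work = 0.60 / 0.30   # up-weight the under-represented group
scores = [8] * 7 + [4] * 3             # 7 stay-at-home answers, 3 worker answers
weights = [w_home] * 7 + [w_work] * 3
# The raw mean is 6.8; the weighted mean recovers the population mean of 5.6.
```

The correction works on average, but it amplifies the noise from the under-represented group – which is why a representative sample is preferable in the first place.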
In the case of excluding Firefox users, it would be a good idea to understand the potential impact through browser-share numbers. Unfortunately, this isn’t quite as easy as it might seem, which is probably why we see percentages ranging from 14% to several times higher for Firefox usage in the US. These differences are caused by data collection methodologies and also browser behavior, but this article isn’t about browser share, so let’s settle on an approximation of 20% user share for Firefox. So these surveys are systematically excluding about one fifth of the US population. I could easily come up with some imagined differences between Firefox users and users of other browsers, but fortunately there is some real research out there. comScore reported in 2007 on a study that looked into the differences between Firefox and Internet Explorer users. The results showed that Firefox users were more likely to be younger, higher income, and male than the average Internet user. Would this impact a project covering food items in the grocery store? You bet. comScore’s study also showed that Firefox users are more likely to have a broadband connection and that their site visitation profile varied from the average – which could impact advertising placement and content.
The other concern, although probably a lesser one in this case than sample bias, is Lower Response Rates. Without hard evidence we can only speculate on the impact, but it seems likely that some people who receive an invitation excluding Firefox will decide not to participate, even though they could do so fairly easily by starting Internet Explorer and pasting the link; the additional steps are a deterrent. Unfortunately, these particular surveys don’t even work when Firefox is switched to Internet Explorer rendering (a common workaround for sites that are not standards compliant). Longer term, continued invitations that are harder to use may drive more people to leave the panel.
In conclusion, make your survey sample as representative as possible, and don’t do anything in the invitation or the survey itself to turn people off.
One last note on this subject. The problem invitations specifically state that the surveys don’t work with Firefox. Even if Firefox is the only excluded browser, it represents over 20% of the overall market as of Dec 2008 according to Net Applications. It probably doesn’t make sense to invest in development for older browsers, but as Safari (7.9%) and Chrome (1%) usage grows the challenges for survey developers are going to increase. Overall, browsers other than Internet Explorer are currently about one-third of total usage.
5 Circles Research