Surveys can be powerful tools for public relations teams as well as product marketing teams. They can serve a dual purpose: demonstrating thought leadership and collecting data that provides insight into a particular market segment or an audience’s stance on an issue.
But a survey developed for PR purposes (say, to elicit “fun facts” to weave into contributed articles, blog posts, and media pitches) may not produce the quantitative and qualitative insights a product marketing team needs to refine products and services. If you simply want a data point to validate a marketing claim, it may be more effective to find publicly available data or to pay a third party for reuse rights.
Before developing a survey, it’s important to agree up front on the intended purpose and on whether your goals call for hard data or softer, headline-friendly findings, as that will guide the questions and format and determine the necessary size of the survey respondent pool. Those choices in turn affect the cost and the final product: report, slides, standalone graphics, landing page, etc.
Surveys require a lot of effort to get right. In developing PR-focused surveys for clients, I have found three key stumbling blocks:
- Survey pool sizes (and traps)
- Questionnaire development
- DIY versus tapping a vendor (covered in Part 2)
How big is the pool?
For your survey to be viewed as credible, you need to be transparent about the size of the survey pool and the source of the respondents. That’s why a press release, article, or slide deck about survey results will have fine print disclosing the sample size and demographics.
If your survey data is collected from a group that isn’t a good approximation of the population as a whole, it may be biased. When a survey vendor analyzed a corpus of press releases to determine common sample sizes for PR-focused surveys, the median was about 1,000. Note that the margin of error depends almost entirely on the sample size, not the population size: 1,000 respondents yield roughly the same margin of error in Australia (population 26 million in 2021) as in the U.S. (332 million). What changes with the audience is how difficult it is to recruit a sample that truly represents it.
Business-to-business (B2B) surveys of highly targeted audiences (say, IT professionals at North American companies with a minimum of 1,000 employees) typically have smaller sample sizes than surveys of general consumers, voters, or employees. The same survey vendor found a median of 377 respondents for B2B surveys vs. 1,032 respondents for business-to-consumer (B2C) surveys.
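To put numbers on that, here’s a minimal sketch in Python of the standard margin-of-error calculation at a 95% confidence level, including the finite population correction. The country populations are the figures cited above; the 1,000,000-person B2B audience is a made-up assumption for illustration.

```python
import math

def margin_of_error(sample_size: int, population: int, z: float = 1.96) -> float:
    """Worst-case (p = 0.5) margin of error at 95% confidence,
    with the finite population correction applied."""
    moe = z * math.sqrt(0.25 / sample_size)
    fpc = math.sqrt((population - sample_size) / (population - 1))
    return moe * fpc

for label, n, pop in [
    ("U.S. consumers, n=1,000", 1_000, 332_000_000),
    ("Australian consumers, n=1,000", 1_000, 26_000_000),
    ("B2B IT pros, n=377", 377, 1_000_000),  # hypothetical audience size
]:
    print(f"{label}: ±{margin_of_error(n, pop):.1%}")

# U.S. consumers, n=1,000: ±3.1%
# Australian consumers, n=1,000: ±3.1%
# B2B IT pros, n=377: ±5.0%
```

Notice that the two consumer surveys have virtually identical margins of error despite the very different country populations; the correction only starts to matter when your sample is a meaningful fraction of a small, targeted audience.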
The trap of Simpson’s paradox
Be wary of pooling survey results to get “global” or “multi-country” results. Simpson’s paradox, also known as the amalgamation paradox, is a phenomenon in probability and statistics in which a trend appears in several different groups of data but reverses or disappears when these groups are combined.
Comparing results across countries is interesting for the cultural perspective, but consolidated, averaged data may not accurately represent the sentiment of any one country’s population.
For example, perhaps you do business in Latin America as well as the U.S., and your product marketing team wants a survey reflecting a Latin American market similar in size to the U.S. market. You might survey the same number of people in three countries of vastly different sizes, say Mexico, Colombia, and Brazil. If you look at the data country by country, there may be significant differences among the three, as well as in comparison to the United States. But when you pool the data across the three countries into an overall “Latin American” result to compare to the U.S., the combined number can be very misleading, which in turn can lead to poor decisions about any product aimed at those countries.
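To make the paradox concrete, here’s a minimal sketch in Python. Every count is hypothetical, and the small-versus-large-company segment split is invented purely for illustration; the point is the structure of the reversal, not the figures.

```python
# region -> segment -> (respondents agreeing, respondents asked)
# All counts are hypothetical, chosen to show the reversal.
data = {
    "Latin America (pooled)": {"small cos.": (81, 87),   "large cos.": (192, 263)},
    "United States":          {"small cos.": (234, 270), "large cos.": (55, 80)},
}

for region, segments in data.items():
    total_yes = total_asked = 0
    for segment, (yes, asked) in segments.items():
        total_yes += yes
        total_asked += asked
        print(f"{region} / {segment}: {yes / asked:.0%} agree")
    print(f"{region} / overall: {total_yes / total_asked:.0%} agree\n")

# Latin America (pooled) leads in BOTH segments (93% vs. 87% among small
# companies, 73% vs. 68% among large ones) yet trails overall (78% vs. 83%),
# because the two samples contain very different mixes of company sizes.
```

If you must report a regional number, a population-weighted roll-up is usually more defensible than simply pooling equal-sized samples from countries of very different sizes.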
That is the question
I have learned the hard way that the more time and analysis you put into developing the survey questionnaire, the more useful the data will be.
Screening questions. Decide how finely you want to slice and dice your data, depending on whether you’re using the data for general PR visibility or for true market research. You’ll want some screening questions, but do you need to know age, gender, race, geographic region, company size? Those questions “count” toward the number of questions you are asking. You don’t want the survey to be too long, as the participant may get frustrated and quit. You may also be paying for a certain number of questions.
Yes/No questions. A survey you’re conducting for PR purposes should produce bigger extremes in responses in order to yield more headline-worthy numbers, while a survey designed to produce true marketing insight may need questions that elicit more nuanced responses. Yes/no, black/white questions that force people to choose a single option (rather than multiple-choice or carousel “choose all that apply” responses) produce bigger numbers for stronger statements. If 30% said Yes to a binary question, you can accurately state that 70% said No; with more options, that same 70% might fragment into 40% No, 15% Sometimes, and 15% Frequently. Binary questions make it easier to “reverse the math,” flipping a statement from a negative to a positive to get a headline-worthy number.
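As a quick illustration, here’s a sketch in Python of the “reverse the math” point, using the hypothetical percentages above.

```python
binary = {"Yes": 0.30, "No": 0.70}
multi = {"Yes": 0.30, "No": 0.40, "Sometimes": 0.15, "Frequently": 0.15}

# With a binary question, the complement is itself a valid headline.
print(f"{1 - binary['Yes']:.0%} said No")  # 70% said No

# With more options, 1 - Yes lumps three different answers together,
# so the flipped statement must be labeled much more carefully.
not_yes = 1 - multi["Yes"]
print(f"{not_yes:.0%} did not give an unqualified Yes")  # 70%, but weaker
```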
Avoid negative assumptions. Try to avoid questions that lead to twisty logic and false assumptions. For example, “Which of these options do you dislike the most?” or “Which do you like the least?” assumes all of the options are disliked, so you can’t assume the reverse statement is true (that people like option X the most). It’s better to ask positive questions (“Which do you like the most?” or “Rank these in order of preference”) and then treat the lowest-ranked option as the most disliked/least preferred/least favorite.
Question language and flow. Your questions should follow a natural, logical order and build on each other to help steer the respondent’s thought process. Sometimes a new question is added late in the development process even though a related question already appears elsewhere in the survey; if you want to keep both, it makes sense to have them appear consecutively. And be sure you’re using consistent terminology, or have a reason for varying it, for example, “customer support” versus “customer service” versus “customer experience.”
Second-guess yourself. Review each question in the survey and consider, “Why are we asking this? Who wants this info? What will the resulting statement be, and will it be interesting or useful?” As noted above regarding screening questions, if you don’t have a good reason or don’t know how the data will be used, don’t bother with the question.
Intermission
Always leave your audience wanting more! In my next blog post, I’ll provide some advice on determining when it’s appropriate to do your own survey versus getting outside help. And I’ll list some qualified survey vendors.
In the meantime, please reach out to go@sterlingpr.com if you could use help in your PR or marketing efforts.