The Most Underused Tool in PR? The Survey.

Mar 5, 2026

Key Takeaways

  • A single survey can power six months or more of media coverage, bylines, social content, and sales collateral.
  • The most expensive survey mistake? Fielding first and figuring out the strategy later.
  • AI bots now pass as human respondents 99.8% of the time. Know where your data sample comes from.
  • The teams that get the most from research treat the survey as a content engine, not a one-time event.

Surveys can fuel months of media coverage, bylined articles, social content, and sales enablement. But most teams either skip them entirely or rush the process — and end up with data nobody covers, shares, or trusts.

After years of promoting surveys for clients, we’ve learned where teams trip up, and how to avoid the most common (and expensive) mistakes.

 

Horizontal bar chart from Documo illustrates survey findings that 88% of healthcare providers say fax problems impact patient care, versus a small minority who say they do not.

Documo’s 2025 Healthcare Fax & Workflow Survey found nearly 9 in 10 healthcare providers report that fax-related issues are severe enough to affect patient care.

Know what you’re building before you build it

Surveys serve different masters. PR teams want headline-worthy data points. Product marketing wants nuanced insights to refine strategy. Those goals require disparate question formats, sample sizes, and budgets. A survey designed to produce “fun facts” for media outreach won’t give your product team the granular data they need. A 45-question deep-dive will bore respondents before they finish.

Before you write a single question, get alignment: Will this data power a press release? A gated report? A keynote? That decision shapes everything downstream, including whether you need a survey at all. Sometimes, publicly available research or licensed third-party data is faster and cheaper.

 

Sample size, credibility, and a statistical trap

If your survey is going to face media scrutiny, you need transparency about whom you surveyed and how many. Reporters will ask. Industry benchmarks have held steady for years: about 1,000 respondents for consumer (B2C) surveys, and around 375 for B2B surveys targeting niche audiences like IT directors at companies with 1,000+ employees.
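Those benchmark sizes map onto familiar margins of error. As a rough sketch (assuming simple random sampling, a 95% confidence level, and the worst-case proportion p = 0.5 — not a substitute for a vendor's own methodology statement):

```python
import math

def moe(n, p=0.5, z=1.96):
    """Margin of error for a proportion under simple random sampling
    (z = 1.96 corresponds to a 95% confidence level)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1000: ±{moe(1000):.1%}")  # ≈ ±3.1%
print(f"n=375:  ±{moe(375):.1%}")   # ≈ ±5.1%
```

That is why ~1,000 respondents is the consumer-survey convention (a ±3-point margin) while smaller B2B samples are accepted for hard-to-reach audiences, at the cost of a wider margin.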

Context matters here. A survey of 1,000 consumers in Australia (population ~27 million) covers a far larger share of the population than the same number in the U.S. (population ~340 million). Strictly speaking, the margin of error is nearly identical once a population reaches the millions, but the smaller-market sample reads as more representative to journalists and their audiences.

And if your company operates in multiple countries, resist the urge to pool results into a single “global” number. Simpson’s paradox, a statistical trap where trends visible in individual groups reverse when combined, can turn multi-country data into a misleading mess. Say you survey equal numbers of respondents in Mexico, Colombia, and Brazil. Each country shows distinct patterns. Average them into one “Latin America” result and you may misrepresent all three. Compare countries individually. Don’t blend them into a smoothie and call it insight.
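The reversal is easy to reproduce. In this toy example (all numbers invented for illustration), the product scores higher in both the SMB and enterprise segments in Brazil than in Mexico, yet Mexico's pooled number comes out higher because the two samples have a different segment mix:

```python
# Invented data: (approvals, respondents) by country and segment.
data = {
    "Mexico": {"SMB": (72, 90), "Enterprise": (3, 10)},
    "Brazil": {"SMB": (9, 10), "Enterprise": (36, 90)},
}

for country, segments in data.items():
    pooled_ok = sum(a for a, _ in segments.values())
    pooled_n = sum(n for _, n in segments.values())
    per_segment = ", ".join(
        f"{seg}: {a / n:.0%}" for seg, (a, n) in segments.items()
    )
    print(f"{country}: {per_segment}, pooled: {pooled_ok / pooled_n:.0%}")

# Brazil leads Mexico in BOTH segments (90% vs 80%, 40% vs 30%),
# yet Mexico's pooled rate is higher (75% vs 45%), because Mexico's
# sample skews toward its high-approval SMB segment.
```

Same respondents, same answers; only the aggregation changed. That is the trap a single "Latin America" number walks into.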

 

The questionnaire is where most surveys go wrong

We’ve learned the hard way: the more time and analysis you put into questionnaire development, the more useful the data will be.

  • Screening questions eat into your total question count, and longer surveys mean higher dropout rates. If you don’t have a plan for how you’ll use a data point, don’t ask for it.
  • Yes/no questions produce bigger, more headline-worthy numbers for PR. If 30% say yes, you can report 70% said no — cleaner than parsing “sometimes” and “frequently.” For market research, you’ll want more nuance. Know which master you’re serving.
  • Negative framing creates logical traps. “Which do you dislike most?” assumes all options are disliked. Ask “Which do you prefer?” instead and infer the rest.
  • Question flow matters more than people think. Questions should build on each other using consistent terminology. Don’t toggle between “customer support” and “customer service” and “customer experience” without a reason.

And for every question, ask yourself: Why are we asking this? Who wants this data? What will the resulting headline be? If you can’t answer, cut the question.

 

Grouped bar chart from Hakkoda's State of Data Report 2024 showing confidence levels in data teams' ability to build GenAI capabilities, broken down by industry, with Manufacturing reporting the highest confidence at 53%.

Hakkoda’s 2024 Generative AI Report explored organizations’ confidence in GenAI deployments.

DIY or bring in a partner?

Clients often ask: “Can’t we just use SurveyMonkey and our own email list?” You can — if all you need are informal data points for your blog. But if you’re surveying existing customers and prospects, the audience isn’t representative. The American Association for Public Opinion Research (AAPOR) best-practice guidance is clear: probability-based samples remain the gold standard for public opinion research, and nonprobability samples like opt-in online panels require special care to produce credible results. If you want serious media coverage, you need a serious survey partner.

Reputable partners that PR and marketing teams work with include: Atomik Research, Audience Audit, Dimensional Research, Drive Research, Harris Poll (a Stagwell company), OnePoll, Qualtrics, Researchscape International, Sapio Research, Toluna, Vitreous World, and Wakefield Research. Some are full-service; others focus on fielding and analysis. For larger budgets, consulting firms like Gartner, IDC, Forrester, and Frost & Sullivan add serious credibility, but they’ll restrict how you promote the findings in order to protect their brand reputation for impartiality.

 

AI just made this harder

Here’s the 2026 wrinkle: AI-generated responses are contaminating online survey panels. A study published in PNAS in late 2025 found that AI bots impersonated human respondents successfully 99.8% of the time, bypassing CAPTCHAs, logic traps, and attention tests. A Nature analysis published in February 2026 confirmed the problem is growing, with AI-mediated responses hitting 45% of submissions in some studies.

This directly threatens data credibility. When evaluating survey partners, ask about panel verification methods, AI detection protocols, and data-cleaning transparency. If your vendor can’t answer those questions clearly, your data may include synthetic responses — and journalists are increasingly aware of this risk.

On the flip side, AI is making survey design more efficient. Qualtrics and Toluna now offer AI-assisted questionnaire development and automated analysis. These tools accelerate the process without compromising integrity, as long as the human respondents are verified.

 

Lumos coverage in a SecurityInfoWatch.com article titled "Identity at the Breaking Point: Why Security Leaders Are Betting on Agentic AI," published by Steve Lasky on February 24, 2026.

This Security InfoWatch article featured Lumos survey research exploring how agentic AI is emerging as a solution for enterprise identity management.

Don’t let good data die after one press release

Get executive buy-in early. You need a spokesperson who will champion the findings at conferences, on podcasts, and in bylined articles. Plan your content pipeline before you field the survey: press releases, blog posts, social campaigns, infographics, sales collateral. A strong survey can fuel six months of content or more. And plan for follow-on research. Year-over-year trend data is far more valuable to journalists than a one-time snapshot.

 

Ready, set, survey

Great data is just the beginning. The real ROI comes from what happens next: the media coverage, the executive bylines, the social campaigns, the sales collateral, and the conference talking points that keep the findings working for months. Sterling helps B2B technology companies plan, execute, and promote research — from setting survey goals and questionnaire strategy through earned media, thought leadership content, and executive platform programs that turn data into business momentum.

Let’s talk about your next survey project.

Lisa and Tiffany