Fraud Is Shifting. Respondent Behavior Is Changing. What IA’s Global Data Quality Benchmarks Reveal.

February 6, 2026

Every year, the conversation around data quality grows more urgent and more complex. As fraud tactics evolve, respondent expectations shift, and survey technology rapidly advances, the industry needs guardrails to understand what quality truly looks like now. Some of these challenges stem from intentional misuse of research systems, while others reflect changing respondent behavior, attention, and motivation, each requiring different forms of protection and design.

That’s where the Insights Association’s 2025 Global Data Quality Benchmarking Report offers real value: it provides a broad, comparative view of how quality indicators are trending across survey types, audiences, and geographies.

At Research Results, we see benchmarking as an opportunity, not a scorecard. Data quality isn’t a single metric or moment. It’s a system. A process. A shared responsibility. An ecosystem, if you will. And like any ecosystem, its weakest links are often the least visible, yet the most damaging to insight.

And this year’s benchmarks highlight several trends that should shape how insights teams design, field, and actively protect their research going forward.

Below, we break down the most important takeaways and what they mean for the real-world decisions insights teams make every day.

  1. Fraud Is Moving Upstream, and That Focus Matters

One of the clearest signals this year: more fraud is being intercepted before respondents ever enter a survey, but interception alone does not eliminate quality risk.

  • Suppliers are now removing 7.4% of respondents pre-survey.
  • Research agencies remove 2.8% pre-survey.

While these figures reflect meaningful progress, aggregate pre-survey removal rates can mask significant variation by audience, incentive level, and incidence rate. In practice, studies with narrow qualification paths or higher rewards often require substantially higher front-end filtering to protect data integrity.

That shift matters. It shows that tools like digital fingerprinting, identity validation, and link encryption are catching many forms of intentional misuse earlier. But it also reinforces a reality: front-end screening must be paired with behavioral and engagement-based safeguards, or low-quality respondents will slip through.
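To make that layering concrete, here is a minimal sketch of how a pre-survey check and an in-survey behavioral check might be combined. The signals, field names, and thresholds are illustrative assumptions on our part, not the specific checks measured in the report or used in RADAR.

```python
# Simplified two-layer quality check: front-end signals catch intentional
# misuse before the survey, behavioral signals catch disengagement inside it.
# All thresholds below are hypothetical examples, not industry standards.
from dataclasses import dataclass

@dataclass
class Respondent:
    duplicate_fingerprint: bool   # pre-survey: device/browser already seen on this study
    geo_ip_mismatch: bool         # pre-survey: stated location disagrees with IP geolocation
    seconds_to_complete: int      # in-survey: total time spent
    straightlined_grids: int      # in-survey: grids answered with identical values

def should_remove(r: Respondent, median_loi_seconds: int) -> tuple[bool, str]:
    """Return (remove?, reason), running front-end checks before behavioral ones."""
    # Layer 1: front-end protections (intentional misuse)
    if r.duplicate_fingerprint:
        return True, "pre-survey: duplicate device fingerprint"
    if r.geo_ip_mismatch:
        return True, "pre-survey: geography/IP mismatch"
    # Layer 2: engagement safeguards (low-quality participation)
    if r.seconds_to_complete < 0.33 * median_loi_seconds:
        return True, "in-survey: speeding"
    if r.straightlined_grids >= 3:
        return True, "in-survey: repeated straight-lining"
    return False, "retained"

# Example: a respondent who passes the front door but speeds through the survey
r = Respondent(duplicate_fingerprint=False, geo_ip_mismatch=False,
               seconds_to_complete=90, straightlined_grids=1)
print(should_remove(r, median_loi_seconds=600))  # (True, 'in-survey: speeding')
```

The point of the layering is simply that neither set of checks is sufficient on its own: identity checks miss disengagement, and behavioral checks fire too late to protect feasibility.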

What this means for insights teams:

  • Prioritize suppliers who invest in front-end protections.
  • Expect pre-survey removal rates to increase (that’s a good thing).
  • Build timelines with the understanding that quality work begins before data collection does.
  • Focus not just on how many respondents are removed, but on whether the most impactful sources of fraud are being intercepted.

We’ve strengthened multi-layer validation so that fraud is intercepted before it affects feasibility, quotas, or the respondent experience. That’s the thinking behind RADAR, our proprietary quality intelligence system designed to screen out bad actors at the door while preserving real, thoughtful respondents through an adaptive, multi-checkpoint pre-survey experience. RADAR was designed not to hit an industry-average removal rate, but to identify and remove the respondents most likely to distort results, whether through intentional fraud or sustained disengagement.

  2. Incidence Rate Estimates Remain Overly Optimistic, With Compounding Quality Risk

The gap between sold incidence rates (IR) and actual IR remains significant:

  • Agencies: ~7% below estimate
  • Suppliers: ~10% below estimate
  • Healthcare Provider studies: 24% below estimate

These gaps have real operational consequences. Overestimating IR can affect timelines, budgets, and sample availability, often creating downstream pressure to accelerate fieldwork or expand sourcing mid-stream. But IR variance is more than a forecasting inconvenience; it can also be an important diagnostic signal.

In practice, IR shortfalls tend to stem from a combination of factors. In some cases, feasibility assumptions may be overly optimistic or based on incomplete models of how audiences behave in real-world sampling environments. In others, screening criteria may be misaligned or overly restrictive, inadvertently excluding qualified participants and suppressing true incidence. And in still other cases, enhanced quality controls may be appropriately removing inauthentic, inconsistent, or fraudulent respondents before they can enter the dataset.

Understanding which of these forces is driving an IR miss matters, because each points to a different corrective action.

It’s also important to recognize that as incidence rates decrease, the relative impact of fraud and low-quality participation increases. Fraud is rarely evenly distributed; it tends to concentrate in narrower qualification paths and higher-incentive studies. In low-IR environments, even a modest volume of inauthentic traffic can disproportionately influence final datasets, because fraudulent respondents typically claim whatever answers qualify them, while genuine respondents qualify only at the true rate. For example, if a study has a true IR of 5% and roughly 5% of incoming traffic is fraudulent or highly unreliable, nearly half of the completed data may be compromised.
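As a back-of-the-envelope check on that example, the sketch below assumes fraudulent entrants pass screening at close to 100% while genuine respondents qualify only at the true 5% IR. The traffic volume and rates are purely illustrative.

```python
# Illustrative arithmetic only; the 5% figures come from the example above,
# and the assumption that fraudulent traffic passes screening at ~100% is ours.
entrants = 10_000
true_ir = 0.05        # 5% of genuine respondents actually qualify
fraud_share = 0.05    # 5% of incoming traffic is fraudulent or highly unreliable

genuine = entrants * (1 - fraud_share)    # 9,500 genuine entrants
fraudulent = entrants * fraud_share       # 500 fraudulent entrants

genuine_completes = genuine * true_ir     # 475 legitimate completes
fraud_completes = fraudulent * 1.0        # ~500 fraudulent "completes"

compromised_share = fraud_completes / (genuine_completes + fraud_completes)
print(f"Share of completes that are compromised: {compromised_share:.0%}")
# -> 51%, i.e. nearly half of the final dataset
```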

What this means for insights teams:

  • Build flexibility into feasibility assumptions, particularly for low-incidence or high-value audiences.
  • Revisit historical performance and known population statistics when estimating IR, rather than relying solely on broad benchmarks.
  • Use real-time field monitoring to identify IR variance early and assess whether it reflects planning assumptions, screening design, or quality controls.
  • Treat low-incidence studies as higher-risk environments that warrant enhanced fraud detection and engagement safeguards.

When we consult on feasibility, our goal is not simply to predict how a study will field, but to help clients understand the underlying risks and tradeoffs, so that performance pressures don’t quietly compromise data integrity or decision confidence.

  3. Certain Audiences Carry Higher Risk and Need Stronger Protection

Some groups continue to see higher removal and abandon rates, especially those with higher incentives:

  • Global B2B has the highest removal rate of any segment, at 15.3%.
  • US B2B removal climbs to 18.5%.
  • Healthcare patients and providers show lower abandon rates, but more variability in incidence and length of interview (LOI).

These patterns reflect not just increased difficulty, but fundamentally different quality risks associated with professional and high-value audiences. High-value audiences attract fraud attempts, often through identity misrepresentation rather than inattentive behavior, requiring more thoughtful evaluation, design, and post-collection review.

What this means for insights teams:

  • B2B and healthcare studies require thoughtful identity verification strategies aligned to study objectives, risk tolerance, and decision impact.
  • Survey design should balance depth with cognitive load, particularly for professional audiences where complexity is unavoidable.
  • Expect higher and more targeted removal rates for professional and high-value audiences, reflecting necessary validation rather than execution failure.

To us, protecting these groups isn’t optional; it’s essential to preserving trusted insights and confident decision-making.

  4. Survey Experience Still Drives Quality, Especially on Mobile

Mobile participation continues to rise globally. In some regions, more than 70% of respondents complete surveys on mobile devices. At the same time:

  • Longer surveys lead to higher abandonment.
  • Poor mobile optimization increases behavioral removals.

These effects are most pronounced for disengagement-related removals, rather than intentional fraud, which often requires separate detection strategies.

What this means for insights teams:

  • Design mobile-first, not mobile-compatible.
  • Structure longer surveys to reduce cognitive burden.
  • Limit grid-heavy or friction-inducing question formats.

Our team understands that improved design isn’t just good UX; it plays a critical role in reducing disengagement-related removals and boosting completion quality when paired with appropriate quality controls.

  5. Link Encryption Adoption Is Improving, But Still Inconsistent

Encryption is one of the most effective tools for protecting survey links, preserving the integrity of sample counts and quotas, and safeguarding fieldwork execution.

  • Agencies: 91.5% adoption globally
  • Suppliers: 75.2% adoption
  • Some regions show usage as low as 50%.

Gaps in encryption usage create vulnerabilities that can lead to spoofed completes, miscounted traffic, and unanticipated shifts in sample balance.
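As a rough illustration of how signed links close that gap, the sketch below shows one common pattern: attaching an HMAC signature to entry and completion redirects so that tampered or fabricated parameters can be rejected. The URL, parameter names, and shared secret are hypothetical, and this is a generic example rather than the specific mechanism the report measures.

```python
# Illustrative only: HMAC-signed survey redirect links. A complete whose
# signature does not match its parameters can be rejected as spoofed.
import hashlib
import hmac
from urllib.parse import parse_qs, urlencode, urlparse

SECRET = b"shared-secret-agreed-with-supplier"  # hypothetical shared key

def sign_link(base_url: str, respondent_id: str, status: str) -> str:
    params = {"rid": respondent_id, "status": status}
    payload = urlencode(sorted(params.items())).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{base_url}?{urlencode(params)}&sig={sig}"

def verify_link(url: str) -> bool:
    qs = parse_qs(urlparse(url).query)
    sig = qs.pop("sig", [""])[0]
    payload = urlencode(sorted((k, v[0]) for k, v in qs.items())).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

link = sign_link("https://example.com/complete", respondent_id="R123", status="complete")
print(verify_link(link))                          # True: untampered redirect
print(verify_link(link.replace("R123", "R999")))  # False: spoofed respondent ID
```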

What this means for insights teams:

  • Add encryption usage to supplier checklists.
  • Ensure every survey link uses secure protocols.
  • Recognize that encryption protects both respondent data and study integrity.

At Research Results, encrypted links and secure routing are standard practice, not optional add-ons.

  6. Benchmarks Are a Guide, Not a Judgment

One of the most important principles outlined in the benchmarking report is that quality metrics should be used as a reference point, not a pass/fail evaluation, and not a substitute for study-level judgment.

Why? Because every study type, every audience, and every region behaves differently and carries different quality risks. Benchmarks are most valuable when they provide context, not when they are applied uniformly or without consideration for methodology, sourcing, incentives, and study objectives.

What this means for insights teams:

  • Compare against relevant segments, such as B2B versus B2C or consumer versus healthcare, rather than relying on global averages alone.
  • Track internal trends wave-to-wave to understand what is changing over time and why.
  • Use benchmark gaps as diagnostic signals, helping identify where deeper review, adjustment, or additional safeguards may be warranted.

As part of RADAR, we apply this same philosophy internally. We continuously benchmark our own performance across providers, audiences, countries, and study types, monitoring how quality indicators shift over time and where patterns diverge. This allows us to distinguish between expected variability and meaningful signals, and to refine sourcing, screening, and quality controls based on evidence rather than assumptions.

Data quality isn’t about perfection; it’s about progress, transparency, and continuous refinement.

Final Thoughts: Quality Is an Ecosystem, and It’s Evolving

The 2025 benchmarks tell a clear story: the industry is improving in some areas, stabilizing in others, and still navigating new challenges as fraud becomes more sophisticated. But the direction is positive. Quality conversations are happening earlier. More companies are contributing data. More transparency is emerging. At the same time, increased sophistication, both in respondent behavior and fraud tactics, demands more deliberate, system-level approaches to quality.

At Research Results, we believe benchmarks help everyone—clients, suppliers, and agencies—make smarter decisions. They reinforce what we’ve known for decades: that data quality is built through rigorous process, thoughtful design, collaborative improvement, and a willingness to continually reassess what “good” looks like as conditions change.

If you’d like help interpreting the benchmarks, evaluating your own metrics, or refining your quality processes, our team is always here to support you. Contact Ellen Pieper, Chief Client Officer, Ellen_Pieper@researchresults.com, or 919-368-5819 today to learn more.