Patient-Related Biases in Clinical Outcome Assessments
9 January 2026
Mark Gibson, United Kingdom
Health Communication and Research Specialist
Clinical Outcome Assessments (COAs) are central to modern clinical research. By capturing the patient’s experience, whether through questionnaires, interviews or daily logs, they give voice to outcomes that matter most: pain, fatigue, functioning and quality of life. Regulators increasingly require patient-reported outcomes (PROs) as part of evidence for drug approval and labelling.
But patient-reported data are not neutral. Just as working memory introduces cognitive biases into recall, the act of reporting itself is shaped by context, expectations and motivation. Patients are participants, not measurement devices, and their responses carry systematic distortions. These distortions are not random noise; they can bias trial results, exaggerating treatment benefits or underestimating harms.
This article explores patient-related sources of bias in COAs: response bias, withdrawal bias, acquiescence bias and recall/reporting limitations. It examines how these biases arise, how they affect validity and how they can be mitigated.
Response Bias: The Power of Expectation
Response bias occurs when patients’ knowledge or beliefs about treatment shape their reports. In blinded trials, this risk is reduced; in open-label trials, it becomes acute. Patients often expect benefit from novel therapies and may unconsciously report outcomes that confirm those expectations.
Positive skew: Patients receiving a novel, experimental treatment may report more improvement, not because their symptoms have changed but because they expect improvement (Wiley 2014).
Negative skew: Patients who suspect they are in a placebo arm may underreport progress, amplifying perceived differences between groups (Wiley 2024).
Example: In pain studies, open-label participants often report larger reductions than blinded participants, even when treatment is identical (ScienceDirect 2014).
Impact: Exaggerates treatment efficacy, especially for subjective outcomes such as pain, mood or fatigue.
Mitigation:
Maintain blinding wherever feasible (BMJ 2011).
In open-label contexts, use objective anchors (e.g. actigraphy for sleep, activity trackers for mobility).
Train patients on the importance of neutral reporting.
Withdrawal Bias (Attrition Bias): Who Drops Out Matters
Not all patients complete trials. When dropout rates differ systematically between groups, withdrawal bias occurs.
Patients experiencing adverse effects or lack of improvement are more likely to leave a study (NIH 2019).
The remaining participants represent a skewed sample, i.e. those who tolerate and respond to treatment.
Example: In long-term safety trials, participants who suffer side effects may withdraw early, leaving an apparently healthier group. Treatment then appears safer than it is.
Impact: Inflates efficacy and underestimates harm, threatening internal validity (NHMRC 2019).
Mitigation:
Retention strategies: flexible scheduling, patient engagement, proactive support.
Analytic strategies: intention-to-treat analysis, multiple imputation for missing data and sensitivity analyses (NIH 2021); a toy illustration follows below.
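To make the analytic point concrete, here is a minimal Python sketch on entirely simulated, hypothetical data. It shows how informative dropout makes a completers-only average look better than the intention-to-treat target, and how a crude non-responder imputation, standing in for the multiple-imputation and sensitivity analyses named above, keeps every randomised patient in the estimate.

```python
# Minimal sketch, simulated data only: informative dropout vs intention-to-treat.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# True change in a symptom score for everyone randomised (higher = more improvement).
true_change = rng.normal(loc=1.0, scale=2.0, size=n)

# Informative dropout: the less a patient improves, the more likely they withdraw.
p_dropout = 1 / (1 + np.exp(true_change))
completed = rng.random(n) > p_dropout

itt_target = true_change.mean()                   # what we want to estimate
completers_only = true_change[completed].mean()   # what a naive analysis reports (inflated)

# Crude sensitivity check: score dropouts as "no change" (a conservative,
# non-responder-style assumption; real trials would use multiple imputation).
conservative = np.where(completed, true_change, 0.0).mean()

print(f"ITT target (all randomised): {itt_target:.2f}")
print(f"Completers-only estimate:    {completers_only:.2f}")
print(f"Conservative imputation:     {conservative:.2f}")
```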
Acquiescence Bias: The Tendency to Agree
Some patients show a systematic tendency to agree with statements or to endorse positive options, regardless of content. This is particularly strong in face-to-face assessments, where social desirability plays a role.
Agreeing by default: Patients may nod through questionnaires to appear cooperative.
Positivity bias: Patients may endorse “better” outcomes to please interviewers or clinicians.
Example: In interviewer-administered surveys of self-efficacy, interviewer characteristics such as gender and tone influence responses (NIH 2025).
Impact: Inflates positive outcomes and reduces discrimination between treatments.
Mitigation:
Use balanced scales with reverse-coded items to detect yea-saying (a scoring sketch follows this list).
Provide anonymous modes of reporting where possible.
Train interviewers to use a neutral tone and body language (Oxford Academic 2022).
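A minimal Python sketch of the first point, with invented item names, polarities and thresholds: scoring a balanced scale after reverse-coding, and flagging respondents who agree with everything regardless of item polarity.

```python
# Minimal sketch, hypothetical 4-item scale: reverse-coded scoring plus a crude
# yea-saying flag. Item names, polarities and the agreement cut-off are illustrative.
ITEMS = {
    "q1_feel_well":         False,  # positively worded, scored as-is
    "q2_symptoms_bother":   True,   # negatively worded, reverse-coded
    "q3_can_do_activities": False,
    "q4_fatigue_limits":    True,
}
SCALE_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

def score_response(answers: dict) -> dict:
    """Return the mean scale score and a flag for possible acquiescence."""
    scored, agreements = [], 0
    for item, reverse in ITEMS.items():
        value = answers[item]
        agreements += value >= 4                      # endorsed agree/strongly agree
        scored.append(SCALE_MAX + 1 - value if reverse else value)
    return {
        "score": sum(scored) / len(scored),
        # Agreeing with both positively and negatively worded items is incoherent
        # as a symptom report and suggests yea-saying rather than true status.
        "possible_acquiescence": agreements == len(ITEMS),
    }

# A respondent who agrees with everything: the reverse-coded items cancel the
# positive ones, and the pattern is flagged for review.
print(score_response({"q1_feel_well": 5, "q2_symptoms_bother": 5,
                      "q3_can_do_activities": 4, "q4_fatigue_limits": 5}))
```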
Recall and Reporting Biases: Limits of Memory
Even when patients intend to report accurately, recall limitations distort responses.
Recency bias: Patients overweigh recent days when recalling longer periods.
Saliency bias: Vivid or unusual episodes dominate memory.
Telescoping: Events are misdated, pushed closer in time (forward telescoping) or further back (backward telescoping) (NIH 2021).
Impact: Reduces accuracy of patient-reported outcomes, particularly with long recall periods (BMJ 2018).
Mitigation:
Use shorter recall windows, such as one week rather than one month (see the sketch after this list).
Implement other ways of capturing real-time symptoms, such as electronic diaries or ecological momentary assessment (EMA) (Nature 2024).
Provide clear temporal anchors, such as “since your last clinic visit”.
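A rough illustration of the first point, using simulated daily pain scores on a 0 to 10 scale that improve over a month (all numbers hypothetical): a recency-weighted recall model tracks a one-week average reasonably well but misses the worse earlier days of a one-month window.

```python
# Minimal sketch, simulated data: recency-weighted recall vs true averages over
# 7-day and 30-day windows. The half-life recall model is a toy assumption.
import numpy as np

rng = np.random.default_rng(1)
days = 30
# Pain improves from ~7 to ~4 over the month on a 0-10 numeric rating scale.
daily_pain = np.clip(np.linspace(7, 4, days) + rng.normal(0, 0.5, days), 0, 10)

def recalled_average(scores: np.ndarray, half_life_days: float = 3.0) -> float:
    """Toy recall model: recent days weigh exponentially more than earlier ones."""
    age = np.arange(len(scores))[::-1]            # 0 = most recent day
    weights = 0.5 ** (age / half_life_days)
    return float(np.average(scores, weights=weights))

for window in (7, 30):
    window_scores = daily_pain[-window:]
    true_mean = window_scores.mean()
    recalled = recalled_average(window_scores)
    print(f"{window:2d}-day window: true mean {true_mean:.2f}, "
          f"recalled {recalled:.2f}, error {abs(recalled - true_mean):.2f}")
```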
Social Desirability and Stigma
Beyond acquiescence, patients may consciously minimise or exaggerate certain outcomes due to stigma or identity.
Underreporting: Patients may downplay depression, sexual dysfunction or treatment non-adherence to avoid stigma (EUPATI 2025).
Overreporting: Conversely, patients may exaggerate resilience or benefit to align with their self-image as “strong” or “coping well”.
Impact: Distorts reporting in sensitive domains, leading to systematic underestimation of certain conditions (Evidence-Based Nursing 2020).
Mitigation:
Use neutral, non-judgemental language.
Emphasise confidentiality and anonymity.
Provide indirect questioning formats for sensitive topics (one such format is sketched below).
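One indirect questioning format is the randomised response technique; the Python snippet below simulates a simple forced-response variant with invented numbers. No individual answer is interpretable on its own, yet the group-level prevalence of a sensitive behaviour can still be recovered.

```python
# Minimal sketch, simulated data: forced-response variant of the randomised response
# technique. Each respondent privately "flips a coin": heads, answer truthfully;
# tails, simply answer "yes". Prevalence and probabilities below are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
true_prevalence = 0.30   # e.g. rate of treatment non-adherence
p_truth = 0.5            # probability a respondent answers the sensitive item truthfully

has_attribute = rng.random(n) < true_prevalence
answers_truthfully = rng.random(n) < p_truth
said_yes = np.where(answers_truthfully, has_attribute, True)

# P(yes) = p_truth * prevalence + (1 - p_truth), so invert to estimate prevalence.
estimated = (said_yes.mean() - (1 - p_truth)) / p_truth
print(f"True prevalence: {true_prevalence:.2f}, estimated: {estimated:.2f}")
```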
Implications for COA Design and Interpretation
Patient-related biases operate through multiple channels:
Expectations (response bias).
Attrition (withdrawal bias).
Interpersonal dynamics (acquiescence and social desirability).
Memory limitations (recall bias, telescoping).
Together, these biases distort the very outcomes the COAs are designed to capture. They can exaggerate benefits, understate harms or create variability unrelated to true treatment effects.
Recognising patient-related biases has several implications:
1. Trial design must prioritise blinding, patient retention and validated instruments.
2. Data analysis must account for missing data (including missed appointments), attrition and acquiescence tendencies.
3. Interpretation must balance subjective PROs with objective measures where possible.
4. Patient engagement should acknowledge biases without dismissing patient voice: patients’ experiences remain central, even when filtered.
Conclusion
Patient-reported outcomes are indispensable for capturing what matters most in clinical trials. Yet they are shaped by systematic biases: response bias, withdrawal bias, acquiescence bias, recall errors and social desirability. These are not trivial. They can tilt the evidence base, exaggerating efficacy and underestimating risk.
The solution is not to abandon patient-reported data but to design COAs that respect cognitive and social realities. Shorter recall periods, blinding, neutral phrasing, balanced scales and retention strategies all help. Combined with objective measures, these strategies ensure that patient voices are heard more clearly: less distorted by bias and more faithful to lived experience.
Bias cannot be eliminated but it can be constrained. Only by recognising and mitigating patient-related biases can COAs provide trustworthy evidence for clinical care and regulatory decisions.
Thank you for reading,
Mark Gibson
Clermont-Ferrand, France
September 2025
References
BMJ (2011). Bias in clinical trials. The BMJ.
BMJ (2018). Outcome reporting bias in trials: a methodological approach. The BMJ.
EUPATI (2025). Bias in Clinical Trials. EUPATI Open Classroom.
Evidence-Based Nursing (2020). Understanding sources of bias in research.
National Institutes of Health (NIH) (2019). Assessing risk of bias - NHMRC.
National Institutes of Health (NIH) (2021). Risk of bias: why measure it, and how?
National Institutes of Health (NIH) (2025). Uncovering potential interviewer-related biases in self-efficacy assessments.
Nature (2024). Tackling biases in clinical trials to ensure diverse representation.
Oxford Academic (2022). Interviewer effects in a survey examining pain intensity.
ScienceDirect (2014). Empirical evidence of observer bias in randomized clinical trials.
Wiley Online Library (2014). Assessment bias in clinical trials.
Wiley Online Library (2024). The impact of blinding on estimated treatment effects.
Originally written in English.
