What does Cognitive Debriefing NOT Cover in a Clinical Outcome Assessment?
Apr 30, 2025
Mark Gibson, Health Communication Specialist, UK
Clinical Outcome Assessments (COAs) come in several forms, such as patient-reported, observer-reported, clinician-reported and performance-based outcomes. The purpose of these assessments is to evaluate treatment effectiveness, the experience and burden of symptoms and patients’ quality of life in clinical studies and real-world settings.
For COAs to be effective, they need to be clearly understood by study participants, easy to use without imposing an excessive cognitive burden and scientifically valid to ensure data that is reliable and meaningful.
How do we provide evidence that COAs are easy to understand, easy to use and scientifically valid? This article looks at the strengths and weaknesses in the standard approach to assess these three areas.
Cognitive Debriefing
The gold standard method used to test COAs is cognitive debriefing. This is a qualitative approach whose aim is to evaluate whether participants can understand and interpret items as they were intended in the source. But while it is a crucial step in COA validation, there are gaps and shortcomings in what it can achieve. There are notable aspects of a COA that cognitive debriefing fails to address, such as usability, how COAs are cognitively processed across diverse populations and health literacy.
Cognitive debriefing follows a structured interview process and is widely used in linguistic validation to make sure that COAs maintain the same meaning as the original. There is considerable variation in how companies conduct cognitive debriefing but, in general terms, participants are asked to:
· Explain how they interpret each question
· Paraphrase each item – ideally each concept contained within the item – in their own words
· Describe their thought process when selecting a response choice.
Accordingly, cognitive debriefing is supposed to identify the following:
· Clarity issues: Are the items, instructions, screen prompts and response choices easy to understand?
· Consistency of interpretation: Do participants in different linguistic and cultural settings interpret items in the same way?
· Problematic wording: Are any items confusing, ambiguous or culturally inappropriate?
· Response validity: Do participants select answers based on the intended meaning of the item? Is this pattern the same across cultures?
However, there is more going on in a typical COA than cognitive debriefing covers. Yet it is important to study these aspects to find out how participants perceive, process and use a COA or eCOA.
The Limitations of Cognitive Debriefing
Despite its advantages, cognitive debriefing has several limitations:
Usability
· Even when participants understand the content, they might still struggle with navigating the COA. This can apply to shared-stem questions or eCOA screens.
· Cognitive debriefing does not test formatting and layout clarity, or whether culture influences how diverse participants process these aspects.
· It does not capture ease of completion, whether participants are able to move through the assessment smoothly or if navigation is the same across cultures.
· Electronic usability is also beyond cognitive debriefing; for example, do eCOAs function well on different devices? They are supposed to be usability tested at source, although the methods applied to this are not uniform and often can be a clumsy application of cognitive debriefing.
Cognitive Burden
Some COAs place excessive mental demands on study participants. The cognitive burden increases across cultures, where participants, in addition to grappling with the wording of an item, also have to deal with how the information is presented, which may not conform to the conventions of presenting information in their respective cultures. This is a huge area and one that is barely acknowledged in the COA sphere. Cognitive debriefing does not measure:
· How much mental effort is required to complete the COA
· Whether respondents experience fatigue or frustration when completing it
· How cultural differences in cognition affect recall, response choices and decision-making.
Health Literacy
It is a striking irony that cognitive debriefing does not assess whether participants understand health-related vocabulary (a parallel problem exists with comprehension testing). Just because a participant has been diagnosed with a medical condition and is taking part in the testing of the COA text does not necessarily mean that they understand the medical terms pertaining to their condition. Interviewers should pay special attention to how participants explain terms in their own words. How do they read the terms? Do they struggle to pronounce medical terms? All of this gives clues about a person’s level of health literacy.
This is partly because of how cognitive debriefing happens in practice: items are presented in isolation and, ideally, broken up clause by clause. This is best practice for cognitive debriefing because it minimises both participant and interviewer fatigue. However, it also means that each item, or each clause of an item, is presented on its own. The participant is not guided through a holistic view of the COA, which may conflict with local cultural conventions of processing information.
Additional Testing Methods
Cognitive debriefing can be supplemented by other testing methods that act as patches where it has deficiencies. This is by no means a suggestion to change, replace or tamper with the standard ISPOR methodology, but to introduce supplementary testing methods that provide more comprehensive insights into how patients use a COA.
Cognitive Debriefing + Usability Testing
Usability testing evaluates:
· How easily participants can complete a COA, which is particularly important for making sure that eCOAs are accessible across devices
· How navigation works in long or multi-section COAs
· Layout clarity
· How easy or difficult questions are to answer
· The technical performance of an eCOA.
Cognitive Debriefing + Health Literacy Testing
There is an element of cognitive debriefing that, when triggered, automatically tests a study participant’s health literacy. It is already built into the methodology and under-used; all it requires is a simple tweak, a flick of a switch. This can help ensure that COAs are understandable for people with different levels of education and medical knowledge. Of all the organisations I have partnered with over the years, I have never seen it implemented or even considered. Yet it is becoming increasingly important as regulatory bodies, such as the FDA and EMA, emphasise health literacy in PRO development. It ought to be a critical component of COA testing, one that the widespread application of cognitive debriefing does not adequately fulfil.
Cognitive Debriefing + Cultural Cognitive Testing
This could examine how people from different cultures interpret and process information, especially where the content, layout and presentation of a COA do not fit the conventions or expectations of the target culture. Examples include shared-stem questions, inappropriate font use, non-holistic presentation of the COA and question structure. These issues are wholly absent from the cognitive debriefing approach, yet they are particularly critical for multinational clinical trials, where COAs must be valid across diverse patient populations.
Final Thoughts
A multi-method approach that supplements cognitive debriefing can help make sure that COAs are easy to use and navigate, understandable for people with varying literacy levels, accessible across cultures, and scientifically valid and reliable.
Integrating other methods, such as usability, health literacy and cultural cognitive testing, can inform the development of COAs that are usable and inclusive. This is all the more essential as COAs become increasingly digital and global.
Thank you for reading,
Mark Gibson
Leeds, United Kingdom, March 2025
Originally written in English