Article

Why eCOAs Need Robust Usability Testing

May 2, 2025

Mark Gibson, UK

Health Communication Specialist

Health technology is an ever-evolving world. In the past two decades we have seen a wholesale migration from paper-based to electronic Clinical Outcome Assessments (eCOAs). eCOAs are critical tools in clinical trials and patient monitoring, offering significant advantages over the paper alternative, such as streamlined data collection and improved data quality.

As with any digital tool, usability is not just a “nice-to-have”; it is a necessity. This is especially true in a sector where patient-reported data influences regulatory submissions, trial success and patient care, and where poor usability can threaten the validity of the data collected.

However, there does not seem to be much knowledge, know-how or awareness of how usability testing ought to be conducted on eCOAs. The sector is replete with examples of poor practice that have – presumably – gone through the submissions process. Common practices I have seen include printing off eCOA screenshots and putting them through a standard cognitive debriefing process, or attempting to use a cognitive debriefing approach as a usability test. Neither of these is usability testing.

There are two realities of usability testing: the one that is emerging in the eCOA-sphere and the one that the rest of the world undertakes. The former is often touted by methodologically naïve individuals with no real insight into what usability testing is, while the latter offers a whole universe of practice across the tech, automotive, medical device and consumer electronics sectors. Anybody involved in eCOA should take notes from these latter areas.

Why Should Usability Testing of eCOAs Matter?

·       Data Quality and Patient Compliance

Unlike traditional clinical data collected by healthcare professionals or by patients themselves in a clipboard-and-pen scenario, eCOAs rely on data being entered solely by study participants or their caregivers. A poorly designed interface can:

·       Put people off from engaging.

·       Cause people to make entry errors.

·       Be the reason why data is incomplete or inconsistent.

This is no different to other sectors, such as fintech or e-commerce, where poor usability leads to people not completing purchases or making errors when filling in forms. For eCOAs, the consequences are far more severe, such as invalid data that could jeopardise an entire clinical study.

·       Meeting Regulatory Expectations

Regulators, including the FDA and EMA, expect digital health products to have followed a user-centred design approach. This is not just tokenism or a one-off act of ‘validation’: it means having users – patients – involved from the outset of eCOA development, assessing the ease of use of the eCOA and how the data is collected. The Patient-Reported Outcome guidance document emphasises the importance of keeping users’ cognitive burden to a minimum and ensuring that measures like eCOAs are easy to use. Evidence of a robust and thorough usability process, conducted in the spirit of iterative development, supports compliance during regulatory review.

Learning from other industries

Usability testing of eCOAs cannot be done in a vacuum. A great deal of eCOA usability testing is directed towards onscreen text and what users understand of it, as if it were still cognitive debriefing. As with paper-based approaches, comprehension testing of the words alone, whether by cognitive debriefing or other methods, assesses only a fraction of what could be tested in an eCOA. Tasks and functionality need to be observed too. Any approach to usability testing should be informed by how human-factors testing works in other sectors. For example:

·       How are functions tested?

·       What kind of cognitive walkthroughs are developed to assess completion of user tasks such as symptom logging, diary entries or questionnaire navigation? How is this to be recorded? What scope is there for video analysis?

·       Similarly, is there scope for incorporating eye-tracking or heatmap techniques to study what users are looking at when they go through eCOA screens, as well as what they do not see or ignore?

These approaches are commonly adopted in the electronics, wearables and automotive sectors. The medical device sector is governed by IEC 62366, which mandates that human factors and usability engineering are part of device design. This means that usability testing is both formative and summative: formative testing is an integral part of iterative design, while summative testing is a final validation that demonstrates usability under realistic conditions. The latter is where usability testing of eCOAs typically takes place, at the end of the development process.

A summative usability test of an eCOA could simulate patients completing daily assessments in the environment where they will complete the eCOA: if it is designed to be completed at home, then the usability test should be conducted there; if it is completed at a clinic, the test should simulate as closely as possible those conditions.

Findings From Previous Studies

In a Phase III trial involving older participants with arthritis, a usability test of an eCOA revealed that:

·       Function buttons were too small.

·       Font sizes were too small.

·       Error messages were ambiguous and difficult to understand.

The eCOA interface was then redesigned to include:

·       Large touch targets

·       A bigger font and better contrast themes

·       Simplified language, with ambiguity and jargon removed.

Without this intervention, the trial data may have been negatively affected.

Universally the Same?

Even when conducted well, eCOA usability testing typically has one major flaw: it focuses on the source language. Usability testing across languages also needs to happen to identify cultural inconsistencies and ambiguities resulting from poorly adapted localisation. As we have seen in previous articles, these issues are hardly touched upon in Linguistic Validation studies. Surely, this needs to change!

Why Usability Testing is Often Overlooked

The compressed timelines of clinical trials can lead to early-stage usability testing being skipped. This is compounded by the misconception that eCOAs are simple forms, rather than complex interactive tools. In addition, there is too much reliance on summative pilot studies that often come too late in the process for significant UX changes.

Final Thoughts

Just as other sectors, such as medical device development or the automotive industry, safeguard users in obsessive detail with ergonomically designed devices, apps and dashboards, the pharma sector needs to prioritise patient experience when developing eCOAs. And when it is considered, the testing approach needs to be rigorous and informed, rather than an inappropriate and, frankly, lazy extension of what is already done, e.g. a clumsy adaptation of cognitive debriefing.

Without this, there is a real risk of doing the ‘wrong thing’, where data reliability is undermined and trial outcomes compromised. The best advice I can impart on this is as follows: think carefully about who will do your eCOA usability testing and what ideas they have about how to approach it.

Thank you for reading.


Mark Gibson,

Leeds, United Kingdom, March 2025

Originally written in English