The Dragonfly Trapped in Amber: When Method Becomes Dogma
Oct 27, 2025
Mark Gibson, Health Communication Specialist, UK
In this article I discuss whether codified research practices can stifle innovation in the Patient Voice sector.
In research, as in religion, methods can become sacred. What begins as a breakthrough process, insightful and fresh, gradually becomes codified. It is then published, taught and copied. From there, communities of practice emerge. At their best, they share expertise, raise the bar for standards and democratise entry into the sector in question. But at their worst, they can form a priesthood: guarding orthodoxy, stamping out heresy and keeping the research method frozen in amber.
It could even become like a cartel: people not necessarily qualified jostling for seats at the table, conspiring to keep others out, agreeing implicitly on methodological boundaries and ensuring that only sanctioned players hold influence. Here, innovation is not so much frozen as fenced off.
I have seen this happen in a few sectors that we work across. All of them, in some way or other, relate to Patient Voice research. Cards on the table: I am talking about PIL user testing in Europe, IFU usability testing, the health literacy and the COA linguistic validation spheres. The parallels between these sectors are astonishing.
The pattern is familiar: a methodology is adopted to fulfil an activity prompted by a regulatory nudge. It could be a new methodology, perhaps from a single research team’s bold experiment, or an established one. Take, for example, cognitive debriefing to test comprehension across languages and cultures. It could even be an established research methodology that was bent out of shape from the outset through misapplication, owing to a lack of understanding within the community of practice in question. An example of this is the David Sless method of consumer information testing as applied to EU PIL testing in 2005 (still in effect, and equally misapplied, even today). This is a case where a research method designed for formative, iterative product development is adopted to perform a summative research function: square peg, round hole, right from the word ‘go’.
Whatever the case, what happens next follows a familiar pattern: the method, applied to this new regulatory requirement, is refined for that purpose, written up and presented. Then there are workshops, training manuals and checklists. Soon, what began as a research activity becomes a protocol that can be mass produced.
That in itself is not the problem. Codification of a research approach has many benefits. It promotes clarity and consistency. It makes your research robust and credible if you follow the protocol. However, it also carries three hidden consequences:
1. It lowers the bar to entry.
This is what I call the Painting-by-Numbers phenomenon. Once methodologies are step-by-step and freely available, almost anyone can copy them. This sounds egalitarian, but, in practice, the method becomes routinised. Rather than probing for insight, people just follow a script. The output is standardised and often superficial. This is why I call it ‘painting-by-numbers’: it is research by following instructions, colouring well within the lines, as opposed to having the skills to paint the entire portrait yourself. Colouring book versus canvas.
2. The Checkbox Trap: Research becomes Compliance
We have seen, in several sectors, how what were once exploratory exercises in understanding patients’ perspectives have been reduced to box-ticking. Are the tasks completed? Yes. Do the insights mean anything? Not certain. The methodology survives, but the spirit is gone. And nobody seems to notice it is missing.
3. The Dragonfly in Amber: the method is frozen
Once enshrined, methods become difficult to challenge. Any change is by committee and exceedingly slow. This is the case even when the research method in question has flaws. All research methods are imperfect. Each one was born in a moment and for a context. Codification and regulatory guidance can freeze a method, and this can mean that the context becomes lost or even forgotten. The method lives on, but it is embalmed. And anything that deviates is treated as heresy.
I’ve seen this happen: suggest an adjacent approach or an unconventional tweak and watch the immune system react. The reaction is always one of suspicion and the tone dismissive: if it ain’t broke, don’t fix it. Then watch the priesthood clamp down: Inquisitions are held. Sanctions are often taken against the heretic.
Even so, when, within a given sector, the gold standard method is entering its third or fourth decade unchanged and unquestioned, this cannot be a good thing.
The Silos We Build Ourselves
What is being learned here? Not much. Because we work across several sectors, we can look at them horizontally and see how research approaches from one area could benefit the activity under way in another. This can be done without tampering with the established methodology and without skirting regulatory guidance:
· Looked at from other research viewpoints, cognitive debriefing probably tests only around 40% of what could be tested in a COA. With eCOA, that drops to around 25%.
· There is an emergent practice around usability testing of eCOA that is unique to the sector and does not reflect established UX testing conventions elsewhere. An adapted form of cognitive debriefing is absolutely not appropriate for usability testing, nor is merely printing off screenshots (yes, onto paper). To anyone involved in UX testing in other sectors, this is astonishing.
· Survey science has long established how to design good questions, minimise participant fatigue and elicit reliable data, yet Clinical Outcome Assessments often ignore these principles. Why?
· A very simple tweak to the cognitive debriefing method can not only provide insights into a person’s health literacy level but also can help detect the authenticity of patients.
· Information design offers deep guidance on how to communicate with laypeople, yet informed consent forms, IFUs and package leaflets the world over seem blissfully ignorant of it. Why?
· Behavioural sciences and marketing have shown how simple messages can engineer behaviour change, yet public health campaigns rarely borrow from them.
These are just a few examples of where thinking beyond merely painting-by-numbers can benefit an entire research sector and yield deeper and more meaningful insights from patients. It is insulation that causes this reticence. The priesthoods and cartels that form around regulatory-based research rarely cross into adjacent disciplines. The doors are there, but unopened.
Del Boys and Disruptors
I was at a conference recently where no fewer than 60 delegates described themselves as ‘disruptors’. Over sixty of them. Where was this conference, you might ask? Davos? TEDGlobal? No. Just a B-list patient voice conference. But there they were, the disruptors, lanyards swinging. ‘Disruptor’, like ‘very stable genius’, is a label a person does not apply to themselves. Other people need to do that for them. And if they are employed by a company, and their job at that conference is to stand by a sales booth all day, and yet they still describe themselves as a disruptor without seeing the irony in that… oh, the lack of self-awareness is painful…
But you know the type: dishing out buzzwords almost AI-like, bubble-gum for expertise, their only currency the trading of gossip (which they call ‘connecting people’). These are the people now entering the priesthoods of the communities of practice, for posturing purposes. No research expertise to speak of, but they have a seat at the table. What chance do methods have of evolving then?
From Checkbox back to Discovery
When methods become dogma, innovation is the first casualty. The rituals continue. The checklists are checked. The priesthoods meet and drink the long draught. But the learning stops. We forget that even the dragonfly in amber was once a living thing.
Turning codified processes back into research activities does not cost anything. Nor is it a radical thing to do. It is not even the extra mile; barely the extra hundred metres. It is building on what exists, not replacing it. It builds on top of regulatory expectations. Maybe this is where the true disruption lies: simply honouring the original intent of patient involvement: to understand, to listen and to learn. We owe it to patients to do better. It cannot just be a token gesture, wrapped in compliance.
Let’s keep our methods but let us also keep them alive.
Thank you for reading,
Mark Gibson
Cologne, Germany, October 2024
Originally written in English
