Oversight or Steer?
13 Feb 2026
UK, Spain
Reclaiming the Human Role in AI-Mediated Patient Voice Research
As Artificial Intelligence permeates virtually every area of human activity, we increasingly hear the term “oversight” to describe the human role: managing and reviewing the labour and eventual output of the AI tool. The word is everywhere now. It sounds responsible and reassuring, like a guard rail. But what does “oversight” actually mean? What does it entail? Is it enough? Or should we be adopting a new word instead?
At first glance, “oversight” seems like an appropriate response to the risks of automation. It implies vigilance, control and ethical awareness. It also implies the input of human expertise. But when it comes to the deeply human work of high-stakes research and communication, such as Patient Voice Research or the development of Clinical Outcome Assessments, oversight alone is insufficient. This work demands something far more engaged, interpretive and ethically grounded. That something is steer.
Oversight is largely passive. Steer is wholly active. The human is at the wheel.
This article explores the crucial difference between oversight and steer. It argues that, if we are serious about preserving the integrity of Patient Voice Research or patient-facing documents in the age of AI, we must stop settling for supervision and start reclaiming authorship. With a mindset of steer instead of oversight, the machine will never replace us.
The Problem with Oversight
In many AI-enabled research workflows, human involvement is framed as a kind of insurance policy. After the AI has completed a task, such as generating a text or a translation, or conducting an interview with a patient, a human reviewer steps in to check for hallucinations, errors or bias. This is oversight: a final pass to catch what went wrong.
So, in our sector, when a Language Service Provider (LSP) says:
“We produce lay summaries of clinical studies using AI, with human oversight”
“AI does the IMP label review, with human oversight”
“FT1 translations in Linguistic Validation studies are conducted by an AI tool, with human oversight”
In each case, it basically means the same thing: the AI does all the work and a human looks at it.
In theory, oversight protects against harm. In practice, it often results in shallow corrections, technical validation and surface-level quality control. Oversight tends to be passive and reactive, occurring after the AI has already shaped the content. It treats the human as a gatekeeper and not a co-creator.
Worse still, oversight can create the illusion of responsibility without real accountability. If AI output is reviewed by a human but not meaningfully reshaped, who is truly responsible for its content, its tone, or the integrity of the knowledge produced and trusted? The unspoken truth is that if anything goes wrong, such as an uncaught hallucination or the inevitable Swiss Cheese scenario of errors slipping through every layer of the process, the person conducting the oversight (a clumsy circumlocution to avoid the term ‘overseer’) is liable for the error.
When oversight is stretched thin across massive datasets, tight timelines and opaque systems (black boxes), it becomes more ritual than safeguard. Keeping the human in the loop becomes an act of tokenism: the token human. It is a mind-bending and humiliating reality.
What is Human Steer?
Human Steer is a fundamentally different orientation. It positions the human not as a late-stage post-editor, but as a central guide through the AI process. To steer is to direct, not just to react. It is to shape the input, challenge the output and question every step of automation, even its very premise.
When we speak of human steer, we do not mean oversight in the bureaucratic sense: checking outputs after the fact. We mean something older and deeper: the embodied act of guiding a vessel through uncertainty, over time and with care.
The human becomes the ancient tillerman, the long-haul truck driver, the airline pilot and the space shuttle commander. In all of these roles, what matters is not control in the abstract, but presence in the moment. The human who steers has the capacity to intervene, to decide, to slow down when others would speed up. The human feels the weight of what is at stake, ethically as well as technically.
This is what we mean by human steer. It cannot be outsourced. It cannot be templated. It cannot be replaced.
As we move toward AGI (Artificial General Intelligence), systems capable of navigating entire domains, we must insist that the human remain at the helm and not at the edges, which is where human oversight places us.
In the storms to come, the machine may calculate the trajectory, but only a person knows to turn the wheel.
Human oversight just checks boxes: the human works for the machine. Human steer makes meaning: the machine works for the human.
Steer implies accountability, authorship and creative judgment. It requires the human researcher to engage emotionally, intellectually and ethically with the work. This is vital in Patient Voice Research, where the subject matter is not just content, but suffering, vulnerability and trust.
Why Steer Matters in Patient Voice Research
Patient Voice Research is a relational methodology. It is not only a data-gathering exercise. It is built on listening, co-construction of meaning and witnessing lived experience. In this context, steering is the method itself.
Here, we explain why:
1. Authenticity Over Plausibility
AI can simulate plausible narratives of illness, but it cannot live them. Human steer ensures that the research does not conflate linguistic coherence with lived truth. Without steer, we risk treating synthetic empathy as real patient testimony, which would be a serious ethical breach.
2. Resonance, Not Just Readability
AI-translated documents may be grammatically and lexically correct, but emotionally hollow. Steer enables a reshaping of the tone, register and rhythm, creating documents that sound like they were written for and by real people, not machines thinking in English and speaking in code.
3. Inclusion Without Simulation
AI might simulate the voice of a rural patient or neurodivergent respondent, but only real human engagement can include them. Steer keeps the focus on participation, not performance.
4. Bias Interrogation
Oversight may detect obvious errors, but steer interrogates the assumptions and data behind the system itself: Whose voices trained this model? Who is absent? Whose worldview is being reproduced exponentially?
5. Relational Care
AI can perform sympathy, but it cannot feel care. Steer preserves the possibility of human attunement, of saying “Would you like to stop?” or “That sounds really hard”. This is moral practice, beyond any interface feature.
Oversight versus Steer
| Task | Oversight | Steer |
| --- | --- | --- |
| AI-generated translation | Check for grammar and terminology errors | Reframe tone, voice and resonance for the target culture |
| Simulated patient narratives | Validate profile accuracy | Question the necessity, ethics and bias of simulation |
| Chatbot-led interviews | Review transcripts for coherence | Design protocols that protect consent, care and context |
| AI-summarised research content | Confirm alignment with original data | Challenge omissions, over-simplifications and tone |
Why Oversight Persists
Oversight is scalable. It fits within existing compliance models. It creates the appearance of control without the labour of true engagement. In a resource-constrained environment, it is tempting to treat oversight as “good enough”.
This is a dangerous compromise. Patient Voice Research was born out of a refusal to reduce experience to data points. To oversee AI simulations of patients, as though that were ethically equivalent, is to betray the very roots of the field.
A Call to Shift from Oversight to Steer
If AI is to have a place in Patient Voice Research, it needs to be under human steer. This means:
Involving researchers early in the design of AI tools and protocols, not just after the event, at the end of the process.
Making AI auditable, transparent and ethically constrained.
Treating patient voice not as content but as co-authored experience.
Most of all, it means refusing the slow slide from presence to performance, from conversation to mere content generation.
Listening Is Not Passive
To listen well is to respond, to interpret and to be moved. These are human acts. They can be simulated, but never replicated.
Oversight may catch errors, but steer keeps us honest.
In a field founded on presence, dignity and relational ethics, the question is not whether we can oversee AI, but whether we will choose to steer.
Thank you for reading,
Mark Gibson, Leeds, United Kingdom
Nur Ferrante Morales, Ávila, Spain
July 2025
Originally written in English
