Rethinking Work: The Personal and Professional Shift with AI

Oct 14, 2025

Mark Gibson, UK

Health Communication Specialist

I want to start by acknowledging that my previous articles on this topic focused on my personal engagement with AI: how I use it in everyday life, my interests around it and even my fears. This article marks a shift. It is about how we, as a team, use large language models (LLMs) in a professional context, at least in terms of R&D and the development of ideas.

But first, an important disclaimer: yes, we have AI-enabled capabilities, and we are actively developing further applications. However, for client-facing project work (actual deliverables), we never use AI without expressly involving clients. There is full transparency, no ‘black box’. We do not use AI in project delivery unless the client is fully informed and on board.

What we do use AI for, and to pronounced effect, is our back-office operations: scheduling, prioritising, invoicing, purchase orders, payment tracking, and so on. It is infused across the workflow. And it is incredible.

It is a gift for someone like me. I have not publicly disclosed this before, but AI also helps me on a personal level. As a neurodivergent individual (there, I said it…), staying organised and focused has always been really hard work, and forward planning was a big challenge. AI does not just ease that struggle; it takes it away entirely. Literally, at a keystroke. I am no longer the same person, no longer the same worker.

Initially Resistant…

In 2020 and 2021, we partnered with a device company and an independent contractor who had an interest in machine learning. GRC was tasked with testing a range of devices for the US market, involving both human factors and comprehension testing. This was no small feat during the Covid lockdowns: twenty-five devices, all with instructions for use (IFUs), to be tested for FDA submission.

We developed a method for remote UX testing that worked well, but that is not the focus of this story. What happened afterwards was a turning point for me. When the projects were over and delivered, we worked with an AI specialist (I call him that now, but I am certain he did not describe himself in those terms) who trained a very simple model. The basis of the training was a best practice guide that I had written and the full verbatim data from the usability tests. So, it was not a massive data set. Even so, the model produced stunningly clear and tailored IFUs that could be customised to different audiences: tech-savvy users, first-time users, older users, people with low health literacy, adolescents, children – you name it.

It was only a proof of concept. From technical documentation, we could generate an IFU ready for comprehension testing. It should have blown my mind. But it did not. Instead, I resisted it.

Ego versus Opportunity

At the time, I did not recognise the treasure I was looking at. I could not distinguish the emeralds from the rubies or the diamonds. It was all too dazzling. I showed resistance and, frankly, distaste. Why? Because I considered being able to write well for lay audiences part of my identity. Editing and shaping language for diverse patient audiences is work that I love and take pride in, and this electronic thing could take that away. I had also seen the worst uses of automated translation memories. Garbage in, garbage out. I judged this new tool by the same standard, even though I could see with my own eyes that this was very far from garbage out.

Of course, my attitude was misplaced and very wrong. What the AI consultant had built was not a replacement, but a co-creator. It would enable document design that followed the spirit of formative, iterative development. Just as it should be.

I failed to see this. I let ego win. Stupid ego. Tantrums were thrown. I acted like a Luddite and I regret that.

Reframing the Role of the Writer

Besides, so what if a tool like this challenges the role of the medical writer? Let us be honest: not all of them are great. I know this deeply, and so do you. Just look at who won in the wake of the Covid pandemic, amid the mass questioning of, and turning away from, medical expertise. The know-nothings were able to reach more people with clearer information that resonated, while staid, unimaginative official communication from medics and public health professionals fell flat. That was a failure of communication choices. Look at the average package leaflet intended for patients: still largely poor in 2025. If an automated tool comes along and communicates health information better than they do, they have only themselves to blame.

I see medical writers in the same way that I see translators. It is like a pyramid:

At the top of the pyramid, you have the elite: very clever people, very good at their respective jobs, whether writing or translating, never out of work, always sought after and respected. This is a very small apex, by the way. Then you have a wide middle band of perfectly adequate writers and translators, not bad but not brilliant either: they do their job. Then there is a broad base, a mixed bag of untalented chancers who would otherwise be unemployed, or unemployable. This, I am afraid, is the conclusion I have reached after years and years of working with thousands upon thousands of writers and translators. When you first start out, it takes a couple of years to navigate your way through this pyramid, and you inevitably encounter, let us say, the ‘inconsistent performers’ along the journey. It is telling that these two sectors, translation and medical writing, are among the most vulnerable to AI.

If AI could wash away the bottom tier, and, let us be honest, maybe a chunk of the middle too, that would be a decent outcome for the field. The most talented would stay and thrive. It would raise the floor, force a rethinking of competence, and leave the real work to people who actually care about clarity and doing it right. A form of digital Darwinism.

In no way am I suggesting that we should produce and submit IFUs, or any written documentation for drug or device products, without human steer. But the LLM can do the heavy lifting: take the technical document, generate a first draft, then hand it to the human co-writer. This is co-creation: two coats of paint on the wall, the first applied by the AI, the second, and most important, by the human. Then the material would go through testing, with a far better starting point than anything drafted by writers towards the bottom of the pyramid. It all makes perfect sense.

I wish I’d had this clarity when we were experimenting five years ago.

Now I see AI tools for what they are in this sphere: a collaborator, not a competitor.


Thank you for reading,


Mark Gibson,

Leeds, United Kingdom, Easter 2025

Originally written in English