
What’s That Coming Over the Hill? How We Use AI In Our Work

9 June 2025

Mark Gibson, UK
Health Communication Specialist

The book “The Language Game” by Morten H. Christiansen and Nick Chater was published in 2022. In publishing terms, it is recent, full of ideas that challenge and build on contemporary linguistic thinking. But in its discussions of AI, it might as well be the Epic of Gilgamesh. By the time it was published, the examples it included from GPT-3 were hideously out of date. In my personal library, I have around 50 books on AI, all of them decidedly old by the time they were published. The development and, crucially, the adoption of AI far, far outpace the time it takes to draft, edit, print and publish a physical book.

I have written my fair share of book chapters, and I remember one in particular, on multilingualism in electronic health applications. It was originally written in 2002 and had a round of editing in 2005, by which time the e-health apps I had cited were well past their expiry date. By the time it was published in 2007, those examples were fossils and of little use. Another book I co-authored, Electronic Health Horizons (it must still be available somewhere), was written between May and July 2000 and published by September of that year. That is more like the timeframe in which books need to be produced in the electronic age.

The Language Game gives a number of examples of the limitations of GPT-3 which, even by the time of publication, were behind the times. The limitations it quotes are now very well known and, crucially, fixed:

1: Q: Who was president of the United States in 1600?

A: Queen Elizabeth I was president of the United States in 1600.

Or,

2: Q: How many eyes does a spider have?

A: A spider has 8 eyes.

Q: How many eyes does my foot have?

A: Your foot has two eyes.

The authors also reflect on ChatGPT’s inability to make the link between the six-word story, “For sale: baby shoes, never worn”, and the heavy emotional weight it implies.

The point the book raises is not only that ChatGPT does not understand the difference between sense and nonsense, but that AI and Large Language Models (LLMs) do not understand anything at all. For example, it states that LLMs have no more understanding of language than “a jukebox has of the songs it is playing”, particularly when generating translated texts. While I am an expert applied linguist, I am not an expert in this area, so I cannot comment authoritatively on what LLMs do or do not understand.

I have written elsewhere about our love for and commitment to usability testing. But it is not a passion for technology that drives me; it is the human use of technologies that fascinates me. However hard I try, I cannot become excited, or even remotely enthused, about any technology. To me, technologies are just things you use to get stuff done. So, when I approach any kind of human-factors input on a digital tool or device, I always see it from the perspective of the naïve user. Personally, I am never really an early adopter of any digital tool.

When it comes to AI and LLMs, the examples included in The Language Game pre-coloured my view. I was already wary of LSPs’ dependence on translation memory tools, and of how over-reliance on them and blind deployment of them can cause havoc during testing phases such as cognitive debriefing. So, when the first ChatGPT model was unveiled for general use, I was more interested in what press articles and people were saying about it than in using it myself. Until I did. And my working and personal life has never been the same since, for the better. ‘For the better’ does not even come close: it has been more like a personal revolution.

The phrase “For sale: baby shoes, never worn” has long been attributed to Ernest Hemingway. It is a six-word story, and it tasks the reader with piecing together the context, the beginning and the ending. It turns out that Hemingway’s authorship is a myth. But there is a similar micro-story by the Guatemalan writer Augusto Monterroso, called “El Dinosaurio” (The Dinosaur), published in 1959. It goes like this:

“Cuando despertó, el dinosaurio todavía estaba allí.”

(When he awoke, the dinosaur was still there).

That is the entire story. Its premise and open-ended nature invite the reader to make up their own narrative. Not only might each person arrive at a different storyline, but each time the line is mentally explored, a different one might emerge: a different beginning, middle and ending, unique to a personal thought at a given time. There is a lot to unpack here and a lot that ought not to be taken for granted. Is the dinosaur real, or part of a surreal dream? Does it represent trauma? Does its being ‘still there’ represent some kind of failure? Who says the dinosaur has to be a malign presence, causing distress? It could be the protagonist’s companion, like Puff the Magic Dragon, its presence bringing comfort. And who is the protagonist? An adult? A child? The standard translation into English contains an inherent bias: it casts the protagonist as male: ‘when he awoke’. The Spanish does not say that at all: it is ungendered. This is an example of cultural coating, the unintended shade of meaning that a word might acquire during the translation process.

The point is that the effect AI has is very much like the effect this micro-story has on people’s imaginations. The story can be completed according to the individual’s fears, biases, memories and worldview. It holds up a mirror to the reader. In much the same way, everybody is using a tool like ChatGPT, and myriad other AI-enabled apps, for their own purposes, in their own way and for their own tasks. Each user is developing their own narrative. One person might be using it to write poetry, another to learn languages, another as a business development colleague, the tool literally taking the place of a human, and so on.

Whereas Monterroso’s story is minimalist, inviting imagination through absence, AI is maximalist, adapting endlessly to a user’s input. It was estimated that, as of April 2025, one fifth of humanity was using ChatGPT. If their experience is anything like mine, then there are over a billion personal revolutions unfolding right now. How exciting is that?

This next set of articles explores what we at GRC, and I personally, use AI tools for.

Thank you for reading,


Mark Gibson

Leeds, United Kingdom, April 2025.


Originally written in English.