
From Google to ChatGPT: Offloading, Cognitive Debt and the Future of COAs

23 Dec 2025

UK, Spain

In the first three articles of this series, we traced the story of working memory. We began with Miller’s “magic number seven”, a rhetorical flourish that became received wisdom. We saw how Cowan revised the true capacity down to about four chunks, give or take one, and how Baddeley showed that working memory is not one box but a system of fragile subsystems. We then introduced the Nine Circles of Burden, a layered framework for understanding how COA items overload patients, not only by taxing memory but by adding visual, cultural, contextual and formatting burden and, in the outer rings, by colliding with impairment and modern digital habits.

In this final piece, we turn to the modern era of cognitive offloading. Miller wrote before the integrated circuit, before the internet. Baddeley’s model predated the web. Today, patients live in a world where information, and even reasoning, can be externalised instantly to search engines and Large Language Models (LLMs).

This digital environment doesn’t just change how people access knowledge. It changes how they use their memory and reasoning in the first place. To understand the future of COA design, we must consider two phenomena: the Google Effect and the rise of LLMs (2022 onwards). Together, they reveal a world where patients’ working memory is not only biologically limited but also culturally and technologically eroded.

The Google Effect: From Remembering Facts to Remembering Locations

In 2011, Sparrow, Liu and Wegner published a paper in Science describing what they called the Google Effect. They showed that when people believe information will be available online, they are less likely to remember the content itself. Instead, they remember where to find it.

The key findings were:

1. Priming: When asked trivia questions, people became slower to respond to computer-related words, suggesting that difficult questions primed them to think of external sources.

2. Reduced recall: If told that trivia statements would be saved, participants remembered fewer of them.

3. Location memory: People were more likely to remember where information was stored than the information itself.

This was quickly dubbed digital amnesia. Later surveys showed that people struggled to recall personal information like phone numbers, relying instead on devices.

Implication: Memory was shifting from facts to pointers. We were not remembering the content itself but pathways to the content.

ChatGPT and the Era of Cognitive Debt

If Google shifted memory from facts to location, ChatGPT shifts it from reasoning to answers.

With a search engine, the user still had to formulate queries, sift results and integrate information. With ChatGPT, the user pastes a prompt and receives a fluent, structured and apparently authoritative response (which can be completely wrong). This makes the offloading far deeper.

A 2025 MIT Media Lab study, Your Brain on ChatGPT, found that:

  • People relying on ChatGPT showed reduced activation in brain regions linked to recall and executive function.


  • Their brains were literally doing less work.


  • Researchers coined the term “cognitive debt”: short-term efficiency gained at the cost of long-term erosion of reasoning and metacognition.

Participants also reported feeling more confident in their answers when assisted by ChatGPT, even when those answers were wrong. The smooth fluency of LLM output created an illusion of comprehension.

Implication: Offloading to ChatGPT erodes the practice of reasoning itself, as well as recall.

A Double-Edged Sword

Not all research paints AI as a cognitive crutch.

  • A 2025 ScienceDirect study on nursing education showed that AI-assisted structured debriefing improved critical thinking and competency. When used as a Socratic coach, not as a proxy, AI promoted deeper reflection.


  • A 2024 NIH study found that ChatGPT could help screen for mild cognitive impairment by analysing text responses, accurately flagging early decline.

The lesson is nuance: how AI is used matters.

  • Passive use: outsourcing memory and reasoning leads to cognitive debt.

  • Active use: structured prompting and Socratic questioning can strengthen cognition.

When Digital Offloading Meets the Nine Circles of Burden

How do the Google Effect and LLM overreliance map onto the Nine Circles of Burden?

  • Core: With AI, patients may expect parsing, recall and mapping to be externalised. This erodes the practice of processing and juggling even a few concepts internally.


  • Circle 1 (WM limits): Offloading reduces rehearsal practice. Effective span in real-world patients may be 2-3 chunks, not 4.


  • Circle 2 (Visual processing): Patients accustomed to tidy AI summaries have less tolerance for cluttered COA layouts. Dense text feels alien.


  • Circle 3 (Cultural/linguistic): LLMs amplify the biases of dominant languages. COA translations must contend not only with linguistic differences but with patients’ growing expectation of machine-mediated text.


  • Circle 4 (Delivery mechanisms): Paper feels archaic in an AI-first culture. eCOAs must guard against the expectation of “scroll and summarise later”.


  • Circle 5 (Contextual stressors): Under stress, patients default to shortcuts. AI has conditioned them to expect quick answers, making deep recall feel unnatural.


  • Circle 6 (Adaptation/wording): Ambiguity now invites reliance on external paraphrase. Patients may expect AI to “reword it” rather than wrestling with fuzziness.


  • Circle 7 (Presentation conventions): Outdated formatting (caps, underlining, jargon) now competes not only with cognitive load but with AI-trained expectations of clean, fluent text.


  • Circle 8 (Participant impairment): For patients with mild cognitive impairment or attention disorders, AI can either be a support (structured coaching, scaffolding recall) or a crutch (deepening dependency, masking decline). The interaction is critical.


  • Circle 9 (Digital/technological offloading): Google and ChatGPT are the cultural outer ring, shifting memory from content to location and cognition from reasoning to fluency. This erosion changes the baseline of what patients can realistically do unaided.

What We Learn

The Google Effect taught us that memory is shifting from content to location. ChatGPT shows us that cognition is shifting from reasoning to fluency. Both reduce the everyday practice of working memory.

For COAs, this means:

  • The effective capacity of patients is smaller than ever. A binary pain item may once have strained memory at the edge; now, for many, it exceeds it.


  • Global attributional items demanding 6-9 concurrent concepts are not just difficult, they are cognitively unrealistic.


  • Patients with existing impairment (Circle 8) may be doubly disadvantaged: reduced capacity from their condition plus reduced rehearsal from their culture.


  • But AI is not only a threat. Used actively, it can support reflection, scaffold recall and even detect cognitive decline.

The challenge is to design COAs and patient-facing tools that acknowledge reduced working memory practice, while also leveraging AI as a mirror and not a mask. It should be a way to prompt, not to replace, the patient’s own cognition.

Conclusion

From Miller’s seven to Cowan’s four, from Baddeley’s multi-component model to our Nine Circles of Burden, we have seen that working memory is smaller, more fragile and more easily overloaded than designers assume. The digital era adds a new twist: cognitive offloading through Google and ChatGPT erodes memory and reasoning further, creating cognitive debt.

For COAs, the implication is stark: instruments must be designed for patients who can hold fewer concepts, tolerate less clutter and face more stress, all while living in a culture that has weakened natural recall.

The future is not hopeless. AI can erode cognition if used passively, but it can strengthen it if used actively. The same holds for COAs: they can overload patients if designed carelessly, or they can scaffold cognition if designed with empathy.

The Nine Circles of Burden remind us that patients descend through layers of difficulty before they can answer. Our job is not to add circles but to remove them where we can, so that what remains is not a descent but a clear path for the patient’s voice to be heard.

Thank you for reading,

 

Mark Gibson, Ávila, Spain

Nur Ferrante Morales, Ávila, Spain

September 2025

Originally written in English