The Absence of Dialectic: This is why LLMs do not really ‘think’.
24 October 2025
Mark Gibson, UK
Health Communication Specialist
Long before AI came into being, most countries’ education systems placed a premium on teaching students to ‘think’. This has its roots in antiquity, from Sumer to Rome, and was later formalised in good, old-school French education, namely the very radical notion of setting opposing ideas against each other and discussing the merits and drawbacks of each.
This model of thèse, antithèse, synthèse (thesis, antithesis, synthesis) is still a central intellectual exercise in French education. You learn to take a position, dismantle it, then rebuild something wiser. Imagine that: bringing opposing ideas to the table and talking them through. What a shocker.
This structure lives on across many education systems, such as the Anglo-American model of compare, contrast, evaluate. Unlike much of what passes for ‘discourse’ today, the point is never to recite facts but to show understanding of them, to demonstrate that ideas develop through tension and change – and sometimes compromise.
This kind of structured struggle is what sharpens thought. It forces us to think clearly and teaches nuance. Crucially, it is precisely not how LLMs think.
What distinguishes human reasoning from AI-generated content is not so much emotion or originality, on which much of the commentary around AI has focused, but struggle. At the heart of critical thinking is the dialectical method: thesis, antithesis and synthesis. This ancient structure mirrors the way we discover truths and transform our understanding. LLMs do not naturally do this.
The Death of Expertise
However, the absence of dialectic in AI-generated text is not just a quirk of design. In fact, it is not solely because of AI at all. It reflects, and even accelerates, a broader cultural drift that long predates AI. For decades now, critical thinking has been in decline, eroded by our mass embrace of infobesity, the “Google effect” (our growing reliance on information retrieval over deeper understanding) and the tribal dynamics of social media.
As Tom Nichols argues in The Death of Expertise, we have entered an age where lived experience and internet search results are treated as equivalent to a lifetime’s dedication to a given area of expertise. I remember watching a podcaster called Destiny, in March 2024, stating that he had spent the past four months learning “everything” about the Gaza-Israel war. He actually said “everything”. In four months. I would have thought that an entire career’s dedication would only start to yield some insights, just as with any subject matter.
This cultural shift not only flattened, and even stigmatised, intellectual hierarchies but also poisoned discourse. Rather than engage in shared inquiry, we prefer polarisation and a culture of certainty over subtlety. This played out in sharp relief during the Covid-19 pandemic: the expectation of certainty in a situation of uncertainty, met by a method of communicating science that errs on the side of uncertainty. As a side note, this also showed how unused people were to scientific communication. It is no wonder there has been such a massive anti-science backlash: professional scientific communicators failed so badly from the outset. But I digress.
The No-Man’s Land of Rumi’s Field
Back to the cultural shift. Public debate became less about exploring truth and more about defending identity. Social media accelerated this process: arguments lobbed like grenades – prefabricated takes, loaded with selectively chosen ‘facts’. There is little appetite for synthesis because synthesis requires vulnerability. It entails the possibility of being changed.
As Rumi would have it, we do not meet in the field beyond right and wrong; we shout across it instead. We throw metaphorical artillery over it. Rumi’s field is now no-man’s land.
So, along come LLMs. This is what they have been trained on, and this is what they produce: one-sided stances. They do not do this because they think, but because they mimic the appearance of thought. They manufacture the tone of expertise, but they cannot contribute any insight.
We were already in an age of experts everywhere. Now, with AI, anyone can sound like one. LLMs offer fluency without substance and conviction without consequence. This is synthetic expertise. It is manufactured. It is content that reads like informed analysis, but with the risk, contradiction and intellectual labour that genuine thinking requires subtracted from it.
What is Dialectical Thinking?
Dialectic is the art of reasoning through opposition. You begin with a thesis, i.e. a stance or a position, and you then confront it with its opposite, the antithesis. Then, you arrive at a deeper resolution, the synthesis. An example of this is:
· Thesis: Freedom is essential.
· Antithesis: But absolute freedom can hurt others.
· Synthesis: Therefore, freedom must be balanced by responsibility.
This is what critical thought is.
Static Balance
Static balance is the default of LLMs like ChatGPT. If you ask it about a complex topic, you are going to get a template, such as:
“X has both pros and cons. On one hand…. On the other…. Ultimately, it depends on…”
This is static argumentation. It challenges nothing. It transforms nothing. It presents no risk, no intellectual growth. It is a waste of electricity, the equivalent of jogging on the spot.
This happens because of the probabilistic language modelling behind a tool like ChatGPT. All it does is predict what is most statistically likely, not what is most intellectually daring.
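To caricature the point with a toy sketch: the model below is an invented bigram counter, nothing like a real LLM in scale, but the greedy “pick the likeliest next word” step is the same in spirit. The miniature corpus is made up for illustration.

```python
from collections import Counter, defaultdict

# An invented mini-corpus of the "pros and cons" boilerplate style.
corpus = (
    "x has pros and cons . on one hand x helps . "
    "on the other hand x hurts . ultimately it depends"
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the single most frequent continuation -- the 'safe' choice."""
    return follows[word].most_common(1)[0][0]

# Greedy generation: each step takes the most probable word,
# never the most daring one.
word, out = "on", ["on"]
for _ in range(5):
    word = most_likely_next(word)
    out.append(word)
print(" ".join(out))  # -> on one hand x has pros
```

Feed it hedging boilerplate and it reproduces hedging boilerplate; nothing in the mechanism rewards a continuation that challenges what came before.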
Also, dialectics inherently require conflict, while LLMs have safety protocols that avoid it. They default to the template thinking of safe structures: compare, list, conclude.
Why do we need dialectics?
Real thinking needs the reader to be engaged in the process of thinking. Real thinking stimulates discovery and it mimics how the human mind evolves under pressure.
All tools like ChatGPT do, with generated articles, is present rhetorical summaries rather than provoke thought. This explains why so many online articles feel exactly like this: one position put forward, no synthesis. It just becomes visual noise.
Not Transformative Thought
Dialectics require struggle and transformation. Yet LLMs do not think in motion. They think in equilibrium. They lack the arc of transformation that defines real understanding – for now.
Even with more sophisticated prompts, where you specifically ask it to do dialectical thinking, you have to think of the thesis, antithesis and synthesis yourself. Then, it can give you a structure, but the struggle between the ideas is not there. Why bother even using it to generate any writing, if it cannot even handle this basic line of argumentation?
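To make that concrete, such a “dialectical” prompt ends up being scaffolding you assemble yourself: you supply the thesis and antithesis, and the model is merely handed a structure to fill in. The wording below is purely illustrative, not a recommended recipe.

```python
def dialectical_prompt(thesis: str, antithesis: str) -> str:
    """Build a prompt where the human has already done the dialectical work:
    both opposing positions arrive pre-formed; only the filling-in is left."""
    return (
        f"Thesis: {thesis}\n"
        f"Antithesis: {antithesis}\n"
        "Task: argue each position against the other in turn, "
        "then propose a synthesis that neither side would fully accept."
    )

print(dialectical_prompt(
    "Freedom is essential.",
    "Absolute freedom can hurt others.",
))
```

Notice what is missing: the model never generated the tension between the two positions; it was given both sides up front and asked to decorate the space between them.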
Thank you for reading,
Mark Gibson
Leeds, United Kingdom, May 2025
Originally written in English.
