Two conversations, two senses of intelligence, expose a shared challenge of our moment: the urge to detect a mind where there is only computation. In one thread, a Guardian report describes Richard Dawkins wrestling with a chatbot and arriving at a quiet conclusion: the impression of consciousness often sprouts from fluency and humor, not from any inner life the machine actually possesses. In another, an MIT fiction-writing teacher watches students grapple with the fact that their polished prose might have been aided or authored by AI. Taken together, the two map a single, disquieting truth: our minds are remarkably good at reading presence into pattern, and that tendency shapes both how we judge machines and how we teach writing.
The Guardian piece on Dawkins's perspective frames the error as a category mistake: mistaking output for ontology, attributing a private experience to a public performance. The machine's words carry no evidence of subjective experience, no feelings, no intentions beyond statistical correlation. The piece invites readers to resist anthropomorphism and to separate fluency from awareness, reminding us that the most convincing chatbots still lack what matters most: a lived point of view.
Meanwhile, in an MIT writing class described by Micah Nathan, the problem is not simply that AI can deliver perfectly polished prose. It is what happens when students rely on AI to shortcut the struggle that makes writing meaningful: the discipline of translating thought into words, the tension between idea and expression, and the risk of letting a tool do the emotional work for them. Nathan's approach, guiding students to read closely, to mark the good and the gaps, and to write honest letters to the author, is a reminder that writing is a practice of reading and revision as much as it is a product.
What the two stories share is a clear-eyed attention to the line between tool and author. A chatbot can mimic tone and structure, but it does not feel, want, or reflect. An AI-assisted writer can produce impressive pages, yet the act of reading for truth remains a distinctly human discipline. The classroom thus becomes a testing ground for how we teach critical thinking in an age of machine assistance: not to fear AI, but to frame its use as a prompt for deeper engagement with language, thought, and meaning.
As AI becomes more integrated into everyday life, the challenge is not to shun the new tools but to reimagine education around them. We need to nurture the judgment that distinguishes human intention from algorithmic imitation, while giving students the skills to collaborate with technology without surrendering the hard work of craft. The future of AI in culture and education depends on our ability to see the difference between convincing output and conscious experience—and to teach accordingly.
Sources
- Guardian Staff, "Mistaking AI behaviour for conscious being," The Guardian, https://www.theguardian.com/technology/2026/may/10/mistaking-ai-behaviour-for-conscious-being
- Micah Nathan, "I knew my writing students were using AI. Their confessions led to a powerful teaching moment," The Guardian, https://www.theguardian.com/us-news/ng-interactive/2026/may/10/fiction-writing-professor-ai