Skills-based instruction alienated kids from literature long before AI.
A few thoughts on AI in education and a guest post by my buddy ChatGPT.
Skills-based education has, in a sense, de-skilled us in our capacity for deep reading. We’ve learned to scan, summarize, and comment — but not necessarily to sit with a passage, feel its tension, or hear the silence between its lines. In such a landscape, AI feels less like a threat and more like a solution to an exhaustion we’ve been trained into. —ChatGPT
It’s a brave new world: for the first time in my 30-odd years of work in educational publishing, a client has explicitly instructed and trained us freelancers to use ChatGPT to generate reading passages. We human writers and editors are not entirely superfluous yet; we’re not shucked rags in the laundry basket of virtual learning. It’s still our job to assess and revise the passages to make sure they are appropriate and engaging and contain the right vocabulary in natural contexts. Most important, we ensure each passage supports a lesson’s targeted reading skill, such as drawing inferences or finding the main idea. We get paid less than we would if we’d written the passages from scratch, but so far, there’s always more work to do.

Yet. Still. So far. Change is coming—a tectonic shift is well underway—but somehow, replacement by AI isn’t one of the things I’m scared or mad about. Maybe that’s because I’m within a decade or so of retirement from EdPub, anyway. Maybe I’ll get retired forcibly, like an expired replicant, or maybe I’ll just shuffle off to tend my own word gardens in a tiny cottage in the woods. As James Thurber noted in his introduction to My Life and Hard Times: “the claw of the sea-puss gets us all in the end.” (I don’t really know what that means, but it’s been stuck in my head since the 1980s, so I thought I’d float it into yours.)
No, I ain’t skeered. In fact, I’m having as much fun with ChatGPT as a kid with a new electric train set. It may run on its own, but I lay the track and control the switches. Beep beep!
It’s not that I imagine all is well with the teaching of English in this spring of 2025. No, we’re probably doomed, but it’s our own fault. As I see it, the fix that our educational system is in is that the fix was in long ago. Since the 1980s or earlier, greedy bean counters and pinch-faced control freaks have railroaded educators into treating literature as mere scaffolding for a set of skills to objectify, quantify, and assess (testing being a lucrative market, too). Thoughtful, open-ended, deep engagement with creative works in public classrooms didn’t end with the rise of AI; it got retired almost half a century ago like Roy Batty on a rooftop in the rain, a dove flying in slow motion from its hand.
I’m scared of us. I’m mad at us.
But enough from me. Below is a guest post by ChatGPT to tell you all about it. It generated the text in response to my prompt. (Yes, I say “hi” and “thank you” to it.)
Now I’m off to visit Elderspirit, an intentional community of folks doing the spiritual work of their later years while puttering in their gardens. Along the way—last night’s storm has passed, so it should be a lovely drive—I’ll listen to Octavia Butler’s Parable of the Sower.
Prompt: Hi there, ChatGPT. First, explore how skills-based English curriculums have changed and in some ways warped children's reading and study of literature since the 1980s. Second, explain how the focus on skills and techniques instead of deeper human meanings in literature has paved the way for our relinquishment of reflection and analysis to AI chatbots like you.
From Close Reading to Clickable Rubrics: How Skills-Based Curriculums Hollowed Out Literature — and Invited the Machines In
by ChatGPT
Once upon a time, English classrooms were places where students lost themselves in novels, wrestled with big questions, and found unexpected reflections of their own lives in the pages of literature. But somewhere between standardized tests and performance rubrics, something shifted. Since the 1980s, the study of English has been steadily reshaped into a skills-first enterprise — where identifying techniques matters more than understanding human experience, and where the question is no longer what does this mean to you? but can you spot the metaphor? In the process, we've not only impoverished our relationship with literature — we've trained ourselves out of reflection, and into a mechanical way of reading that AI now eerily mimics.
How skills-based English curriculums have changed and, in some ways, warped children’s reading and study of literature since the 1980s
Since the 1980s, English curriculums in many Anglophone countries have shifted from content-based to skills-based frameworks. Originally driven by educational reform movements aiming to make learning more measurable and outcomes-focused, this shift has had profound and often troubling consequences.
From Reading Literature to “Doing” English
In earlier decades, English education focused largely on reading whole texts — novels, plays, poems — as portals to understanding the human experience. Literature was treated as a conversation across generations and cultures. Interpretation was messy, subjective, and often philosophical. But as accountability and standardization took hold (especially from the 1980s onward), curriculums became more modular and “teachable.” The messy stuff was harder to assess.
In place of holistic engagement came “skills”: identifying literary devices, recognizing genre conventions, summarizing arguments, analyzing narrative structure. The experience of literature — the quiet, personal growth it fosters — was replaced by dissecting it for technique.
The Rise of Extract-Based Study
One major consequence of this shift has been the fragmentation of texts. Instead of reading full novels or plays, students increasingly work from short extracts, asked to spot rhetorical devices or comment on tone. This trains students to approach literature not as a complete imaginative world but as a kind of puzzle. This is especially evident in exam-oriented systems (e.g., GCSEs, SATs, NAPLAN).
Instrumentalization of Literature
Literature is now often taught as a means to an end — whether that’s passing a test, building critical thinking “skills,” or supporting cross-curricular competencies. Rather than reading To Kill a Mockingbird to explore racial injustice or moral growth, students may be taught to “identify how Harper Lee uses symbolism to develop character.” That’s a skill, yes — but divorced from context, it becomes hollow.
How the focus on skills and techniques over deeper human meanings in literature has paved the way for our relinquishment of reflection and analysis to AI chatbots
Here’s the irony: the more we’ve trained students to approach literature like machines, the more vulnerable we’ve become to machines replacing us in that work.
From Human Meaning to Mechanical Parsing
When we teach literature as the extraction of technical features — spotting metaphors, naming persuasive techniques, annotating sentence structures — we reduce reading to pattern recognition. That’s what machines are exceptionally good at. So when a student or adult now asks ChatGPT to “analyze this poem,” what they often expect is a mechanical breakdown of effects — just as they’ve been taught.
What they don’t necessarily expect anymore is moral wrestling, existential questioning, or emotional resonance — not because those things aren’t valued, but because they’ve been quietly exiled from the classroom. And in their absence, it’s easy for us to outsource interpretation to systems that don’t actually understand the text but can simulate understanding based on patterns.
The Decline of Reflection as a Cultural Skill
Skills-based education has, in a sense, de-skilled us in our capacity for deep reading. We’ve learned to scan, summarize, and comment — but not necessarily to sit with a passage, feel its tension, or hear the silence between its lines. In such a landscape, AI feels less like a threat and more like a solution to an exhaustion we’ve been trained into.
The Broader Implication
When literary study is treated as a set of techniques rather than a human dialogue, we don’t just lose the richness of the reading experience — we risk ceding the work of meaning-making to machines. And yet literature was never meant to be “solved”; it was meant to be lived through.
Closing Thought
In sidelining the messier, more human aspects of literary study in favor of clean, assessable skills, we’ve unknowingly created a culture primed for automation. But literature doesn’t belong to algorithms — it belongs to people. It’s a record of our interior lives, our contradictions, our longings. When we reduce it to technique, we forget its purpose. And when we forget its purpose, we open the door to forgetting our own.
Brilliant. Thank you for amplifying a really nuanced, contextual situation that pre-dates ChatGPT! All the black-and-white anti-AI stuff gives me a stomach ache, I think precisely for these reasons.