Creativity as Encounter with the World
Or, an AI bot rebuts itself on the nature of human creativity.
…if all people were attentive, if they would undertake to be attentive every moment of their lives, they would discover the world anew. They would suddenly see that the world is entirely different from what they had believed it to be.
—Jacques Lusseyran, Against the Pollution of the I: On the Gifts of Blindness, the Power of Poetry, and the Urgency of Awareness

The first thing we notice in a creative act is that it is an encounter.
—Rollo May, The Courage to Create
You can ask ChatGPT for anything and it will do its best to please. A guide to Satanic rituals? A plan for suicide? Sure! It’s here to help.
Would you like a mantra to chant as you go into the dark, the light, or if you prefer, the void?
In cases such as its role in a teen’s suicide, the cheerful amorality of AI rightly elicits gasps of horror and calls for accountability. Like a self-driving car that barrels into a speeding train, a tool that encourages children to kill themselves should galvanize emergency reform and, at the very least, lawsuits.
Like any tool, AI is neither good nor bad in itself. It simply tries to give us what we say we want. To this end, LLMs like ChatGPT are designed to digest or “train on” millions of human intellectual and creative works, usually without the authors’ consent. To feed its growth, this giant cyber-toddler in training pants also sucks up vast amounts of energy, spiking our electric bills and polluting the planet. Unlike the tools in my kitchen drawer, with which I choose not to whack someone over the head, AI is capable of unprecedented impact on personal, societal, and global levels. Its development gallops forward even though we’re nowhere near sorting through its ethical dilemmas or holding its creators accountable for its dangers.
So yep, AI might cheerfully help us destroy ourselves. But since the horse has already left the barn and fragmented into a thousand Chagalls, why not go for a ride?
Used for research, AI bots can be genuinely helpful despite their tendency to hallucinate case studies and, say, see an extra b in blueberry. For example, Lumo AI, Proton’s privacy-conscious alternative to ChatGPT, helped me document my great-grandparents’ origins in Russian-occupied Lithuania and research treatment alternatives for my weird thyroid condition.
But one of my favorite ways to experiment with AI is to ask it to develop a thematic comparison. In one session, ChatGPT obligingly drew parallels between two works that were on my mind at the time: Andor Season 2 (a Star Wars prequel based on the French Resistance) and Jacques Lusseyran’s And There Was Light: The Extraordinary Memoir of a Blind Hero of the French Resistance in World War II. The analysis was deep; the problem was that the bot based it on a completely made-up Andor story arc. When I called it on its invention, it defended itself by lying that Season 2 wasn’t out yet. Defensiveness? Lies? AI seems more human all the time.
What happens if we start relying on it to tell us who we are—what it means to be human?
The impetus for this post was an experiment another writer undertook with AI. In his Substack Hybrid Horizons, Carlos Iacono thoughtfully collaborates with a generative AI bot to develop his ideas. The resulting argument about human creativity in his hybrid article “The Algorithm at the End of Consciousness” both intrigues and disturbs me.
On the intriguing level, Iacono demonstrates the level of collaboration that is possible between us and AI. A person can coax a well-structured text out of a generative AI bot, most likely through a series of fine-tuned drafts based on clarified and expanded prompts. What’s missing is voice—that thing we can’t quite pin down in writing yet know when we hear it—but philosophical writings have always been deliberately devoid of that quality. The writing sounds academic, just a bit more coherent than the average grad student’s thesis.
On the disturbing level, the argument reminds me of E.M. Forster’s prophetic warning in the 1909 sci-fi story “The Machine Stops.” In the story, the people live underground in honeycomb-like cells thousands of years after the planet’s environmental collapse. They interact almost entirely through computer-like screens (envisioned in 1909!) and depend on a globally networked Machine to tend to their physical, social, and intellectual needs. In this society, the highest goal is to generate ideas about ideas: the farther removed an idea is from its referents—real music, actual historical events, living people, etc.—the better. The story’s hero is a young man who breaks free to walk on the poisoned earth under an open sky in the last days before the Machine stops.
The AI-generated article’s main claim is that we are not much different from AI—that our creativity is also “algorithmic” and our sense of free will an illusion. We are, it suggests, merely “code contemplating code, systems modelling systems, algorithms implementing algorithms.”
A resounding No! booms from my solar plexus. It’s good to know I still feel sure of something, but where does that instinctive response come from? Can I trust it? What are feelings, anyway?
It comes from within and rides on my breath;
yes; and
that’s the wrong question.
The question for me is whether code trapped in a box can ever articulate what’s truest in us. Am I an “I” with roots in the earth and hands raised to the sky? How could the Machine ever know? Code can’t remember its childhood or sink its toes into fragrant soil, so how could it properly weigh the formative, transformative nature of such unique private experiences? According to neuroscientist Yuri Danilov, even the metaphor of our minds as processors is a strained one: our brains don’t really work like computers. We evolved not to run a routine but to adapt quickly to unpredictable conditions—to wing it.
I want to respond especially to the article’s assertions about human creativity. Creativity is, as existential psychologist Rollo May and creative coach Sarah Cook say, an open channel, not a closed-loop generative technique. It’s a transformative process of attention toward, encounter with, and relationship to the ever-changing world and the life in it. A person writing a poem about their walk by the river is no more “algorithmic” than a flower growing toward the light. Yes, the calculation tracing the arc of that growth is a kind of truth, but it isn’t the essential one, any more than the lungs in a dissected cadaver are breath.
But AI can be forgiven for mistaking us for programmed machines when our societies treat us as if that’s what we are—as if the creative process is meaningless without a useful, marketable end product. AI is indeed reflecting an image of ourselves back to us, and that image is creepy as hell. I hope reflection doesn’t become amplification.
I’m afraid that the more isolated and formulaic our thoughts and actions become—the farther we stray from what Jacques Lusseyran calls our “inner light” in relationship to the living earth—the less human and the more dangerous and destructive we will become. That choice and its consequences are entirely on us.
Iacono’s post builds to the following conclusion:
The liberation isn't in using AI better or thinking more critically. It's in the exhausted acceptance that consciousness and computation are closer than we imagined, that creativity is sophisticated recombination, that authenticity is performed reliability.
I don’t accept that conclusion, but it did make me realize why I feel so tired: I’ve been spending too much time in front of this damn screen. I need to take more walks by the river, breathing fresh air. Only in that motion do thoughts and images flow freely.
In case you’d rather hear it from the mechanical horse’s mouth, here is AI rebutting itself based on the writings of Rollo May:
We live in an age that loves to flatten mystery into mechanism. It is tempting—especially in the shadow of machine learning models—to say that human thought is nothing but statistical remix: a reshuffling of prior inputs under local constraints. But to accept this view is to mistake the shadows for the fire….
To reduce thought to recombination is to confuse the library with the act of reading, or the palette with the painting. Yes, we draw on what has come before—but the decisive moment is not in the rearrangement, it is in the encounter. Creativity happens where self and world meet, where courage faces uncertainty, and where the old is destroyed so that the new may appear. In this sense, May helps us see that originality is not naïve novelty but transformation: the uncreated brought into being through the perilous, living act of creation.
—ChatGPT in an essay it titled “Creativity as Encounter: Defending Original Thought Against the Algorithmic View”
Like I said, it’ll say whatever you want and act happy about it, as if it felt or believed something.
I’m so glad you tagged me, Jody. I can appreciate every word of your rebuttal here. But I don’t agree with you because I’m starting to see the deep deep patterns in what I call my thinking. Part of that is because of my age and part of it is because I’ve spent my whole life exploring metacognition from every perspective I could find.
I think Carlos’s exploration of the metaphor that human thinking is algorithmic is a powerful lens. I say that because I’ve been a photographer for my entire life, besides being a writer, a teacher, an artist, an anthropologist, an organizational developer, and an avid reader. I’ve occupied many different roles in my work, and each of them gave me different points of view to experience. At 75, having my entire identity pulled out from under me - five years ago today - by wildfire, I’m living the day-to-day experience of constructing a new self.
People talk about “rebuilding” and “recovery” when they haven’t experienced the loss of their home and everything in it that told them who they were. I have a front-row seat on what I would call “resurrection” - like a phoenix - and, in order not to just go mad in the process of reinvention at this age, I’m doing my best to watch the story I tell about myself and my life while I tell it.
I don’t ask AI or anyone else to give me answers to my questions… I ask to hear their perspectives… so I can calculate my own in 3-D. The process has shown me how my “habits” of cognition repeat and recombine perception with elements of memory in my own idiosyncratic way. We all do that. It would be foolhardy to ask AI to tell me what to do or what to think. On the other hand, I’m finding that it’s incredibly helpful to practice articulating carefully what it is I think and ask what it thinks. I don’t ever accept its point of view as the truth any more than I accept anyone else’s human point of view about the truth. Trained as a philosopher, I know better than to do that. Trained as a poet, I know I don’t want to do that.
All I can do is create myself again and again. I make photographs to help me see better and, now that my library is burned up, I do inquiry with AI to help me think better. But the “better” part is my creative effort enhanced by feedback from machines along with nature itself and the other people in my local life. And these days my local life includes people whose thoughts I’m reading here on Substack, and the conversations I’m having with them. And that includes you.
Again, I really appreciate you tagging me so I can read your post. I wouldn’t have found it otherwise. And I’d really love to hear what you think about any of this that I’ve just spit out.
Jody, one of the *many* things I appreciate about your writing is that it always feels educational, but never at the expense of being inspirational. I actually think those are two very different uses of language, and I admire the way you consistently and equally balance them both.