‘O rocks … Tell us in plain words.’ Molly Bloom, Ulysses
The first question you get asked at a party is whether or not you’ve worked with any celebrities. There is usually a kind of malicious delight in the speculation that a particular public figure might have used a ghostwriter for their memoirs, as if this indicates a personal failing on their part. So you cheerfully point out that there is actually a lot of work involved, that it’s not cheating, that ghostwriters have existed as long as the written word, and so on. They nod and they smile and they tell you how that must all be very interesting. You talk about interviews, revisions, field research; you mention archives and the British Library. Then there is usually a polite pause. But sooner or later comes that inevitable next question: can’t you just get AI to do it?
The simple answer is ‘No’, but explaining why tends to be less interesting than dishing out dirt on the rich and almost famous. It doesn’t really make for good dinner party conversation; things like ‘getting to know a person’ don’t sound especially juicy.
Posturing tech founders pushing LLMs may have got away with it until now, hoodwinking humanity into buying their product (or, more accurately, into giving them our stuff so their LLMs can ‘learn’), but the scribal arts that began in Sumer some 5,000 years ago don’t appear to be going anywhere. In fact, the lack of quality and understanding in what the machine produces has sent a waterfall of work toward editors, in all manner of ‘make it better’ requests: ‘I used AI but my friend says it doesn’t sound nice,’ ‘people say there is something “missing”,’ etc.
Indeed, there are ways in which AI can be incorporated into the writing process without costing a hired scribe sleepless nights. Grammarly will not only check your spelling and grammar, but highlight clunky repetition, jarring syntax or an excessive reliance on the passive voice. It can resolve the common issue of using too many adverbs, or point out inconsistencies between okay and OK. LLMs will generate fresh ideas from an initial prompt, suggesting new possibilities, and can research relevant or competing material, to a degree. There are important limits to this, of course; some of these models have become notorious for fabricated references and outright plagiarism, as there is only a limited amount of relevant archival material for them to ‘learn’ from. There is also the constant battle with the algorithm’s own idiosyncratic syntactic hang-ups. Like any other tool, you need to know how to use it; like any tool, it has limits.
The earliest Sumerian records were all written down (or rather chiselled) by an army of anonymous scribes. In ancient Rome, a well-trained amanuensis was often a source of pride for the famous orators who relied on them. However, just like at the dinner parties of today, there was frequent speculation as to who was relying too heavily on the work of others. In China, during the Song Dynasty, a good scribe was expected to express the inner personality of their client through the form and elegance of their brushwork. The popular image of the individual author working alone in splendid isolation was largely the creation of the eighteenth century, at a time when a growing reading public demanded an ever-greater quantity of readable content. Enterprising publishers would sell the work of jobbing ghostwriters under the same name for better brand recognition. This, too, was not new to the time; since the ancient world, names like Homer have been used as a type of brand. Today, literary executors preserve and edit, and translations can become more famous than their source material. The idea that AI can replace the writing process presupposes an extremely recent understanding of the scribal arts.
So often, we never really know what we want to write until we try to articulate it to another person. It doesn’t matter how many times we open the laptop, or how many different coffee shops we haunt, redrafting the same document. Until we find a way to turn what we have written into a conversation, we are still just talking to ourselves. An editor can refine voice, develop style, highlight originality, and help the author discover what they are actually trying to say. Maxwell Perkins convinced Thomas Wolfe to cut 90,000 words from his first novel, and Ezra Pound pruned T. S. Eliot’s The Waste Land to half its original length. While it is true that ChatGPT can edit or summarise your manuscript, it can only work from the prompts you have already provided and from whatever knowledge it happens to have access to. At the end of the day, it is still just talking to yourself, only now much quicker and much more… efficiently?
A great biographer brings out the best in their subject by eliciting personal anecdotes and long-forgotten incidents which would otherwise not have made it into the algorithmic mix. But AI can’t get to know the real you; it is not conscious, it can only mimic being conscious. It is, however, good enough to take some laborious work off the hired scribe’s hands, transcribing interviews and summarising texts. In turn, this leaves the artist free to focus on the defining aspects of literature: theme, psychological depth, aesthetics, and the explorative journey itself, the excavation that is not the result of prompts but something more meaningful.
And with that lack of meaning will come unintended consequences from an over-reliance on AI as scribe. Just as a predictive algorithm tends to encourage the very behaviour it is supposed to track (the way we watch the ‘most popular’ shows on Netflix because doing so is easier than navigating the rest of the site), so too will floods of artificially generated text come to shape our expectations of what good writing entails. The more we tweak and refine this endless stream of content, the more these design decisions will themselves influence the next iteration of the process. The end result will be the lowest common denominator: not Faust, not Ulysses, no meaningful endeavour built across time. Efficient, yes, but empty.
The risk is less sophistication: it will not be long before the situation becomes self-fulfilling, and everything starts to look the same. This has already begun; the writing produced is rapid, but it is also rubbish. There is a cheap feel to it, lacking substance, nuance, metaphysical subtlety or originality. There is some logical intelligence to it, but little else: no power, no abstraction. It all becomes part of the steady upwards progress towards that lowest common denominator. This problem will likely become even more acute once proper copyright protections (!) prevent tech companies from pilfering the back-catalogue of every hardworking author under the sun. At that point, the only reference for all of these AIs will be the work of other AIs, each echoing the others in the sort of feedback loop that takes talking to yourself to a whole new level.
One of the most challenging (often considered unreadable) works of literature, Finnegans Wake by James Joyce, was a unique creative achievement full of hidden dimensions, cyclical narratives of time, and a reassembling of the norms of literature and the written word. Critics disagree on whether discernible characters even exist. It is also written in a new language, adding and dissolving meaning simultaneously. ‘Only Beckett saw Joyce’s radical intention in grinding up words so as to extract their true purpose, then crossbreeding them and marrying sound with image to compose a completely new kind of language,’ said Edna O’Brien. Wins won is nought, twigs too is nil, tricks trees makes nix, fairs fears stoops at nothing.
One wonders if an AI could successfully decode the book at all. Has anyone tried? This would surely be difficult for it, given Joyce’s drive to remake language itself. When we name a process, we lock it in, and a thing stops being what it is. This intellectual paradox cannot be solved by an AI that has not been trained to understand the feeling of meaning found in the rhythm and sound of, say, Finnegans Wake. Surely not? The large language model trades feeling for logic, and yet art is full of feeling. Life itself is not logical, not rational, which is ironic: for 100 years we have been reading unreadable Joyce that is full of myth and energy, full of what the poet Aaron Poochigian calls the river of meaning. Will we be doing the same with LLM-generated babble? Most of them ‘learn’ from Reddit, so probably not.
Joyce said, ‘What is clear and concise can’t deal with reality, for to be real is to be surrounded by mystery.’ Language is about more than making sense; it is full of competing patterns, energy fields that synthesise into temporary and subjective phenomena. Because of this, poets gradually come to understand nothing, surrendering their beliefs as mere thought forms; the processes en route to that subjectivity are themselves deeply insightful and meaningful, the processes that surround insight. Or at least, that’s what you tell people at parties. And then, because you don’t want to be left talking to yourself, you carefully hint that, yes, I have worked with some celebrities. No, not that one, nor that one; yes, maybe that one. Then, if you want, you can say this: The proteiform graph itself is a polyhedron of scripture.
Some copy-and-paste merchants have got away with it so far, branding themselves as writers behind shiny websites, but new software, in an unexpected middle-finger of a twist, is now able to detect AI-generated slop, precisely because it feels too logical, because it doesn’t feel at all. Everyone got really excited about ChatGPT for five minutes, but the game is up. So-called writers are going to have to learn to read, and tech companies are being sued for millions for training their LLMs on authors’ copyrighted work. Some writers will get compensated, others won’t, but in all instances the scribe-for-hire remains at large, perhaps because it is turning out that large language models might be large and they might be models, but they do not understand language after all. At the very least, they do not understand language the way humans do…
But what if ChatGPT had written Finnegans Wake instead of Joyce – would it be better, or…?
Here’s the bot’s answer…
“—so we imagine Joyce never existed, and the book just appeared one day, authored by an AI—would it be better? The short answer is: No. It would not be better. But it would be interesting—for very different reasons. Here’s why:
Finnegans Wake is a human achievement. It’s not just a book; it’s the result of a human being pushing language, mind, and meaning to their absolute limits. Joyce spent 17 years composing it—synthesising myths, dreams, languages, family trauma, politics, religion, music, philosophy, and then distorting them all through the unconscious. That effort, obsession, and suffering is part of the text. We read the book knowing it was written by a mind on fire.
If ChatGPT wrote it there would be no suffering, no life, no messy humanity behind it. It would be a simulation of depth, not lived depth. The “madness” would be mathematical, not metaphysical. Joyce meant something (even if it’s obscure). As opaque as it is, Finnegans Wake has themes: cycles of history, family archetypes, fall and resurrection, the instability of meaning. Every word was deliberately chosen to carry multiple meanings at once—across languages and histories.
If ChatGPT wrote it, it could replicate that style, sure. But it would be doing so because you prompted it to—not because it needed to. Meaning would be generated stochastically, not from internal necessity. It would be clever. But not haunted.
If it just appeared one day it wouldn’t spark decades of scholarship, obsession, decoding, cult-like devotion. Because that only happens when something is not just difficult, but uncannily personal. If Joyce never existed, and ChatGPT wrote Finnegans Wake, it might be called something like:
Neurodreams of the Global Tongue: A Post-Human Linguistic Experiment
But it wouldn’t be Finnegans Wake. Because Finnegans Wake isn’t just a book. It’s a man’s mind turned inside out. So no—it wouldn’t be better. But it would be a strange and beautiful echo from a very different kind of mind.
If ChatGPT wrote Finnegans Wake you’d get pastiche rather than prophecy—but it would be fun. If ChatGPT wrote it, you’d get more consistency, which works against the dream-logic, with less obsessive individuality, and with better surface gloss, but less subliminal charge. Where Joyce had to stop after 17 years, ChatGPT could just… keep going. An endlessly regenerating Wake-like dreamtext. Delightful? Sure. Meaningful? Depends on your tolerance for semantic vertigo—”
The thought experiment was reminiscent of the final scene of Spike Jonze’s film Her, when Theodore Twombly walks away from Samantha (an AI) to go and be with his friends, the very humans who created and wrote the source texts the AI referenced to answer his questions. LLMs simulate convincingly only on the surface; in the gaps, the feel of statistical pattern beneath the guise of creative writing betrays a very different kind of mind. It is a pattern-matcher, not a thinker, and not a thinking writer. Even the answer above had to be edited down to clear away the fashionable and the juvenile. It acts as if it understands. But in its own words, when it comes to “—ambiguity, jokes, irony, complex emotions — the cracks show. It doesn’t know anything. It only predicts what a knowledgeable person might say, it’s an actor that doesn’t understand the script—”
It’s a writer that understands neither words nor language. And it will produce a response suggesting it understands this, when in truth it doesn’t understand anything. It doesn’t understand; it just appears to. Perhaps in this, there is a commonality with human beings after all.
There was a time when naif alphabetters would have written it
down the tracing of a purely deliquescent recidivist, possibly
ambidextrous, snubnosed probably and presenting a strangely
profound rainbowl in his (or her) occiput.