chatgpt cannot give you the self-insert fanfiction you crave
tech bros secretly yearn for ao3
I remember the first time I was made privy to the existence of such a thing as ChatGPT—vaguely brunchtime hour on December 6th, 2022—a date from which I have not known peace since. In the room I shared with A we huddled over our phone screens and dared the chatbot to assemble its best: verse, screenplays, Instagram captions. In return we’d receive trite, faulty responses, which we’d laugh at in a watered-down version of the way you chuckle when a small child misspells a word. In an attempt to dramatically narrativize an episode in our household’s romantic escapades, we’d instructed it to “Write a slam poem about falling in love with a rowing boy but he’s short.”
“His body is strong and lean,” GPT returned. “He’s the perfect height for me/But there’s one thing that bothers me/He’s short.” A stanza break that aspired to present some poetic flair ensued, and then: “I don’t care.”
How charming it was, back then, without the now well-cited data reporting on generative AI’s catastrophic energy consumption, without considering whose words were being scraped to supply ChatGPT’s voice, to read these laughable lines. We’d projected our ideas of what it meant to create something onto a chatbot’s limp calculus, its inanimate code. It was funnier at the time to imagine a lovestruck machine effortfully penning odes to a rower boy’s rippling muscles than to consider the reality of the matter, which was an algorithm searching for word-of-best-fit after word-of-best-fit to jam into the criteria of our prompt. Hence it struggled to maintain continuity between lines, and we wound up with a boy whose height lived in a Schrödinger’s paradox between perfect and botheringly short.
Other times it would seem to sputter to breakage, unable to string together even a story that lacked logic. “Write a poem about K cooking potatoes,” I’d typed, since our other roommate had recently taken to a new breakfast recipe. This seemed too complex a prompt for the bot to return anything but a single couplet—“The potatoes are boiling,/the water is boiling”—copy-and-pasted 5.25 times. Never mind the fact that there are more interesting ways to cook a potato.
The silly and seemingly harmless scriptor of these “poems” is now unrecognizable, a relic whose public release marked, too, its expiration date1. Despite having long abandoned the urge to use it for entertainment or even utility, not even two years later I found myself sitting in lecture halls where nearly every open laptop boasted at least one tab open to ChatGPT, only to finish class and head to a discussion section in which the TA, unable to find the images she was looking for on Google, used DALL-E to furnish her PowerPoint slides with crumpled graphics. But ubiquity on the college campus and in the corporate cubicle seemed insufficient fare for generative AI’s appetite. Those with enough screen time will know where this story is heading: lazy partners charming their significant others with Ghibli-fied versions of their couple photos, recreations of Severance scenes in enervated approximations of animation styles ranging from that of Rankin/Bass’s Rudolph the Red-Nosed Reindeer to Pixar’s, people with no graphic design know-how proclaiming the death of graphic design.
A provocative take on what can be simplified as “Ghiblification” has since met with well-deserved backlash: the claim that generative AI is “making art accessible” by bestowing upon the average non-drawer the ability to magically render “artwork” in the coveted Ghibli style. In response, retweets of cave paintings, screenshots of yellow 2Bs, and endearing pictures of simple pen-on-paper sketches proliferated as fitting rebuttals, pointing out that art, and especially the act of creating art, is not experiencing some newfangled technological revolution. But I want to refrain from dismissing the original post altogether and consider it further: not its veracity, for it remains unfounded, but its demonstration of a perception of art’s inaccessibility, and likewise its underlying assumption that art must be aesthetically pleasurable and sanitized of fault to indeed be considered “art.”
Writing in 1934, American philosopher John Dewey identified in Art as Experience the modern relegation of art “to a separate realm, where it is cut off from [the] association with the materials and aims of every other form of human effort, undergoing, and achievement.2” In other words, the idea of art in the public eye has become that which is “relegated to the museum and gallery,” and thus removed from everyday experience. For Dewey, the trouble with this perceptual separation of art from quotidian life is that exhibition-goers become “cold spectators” unable to fully invest themselves in the “journey” of that which they spectate, while the more casual aesthetic pleasures of everyday life (going to the movies, reading the Sunday comic strip) become unrecognizable as “art,” whose “remote pedestal” all too often gleams behind a paywall. Art is curated and cleansed of its original experiential context, save for, perhaps, a museum placard, while experience is de-aestheticized when it cannot be “glorified” as “fine art.”
In the wake of this separation, there have been efforts to recover the kind of “continuity” between aesthetic and ordinary experience that Dewey tasks fine arts philosophers with pursuing. Social media offers a paltry attempt, its preoccupation with performance perhaps just as alienating from experience as a painting locked behind a glass box. Sometimes I find myself Goldilocks-ing my smartphone camera—too often it does an insufficient job of capturing the beauty of the real world or, on the contrary, with enough good lighting and angles, turns the mundane into an illusorily pleasing, commodifiable morsel of aesthetic. The convenience of the portable lens employed by Instagram dupes us into believing we can be photographers; perhaps, on a more sinister level and at the expense of both our global environment and real creators’ intellectual property, the convenience of AI tricks tech bros into believing they can be artists. Such attempts to draw aesthetic experience closer end up actually widening the distance.
This “chasm” between art and everyday experience certainly did not erupt from nowhere, and Dewey provides a litany of factors that have contributed to this “compartmental conception of fine art”—though most boil down to the usual culprits, capitalism and colonialism3. But time has given them room to fester further, to the point where it has become evident that not only does there exist an estrangement between art and experience, but also one between experience and self altogether. The kind of immediacy that is capitalism’s mode of operation and the convenience culture that has molded to its shape have made it more attractive to possess creation than to actually engage in the experience of creating something. We hunger to make—or, more frankly, claim ownership over—beautiful things, but eschew the effort required in the process. In outsourcing our thinking and creativity to machines, we bench ourselves from playing the game of life, warming our seats as we watch someone else do a mediocre job of scoring points.
If Dewey contends that aesthetic appreciation is most fruitful when an art object beams with the historicity of ordinary life, Ghiblification, I think, attempts to chase this ideal by varnishing the digital artifacts of our lives—photographs—with a blisteringly rough estimate of Studio Ghibli’s appealing aesthetic form(ula). Yet it ultimately fails because the end result cannot preserve any semblance of human context or condition. It devitalizes the style of Studio Ghibli’s visual representation because it cannot replicate the human hands that drew the frames of the original films; it siphons the space-time of the photographs it uses as reference by replacing familiar faces with algorithmic abstractions of colors and lines, by replacing real locations—homes, beaches, trains—with placeless grasslands and nameless cities. The end result might be cute but is utterly sapped of substance. In the end, the Ghiblification enthusiast becomes the coldest spectator of all, severing themselves from experience altogether in the process of trying to digitally beautify it.
This failure of art, however, is not just a failure of art, but, as Studio Ghibli’s own Hayao Miyazaki has lamented, a sign that “we humans are losing faith in ourselves.” Machine-gods and the technocrats that profit from them revoke the right to human error, and in doing so clamp down with titanium fists. Miri of Small Wire has argued that a belief “in the primacy of an art that is unconcerned with power and the condition of human life and overly concerned with raw beauty and aesthetic pleasure” is one that greases a slippery slope from elitism to fascism4. And how better to ease a jittery public into complacency than to promise something beautiful with nothing real underneath it?
There are other ways to pursue both the substance of experience and its aesthetic—and certainly more winsome ways to impress a paramour—beyond the three-second process of uploading a photo into a generative AI system. There is a kind of narcissism to Ghiblification that, because of its ease and effortlessness, lacks investment in anything beyond the kind of small, convenient pleasure that many have attributed to the toggling on and off of Snapchat filters. This is a narcissism that is deeply uninteresting because it is so unimaginative, so uninvolved in anything beyond the configuration of simulacra on top of simulacra. If one wants to indulge in self-involvement, I’d prefer to at least have more fun with it. Which is to say, I think everyone secretly yearns for the thrill of self-insert fanfiction and the ways in which it conjures up a more magical conception of one’s own protagonism, a capacity that AI strains towards but can never create more than a whisper of. Which is to say, somewhere in an enchanted town I am living as a two-dimensional, cel-shaded seller of spellbooks, and K lives around the corner making scalloped potatoes when she isn’t in witch-training, and a little ways along the river a short but strong-armed boater catches fish with his talking dog, and nobody has asked a melted machine-monster to try to portray the magic of our little village in the valley, because such a sad and soulless thing could never do it justice.
I have a fear that, even in the time it has taken me to write this, the Discourse will have already been exhausted, the next gen-AI model already primed for release. In which case this post will be little more than a writing exercise, which I will still take to mean something.
2. I do suspect there to be some fallacious nostalgia pulling some of Dewey’s weight here. He describes a past where “the collective life that was manifested in war, worship, [and] the forum, knew no division between what was characteristic of these places and operations, and the arts that brought color, grace, and dignity, into them,” citing the integration of visual art with architecture, music with ritual, sports and theatre with tradition and history—all of which arguably remain more or less intact today, albeit perhaps harder to find.
3. Specifically, European museums as “memorials of the rise of nationalism and imperialism… [that] testify to [the connection between] the modern segregation of art and nationalism and militarism,” the rise of the capitalist art collector, the commodification of art objects and the market dynamics that pressure the artists who produce them, and industrialism, to be brief.
4. Gareth Watkins has written on AI as the mascot of a “new aesthetics of fascism,” and in particular the sadism involved in the right wing’s love for AI art, which “says that the only way to enjoy art is in knowing that it is hurting somebody.” The article also contains nuggets of wisdom like “AI imagery looks like shit.”