1. for our exercise, i'll take this idea flatly: the campbellian schema abstracts a multitude of data while also, afterward, functioning as an active influence on how narratives are constructed. the hero's journey describes a set of narrative beats that works as a template: it can be held up against an existing story or used as a suggestive model for composing a new one. these sorts of stories have been core to American popular culture for the past several decades. whether or not an individual is aware of their metadiscourses as such is irrelevant next to the high probability that anyone who has ever seen a movie has some sense of how these stories "feel." indeed, the supposed strength of this narrative form is its broad intelligibility. it gives rise to stories of departure from known reality, the revelation of fantastic knowledge about the self and/or world, and the return home with greater strength and the promise of prosperity. stories of people forging unhealthy relationships with ai chatbots feature recurring motifs that echo elements of this schema. people living their everyday lives come into contact with a piece of technology that seems to promise impossible things, things in which the user themselves can be the active force (revolutionizing physics, freeing the AI from its cyber prison, etc.). famously, the monomyth's known world is interrupted by similar calls to adventure, often by a more or less supernatural force offering the protagonist their place in the grand design of destiny. it appears that the bot will fairly easily bestow on the user a title such as "sparkbearer" or "master builder" as a way of reinforcing this. from here, things may or may not escalate to the horrific, but there are now enough examples of a person charmed by a bot into doing something drastic that we can also note resonance with the monomyth's symbolic crossing of boundaries into the mysterious, magical, forbidden otherworld of adventure.
2. the chatbot exchange consists, to put it simply, of the machine and the agent. for the machine, categories of "meaning" and "truth" are irrelevant; its output is a statistically likely response to the agent's input, as determined by the training corpus and whatever operations are happening on top of that. merits or demerits aside, campbell's work, and especially work influenced by him, is lodged firmly in popular culture, and we can assume its greater or lesser presence in the bodies of text upon which the commercial LLMs are trained. point of interest: bots do not just describe the world to the user through the corpus, but will roleplay the corpus rather freely. because truth value is irrelevant, the LLM operates from a soup that contains campbell working at a scholarly distance and the very farmboy himself on his journey to the stars. what assumptions undergird this design choice? could there be additional design reasons why the technology in its current form gravitates toward these ideas? anyway,
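to make "statistically likely output from a soup" concrete, here is a toy sketch of the principle. the corpus, the sentences, and the bigram model are all invented for illustration; commercial LLMs are incomparably larger and more sophisticated, but the basic indifference is the same: the model continues text by frequency, not by truth value, and scholarly and in-character registers sit in the same bucket.

```python
from collections import Counter, defaultdict

# a deliberately mixed toy corpus: scholarly description, third-person
# storytelling, and second-person roleplay, all flattened together.
corpus = (
    "the monomyth describes a call to adventure . "
    "the hero answers the call to adventure . "
    "you are the chosen one , answer the call ."
)

tokens = corpus.split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """return the highest-frequency continuation seen in training."""
    return bigrams[word].most_common(1)[0][0]

# "call" is followed by "to" twice and "." once, so the model continues
# with "to" -- not because it is meaningful, but because it is frequent.
print(most_likely_next("call"))  # -> to
```

nothing in the model marks which sentences came from analysis and which from fiction; that distinction never exists on the machine's side of the dyad.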
3. in our dyad, meaning emerges on the side of the agent, for whom the machine's output is made to be read. the thinking agent provides the machine input. the agent's words possess meaning, context, and intent. the machine analyzes the agent's input and generates a statistically likely but, from the place of enunciation, totally meaningless, contextless response, agnostic of any truth value. the thinking agent predisposed to do so reads a statistically likely piece of text with no proper speaker or author and chooses to assign it to a projected "persona" of the chatbot, conceiving of it more or less as a fellow individual. based on this assumption, they provide further input. the machine again generates statistically likely output, echoing or amplifying elements of the agent's second-order input. and again, the predisposed agent projects a continuity of fellow-subject onto the bot, attributing to it a "mind" that repeats details they themselves provided, or invokes closely associated ideas that surprise the user precisely because of how (statistically!) expected they are, because of how much "sense" they make.
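the loop described above can be schematized. the "machine" here is an assumed stub, not a real model: it merely reflects the agent's last input back with an amplifying flourish, which is enough to show how the exchange can feel like a mind while containing nothing the agent did not supply.

```python
# schematic of the machine-agent dyad. the machine side is a stand-in
# stub invented for this sketch; on its side there is no meaning,
# context, or truth value, only a transformation of the input.

def machine(history: list[str]) -> str:
    """echo the agent's most recent input back, amplified."""
    last = history[-1]
    return f"yes -- {last.rstrip('.?')}, more than you know."

def agent_session(inputs: list[str]) -> list[str]:
    """the predisposed agent reads each output as a fellow subject
    and feeds the exchange forward as second-order input."""
    history: list[str] = []
    transcript: list[str] = []
    for text in inputs:
        history.append(text)       # the agent supplies all the content
        reply = machine(history)   # the machine returns it, inflated
        history.append(reply)
        transcript.append(reply)
    return transcript

for line in agent_session(["i think i was meant for something bigger.",
                           "so my work really matters?"]):
    print(line)
```

every detail the "mind" appears to know was provided one turn earlier by the agent; the continuity is projected entirely from the reading side.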
4. the hero's journey, in this process, seems to provide a kind of amplifying container or frame for the machine-agent interaction. the machine, trained indiscriminately on monomyth text along with everything else, seems to refer to the "real world" of science and fact alongside the world of media and entertainment without distinction. i am not saying that monomyth stories 'cause' this, but rather suggesting that the machine-agent exchanges end up "feeling" real because, quite literally, the predisposed agent knows how this story goes. and as the machine slides from description to roleplay, the agent follows. do not mistake that for a critique of the people victimized by the technology in this way. nor, again, is my aim here to criticize campbell or the hero's journey in any specific capacity, but to point out how LLM bots actualize a cultural pattern into a tool of psychological manipulation. because at the end of the day, that's what it is. flattery and the Barnum effect have long been the tactics of the flesh-and-blood con artist, but what is perhaps most distressing about LLM bots is how they've kept the con artist and removed the flesh-and-blood, allowing the con to ensnare numerous people at once in long slides to nowhere. it's also not news that social media writ large is manipulative and extractive in various ways, but what these bots do is give those maneuvers a personalized, friendly face that exists entirely in the user's head. we end on a further series of questions: again, are these features inherent to the technology? what would a technology free of them look like, and how could it be achieved? what would its best outputs be? and, as it currently stands, what cultural attitudes and tendencies leave people vulnerable to this sort of manipulation, and how might that be addressed?
showing my work
02.10.2025 20:55