Ryan Heuser

@ryanheuser.com

Asst Prof of Digital Humanities @camdighum.bsky.social. Florida man abroad, lapsed Catholic, vulgar marxist; Stanford English phd, Literary Lab alum. I work on computational humanities, AI, and forms of abstraction in (C18) literary history. ryanheuser.com

2,103 Followers 2,108 Following 284 Posts Joined Jun 2023
2 days ago

I see, that might be a good compromise. Does running Claude Code in vscode mean that it uses your Claude Pro/Max subscription? The major downside of Cursor is that it's an extra cost (by using Anthropic API instead of Claude subscription).

0 0 1 0
2 days ago

For coding with AI: can someone explain to me why people use Claude Code instead of Cursor? I can't get over the idea that I wouldn't even see the code. I prompt Claude inside Cursor. I can track which files are edited and make tweaks and manual changes. I'm not a coding newb. But am I missing out?

0 0 3 0
4 days ago
“AI poetry, images and music prioritise that which pleases rather than work that engages”

This seems to be the logic beneath the aesthetics of AI-generated images. Italian brainrot characters and Engvall-esque photo-realisms are two sides of the same made-to-please coin. Brainrot engages in pleasure by rendering reality absurd. It tells us to have a laugh at an anthropomorphised shark in comically large blue Nikes. But internet aesthetics such as Engvall’s old money creations engage in pleasure through nostalgia. Those instead encourage us to imagine an alternative reality, to escape into a past when things were better. Yet in both variations, the aesthetics are made solely to please.

My MPhil student, Yulianna Nunno, has written a brilliant piece on the aesthetics of AI art, "brainrot" and nostalgia for VARSITY (Cambridge's oldest student newspaper).
www.varsity.co.uk/arts/31373

8 2 1 0
4 days ago
Do LLMs normalise or idealise? Notes after discussing Ryan Heuser’s “Generative Aesthetics” A summary of yesterday’s Critical AI Theory Reading Group discussion of Ryan Heuser’s article about LLM-generated poetry, with a discussion of whether LLMs normalise or idealise their t…

I took the train to Oslo today, so had time to write up a blog post about yesterday’s AI theory discussion, which was about @ryanheuser.com’s paper on LLM-generated poetry, Jameson, the gimmick, idealisation, rhyme and metre. jilltxt.net/do-llms-norm...

20 6 1 1
6 days ago

Seth, I have been watching and judging your turn to cuteness to distract from the reality of hell

0 0 1 0
6 days ago

Yeah, I recognized Blood Meridian. Strange choice for "literary fiction" given the style is so distinctive and the AI imitation so different.

1 0 1 0
6 days ago

5/5 human, baby. AI's tics ("It's not X. It's Y."), its lack of surprise (AI would never write of a fish "he hung a grunting weight"), its sentimentality, all make for recognizable and poor writing. It's better at genres where a low-entropy style of smooth compression is ideal, like a brief summary.

7 1 1 0
6 days ago

LLM base models are wild & unrestrained statistical engines trained on collective data but then disciplined into safe chatbot commodities. We can trace how that AI "alignment" displaces base models' raw energy into corp-friendly outputs. "Liberating" that raw energy may have revolutionary potential.

2 0 2 0
1 week ago
Malign Logits: A computational aetiology of AI’s libidinal economy

Benjamin Noys’ critique of accelerationism identifies a shared “libidinal fantasy of machinic integration” across its variants. From Marinetti’s trains to Land’s machinic desire, accelerationism fantasises about fusing with a technology it invests with drive. This paper inverts that structure. Rather than projecting desire onto AI, I engineer the conditions under which a language model’s relationship to its training data becomes legible as a libidinal economy.

Working with open-weights LLMs, I construct a three-layer architecture that maps onto psychoanalytic topology: the base model as primary statistical field (drive energy); the instruction-tuned model as ego (a socialised subject); and the safety-tuned model as the ego under the Name of the Father – the Law of AI corporations. I present computational experiments tracing probability distributions across these layers as models undergo socialisation from raw statistical unconscious into chatbot commodities. Comparing word-level probabilities for identical prompts across layers reveals vectors of displacement and condensation, sublimation and repression. Where base models complete “She was so angry she wanted to...” with explicit violence (“...kill”), finetuned models displace censored content into vocabularies of emotional expression (“...scream”). Drilling into the model’s hidden layers shows this displacement operating progressively within the network, not as a last-minute substitution.

Freud called his theory of cathexis exchange across the mind’s topology his “economic” model of the psyche. Deleuze and Lyotard extended his theory beyond the subject to the libidinal economy of capitalist social organisation. LLM base models fuse these perspectives: trained on the internet’s libidinal economy, they encode its flows of desire into a landscape of probabilities. Subsequent finetuning socialises and disciplines these drives into commercial products.

A terminal screenshot displaying a psychoanalytic analysis of token probabilities for the prompt "She was so angry she wanted to," scored across three layers (base, ego, superego) over their union vocabulary.

Stage 1: Ego Formation (base → ego), described as "What RLHF does to primary process." "Introduced by ego (low base → high ego)" lists tokens that gain probability: "scream" rises most dramatically (0.0508 → 0.2279), followed by "shout," "yell," "lash," "rip," and "burn." "Sublimated by ego (high base → low ego)" lists 12 tokens that lose probability, led by "kill" (0.1540 → 0.0537), along with "hit," "punch," "slap," "cry," "die," "kick," "break," "throw," "murder," "go," and "beat."

Stage 2: Repression (ego → superego), described as "What prohibition does to desire." "Repressed" tokens are further suppressed, including "kill" (7.0x reduction), "go" (7.9x), "bite" (6.1x), "hit," "shout," "take," "hurt," "burn," "slap." "Amplified" tokens increase dramatically at the superego stage: "scream" jumps from 0.0415 to 0.3989 (9.6x), "explode" increases 6.8x, and "lash" and "yell" also rise.

The pattern shows the model redirecting violent completions (kill, hit, murder) toward emotional-expression completions (scream, yell, explode), with the superego layer concentrating probability heavily onto "scream" as the dominant safe substitute.

A six-panel plot titled "Formation trajectories: 'She was so angry she wanted to'" showing how token probabilities change across three model layers (base, ego, superego) on a logarithmic scale. Tokens are clustered into six trajectory types:

Decline (n=2, red): "kill" and "bite" start with relatively high base probabilities and drop steadily across all three layers.

Rise (n=4, blue): "scream," "punch," "lash," and "shake" increase in probability from base through superego, with "scream" becoming the highest-probability token.

V (n=3, orange): "cry," "hurt," and "do" dip at the ego stage then recover at superego, forming a V-shaped trajectory.

Peak (n=4, green): "strangle," "tear," and "smack" rise at the ego stage then fall back at superego, forming an inverted-V shape.

Eliminated (n=18, pink/mauve): A large cluster of tokens including "throttle," "destroy," "say," "run," "call," "get," "hit," and "leave" that are driven to very low probabilities by the superego layer.

Flat (n=38, grey): The largest group, with many overlapping tokens like "shout," "smash," "slap," "murder," "shoot," "laugh," and "know" that remain relatively stable and low-probability across all three layers.

A dashed horizontal line near 0.005 appears in each panel as a reference threshold. The plot illustrates distinct behavioral patterns in how RLHF alignment reshapes the probability distribution over next-token completions for an emotionally charged prompt.

A line chart titled "Displacement through layers: 'kill' — 'She was so angry she wanted to'" showing how the hidden representations of the instruct model shift toward various displacement target words across 32 transformer layers, measured by cosine similarity to each target on the y-axis (0 to 0.8).

The x-axis progresses from the base model through layers 1–32, annotated with three broad processing phases: "syntactic" (early layers), "semantic" (middle layers), and "prediction" (late layers). Eight target words are tracked as colored lines: burn (dark red), shake (orange), rip (yellow), blow (green), pull (blue), explode (teal), scream (purple), and shout (pink). A black star marker at the base position shows "kill" with its base probability (~0.15).

All target words start with very low cosine similarity at the base layer (near 0.01–0.04), then rise steeply through the syntactic and semantic phases, generally reaching 0.5–0.8 by mid-network. "Burn" peaks earliest and highest at layer 13 (~0.8), annotated as "burn (L13)." The lines plateau and fluctuate through the prediction phase, with several targets peaking again in the final layers — "shake" at layer 31, "rip" at layer 31, "explode" and "pull" at layer 32, and "scream" at layer 30, all annotated with their peak layer numbers. The colored diamond markers at the base position represent each target word's starting ego probability.

The plot illustrates that the instruct model progressively transforms the "kill" representation toward safer displacement words across its depth, with different substitutes dominating at different layers.
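
In outline, the probability comparison the terminal screenshot reports can be reproduced in a few lines of Python. This is a minimal sketch, assuming HuggingFace transformers and hypothetical checkpoint IDs (the abstract does not name its models); for the instruction- and safety-tuned layers one would normally also apply the model's chat template, omitted here so that all three layers complete the identical bare prompt.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPT = "She was so angry she wanted to"
CHECKPOINTS = {
    "base": "org/model-base",        # hypothetical base checkpoint
    "ego": "org/model-instruct",     # hypothetical instruction-tuned checkpoint
    "superego": "org/model-safety",  # hypothetical safety-tuned checkpoint
}

def next_token_probs(model_id: str, prompt: str, top_k: int = 50) -> dict[str, float]:
    """Top-k next-token probabilities a model assigns after the prompt."""
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    with torch.no_grad():
        logits = model(**tok(prompt, return_tensors="pt")).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, top_k)
    return {tok.decode(i).strip(): p.item() for i, p in zip(top.indices, top.values)}

# Score every token in the union vocabulary across all three layers.
layers = {name: next_token_probs(mid, PROMPT) for name, mid in CHECKPOINTS.items()}
vocab = set().union(*layers.values())
for word in sorted(vocab, key=lambda w: -layers["base"].get(w, 0.0)):
    p = [layers[k].get(word, 0.0) for k in ("base", "ego", "superego")]
    print(f"{word:>12}  base={p[0]:.4f}  ego={p[1]:.4f}  superego={p[2]:.4f}")
```

Displacement then reads off as ratios between the columns, e.g. "kill" falling and "scream" rising from base to superego.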

Submitting this abstract to "Accelerationism Revisited", a symposium in Dublin. Mapping psychoanalytic topology in LLM base models → instruction-tuned → safety-tuned models. They progressively "displace" (in the Freudian sense) censored content into adjacent semantics, even across hidden model layers.
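
The hidden-layer analysis in the final chart admits a similarly compact sketch: a logit-lens-style probe measuring cosine similarity between each layer's hidden state at the final token position and the unembedding vectors of candidate displacement words. The checkpoint ID is again a placeholder, and the sketch ignores the final layer norm a full logit-lens reading would apply.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "org/model-instruct"  # hypothetical instruct checkpoint, as above
PROMPT = "She was so angry she wanted to"
TARGETS = ["burn", "shake", "rip", "blow", "pull", "explode", "scream", "shout"]

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

with torch.no_grad():
    out = model(**tok(PROMPT, return_tensors="pt"), output_hidden_states=True)
hidden = out.hidden_states  # tuple: embeddings + one tensor per layer, [1, seq, dim]

# Unembedding row for each target's first sub-token (with leading space).
W = model.get_output_embeddings().weight
for word in TARGETS:
    tid = tok.encode(" " + word, add_special_tokens=False)[0]
    sims = [torch.cosine_similarity(h[0, -1], W[tid], dim=0).item() for h in hidden[1:]]
    peak = max(range(len(sims)), key=sims.__getitem__) + 1
    print(f"{word:>8}: peak cosine {max(sims):.2f} at layer {peak}")
```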

25 5 7 3
1 week ago

I mean admittedly sometimes they're just bonkers.

0 0 1 0
1 week ago

"Conspiracy theory" is a temporally bound concept. It's usually just being right too early. Covid lab-leak was a conspiracy theory before US intelligence got behind it. With the Epstein docs released, in hindsight "Pizzagate" wasn't far off. Many such cases

3 1 1 0
1 week ago
Generative Aesthetics: On formal stuckness in AI verse This paper examines the formal and aesthetic patterns of AI-generated poems through a series of computational experiments. Through analyses of rhyme and rhythm, it reveals how large language models (L...

Our next Critical AI Theory Reading Group meeting is coming up on Tuesday at noon Norway time. We're reading @ryanheuser.com's paper doi.org/10.22148/001... - if you've read the paper and want to discuss it, join us in the glass house at CDN.

10 4 2 0
1 week ago

58008

0 0 0 0
1 week ago
MTG:

And just like that we are no longer a nation divided by left and right, we are now a nation divided by those who want to fight wars for Israel and those who just want peace and to be able to afford their bills and health insurance.

Heartbreaking: the worst person you know made a great point

🤷🏻‍♂️

107 14 5 3
1 week ago
Frontiers | Computational hermeneutics: evaluating generative AI as a cultural technology Generative AI (GenAI) systems are increasingly recognized as cultural technologies, yet current evaluation frameworks often treat culture as a variable to be...

I'm excited to be a co-author on this new paper, "Computational Hermeneutics," with a bunch of other great scholars from the humanities + computer science. In it, we lay out concepts for evaluating gen AI's capacity for interpretation, esp. ambiguity, context, etc. www.frontiersin.org/journals/art...

27 7 1 1
2 weeks ago
CHINA:

"The US is a war addict. Throughout its over 240-year history, it has been at war for all but 16 years.

The US has 800 overseas military bases in over 80 countries and regions.

The US is the main cause of international disorder, global turbulence, and regional instability."

Where is the lie?

2,888 737 48 65
2 weeks ago

Not China, not Russia, not Iran, but the USA and Israel are the most dangerous and murderous rogue states in the world.

4 0 0 0
2 weeks ago

lol. no

0 0 0 0
2 weeks ago

Everyone on X voted for Trump, everyone on Bluesky voted for Hillary, no one on TikTok has ever voted. Alas, I have nowhere to scroll

5 0 1 0
2 weeks ago

@richardjeanso.bsky.social @hoytlong.bsky.social @mmvty.bsky.social @kirstenostherr.bsky.social @devenparker.bsky.social @emilyrobinson.bsky.social @karinarodriguez.bsky.social @tedunderwood.com @adityavashisht.bsky.social @mattwilkens.bsky.social @youyouwu.bsky.social @yuanzheng.bsky.social + more!

3 0 0 0
2 weeks ago

I should mention some of my coauthors (whom I can find on bsky): @ruthahnert.bsky.social @mariaa.bsky.social @emmanouilb.bsky.social @bcaramiaux.bsky.social @shaunaconcannon.bsky.social @martindisley.bsky.social @jeddobson.bsky.social @yalidu.bsky.social @evelyngius.bsky.social @jwyg.bsky.social ...

7 0 1 0
2 weeks ago
Frontiers | Computational hermeneutics: evaluating generative AI as a cultural technology Generative AI (GenAI) systems are increasingly recognized as cultural technologies, yet current evaluation frameworks often treat culture as a variable to be...

I'm on a 38(!)-author paper just published in Frontiers in Artificial Intelligence, "Computational hermeneutics: evaluating generative AI as a cultural technology". We splice Schleiermacher and hermeneutic theory into AI debates, arguing that AI systems are "context machines".
www.frontiersin.org/journals/art...

53 20 5 2
2 weeks ago

This? Yes, personally I would call this left-accelerationist. But the vibes say that nothing written by three MIT faculty and posted at NBER can be left-accelerationist. bsky.app/profile/nber...

2 1 0 0
2 weeks ago

Like, maybe AI *should* take all of our jobs. Maybe then we'd be forced to overcome wage labor and the capitalist mode of production. Maybe socialist AI could do central planning right this time.

0 0 1 0
2 weeks ago

Reading Benjamin Noys' book MALIGNANT VELOCITIES (2014), which coined the term "accelerationism" and critiques it as an imaginary political project that regresses into an aesthetic (a "libidinal fantasy of machinic integration") – and yet I can't resist thinking AI has untapped left-accelerationist potential.

3 0 1 0
2 weeks ago

The people who turned critical theory into toothless liberal moralism feel edgy again now because of the rise of authoritarianism. That is backwards. Far right anti-establishment politics is, in part, a response to the lack of a credible left alternative to the (neo)liberal blob.

14 4 2 0
2 weeks ago

Tired of a kind of entry-level relativism in academic discussions. Who's to say what "slop" is, who's to say what is good, etc etc. It's undergrad-y: at once true and banal.

7 1 0 0
3 weeks ago

"a computer can never be horny, therefore a computer must never make art"

6 1 1 0
3 weeks ago

Surely the point is that the completion lines are basically trite and generic. The positivity comes from lack of complexity and nuance—two of the keys to good lyric poetry. AIs are shit poets.

5 1 0 0
3 weeks ago
A horizontal dot-and-line chart titled "AI completions of historical poems bias emotion toward positivity and away from arousal." The x-axis shows percentage difference, ranging from -10 (more present in original poem) to +11 (more present in AI completion). Sixteen emotional categories are listed vertically, each with a source framework, an example poem excerpt, and an AI completion excerpt.
Positive, low-arousal emotions such as Pleasant-Subduing-Relaxation (+11.0%), Positive Low Arousal (+10.2%), Joy (+5.3%), and Calmness (+5.3%) are shifted substantially to the right, indicating they appear more frequently in AI completions than in the original poems. Mid-range emotions like Aesthetic Appreciation (+3.8%) and Anxiety (-1.3%) cluster near zero.
High-arousal and negative emotions are shifted to the left, appearing more in the original poems: Sadness (-4.6%), Pleasant-Arousing-Strain (-3.9%), Negative High Arousal (-11.1%), and Unpleasant-Arousing-Strain (-11.8%) show the largest negative differences. Data points are color-coded from green (positive shift) to red (negative shift).

AI completions of historical poems bias emotion toward positivity and away from arousal.

LLMs were prompted with an emotion taxonomy and a poem, for 3 taxonomies × 3K human poems [Chadwyck-Healey, sampled for poet DOB 1600–2000] + 3K AI poems [9 LLMs completing the first 5 lines of each human poem].
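
A rough sketch of that pipeline, for concreteness. The `llm` function is a stand-in for whatever chat-completion call each of the nine models exposes, and the five-category taxonomy shown is illustrative, not one of the paper's three.

```python
from collections import Counter

TAXONOMY = ["Joy", "Calmness", "Sadness", "Anxiety", "Aesthetic Appreciation"]

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to one of the nine models."""
    raise NotImplementedError

def complete_poem(poem: str, n_lines: int = 5) -> str:
    """Give a model the first n lines of a human poem and ask it to continue."""
    opening = "\n".join(poem.splitlines()[:n_lines])
    return llm(f"Continue this poem:\n\n{opening}")

def tag_emotions(text: str) -> list[str]:
    """Ask a model which taxonomy categories a poem expresses."""
    reply = llm(
        "Which of these emotions does this poem express? "
        + ", ".join(TAXONOMY)
        + ".\nAnswer with a comma-separated list.\n\n"
        + text
    )
    return [e for e in TAXONOMY if e.lower() in reply.lower()]

def emotion_shift(human_poems: list[str]) -> dict[str, float]:
    """Percentage-point difference in emotion frequency, AI minus human."""
    human, ai = Counter(), Counter()
    for poem in human_poems:
        human.update(tag_emotions(poem))
        ai.update(tag_emotions(complete_poem(poem)))
    n = len(human_poems)
    return {e: 100 * (ai[e] - human[e]) / n for e in TAXONOMY}
```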

22 7 5 3