I see, that might be a good compromise. Does running Claude Code in VS Code mean that it uses your Claude Pro/Max subscription? The major downside of Cursor is that it's an extra cost (by using the Anthropic API instead of a Claude subscription).
For coding with AI: can someone explain to me why people use Claude Code instead of Cursor? I can't get over the idea that I wouldn't even see the code. I prompt Claude inside Cursor. I can track which files are edited and make tweaks and manual changes. I'm not a coding newb. But am I missing out?
My brilliant MPhil student, Yulianna Nunno, has written an insightful piece on the aesthetics of AI art, "brainrot" and nostalgia for VARSITY (Cambridge's oldest student newspaper).
www.varsity.co.uk/arts/31373
I took the train to Oslo today, so had time to write up a blog post about yesterday’s AI theory discussion, which was about @ryanheuser.com’s paper on LLM-generated poetry, Jameson, the gimmick, idealisation, rhyme and metre. jilltxt.net/do-llms-norm...
Seth, I have been watching and judging your turn to cuteness to distract from the reality of hell
Yeah, I recognized Blood Meridian. Strange choice for "literary fiction" given the style is so distinctive and the AI imitation so different.
5/5 human, baby. AI's tics ("It's not X. It's Y."), its lack of surprise (AI would never write of a fish "he hung a grunting weight"), its sentimentality, all make for recognizable and poor writing. It's better at genres where a low-entropy style of smooth compression is ideal, like a brief summary.
LLM base models are wild & unrestrained statistical engines trained on collective data but then disciplined into safe chatbot commodities. We can trace how that AI "alignment" displaces base models' raw energy into corp-friendly outputs. "Liberating" that raw energy may have revolutionary potential.
Submitting this abstract to "Accelerationism Revisited", a symposium in Dublin. Mapping psychoanalytic topology in LLM base models → instruction-tuned → safety-tuned models. They progressively "displace" (in Freudian sense) censored content into adjacent semantics, even across hidden model layers.
I mean admittedly sometimes they're just bonkers.
"Conspiracy theory" is a temporally bound concept. It's usually just being right too early. Covid lab-leak was a conspiracy theory before US intelligence got behind it. With the Epstein docs released, in hindsight "Pizzagate" wasn't far off. Many such cases
Our next Critical AI Theory Reading Group meeting is coming up on Tuesday at noon Norway time. We're reading @ryanheuser.com's paper doi.org/10.22148/001... - if you've read the paper and want to discuss it, join us in the glass house at CDN.
58008
🤷🏻‍♂️
I'm excited to be a co-author on this new paper, "Computational Hermeneutics," with a bunch of other great scholars from the humanities + computer science. In it, we lay out concepts for evaluating gen AI's capacity for interpretation, esp. ambiguity, context, etc. www.frontiersin.org/journals/art...
Where is the lie?
Not China, not Russia, not Iran, but the USA and Israel are the most dangerous and murderous rogue states in the world.
lol. no
Everyone on X voted for Trump, everyone on Bluesky voted for Hillary, no one on TikTok has ever voted. Alas, I have nowhere to scroll
@richardjeanso.bsky.social @hoytlong.bsky.social @mmvty.bsky.social @kirstenostherr.bsky.social @devenparker.bsky.social @emilyrobinson.bsky.social @karinarodriguez.bsky.social @tedunderwood.com @adityavashisht.bsky.social @mattwilkens.bsky.social @youyouwu.bsky.social @yuanzheng.bsky.social + more!
I should mention some of my coauthors (whom I can find on bsky): @ruthahnert.bsky.social @mariaa.bsky.social @emmanouilb.bsky.social @bcaramiaux.bsky.social @shaunaconcannon.bsky.social @martindisley.bsky.social @jeddobson.bsky.social @yalidu.bsky.social @evelyngius.bsky.social @jwyg.bsky.social ...
I'm on a 38(!)-author paper just published in Frontiers in Artificial Intelligence, "Computational hermeneutics: evaluating generative AI as a cultural technology". We splice Schleiermacher and hermeneutic theory into AI debates, arguing that AI systems are "context machines".
www.frontiersin.org/journals/art...
This? Yes, personally I would call this left-accelerationist. But the vibes say that nothing written by three MIT faculty and posted at NBER can be left-accelerationist. bsky.app/profile/nber...
Like, maybe AI *should* take all of our jobs. Maybe then we'd be forced to overcome wage labor and the capitalist mode of production. Maybe socialist AI could do central planning right this time.
Reading Benjamin Noys' book MALIGNANT VELOCITIES (2014), which coined the term "accelerationism" and critiques it as an imaginary political project that regresses into an aesthetic (a "libidinal fantasy of machinic integration") – and yet I can't resist thinking AI has untapped left-accelerationist potential.
The people who turned critical theory into toothless liberal moralism feel edgy again now because of the rise of authoritarianism. That is backwards. Far-right anti-establishment politics is, in part, a response to the lack of a credible left alternative to the (neo)liberal blob.
Tired of a kind of entry-level relativism in academic discussions. Who's to say what "slop" is, who's to say what is good, etc etc. It's undergrad-y: at once true and banal.
"a computer can never be horny, therefore a computer must never make art"
Surely the point is that the completion lines are basically trite and generic. The positivity comes from lack of complexity and nuance—two of the keys to good lyric poetry. AIs are shit poets.
AI completions of historical poems bias emotion toward positivity and away from arousal.
LLMs were prompted with an emotion taxonomy and a poem: 3 taxonomies × 3K human poems [Chadwyck-Healey, sampled for poet DOB 1600–2000] + 3K AI poems [9 LLMs completing the first 5 lines of each human poem].
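The study design above can be sketched roughly as the following loop. This is a hypothetical reconstruction, not the authors' actual code: `query_llm`, the prompt wording, and the result format are all my placeholders; only the counts (9 models, 3 taxonomies, first 5 lines completed) come from the post.

```python
# Rough sketch of the described design (all function names and prompts are
# hypothetical placeholders, not the paper's actual pipeline).

def query_llm(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    raise NotImplementedError

def completion_prompt(first_lines):
    # Ask a model to continue a poem from its opening lines.
    return "Continue this poem:\n" + "\n".join(first_lines)

def emotion_prompt(taxonomy, poem):
    # Ask a model to label a poem using one fixed emotion taxonomy.
    return ("Label the dominant emotions in this poem using only these "
            "categories: " + ", ".join(taxonomy) + ".\n\n" + poem)

def run_study(models, human_poems, taxonomies):
    results = []
    for poem in human_poems:              # ~3K Chadwyck-Healey poems
        first5 = poem.splitlines()[:5]    # first 5 lines seed the completion
        for model in models:              # 9 LLMs
            ai_poem = "\n".join(first5) + "\n" + query_llm(
                model, completion_prompt(first5))
            for tax in taxonomies:        # 3 emotion taxonomies
                results.append({
                    "model": model,
                    "taxonomy": tuple(tax),
                    "human_labels": query_llm(model, emotion_prompt(tax, poem)),
                    "ai_labels": query_llm(model, emotion_prompt(tax, ai_poem)),
                })
    return results
```

Comparing `human_labels` against `ai_labels` is what would surface the reported bias toward positivity and away from arousal in the AI completions.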