
Tyler Shoemaker

@t-shoemaker.bsky.social

Assistant Professor, Critical AI. Texas A&M University tylershoemaker.info

200 Followers  |  215 Following  |  48 Posts  |  Joined: 31.08.2023

Latest posts by t-shoemaker.bsky.social on Bluesky

The Irrational Decision: How the computer revolution shaped our conception of rationality – and why human problems require solutions rooted in human intuition, morality, and judgment

I'm excited to announce that my new book, _The Irrational Decision_, is now available for pre-order from Princeton University Press.

04.08.2025 14:31 – 👍 58  🔁 16  💬 5  📌 4
Clown Core - Van (Visual Album)
YouTube video by BandcampOddities

Van.

www.youtube.com/watch?v=kdA0...

31.07.2025 22:45 – 👍 1  🔁 0  💬 0  📌 0

If that's how both of you want to burn through your indulgences this jubilee year, I won't yuck your yum

29.07.2025 19:09 – 👍 2  🔁 0  💬 1  📌 0

Media theorists stop saying diagram (v.) challenge

28.07.2025 15:22 – 👍 2  🔁 0  💬 1  📌 1

I would've thought LessWrong would make its film debut in a Potter movie, not Eddington

23.07.2025 16:18 – 👍 0  🔁 0  💬 0  📌 0
A drone show spelling out the phrase "solidgoldmagikarp"

Mechanistic interpretability is making a real impact in our world

23.07.2025 16:17 – 👍 0  🔁 0  💬 1  📌 0

Plus 1-37 pages of appendices

23.07.2025 14:16 – 👍 7  🔁 0  💬 0  📌 0
LISP source code with florid ornamentation down the left hand margin

Book history just writing itself over here

14.07.2025 16:37 – 👍 1  🔁 0  💬 1  📌 0
Grok's Antisemitic Meltdown Was Entirely Predictable: Elon Musk's AI chatbot, Grok, started spewing antisemitic conspiracy theories this week. It's not the first time something like this has happened – and a reminder that LLMs aren't truth-telling machin...

I wrote about Grok's recent and very predictable meltdown for @jacobinmag.bsky.social

jacobin.com/2025/07/grok...

10.07.2025 18:15 – 👍 8  🔁 2  💬 1  📌 3

My follow-up article on synthesis in gen AI is now out 💥💥💥

Responding to @shanedenson.bsky.social's response, I expand on my transcendental argument about LLMs and develop the case for a structuralist reading of Kantian synthesis.

Available Open Access ⬇️

journals.ub.uni-koeln.de/index.php/ph...

10.07.2025 09:39 – 👍 39  🔁 10  💬 2  📌 1

Doomers in 2015: human values are incredibly complex, AI will never be able to model them

Alignment researchers in 2025: we found the evil vector. It's the vector that makes the AI do evil stuff

x.com/NeelNanda5/s...
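
The quip gestures at activation-steering work, where a single direction in a model's hidden states is taken to track a behavior and is added or subtracted at inference time. A minimal sketch of that idea, assuming toy Gaussian activations and a hypothetical steer() helper rather than any published model or codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one layer's hidden states on prompts that do / don't
# exhibit the behavior of interest (purely illustrative data).
acts_with_behavior = rng.normal(loc=1.0, size=(100, 64))
acts_without_behavior = rng.normal(loc=0.0, size=(100, 64))

# The "behavior vector": a difference of means between the two activation sets.
behavior_vector = acts_with_behavior.mean(axis=0) - acts_without_behavior.mean(axis=0)
behavior_vector /= np.linalg.norm(behavior_vector)

def steer(hidden_state: np.ndarray, alpha: float = 5.0) -> np.ndarray:
    """Nudge a hidden state along (or, with negative alpha, away from) the direction."""
    return hidden_state + alpha * behavior_vector

h = rng.normal(size=64)
print(float(h @ behavior_vector), float(steer(h) @ behavior_vector))
```

Whether such a direction deserves a name like "evil" is exactly what the post is poking at.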

29.06.2025 16:48 – 👍 255  🔁 38  💬 15  📌 30

This book is an axe that will break up the frozen sea of our "nuh uh" vs "yuh huh" "AI and the humanities" debates. Retvrn to philosophy of language, to structuralism: what kinds of signs are the words coming out of LLMs?

24.06.2025 14:57 – 👍 13  🔁 4  💬 0  📌 2
"Language and Image Minus Cognition": An Interview with Leif Weatherby by Robin Manley

Leif Weatherby (@leifw.bsky.social) discusses his new book, Language Machines, with Robin Manley (@robinmanley.bsky.social). The interview covers similarities between structuralism and Large Language Models, Saussure's relationship to Marxism, and theories versus histories of the present.

11.06.2025 13:26 – 👍 21  🔁 6  💬 0  📌 3

Applications just opened for a postdoctoral position in Digital Media Studies with Markus Krajewski and myself at the Department of Media Studies and the Digital Humanities Laboratory at the University of Basel. (1/3)

12.05.2025 13:19 – 👍 20  🔁 18  💬 1  📌 2

Hey now, at least one of those params is an index representing 26 other disparate metrics for "happiness"

08.05.2025 16:55 – 👍 1  🔁 0  💬 0  📌 0
Cover of "Ultrabasic Guide to the Internet -- For Humanities Users at UCSB" showing the title followed by: "Version 1.0 (beta), By Alan Liu, English Dept."

Blast from the past: In 1994 I wrote an "Ultrabasic Guide to the Internet -- For Humanities Users at UCSB." 124 pages. Self-published & sold through my university's bookstore. Here's a PDF: liu.english.ucsb.edu/wp-content/d.... I can't believe the effort I put into this back then!

28.04.2025 18:02 – 👍 141  🔁 26  💬 6  📌 2

For sure. Though honestly, I'm a little unconvinced it works to capture what's at work here. Both passages are squishy to me

02.04.2025 19:09 – 👍 2  🔁 0  💬 1  📌 0
With machine learning, we are no longer discussing the automation of manual and mental work – generally corresponding to how physical and cognitive labour have become absorbed by the machine in the form of fixed capital. Instead, this qualitative extension of automation beyond the mechanical reproduction of instructions involves an overcoming of automation itself, whereby algorithmic rules now generate or construct patterns from the re-assemblage of data. What is at stake here is the automation of automation: the automated generation of new algorithmic rules based on the granular analysis and multimodal logical synthesis of increasing volumes of data.

The so-called programming language hierarchy, in which one abstract schematic after another, from mnemonics for operation codes to high-level languages incorporating English-language (or other natural-language) words and phrases, facilitates the automation of activities formerly performed manually, should be understood as an emblem of the generally recursive automation of programming as a political-economic activity: an activity that has no purpose but to automate other labor activities, not excluding itself.

Very tellingly, "the automation of automation" has lately served to tag both senses – to the exclusion of the other. Parisi: auto-auto is ML, not code. Lennon: auto-auto is code, not ML. Horseshoe theory of automation incoming
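
A toy contrast of the two senses, sketched in Python under assumptions of my own (neither passage comes with code): a rule a programmer writes by hand versus a rule a learner induces from labeled data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sense one: a hand-written rule -- the programmer encodes the decision directly.
def handwritten_rule(x: float) -> bool:
    return x > 0.5

# Sense two: the rule itself is generated from data -- here, a threshold chosen
# by exhaustive search to minimize error on labeled examples.
x = rng.uniform(0, 1, size=500)
y = x > 0.62  # the "true" pattern hidden in the data

def learn_threshold(x, y):
    candidates = np.linspace(0, 1, 101)
    errors = [np.mean((x > t) != y) for t in candidates]
    return candidates[int(np.argmin(errors))]

learned_t = learn_threshold(x, y)
learned_rule = lambda v: v > learned_t  # an algorithmic rule nobody wrote by hand
print(handwritten_rule(0.6), learned_rule(0.6), round(float(learned_t), 2))
```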

02.04.2025 19:03 – 👍 4  🔁 1  💬 1  📌 0

What's the German word for the ego boost + outrage that comes from finding oneself in a dataset/model?

20.03.2025 16:38 – 👍 2  🔁 0  💬 0  📌 0
Abstract for "Unreasonable Characters"

The Unicode Standard relies on a key design principle: its authors encode character information, not characters' visual forms, their glyphs. But ultimately, character and glyph are not so distinct. User interfaces and artistic practice often obscure their differences – and in the process, show Unicode for what it is: a historical record bound by prior standards and computing technology. Combining technological history with discussions of artists' interventions in the standard, I trace in this record Unicode's gaps, ghosts, and politics. For, the character–glyph distinction has governed what elements of writing Unicode supports, and its instability, I conclude, renders visible the decisions that make this so.

We often talk about digital text in terms of plain/rich text, but that breaks down with Unicode. Instead, the operative distinction is character/glyph. The problem? Unicode characters are by definition unrenderable. So naturally, I tried to render them. Xerox was the key

amodern.net/article/unre...
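
For anyone who wants to poke at the character/glyph split directly, a minimal sketch using Python's standard-library unicodedata; the examples are mine, not the article's.

```python
import unicodedata

# One apparent glyph, two distinct character sequences.
precomposed = "\u00e9"   # é as a single character
combining = "e\u0301"    # e followed by COMBINING ACUTE ACCENT

print(precomposed == combining)                                 # False: different characters
print(unicodedata.normalize("NFC", combining) == precomposed)   # True after normalization

# Look-alike glyphs, unrelated characters: Unicode encodes identity, not appearance.
for ch in "A\u0391\u0410":  # Latin A, Greek Alpha, Cyrillic A
    print(hex(ord(ch)), unicodedata.name(ch))
```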

05.03.2025 16:14 – 👍 3  🔁 0  💬 0  📌 0
A screenshot of Amodern 12: Countertype. The text reads:

This issue of Amodern turns to new scholarship on typography in the expanding field of reproductive print technologies. It has its roots in "Before and Beyond Typography," a 2020 conference sponsored by Stanford University that explored "the vitality of non-typographic publishing networks" and "the dynamic interplay between technological change and non-typographic printing" around the globe. Organized by Thomas S. Mullaney and Andrew Amstutz, the conference drew together scholars working on print cultures that either preceded the global spread of industrial type printing and the discursive conflation of type with modernity or jostled alongside type in the twentieth century as "alternative trajectories."

The banner image (swirling letters projected onto a wall with silhouettes of viewers in front) is from Rafael Lozano-Hemmer's Encode/Decode.

Amodern 12: Countertype is out today with excellent accompanying images from Rafael Lozano-Hemmer's Encode/Decode. The issue turns to "typography in the expanding field of reproductive print technologies."

amodern.net

I have a piece in there on Unicode, xerography, and the problem of plain text.

05.03.2025 16:12 – 👍 2  🔁 1  💬 1  📌 0
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that reliance, solely, on visual assessment can be misleading. Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as, finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model. We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor model. Theory in the case of a linear model and a single-layer convolutional neural network supports our experimental findings.
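
The paper's headline move is a model-randomization test: re-randomize the weights and ask whether the saliency maps change. A cartoon of that logic, assuming a linear toy model where input-gradient saliency reduces to |w|; nothing here reproduces the authors' experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_saliency(w):
    # For a linear model f(x) = w @ x, the input gradient is w itself,
    # so the saliency map is |w| regardless of the input.
    return np.abs(w)

w_trained = np.concatenate([[3.0, -2.5, 2.0], np.linspace(0.01, 0.2, 17)])  # pretend training found 3 key features
w_random = rng.normal(size=20)                                              # model-randomization test

def rank_corr(a, b):
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

# A faithful method should produce very different maps once the weights are random;
# a map that barely changes is describing the input, not the model.
print(round(rank_corr(gradient_saliency(w_trained), gradient_saliency(w_random)), 3))
```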

Sparse Autoencoders Can Interpret Randomly Initialized Transformers
Thomas Heap, Tim Lawson, Lucy Farnik, Laurence Aitchison
Sparse autoencoders (SAEs) are an increasingly popular technique for interpreting the internal representations of transformers. In this paper, we apply SAEs to 'interpret' random transformers, i.e., transformers where the parameters are sampled IID from a Gaussian rather than trained on text data. We find that random and trained transformers produce similarly interpretable SAE latents, and we confirm this finding quantitatively using an open-source auto-interpretability pipeline. Further, we find that SAE quality metrics are broadly similar for random and trained transformers. We find that these results hold across model sizes and layers. We discuss a number of interesting questions that this work raises for the use of SAEs and auto-interpretability in the context of mechanistic interpretability.
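
And the object under discussion itself, stripped down: a sparse autoencoder with ReLU codes and an L1 penalty, fit here to Gaussian vectors standing in for a randomly initialized transformer's activations. Dimensions, penalty, and the hand-rolled training loop are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n, lam, lr = 32, 128, 4096, 1e-3, 1e-2

# Stand-in for activations from a *randomly initialized* transformer:
# plain Gaussian vectors, since no training data shaped them.
X = rng.normal(size=(n, d))

# Overcomplete dictionary: encoder (m x d), decoder (d x m).
W_e = rng.normal(scale=0.1, size=(m, d)); b_e = np.zeros(m)
W_d = rng.normal(scale=0.1, size=(d, m)); b_d = np.zeros(d)

for step in range(500):
    Z_pre = X @ W_e.T + b_e          # pre-activations
    Z = np.maximum(Z_pre, 0.0)       # sparse ReLU codes
    X_hat = Z @ W_d.T + b_d          # reconstruction
    R = X_hat - X

    # Gradients of loss = mean ||x - x_hat||^2 + lam * mean ||z||_1
    dX_hat = 2.0 * R / n
    dW_d, db_d = dX_hat.T @ Z, dX_hat.sum(axis=0)
    dZ = dX_hat @ W_d + lam * np.sign(Z) / n
    dZ_pre = dZ * (Z_pre > 0)
    dW_e, db_e = dZ_pre.T @ X, dZ_pre.sum(axis=0)

    for p, g in ((W_e, dW_e), (b_e, db_e), (W_d, dW_d), (b_d, db_d)):
        p -= lr * g                  # plain gradient descent, in place

print("active latents per input:", float((Z > 0).sum(axis=1).mean()))
print("reconstruction MSE:", float(np.mean((X_hat - X) ** 2)))
```

The question the paper raises is whether the latents such a model recovers from untrained activations look any less "interpretable" than those from a trained one.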

2018: Saliency maps give plausible interpretations of random weights, triggering skepticism and catalyzing the mechinterp cultural movement, which now advocates for SAEs.

2025: SAEs give plausible interpretations of random weights, triggering skepticism and ...

03.03.2025 18:42 – 👍 96  🔁 16  💬 2  📌 0

"Just tokenize it" is the "Will it blend" of ML today

23.02.2025 15:08 – 👍 51  🔁 7  💬 0  📌 1

Starts at 4pm!

06.02.2025 15:28 – 👍 6  🔁 2  💬 0  📌 0

So let it be written, so let it be done

31.01.2025 16:36 – 👍 1  🔁 0  💬 1  📌 0

For sure. I read the LangChain logo (🦜🔗) as a blithe response to the entirety of the stochastic parrots debate: language or not, we're chaining outputs into the software stack

31.01.2025 16:19 – 👍 3  🔁 0  💬 0  📌 0

I'm hearing that not all the GPUs need to go BRR. Big if true

27.01.2025 15:16 – 👍 1  🔁 0  💬 0  📌 0
"Is this" butterfly meme with the phrase "Is this AI literacy?"

"Is this" butterfly meme with the phrase "Is this AI literacy?"

19.01.2025 21:24 – 👍 4  🔁 0  💬 0  📌 0

We just advertised a (permanent) job in my neck of the woods!! Come work with me and lead a well-supported DH service center for the university, the Research and Infrastructure Support Unit (RISE), at @unibas.ch. (1/6)

10.01.2025 18:05 – 👍 26  🔁 22  💬 1  📌 2
