
maybe: leif weatherby

@leifw.bsky.social

Hegel, cybernetics, Buffalo Bills

1,226 Followers  |  960 Following  |  254 Posts  |  Joined: 21.09.2023

Posts by maybe: leif weatherby (@leifw.bsky.social)

Is this streamed?

01.03.2026 21:08 — 👍 0    🔁 0    💬 1    📌 0

Correct Ty

01.03.2026 20:54 — 👍 2    🔁 0    💬 0    📌 0

Glenn Billy Glenn Joel

01.03.2026 19:06 — 👍 1    🔁 0    💬 0    📌 0

If you're around NYU next week, join us! We have a very exciting list of speakers lined up

01.03.2026 18:04 — 👍 11    🔁 5    💬 0    📌 0

Defund the humanities?

01.03.2026 01:16 — 👍 0    🔁 0    💬 0    📌 0

thanks Deirdre! will be in Boston next year :)

01.03.2026 00:38 — 👍 2    🔁 0    💬 0    📌 0
Cultural AI: An Emerging Field
Presented by the Digital Theory Lab and hosted by the Remarque Institute
A three-day conference

join the Digital Theory Lab Mar 9-11 for Cultural AI: An Emerging Field (RSVP required)

as.nyu.edu/research-cen...

28.02.2026 20:13 — 👍 22    🔁 3    💬 1    📌 5

The Gell-Mann amnesia effect, but for coding agents.

27.02.2026 17:37 — 👍 10    🔁 1    💬 3    📌 0

Andrew?

27.02.2026 22:59 — 👍 0    🔁 0    💬 0    📌 0

Literary Theory 101

27.02.2026 20:23 — 👍 5    🔁 0    💬 0    📌 0

Many such cases

27.02.2026 17:09 — 👍 4    🔁 0    💬 0    📌 0

Fuck does bran not actually help?

24.02.2026 22:26 — 👍 15    🔁 0    💬 2    📌 0

Mb it was Captain Hook and when he lost the hand they were like ooohhh

24.02.2026 03:11 — 👍 7    🔁 0    💬 1    📌 0

I hear taste is eating Silicon Valley

24.02.2026 03:08 — 👍 1    🔁 0    💬 0    📌 0

Everyone is Maxxing when they should be Marxxxing

22.02.2026 19:39 — 👍 105    🔁 19    💬 2    📌 1

The corollary to this is that figuring out how to do reliable evals and tests for tasks other than code — incl fuzzy, subjective tasks — has become a critical bottleneck and will receive a great deal of attention.

19.02.2026 18:42 — 👍 38    🔁 8    💬 1    📌 0

Softwarization will be uneven and weirdly distributed for this reason

19.02.2026 22:23 — 👍 2    🔁 0    💬 0    📌 0
[Post image]

19.02.2026 18:26 — 👍 6    🔁 1    💬 0    📌 0

Optimist: AI has achieved human level performance!

Realist: "AI" is a collection of brittle hacks that, under very specific circumstances, mimic the surface level of human intelligence

Pessimist: AI HAS achieved human level performance

13.11.2024 20:23 — 👍 203    🔁 42    💬 5    📌 1

bureaucracy stays winning

19.02.2026 18:25 — 👍 1    🔁 0    💬 0    📌 0
The left is missing out on AI
As a movement, it has largely refused to engage seriously with AI, ceding debate about a threat and opportunity to the right

www.transformernews.ai/p/the-left-i...

19.02.2026 18:24 — 👍 1    🔁 0    💬 2    📌 0

also if you're looking for what Ricardo already pointed out, migration from skill markets due to machines, but you think that's "normal," you're like ok sure, that doesn't move the needle for me. but for the wonk centrists there's literally a needle that is the only object of their attention

19.02.2026 18:23 — 👍 3    🔁 0    💬 0    📌 0

RIP the Textpocalypse, 2023-2026

19.02.2026 18:22 — 👍 8    🔁 2    💬 1    📌 0

on the Left's supposed silence on AI: the concrete things happening at the moment conform in a pretty granular way to the picture Marx draws of technological innovation in the process of capital. the center, liberals, and the hype people all seem surprised, where no Marxist cd be

19.02.2026 17:57 — 👍 26    🔁 3    💬 4    📌 0

Would be extremely interesting to formalize sequences, but literacy-wise this solves like 25 problems in 10 seconds. this is god's work

19.02.2026 17:35 — 👍 1    🔁 0    💬 0    📌 0

do you have this up somewhere? i would love to teach with this, or show it to people at talks (with credit) so they can quickly grasp this point

19.02.2026 17:26 — 👍 0    🔁 0    💬 1    📌 0
Opinion | The Bots Are Plotting a Revolution, and It's All Very Cringe

CDS-affiliated Assoc. Prof. @leifw.bsky.social wrote for The New York Times about Moltbook, the AI-only social network.

He argued that the AI social network is a form of storytelling reflecting human culture, noting that 90% of posts get no response.

www.nytimes.com/2026/02/03/o...

19.02.2026 15:57 — 👍 6    🔁 1    💬 1    📌 0

I've never thought about it this way before, but I write long by nature and that would be one reason I'm not really bothered by this stuff

19.02.2026 13:21 — 👍 3    🔁 0    💬 0    📌 0

Atlantic writers?

18.02.2026 17:00 — 👍 2    🔁 0    💬 0    📌 0
Photo of a passage from Leif Weatherby's LANGUAGE MACHINES that reads: "The most important thing to see is that no one is hand engineering the computation of the data (although the overall function is designed, its performance is only "steered" by shifting hyperparameters). It's worth emphasizing this point: as I learned about these functions, I looked for years for the point at which someone was manually intervening in this scheme to "understand" what was going on, to pick the "concept" or "image" of the squirrel out of the data. Nothing of this kind happens. One can tune these systems and watch the error fall away. One can pick different known functions that work better, more or less, in different contexts. But no one "sees" what's going on. And not because nets are black boxes but because they only work if the "answer" isn't given, either to the human or to the machine. If we know the concept in advance, learning isn't possible. This is why the net sits on the border between induction and something more, the hypothetical. When it recognizes an image, that's because we've "supervised" the images we've fed it in the labeling process. But we're asking it for something we don't already have: a quantitative formula of "squirrelness," a function that defines what it is to be a pixelated squirrel."


This from @leifw.bsky.social is helpful on the "is it a black box or is it not" discourse of recent days. The specific context here is an image classification net w/ backpropagation but is generalizable on the importance of induction.

18.02.2026 01:41 — 👍 4    🔁 1    💬 1    📌 0