
Taylor Webb

@taylorwwebb.bsky.social

Studying cognition in humans and machines https://scholar.google.com/citations?user=WCmrJoQAAAAJ&hl=en

1,328 Followers  |  505 Following  |  109 Posts  |  Joined: 31.08.2023

Posts by Taylor Webb (@taylorwwebb.bsky.social)

Whither symbols in the era of advanced neural networks? Some of the strongest evidence that human minds should be thought about in terms of symbolic systems has been the way they combine ideas, produce novelty, and learn quickly. We argue that modern neura...

I like this general point about levels of explanation (which I think is similar to the point we make here arxiv.org/abs/2508.05776) but how does it relate to the discussion about mistakes made by LLMs? (Possibly explained by the earlier context of the clip)

22.02.2026 21:25 | 👍 0    🔁 0    💬 0    📌 0

But this example again appears not to involve the reasoning mode. I agree that 'thinking' is confusing nomenclature, but it's notable that most (not all) of the stupidest mistakes come from the feedforward / parallel-processing mode.

22.02.2026 21:21 | 👍 0    🔁 0    💬 0    📌 0
Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem Recent work has documented striking heterogeneity in the performance of state-of-the-art vision language models (VLMs), including both multimodal language models and text-to-image models. These models...

We also have a paper on this (arxiv.org/abs/2411.00238), but it doesn't seem to be an arbitrary failure. Instead, the models seem to fail in precisely the ways that human vision fails under time pressure (including with counting), and increasingly they seem to resolve this via sequential processing.

22.02.2026 21:17 | 👍 2    🔁 0    💬 1    📌 0

Re: seeing vs. thinking: 'thinking' is arguably a bad term for this. In the vision setting, the models depend on sequential processing to individuate objects, but that wouldn't commonly be referred to as 'thinking' in the colloquial sense.

22.02.2026 20:27 | 👍 0    🔁 0    💬 1    📌 0
Post image

Huh, this seems to be the result of different prompts, although it's arguably confusing to say that no liquid can be poured into it (at all, or only in the current configuration?). In general, most of the comically stupid mistakes (e.g. how many b's are in blueberry) seem to come from the non-thinking models.

22.02.2026 20:23 | 👍 0    🔁 0    💬 1    📌 0
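For reference, the letter count behind that example can be checked mechanically. A minimal Python sketch (not part of the original post), assuming the question is a simple case-insensitive count:

```python
# The kind of question ("how many b's in blueberry?") that the
# non-thinking models reportedly flub; a direct count gives the answer.
word = "blueberry"
letter = "b"
count = word.lower().count(letter)
print(f"{word} contains {count} '{letter}' characters")  # -> 2
```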

Funny, but these demos always seem to come from the free / instant model. With thinking turned on, it gets this correct.

22.02.2026 18:48 | 👍 0    🔁 0    💬 1    📌 0
Post image

Very interesting! Consistent with this, we found that induction heads seem to be completely distinct from what we called 'symbolic induction heads', i.e. function vector heads: arxiv.org/abs/2502.20332

22.02.2026 01:36 | 👍 2    🔁 0    💬 0    📌 0
Post image

How do you knock the induction heads out of an LM while preserving its ability to think? Is it even possible?

@keremsahin22.bsky.social's work is worth reading if you haven't seen it yet.

hapax.baulab.info

21.02.2026 21:31 | 👍 26    🔁 5    💬 1    📌 1
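One concrete way to pose the question above is to ablate specific attention heads at inference time and see what breaks. A minimal sketch using TransformerLens (my example, not from the linked work), assuming the induction heads' (layer, head) indices have already been identified; the pairs below are placeholders:

```python
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2-small")

# Placeholder (layer, head) pairs; real induction heads would first be
# located, e.g. by prefix-matching scores on repeated random tokens.
induction_heads = [(5, 1), (5, 5), (6, 9)]

def make_ablation_hook(head_idx):
    # hook_z has shape [batch, position, head, d_head]; zeroing one head's
    # slice removes that head's contribution to the attention output.
    def hook(z, hook):
        z[:, :, head_idx, :] = 0.0
        return z
    return hook

fwd_hooks = [
    (utils.get_act_name("z", layer), make_ablation_hook(head))
    for layer, head in induction_heads
]

prompt = "The quick brown fox jumps over the lazy dog. The quick brown"
tokens = model.to_tokens(prompt)
with torch.no_grad():
    clean_logits = model(tokens)
    ablated_logits = model.run_with_hooks(tokens, fwd_hooks=fwd_hooks)

clean_top = clean_logits[0, -1].argmax().item()
ablated_top = ablated_logits[0, -1].argmax().item()
print("clean:", model.to_string([clean_top]), "| ablated:", model.to_string([ablated_top]))
```

Whether the ablated model can still 'think' (e.g. still benefits from a reasoning trace) is then the empirical question.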
Memorization vs. generalization in deep learning: implicit biases, benign overfitting, and more Or: how I learned to stop worrying and love the memorization

What is the relationship between memorization and generalization in AI? Is there a fundamental tradeoff? In infinitefaculty.substack.com/p/memorizati... I've reviewed some of the evolving perspectives on memorization & generalization in machine learning, from classic accounts through LLMs.

18.02.2026 15:54 | 👍 132    🔁 27    💬 4    📌 5

Unfortunately the event is in-person only.

16.02.2026 16:43 | 👍 2    🔁 0    💬 0    📌 0
Mechanistic Basis of Reasoning (in Brains and AI) | IVADO

Very excited for our second workshop on the computational ingredients of reasoning (Feb 24-27), this one focused on mechanisms of reasoning in both AI and the brain. Check out the program to see our amazing lineup of speakers, and please consider attending! ivado.ca/en/events/me...

16.02.2026 16:29 | 👍 9    🔁 2    💬 1    📌 0
Building compositional tasks with shared neural subspaces Nature - The brain can flexibly perform multiple tasks by compositionally combining task-relevant neural representations.

Thrilled that my paper is out in @nature.com. We explored how the brain builds complex tasks by compositionally combining simpler sub-task representations. The brain flexibly performs multiple tasks by dynamically reusing neural subspaces for sensory inputs and motor actions.

rdcu.be/eRVUk

11.02.2026 22:40 | 👍 130    🔁 47    💬 4    📌 1
Post image

Excited to announce a new book telling the story of mathematical approaches to studying the mind, from the origins of cognitive science to modern AI! The Laws of Thought will be published in February and is available for pre-order now.

18.12.2025 15:59 | 👍 166    🔁 39    💬 2    📌 5

That is, in order to do the kinds of things that are supposed to require algebraic / rule-based operations, these models actually do something that is algebraic, which both affirms the importance of algebraic operations for human-like reasoning and shows that this machinery doesn't need to be innate.

08.02.2026 02:11 | 👍 0    🔁 0    💬 1    📌 0
Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models Many recent studies have found evidence for emergent reasoning capabilities in large language models (LLMs), but debate persists concerning the robustness of these capabilities, and the extent to whic...

I generally agree, but the interesting thing is that the LLMs/VLMs sometimes do end up doing something very structured and algebraic, as we show e.g. here arxiv.org/abs/2502.20332 and here arxiv.org/abs/2506.15871 (the paper that @romanfeiman.bsky.social's meme was commenting on).

08.02.2026 02:09 | 👍 2    🔁 0    💬 1    📌 0

Can you elaborate?

08.02.2026 01:45 | 👍 0    🔁 0    💬 1    📌 0

Very cool! Yeah would love to chat at some point.

08.02.2026 00:24 | 👍 2    🔁 0    💬 0    📌 0

Yes!

08.02.2026 00:12 | 👍 0    🔁 0    💬 1    📌 0

I'm curious: would you say that this sort of thing is consistent with the predictions of the LoT hypothesis, i.e. that these models may emergently be implementing an LoT?

07.02.2026 23:13 | 👍 4    🔁 0    💬 1    📌 0

Amazing summary of our work on visual symbolic mechanisms: bsky.app/profile/this...

07.02.2026 23:05 | 👍 4    🔁 0    💬 1    📌 0
Post image

06.02.2026 21:07 | 👍 10    🔁 3    💬 0    📌 2

Thanks Shahab!

07.02.2026 14:28 | 👍 1    🔁 0    💬 0    📌 0

Excited to share that our work on Visual Symbolic Mechanisms has been accepted to ICLR! 🌴🇧🇷

06.02.2026 18:46 | 👍 11    🔁 2    💬 0    📌 0
Using Artificial Neural Networks to Relate External Sensory Features to Internal Decisional Evidence Abstract. All theories of perceptual decision-making postulate that external sensory information is transformed into the internal evidence that is used to judge the identity of the stimulus. However, ...

How do you know how visual stimuli are represented internally for decision making? This is perhaps the central question in perceptual decision making. In a new paper, we show that one can use artificial neural networks to crack this problem. #NeuroAi #VisionScience

direct.mit.edu/opmi/article...

05.02.2026 15:01 | 👍 24    🔁 6    💬 2    📌 0
Apes Share Human Ability to Imagine (YouTube video by Johns Hopkins University)

Imagination in bonobos!

I am thrilled to share a new paper w/ Amalia Bastos, out now in @science.org

We provide the first experimental evidence that a nonhuman animal can follow along a pretend scenario & track imaginary objects. Work w/ Kanzi, the bonobo, at Ape Initiative

youtu.be/NUSHcQQz2Ko

05.02.2026 19:18 | 👍 289    🔁 110    💬 10    📌 10

Very excited about this work looking at the emergent mechanisms that vision language models use to perform structured visual processing, mirroring a computational strategy (visual indexing) proposed in cognitive science, but here learned by VLMs. Check out the paper/thread for more details!

05.02.2026 21:24 | 👍 25    🔁 2    💬 1    📌 0
Post image

The visual world is composed of objects, and those objects are composed of features. But do VLMs exploit this compositional structure when processing multi-object scenes? In our 🆒🆕 #ICLR2026 paper, we find they do, via emergent symbolic mechanisms for visual binding. 🧵👇

05.02.2026 20:54 | 👍 83    🔁 25    💬 1    📌 3
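The binding question in the thread above is typically probed with conjunction-style stimuli: scenes where the individual features (colors, shapes) are easy to detect, but answering correctly requires binding the right feature to the right object. A minimal Pillow sketch of that style of stimulus, as an illustration of the general paradigm rather than the paper's actual materials:

```python
from PIL import Image, ImageDraw

# A toy two-object scene: answering "what color is the circle?" requires
# binding color to shape, not just detecting which features are present.
img = Image.new("RGB", (256, 128), "white")
draw = ImageDraw.Draw(img)
draw.ellipse((20, 34, 80, 94), fill="red")        # red circle
draw.rectangle((150, 34, 210, 94), fill="blue")   # blue square
img.save("binding_probe.png")

# A feature-swapped foil: same features present, different bindings.
foil = Image.new("RGB", (256, 128), "white")
fdraw = ImageDraw.Draw(foil)
fdraw.ellipse((20, 34, 80, 94), fill="blue")      # blue circle
fdraw.rectangle((150, 34, 210, 94), fill="red")   # red square
foil.save("binding_probe_foil.png")

question = "What color is the circle?"  # the correct answer differs across the two images
```

A model that detects features without binding them to objects will tend to answer such queries at chance across the original and feature-swapped images.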

This was a blast! Thanks for joining us!

31.01.2026 23:49 | 👍 9    🔁 0    💬 0    📌 0
Federal agents with weapons drawn, moments before murdering American citizens on the streets of Minneapolis at the dawn of 2026.

What should academics be doing right now?

I have been writing up some thoughts on what the research says about effective action, and what universities specifically can do.

davidbau.github.io/poetsandnurs...

It's on GitHub. Suggestions and pull requests welcome.
github.com/davidbau/poe...

26.01.2026 03:27 | 👍 37    🔁 16    💬 0    📌 4
Post image

Can you solve this algebra puzzle? 🧩

cb=c, ac=b, ab=?

A small transformer can learn to solve problems like this!

And since the letters don't have inherent meaning, this lets us study how context alone imparts meaning. Here's what we found: 🧵⬇️

22.01.2026 16:09 | 👍 48    🔁 10    💬 2    📌 2
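For what it's worth, the puzzle above has a determinate answer under one natural reading: if the operation forms a Latin square over {a, b, c} (each symbol appears exactly once in every row and column, as in a group's multiplication table), the two given products force ab = a. A small brute-force check of that reading (my sketch, not the transformer analysis from the thread):

```python
from itertools import product

symbols = "abc"

def is_latin_square(table):
    # Each row and each column must contain every symbol exactly once.
    rows_ok = all(set(table[r][c] for c in range(3)) == set(symbols) for r in range(3))
    cols_ok = all(set(table[r][c] for r in range(3)) == set(symbols) for c in range(3))
    return rows_ok and cols_ok

idx = {s: i for i, s in enumerate(symbols)}
answers = set()

# Enumerate all 3^9 possible multiplication tables over {a, b, c},
# keeping only Latin squares consistent with cb = c and ac = b.
for flat in product(symbols, repeat=9):
    table = [list(flat[3 * r:3 * r + 3]) for r in range(3)]
    if not is_latin_square(table):
        continue
    if table[idx["c"]][idx["b"]] != "c" or table[idx["a"]][idx["c"]] != "b":
        continue
    answers.add(table[idx["a"]][idx["b"]])

print(answers)  # {'a'} -> under the Latin-square assumption, ab = a
```

The constraint propagation behind the result: cb = c already places c in column b, so ab must be a or b; ac = b already places b in row a, so ab must be a or c; hence ab = a.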