
Guillermo Puebla

@guillermopuebla.bsky.social

Cognitive scientist studying visual reasoning in humans and DNNs. https://guillermopuebla.com

51 Followers  |  111 Following  |  4 Posts  |  Joined: 29.10.2023

Latest posts by guillermopuebla.bsky.social on Bluesky

… but my fully distributed model explains 99.9% of the fMRI variance 🥺

06.09.2025 22:30 · 👍 1  🔁 0  💬 1  📌 0
Language and Computation in Neural Systems We are an international group of scientists consisting of linguists, cognitive scientists, cognitive neuroscientists, computational neuroscientists, computational modellers, computational scientists, ...

Interested in doing a PhD with me and lacns.github.io? Or with any of the incredible fellows in the IMPRS School of Cognition www.maxplanckschools.org/cognition-en - apply before Dec 1st at cognition.maxplanckschools.org/en/application

03.09.2025 15:15 · 👍 35  🔁 37  💬 2  📌 2
From Basic Affordances to Symbolic Thought: A Computational Phylogenesis of Biological Intelligence What is it about human brains that allows us to reason symbolically whereas most other animals cannot? There is evidence that dynamic binding, the ability to combine neurons into groups on the fly, is...

In our forthcoming paper, John Hummel and I ask what it would mean for a neural computing architecture such as a brain to implement a symbol system, and the related question of what makes it difficult for them to do so, with an eye toward the differences between humans, animals, and ANNs.

22.08.2025 18:25 · 👍 34  🔁 13  💬 1  📌 2
Large Language Models Do Not Simulate Human Psychology Large Language Models (LLMs), such as ChatGPT, are increasingly used in research, ranging from simple writing assistance to complex data annotation tasks. Recently, some research has suggested that LLM...

Large Language Models (LLMs) do not simulate human psychology. That's the title of our new paper, available as a preprint today (1/12):

arxiv.org/abs/2508.06950

12.08.2025 15:05 · 👍 125  🔁 45  💬 3  📌 6

This is great! One question: do you think this analysis extends to brain "decoding" methods?

05.08.2025 17:15 · 👍 0  🔁 0  💬 0  📌 0

What does it mean if pure prediction fails? Even with 100% pure prediction, you still don't know how the model predicted unless you run an experiment that manipulates independent variables. A digital clock predicts the time of a cuckoo clock with 100% accuracy, but the two work totally differently.

12.12.2024 23:05 · 👍 1  🔁 2  💬 1  📌 0
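The clock analogy above can be made concrete with a minimal sketch (hypothetical code, not from the post): two "clocks" that agree on every output yet compute it by entirely different mechanisms, so predictive accuracy alone cannot distinguish them.

```python
def digital_clock(seconds_since_midnight: int) -> str:
    """Computes the time arithmetically, in constant time."""
    h, rem = divmod(seconds_since_midnight, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"


def cuckoo_clock(seconds_since_midnight: int) -> str:
    """Simulates a pendulum ticking once per second from 00:00:00."""
    h = m = s = 0
    for _ in range(seconds_since_midnight):
        s += 1
        if s == 60:
            s, m = 0, m + 1
        if m == 60:
            m, h = 0, h + 1
    return f"{h:02d}:{m:02d}:{s:02d}"


# The digital clock "predicts" the cuckoo clock with 100% accuracy...
assert all(digital_clock(t) == cuckoo_clock(t) for t in range(0, 86400, 997))
# ...but only an intervention on the mechanism (e.g., stopping the
# pendulum) would reveal that the two compute time differently.
```

Both functions pass the same behavioral test, which is the point: fitting outputs cannot, by itself, identify which internal process produced them.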

📌

01.12.2024 23:52 · 👍 0  🔁 0  💬 0  📌 0
A poster from VSS 2024 showing that emergent features can cause negatively accelerating search functions in relational searches. There are results from visual experiments and simulations from the CASPER model of visual search that included emergent features.

What's the deal with negatively accelerating search functions in relational searches? The CASPER model of visual search shows how emergent features may allow for parallel processing in searches we'd expect to be steep and linear. Visit me Tuesday afternoon in the Banyan Breezeway! #VSS2024

21.05.2024 17:11 · 👍 4  🔁 1  💬 0  📌 1

I've got an exciting #Visionscience and #STEMed announcement: My textbook "Practical Vision: Learning through Experimentation" is now in production at Routledge Books! The book focuses on hands-on, analog exercises to identify and discuss key mechanisms of human vision. Coming October 2024!

22.04.2024 16:36 · 👍 25  🔁 8  💬 5  📌 1

Clear writing is (imperfect) evidence of clear thinking. The use of LLMs for writing is, IMO, often inexcusable: it substitutes the median voices of the past for one's own voice and deceives one's audience with an incorrect picture of one's understanding (corrupting their training data, so to speak).

15.03.2024 16:16 · 👍 52  🔁 11  💬 4  📌 1

Building larger LLMs to get AGI is like linearly accelerating towards light speed.

14.01.2024 23:39 · 👍 14  🔁 4  💬 1  📌 0
some heterogeneous neurons

Excited to share @rgast.bsky.social's new PNAS paper from the lab, with Sara Solla.

We ask: does cell type heterogeneity affect what neural networks can compute? How might different brain regions leverage heterogeneity to achieve different things?

www.pnas.org/doi/10.1073/...

11.01.2024 14:26 · 👍 14  🔁 10  💬 1  📌 0

Then we all agree in one respect: there is a lot of hype. But the problem is with researchers who take the hype seriously? That we should focus on the strengths rather than the weaknesses of DNNs? That there is little "confusion that deep neural networks (DNNs) are 'models of the human visual system'"? 🤷‍♂️

08.01.2024 14:15 · 👍 1  🔁 2  💬 0  📌 0

Definitely.

06.12.2023 17:48 · 👍 2  🔁 0  💬 0  📌 0

This is so cool:
www.pnas.org/doi/10.1073/...
Bacteria store a memory of swarming proficiency (measured as the time lag to start swarming on suitable media) in the form of intracellular iron levels. This memory can be passed down for four generations!

26.11.2023 16:54 · 👍 9  🔁 3  💬 0  📌 0

A lot of the neuroscience work, particularly in hippocampus, has been focused on content-addressable memory, where data and address are the same. But this might not be the right way to think about memory in the brain. Maybe we have an addressing system that is separate from stored content.

05.11.2023 11:03 · 👍 7  🔁 3  💬 0  📌 0
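The distinction drawn in the post above can be illustrated with a minimal sketch (hypothetical code, not from the post; the patterns and keys are invented for illustration): in a content-addressable memory, a partial or noisy cue retrieves the stored item whose content it most resembles, whereas in an address-based memory, retrieval uses a separate key that carries no information about the stored content.

```python
import numpy as np

# --- Content-addressable: data and "address" are the same thing. ---
# Three stored binary (+1/-1) patterns.
stored = np.array([
    [ 1,  1, -1, -1,  1, -1],
    [-1,  1,  1, -1, -1,  1],
    [ 1, -1,  1,  1, -1, -1],
])


def recall_by_content(cue: np.ndarray) -> np.ndarray:
    """Return the stored pattern most similar (by dot product) to the cue."""
    return stored[np.argmax(stored @ cue)]


# A noisy version of pattern 0 (one bit flipped) still retrieves pattern 0.
noisy = np.array([1, 1, -1, -1, -1, -1])
assert np.array_equal(recall_by_content(noisy), stored[0])

# --- Address-based: retrieval uses a key separate from the content, ---
# like a pointer or index; the key tells you nothing about what is stored.
memory = {0x0A: "episode about the beach", 0x0B: "episode about the exam"}
assert memory[0x0A] == "episode about the beach"
```

In the first scheme, error-tolerant completion from partial content falls out for free; in the second, content can be arbitrary, but you must hold on to the key to get anything back.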
