
Camilo Libedinsky

@libedinsky.bsky.social

Neuroscientist in Singapore. Interested in intelligence, both biological and artificial.

303 Followers  |  136 Following  |  15 Posts  |  Joined: 07.12.2023

Latest posts by libedinsky.bsky.social on Bluesky

Neural manifolds: Latest buzzword or pathway to understand the brain? When you cut away the misconceptions, neural manifolds present a conceptually appropriate level at which systems neuroscientists can study the brain.

Many apparent disagreements over the utility of neural manifolds come from a lack of clarity on what the term really encompasses, argues @mattperich.bsky.social

#neuroskyence

www.thetransmitter.org/neural-dynam...

30.03.2025 15:02 — 👍 63    🔁 15    💬 2    📌 1

New paper out in #Neuron: A general theory of sequential working memory in prefrontal cortex and RNN/SSMs with their exact neural mechanism. Plus unifying this new mechanism with the alternate mechanism of hippocampal cognitive maps! (1/9)

www.cell.com/neuron/fullt...

08.12.2024 15:27 — 👍 136    🔁 40    💬 6    📌 2
Cognitive network interactions through communication subspaces in large-scale models of the neocortex The neocortex-wide neural activity is organized into distinct networks of areas engaged in different cognitive processes. To elucidate the underlying mechanism of flexible network reconfiguration, we ...

I am excited to share the last work of my postdoc as a Swartz Fellow at NYU on the dynamic routing of large-scale cognitive networks in the neocortex! 🌐🧠 Here's a quick breakdown: 🧵

preprint: www.biorxiv.org/content/10.1...

07.12.2024 00:22 — 👍 28    🔁 12    💬 1    📌 1

1760: Bayes: You should interpret what you see in the light of what you know.
1780: Galvani: Nerves have something to do with Electricity.
1850: Phineas Gage et al: Different parts of the brain do different things.

01.12.2024 20:29 — 👍 42    🔁 4    💬 1    📌 2

It feels like we're back in the 40s

03.12.2024 00:39 — 👍 1    🔁 0    💬 0    📌 0

I'm familiar with aspects of this literature, but it's quite possible that I'm misinterpreting your post. Is there something specific about the scenario I postulate that is inconsistent with any of the 4Es?

01.12.2024 23:39 — 👍 0    🔁 0    💬 0    📌 0

What cases do you have in mind? I can imagine some functions where ML could generalize so long as the out-of-training data follows the pattern of the training set. But with more complex functions, generalization should fall off as you deviate further from the training set.

01.12.2024 15:46 — 👍 0    🔁 0    💬 0    📌 0
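The point in that reply — a fitted model can generalize where new inputs follow the training pattern, but its error grows as inputs move farther from the training range — can be sketched with a toy curve fit. This is my own illustration, not anything from the thread; the function, ranges, and polynomial degree are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training set": noisy samples of a nonlinear function on [0, 3].
x_train = np.linspace(0.0, 3.0, 60)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(x_train.size)

# Fit a cubic polynomial -- a stand-in for any flexible function approximator.
coeffs = np.polyfit(x_train, y_train, deg=3)

def rmse(x):
    """Root-mean-square error of the fitted polynomial vs. the true function."""
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - np.sin(x)) ** 2)))

in_dist  = rmse(np.linspace(0.0, 3.0, 200))  # inside the training range
near_ood = rmse(np.linspace(3.0, 5.0, 200))  # just outside it
far_ood  = rmse(np.linspace(5.0, 8.0, 200))  # far outside it

# Error grows with distance from the training data.
print(in_dist, near_ood, far_ood)
```

Inside the training range the cubic tracks sin(x) closely; just outside, the fit drifts; far outside, the polynomial's own growth dominates and the error explodes.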

An example of out of training would be speaking a language that you've never before encountered. Absurd example, but definitely out of training :)

01.12.2024 15:41 — 👍 0    🔁 0    💬 0    📌 0

I wouldn't count that. That would be akin to asking an LLM a question it has never encountered before and claiming that a proper response implies out-of-training solution. I'd say the answer is fully within the training set (word association).

01.12.2024 15:38 — 👍 0    🔁 0    💬 1    📌 0

OP is an example of LLMs failing, which is not hard to imagine. What I'm having a hard time envisioning is a human (or any animal) solving a problem outside of their training set

01.12.2024 14:39 — 👍 0    🔁 0    💬 2    📌 0

I see the intuition behind your comment. But I feel that this intuition breaks down when you go down to specific examples of problems with presumed out-of-training set solutions. I just can't think of an example.

01.12.2024 14:11 — 👍 0    🔁 0    💬 2    📌 0

Yeah, I get that. I still prefer to attempt defining. But I guess it's just a personal preference :)

01.12.2024 13:56 — 👍 0    🔁 0    💬 0    📌 0

Oh oops. I always thought definitions helped think about issues properly.

01.12.2024 13:33 — 👍 0    🔁 0    💬 1    📌 0

So do you then subscribe to something along the lines of definition 2 to assign intelligence? Or something else?

01.12.2024 13:08 — 👍 0    🔁 0    💬 1    📌 0

The problem I see is that under this definition you would give intelligence to an LLM used by a robot that is able to sense and interact with the environment. But that doesn't seem all that different from our current LLMs

01.12.2024 12:49 — 👍 0    🔁 0    💬 0    📌 0

LLMs would have some of the first, but none of the second. Could the second definition be closer to what you were thinking?

01.12.2024 11:31 — 👍 0    🔁 0    💬 0    📌 0

My sense is that the term intelligence is used in 2 ways: (1) abilities to do goal-directed actions (where the proficiency, breadth, speed of performance and speed of learning measure independent aspects of intelligence), and (2) meaning/grounding of symbols and actions, on the other...

01.12.2024 11:29 — 👍 1    🔁 0    💬 2    📌 0

It's tricky though. It's hard to argue that humans or other animals solve new problems that are not in our training sets (can you think of an example?). And re the second point, it feels arbitrary to allow human-like errors only in the definition of intelligence.

01.12.2024 11:22 — 👍 0    🔁 0    💬 2    📌 0

Depends on how you define intelligence. How would you define it such that LLMs have zero?

01.12.2024 06:20 — 👍 3    🔁 1    💬 1    📌 1

Important paper!

www.biorxiv.org/content/bior...

I'm not sure the Discussion fully delineates its radical implications.

No more...

* Place cells
* Grid cells, splitter cells, border cells
* Mirror neurons
* Reward neurons
* Conflict cells

(continued)

22.11.2024 14:47 — 👍 254    🔁 70    💬 13    📌 15

And what does it mean to "truly understand"? Is there something more to understanding than what LLMs do? If you are curious about these questions this blog might help: blog.dileeplearning.com/p/ingredient...

18.11.2024 21:47 — 👍 25    🔁 6    💬 0    📌 0

…It boils down to whether we want to recruit people for incremental progress on a path we're comfortable with, or if we need/want breakthroughs & paradigm shifts. I think we need the latter, & that requires learning from biology first, more than we have, rather than starting & staying in model land.

17.11.2024 16:23 — 👍 32    🔁 3    💬 5    📌 0

๐—ก๐—ผ๐—ป๐—ฐ๐—ผ๐—ฟ๐˜๐—ถ๐—ฐ๐—ฎ๐—น ๐—ฐ๐—ผ๐—ด๐—ป๐—ถ๐˜๐—ถ๐—ผ๐—ป, subcortex matters!
Happy to share very short piece on thinking of cognition more broadly as solving a broad gamut of behavioral problems, including problems in the "here and now".
Open access for now:
www.sciencedirect.com/science/arti...
#neuroscience

08.12.2023 17:54 — 👍 25    🔁 6    💬 2    📌 1

@libedinsky is following 20 prominent accounts