@libedinsky.bsky.social
Neuroscientist in Singapore. Interested in intelligence, both biological and artificial.
Many apparent disagreements over the utility of neural manifolds come from a lack of clarity on what the term really encompasses, argues @mattperich.bsky.social
#neuroskyence
www.thetransmitter.org/neural-dynam...
New paper out in #Neuron: a general theory of sequential working memory in prefrontal cortex and in RNNs/SSMs, with its exact neural mechanism. Plus a unification of this new mechanism with the alternative mechanism of hippocampal cognitive maps! (1/9)
www.cell.com/neuron/fullt...
I am excited to share the last work of my postdoc as a Swartz Fellow at NYU on the dynamic routing of large-scale cognitive networks in the neocortex! 🧠 Here's a quick breakdown: 🧵
preprint: www.biorxiv.org/content/10.1...
1760: Bayes: You should interpret what you see in the light of what you know.
1780: Galvani: Nerves have something to do with electricity.
1850: Phineas Gage et al.: Different parts of the brain do different things.
It feels like we're back in the 40s
03.12.2024 00:39
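A quick aside on the 1760 entry above: "interpret what you see in the light of what you know" is what we now write as Bayes' rule. The standard form, added here only for reference:

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

where the prior $P(H)$ encodes what you know and $D$ is what you see.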
I'm familiar with aspects of this literature, but it's quite possible that I'm misinterpreting your post. Is there something specific about the scenario I postulate that is inconsistent with any of the 4Es?
01.12.2024 23:39

What cases do you have in mind? I can imagine some functions where ML could generalize so long as the out-of-training data follows the pattern of the training set. But with more complex functions, generalization should fall off as you deviate further from the training set.
01.12.2024 15:46
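A minimal sketch of that fall-off (my addition, not from the thread; the sine target and the degree-5 polynomial are arbitrary stand-ins for any task and any flexible model):

```python
# Illustrative sketch: fit a flexible model on a bounded training range,
# then watch the error grow as test points move outside that range.
import numpy as np

rng = np.random.default_rng(0)

# Training data: y = sin(x) sampled only on [-3, 3], with mild noise.
x_train = rng.uniform(-3.0, 3.0, 200)
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, x_train.size)

# A degree-5 polynomial stands in for any flexible regressor.
coeffs = np.polyfit(x_train, y_train, 5)

# In-distribution fit is good; extrapolation error grows with distance.
for offset in [0.0, 1.0, 2.0, 4.0, 8.0]:
    x_test = 3.0 + offset
    err = abs(np.polyval(coeffs, x_test) - np.sin(x_test))
    print(f"distance beyond training range {offset:4.1f} -> |error| {err:10.3f}")
```

Near the training boundary the fit is fine; a few units out, the polynomial's own growth dominates and the error explodes.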
An example of out of training would be speaking a language that you've never before encountered. Absurd example, but definitely out of training :)
01.12.2024 15:41

I wouldn't count that. That would be akin to asking an LLM a question it has never encountered before and claiming that a proper response implies an out-of-training solution. I'd say the answer is fully within the training set (word association).
01.12.2024 15:38

OP is an example of LLMs failing, which is not hard to imagine. What I'm having a hard time envisioning is a human (or any animal) solving a problem outside of their training set.
01.12.2024 14:39

I see the intuition behind your comment. But I feel that this intuition breaks down when you get down to specific examples of problems with presumed out-of-training-set solutions. I just can't think of an example.
01.12.2024 14:11

Yeah, I get that. I still prefer to attempt a definition. But I guess it's just a personal preference :)
01.12.2024 13:56

Oh oops. I always thought definitions helped think about issues properly.
01.12.2024 13:33

So do you then subscribe to something along the lines of definition 2 to assign intelligence? Or something else?
01.12.2024 13:08

The problem I see is that under this definition you would grant intelligence to an LLM used by a robot that is able to sense and interact with the environment. But that doesn't seem all that different from our current LLMs.
01.12.2024 12:49

LLMs would have some of the first, but none of the second. Could the second definition be closer to what you were thinking?
01.12.2024 11:31

My sense is that the term intelligence is used in 2 ways: (1) abilities to do goal-directed actions (where proficiency, breadth, speed of performance, and speed of learning measure independent aspects of intelligence), and (2) meaning/grounding of symbols and actions, on the other...
01.12.2024 11:29

It's tricky, though. It's hard to argue that humans or other animals solve new problems that are not in our training sets (can you think of an example?). And re the second point, it feels arbitrary to allow human-like errors only in the definition of intelligence.
01.12.2024 11:22

Depends on how you define intelligence. How would you define it such that LLMs have zero?
01.12.2024 06:20

Important paper!
www.biorxiv.org/content/bior...
I'm not sure the Discussion fully delineates its radical implications.
No more...
* Place cells
* Grid cells, splitter cells, border cells
* Mirror neurons
* Reward neurons
* Conflict cells
(continued)
And what does it mean to "truly understand"? Is there something more to understanding than what LLMs do? If you are curious about these questions, this blog might help: blog.dileeplearning.com/p/ingredient...
18.11.2024 21:47

…It boils down to whether we want to recruit people for incremental progress on a path we're comfortable with, or if we need/want breakthroughs & paradigm shifts. I think we need the latter, & that requires learning from biology first, more than we have, rather than starting & staying in model land.
17.11.2024 16:23

Noncortical cognition, subcortex matters!
Happy to share a very short piece on thinking of cognition more broadly, as solving a wide gamut of behavioral problems, including problems in the "here and now".
Open access for now:
www.sciencedirect.com/science/arti...
#neuroscience