What makes human pyramidal neurons uniquely suited for complex information processing? How can human neurons' distinct properties contribute to our advanced cognitive abilities?
01.08.2025 10:30 · @davidbeniaguev.bsky.social
For fun and work, I build Generative Models. Computational Neuroscience PhD. Was trying to understand the brain to help build AI, but it appears it's no longer necessary.. github: https://github.com/SelfishGene
New substack post that I decided to write on a whim today
It summarizes my thoughts about what is good now and what soon could be even better
open.substack.com/pub/davidben...
Now out in PLOS CB!
We propose a simple, perceptron-like neuron model, the calcitron, that has four sources of [Ca2+]... We demonstrate that by modulating the plasticity thresholds and calcium influx from each calcium source, we can reproduce a wide range of learning and plasticity protocols.
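The core idea in the abstract - total calcium from several sources compared against plasticity thresholds - can be sketched as a minimal update rule. The threshold values, learning rate, and source names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative constants (my assumptions, not the paper's values):
THETA_D, THETA_P = 0.5, 1.0  # depression / potentiation calcium thresholds
ETA = 0.1                    # magnitude of each weight change

def calcitron_step(w, ca_sources):
    """One plasticity step for a vector of synaptic weights `w`.

    `ca_sources` is a list of per-synapse calcium contributions
    (e.g. local influx, a backpropagating spike, a supervisory signal);
    the total calcium at each synapse determines the sign of its update.
    """
    ca_total = np.sum(ca_sources, axis=0)
    dw = np.zeros_like(w)
    dw[(ca_total >= THETA_D) & (ca_total < THETA_P)] = -ETA  # depression band
    dw[ca_total >= THETA_P] = ETA                            # potentiation band
    return w + dw
```

For example, three synapses with total calcium of 0.2, 0.7, and 1.2 would stay unchanged, depress, and potentiate, respectively. Tuning the per-source influx and the two thresholds then selects which stimulation protocols land in which band.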
It seems more plausible every month that PIs might soon greatly reduce the pace at which they take on new students and impose significantly higher standards in recruiting
This would be a "vote with their feet" way of knowing whether the "grad student Turing test" has been passed in practice
New work from our lab by @idoai.bsky.social and @danielay1.bsky.social
What gives human cortical neurons such a complex I/O function compared to rat neurons?
It turns out it's not just about their size
All details in the thread by Ido
bioRxiv: www.biorxiv.org/content/10.1...
o3 sounds on paper like the perfect first-year PhD student
Infinitely hardworking,
infinitely knowledgeable,
exceptionally technically competent,
reads everything you send it instantly, and immediately starts working on it
I was going for the shock value in "a poor fit approximation of a low resolution approximation"
As to what R^2 = 0.8 means, and whether it's good or bad, see the second message:
bsky.app/profile/davi...
The real question of what precise threshold separates a "good approximation" from a "bad approximation" can unfortunately only be truly studied at the system level
To date, these studies have never been performed
Predicting the number of spikes in a 600ms time window, I'm sure you'll agree, is a very coarse temporal approximation of a neuron, and even this coarse temporal approximation is only achieved with an R^2=0.8 fit
Biological neurons are simply not approximated well with artificial neurons
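To make the R^2 number concrete, here is a minimal sketch of computing the coefficient of determination over windowed spike counts. The spike counts are made-up numbers for illustration, not data from the paper:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - (residual SS / total SS)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative spike counts in consecutive 600ms windows (made-up numbers):
true_counts = [1, 2, 3, 4]
pred_counts = [1, 2, 3, 5]  # one window off by a single spike
```

With these made-up counts the score comes out to 0.8: given this little variance across windows, a single missed spike in four windows already costs 0.2 of R^2, which is one way to read how coarse an R^2=0.8 windowed fit can be.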
There is no meaningful way in which temporal processing in the brain is actually continuous
1ms time discretization is likely sufficient for anything going on in the cortex, hippocampus, cerebellum, basal ganglia, etc.
In the periphery it might be different, but this is mostly sensing
The first line doesn't need to be true for the second line to obviously be true
A human brain can be modeled as a very large artificial neural network
The specific details of how large it needs to be are debatable and actually matter, but the premise is not
I strongly suspect that 2025-2027 will produce a huge amount of "miracle years" by scientists
Scientists have often accumulated decades of insights but always lacked grant funding and enthusiastic, capable, hardworking students to help them - that potential will soon be unlocked
Yeah, this is true, but maybe we're all just boomers, and the classical way should no longer be considered the best or only way?
02.12.2024 17:00
So if you are contemplating doing a PhD sometime in your life, better enroll now
01.12.2024 20:37
I can easily see how several years from now it will be very hard to acquire classical PhD training in a theoretical field
Mainly due to language models maturing into research assistants that, from an advisor's point of view, are simply better than the average PhD student
Today, code-writing LLMs are the latest tools that bring ideas closer to their implementation
Not yet instant, though
Oh, I imagine the day when ideas are implemented and tested only minutes after being first thought of
That would be a great day
This is the promise of AI for me
I remember shortly after first learning matlab in 2nd year undergrad feeling "wow, I can think of things to do, and then nearly instantly do them and see the result"
It felt magical
Weirdly, ideas increased in complexity and these days always somehow feel at least 1-3 months of hard work away
Thanks
25.11.2024 17:50
Thanks!
25.11.2024 17:19
Wondering how hard it will be to create a self-made feed algorithm for bluesky
Is the source code for the 'discover' or 'following' feeds available somewhere? Is it tinkerable/tunable?
How do these things work here?
My secret hope is that this site picks up steam so that twitter will be forced to respond by reverting all the bad changes they introduced in the last ~2 years
Worst changes:
1) suppression of link sharing
2) tiktokification of news feed (engagement over substance)
Just joined, nothing to post so here is a link to my substack
I have two posts there, it was not really possible to share on twitter, so maybe here?
Titles:
1) Why I can no longer pursue a career in academia (Sep 2024)
2) Obvious next steps in AI research (Sep 2023)
davidbeniaguev.substack.com