
David Beniaguev

@davidbeniaguev.bsky.social

For fun and work, I build Generative Models. Computational Neuroscience PhD. Was trying to understand the brain to help build AI, but it appears it's no longer necessary... github: https://github.com/SelfishGene

170 Followers  |  710 Following  |  20 Posts  |  Joined: 24.11.2024

Latest posts by davidbeniaguev.bsky.social on Bluesky


What makes human pyramidal neurons uniquely suited for complex information processing? How can human neurons’ distinct properties contribute to our advanced cognitive abilities?

01.08.2025 10:30 — 👍 36    🔁 9    💬 2    📌 3
Science is a decentralized, civilization-wide collaborative effort. Anyone can contribute to science; the protocol is simple: just upload a document to the web, and if someone finds it useful, they will build upon it and cite it

New substack post that I decided to write on a whim today

It summarizes my thoughts about what is good now and what soon could be even better

open.substack.com/pub/davidben...

27.02.2025 13:34 — 👍 1    🔁 0    💬 0    📌 0
The calcitron: A simple neuron model that implements many learning rules via the calcium control hypothesis. Author summary: Researchers have developed various learning rules for artificial neural networks, but it is unclear how these rules relate to the brain's natural processes. This study focuses on the ca...

Now out in PLOS CB!

We propose a simple, perceptron-like neuron model, the calcitron, that has four sources of [Ca2+]...We demonstrate that by modulating the plasticity thresholds and calcium influx from each calcium source, we can reproduce a wide range of learning and plasticity protocols.
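A minimal toy sketch of a calcium-control plasticity rule of this flavor (the thresholds, gains, and source names below are hypothetical illustrations, not the paper's fitted parameters):

```python
import numpy as np

# Calcium-control hypothesis, toy version: low calcium -> no change,
# moderate calcium -> depression, high calcium -> potentiation.
# THETA_D and THETA_P are hypothetical thresholds, not values from the paper.
THETA_D = 0.5
THETA_P = 1.0

def plasticity_update(calcium, eta=0.1):
    """Weight change implied by a local calcium level."""
    if calcium >= THETA_P:
        return +eta   # high calcium: potentiation
    if calcium >= THETA_D:
        return -eta   # moderate calcium: depression
    return 0.0        # low calcium: no change

def total_calcium(source_activity, gains):
    """Sum calcium influx over several sources (e.g. local synaptic input,
    backpropagating spikes, a supervising signal -- names illustrative)."""
    return float(np.dot(source_activity, gains))

# Four calcium sources with hypothetical gains; two are active here
gains = np.array([0.4, 0.1, 0.3, 0.5])
ca = total_calcium(np.array([1, 0, 1, 0]), gains)   # 0.4 + 0.3 = 0.7
dw = plasticity_update(ca)                          # moderate calcium -> depression
```

Modulating the gains and thresholds switches which plasticity protocol the same rule reproduces, which is the gist of the quoted claim.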

19.02.2025 16:46 — 👍 12    🔁 4    💬 1    📌 0

It seems more plausible every month that PIs might soon greatly reduce the pace at which they take on new students and impose significantly higher recruiting standards

This would be a "vote with their feet" way of knowing whether the "grad student Turing test" is passed in practice

04.02.2025 12:03 — 👍 0    🔁 0    💬 0    📌 0

New work from our lab by @idoai.bsky.social and @danielay1.bsky.social

What makes human cortical neurons have such a complex I/O function as compared to rat neurons?

It turns out it's not just about their size

All details in the thread by Ido

bioRxiv: www.biorxiv.org/content/10.1...

26.12.2024 17:40 — 👍 4    🔁 0    💬 0    📌 0

o3 sounds on paper like the perfect first-year PhD student

Infinitely hardworking,
infinitely knowledgeable,
exceptionally technically competent,
reads everything you send it instantly, and immediately starts working on it

21.12.2024 09:58 — 👍 4    🔁 0    💬 0    📌 0

I was going for the shock value in "a poor fit approximation of a low resolution approximation"

As to what R^2 = 0.8 means, and whether it's good or bad, see the second message:

bsky.app/profile/davi...

16.12.2024 22:57 — 👍 1    🔁 0    💬 0    📌 0

The real question of the precise threshold that qualifies as a "good approximation" vs. a "bad approximation" can unfortunately only be truly studied at the system level

These studies have never been performed to date

16.12.2024 22:36 — 👍 2    🔁 0    💬 0    📌 1

Predicting the number of spikes in a 600 ms time window, I'm sure you'll agree, is a very coarse temporal approximation of a neuron, and even this coarse temporal approximation is only achieved with an R^2 = 0.8 fit

Biological neurons are simply not approximated well with artificial neurons
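For intuition about what an R^2 = 0.8 fit on windowed spike counts looks like, here is a purely synthetic sketch (Poisson counts plus Gaussian noise; illustrative, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" spike counts in 600 ms windows (~10 spikes per window)
true_counts = rng.poisson(lam=10, size=1000).astype(float)

# A "predictor" that is the truth corrupted by noise scaled to land near R^2 = 0.8
noise = rng.normal(0.0, true_counts.std() * 0.45, size=true_counts.size)
pred_counts = true_counts + noise

# Coefficient of determination
ss_res = np.sum((true_counts - pred_counts) ** 2)
ss_tot = np.sum((true_counts - true_counts.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot   # roughly 0.8
```

Even at R^2 = 0.8 the per-window errors are on the order of a few spikes, which is the sense in which this is a rough fit of an already-coarse description.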

16.12.2024 22:30 — 👍 4    🔁 0    💬 3    📌 0

There is no meaningful way in which temporal processing in the brain is actually continuous

1ms time discretization is likely sufficient for anything going on in the cortex, hippocampus, cerebellum, basal ganglia, etc.

In the periphery it might be different, but this is mostly sensing
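The 1 ms discretization claim can be made concrete with a tiny binning sketch (function and spike times are illustrative):

```python
import numpy as np

def bin_spikes(spike_times_ms, duration_ms, bin_ms=1.0):
    """Convert continuous spike times (in ms) into a binary binned train."""
    n_bins = int(np.ceil(duration_ms / bin_ms))
    binned = np.zeros(n_bins, dtype=np.int8)
    idx = (np.asarray(spike_times_ms) / bin_ms).astype(int)
    binned[idx[idx < n_bins]] = 1   # ignore spikes beyond the window
    return binned

# Spikes at 3.2 and 3.7 ms land in the same 1 ms bin and collapse into one
# event -- exactly the kind of sub-millisecond detail the 1 ms grid discards.
train = bin_spikes([3.2, 3.7, 150.0, 599.9], duration_ms=600)
```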

16.12.2024 10:30 — 👍 2    🔁 0    💬 1    📌 0

The first line doesn't need to be true for the second line to obviously be true

A human brain can be modeled as a very large artificial neural network

The specific details of how large it needs to be are debatable and actually matter, but the premise is not
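A back-of-envelope version of the "how large" question, using commonly cited order-of-magnitude estimates (rough assumptions, not measurements):

```python
# Commonly cited rough numbers for the human brain
neurons = 86e9               # ~86 billion neurons
synapses_per_neuron = 1e3    # order 10^3-10^4 synapses per neuron

# Lower bound on parameters if each synapse maps to at least one weight
synapses = neurons * synapses_per_neuron   # ~10^14 connections

# If approximating a single biological neuron itself requires a small deep
# network, the required ANN grows by a further multiplicative factor --
# that factor is exactly the debatable detail.
```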

16.12.2024 00:28 — 👍 2    🔁 0    💬 1    📌 0

I strongly suspect that 2025-2027 will produce a huge amount of "miracle years" by scientists

Scientists have often accumulated decades of insights but were always short on grant funding and on enthusiastic, capable, hardworking students to help them; those insights will soon be unlocked

06.12.2024 17:02 — 👍 2    🔁 0    💬 0    📌 0

Yeah, this is true, but maybe we're all just boomers, and the classical way should no longer be considered the best or only way?

02.12.2024 17:00 — 👍 0    🔁 0    💬 0    📌 0

So if you are contemplating doing a PhD sometime in your life, better enroll now

01.12.2024 20:37 — 👍 1    🔁 0    💬 1    📌 0

I can easily see how several years from now it will be very hard to acquire classical PhD training in a theoretical field

Mainly due to language models maturing into research assistants that, from an advisor's point of view, are simply better than the average PhD student

01.12.2024 20:37 — 👍 1    🔁 0    💬 2    📌 1

Today, code-writing LLMs are the latest tools that bring ideas closer to their implementation

Not yet instant, though

Oh, I imagine the day when ideas are implemented and tested only minutes after first being thought of

That would be a great day

This is the promise of AI for me

28.11.2024 17:24 — 👍 0    🔁 0    💬 0    📌 0

I remember, shortly after first learning MATLAB in my second year of undergrad, feeling "wow, I can think of things to do, and then nearly instantly do them and see the result"

It felt magical

Weirdly, ideas increased in complexity and these days always somehow feel at least 1-3 months of hard work away

28.11.2024 17:24 — 👍 0    🔁 0    💬 1    📌 0

Thanks

25.11.2024 17:50 — 👍 0    🔁 0    💬 0    📌 0

Thanks!

25.11.2024 17:19 — 👍 1    🔁 0    💬 0    📌 0

Wondering how hard it would be to create a self-made feed algorithm for Bluesky

Is the source code for the 'discover' or 'following' feeds available somewhere? Is it tinkerable/tunable?

How do these things work here?
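For context: custom Bluesky feeds are served by "feed generator" services that answer app.bsky.feed.getFeedSkeleton requests with a ranked list of post URIs; the reference implementation is the bluesky-social/feed-generator repo on GitHub. Below is only a toy ranking heuristic of the kind such a service might run -- the dict fields are hypothetical, not the real record schema:

```python
import math
import time

def score(post, now=None, half_life_hours=6.0):
    """Engagement score with exponential time decay (hypothetical fields)."""
    now = now or time.time()
    age_h = (now - post["created_at"]) / 3600.0
    engagement = post["likes"] + 2 * post["reposts"] + 3 * post["replies"]
    return engagement * math.exp(-age_h * math.log(2) / half_life_hours)

# Two made-up posts: a fresh modest one and an older, more-liked one
posts = [
    {"uri": "at://a", "likes": 10, "reposts": 1, "replies": 0,
     "created_at": time.time() - 3600},      # 1 hour old
    {"uri": "at://b", "likes": 50, "reposts": 5, "replies": 2,
     "created_at": time.time() - 86400},     # 1 day old
]
feed = sorted(posts, key=score, reverse=True)   # fresh post wins here
```

The 2x/3x engagement weights and the 6-hour half-life are arbitrary knobs -- precisely the kind of thing that becomes tunable once you run your own generator.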

25.11.2024 17:03 — 👍 3    🔁 0    💬 2    📌 0

My secret hope is that this site picks up steam so that Twitter will be forced to respond by reverting all the bad changes they introduced over the last ~2 years

Worst changes:
1) suppression of link sharing
2) tiktokification of news feed (engagement over substance)

24.11.2024 21:34 — 👍 1    🔁 0    💬 0    📌 0
David’s Substack | David Beniaguev | Substack — My personal Substack publication. Launched a year ago.

Just joined, nothing to post, so here is a link to my Substack

I have two posts there; it was not really possible to share them on Twitter, so maybe here?

Titles:
1) Why I can no longer pursue a career in academia (Sep 2024)
2) Obvious next steps in AI research (Sep 2023)

davidbeniaguev.substack.com

24.11.2024 20:59 — 👍 4    🔁 0    💬 0    📌 0
