
Toviah Moldwin

@tmoldwin.bsky.social

Computational neuroscience: Plasticity, learning, connectomics.

294 Followers  |  346 Following  |  343 Posts  |  Joined: 20.11.2024

Latest posts by tmoldwin.bsky.social on Bluesky

Tool use aids prey-fishing in a specialist predator of stingless bees | PNAS Tool use is widely reported across a broad range of the animal kingdom, yet comprehensive empirical tests of its function and evolutionary drivers ...

Tool use in insects: Assassin bugs apply resin to their forelegs before a stingless bee hunt. This makes the bees attack the bug in just the right position to be caught!

Videos well worth watching.

www.pnas.org/doi/full/10....

17.05.2025 03:53 β€” πŸ‘ 198    πŸ” 84    πŸ’¬ 5    πŸ“Œ 6

(At high spatiotemporal resolution.)

17.05.2025 08:27 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

In the grand scheme of things the main thing that matters is advances in microscopy and imaging methods. Almost all results in neuroscience are tentative because we can't see everything that's happening at the same time.

17.05.2025 08:26 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I also have a stack of these, I call it 'apocalypse food'.

16.05.2025 19:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

You are correct about this.

16.05.2025 19:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

But so is every possible mapping, so the choice of a specific mapping is not contained within the data. Even the fact that the training data comes in X, y pairs is not sufficient to provide a mapping that generalizes in a specific way. The brain chooses a specific algorithm that generalizes well.

16.05.2025 19:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

(Consider that one can create an arbitrary mapping between a set of images and a set of two labels, thus the choice of a specific mapping is a reduction of entropy and thus constitutes information.)

16.05.2025 18:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
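The entropy-reduction point above can be made concrete with a small sketch (a minimal illustration, not from the original thread): with N images and 2 labels there are 2^N possible labelings, so committing to one specific mapping selects 1 out of 2^N possibilities, i.e. N bits of information not present in the images or the labels alone.

```python
import math

def mapping_entropy_bits(n_images: int, n_labels: int = 2) -> float:
    """Bits required to specify one particular labeling out of all
    possible mappings from n_images items to n_labels categories."""
    n_mappings = n_labels ** n_images  # each image independently gets one label
    return math.log2(n_mappings)

# Choosing one specific cat/dog labeling of 10 images selects
# 1 of 2**10 = 1024 possible mappings: 10 bits of information.
print(mapping_entropy_bits(10))  # → 10.0
```

The function name and framing are illustrative only; the arithmetic is just log2 of the number of possible mappings.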

The set of weights that correctly classifies images as cats or dogs contains information that is not contained either in the set of training images or in the set of labels.

16.05.2025 18:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Learning can generate information about the *mapping* between the object and the category. It doesn't generate information about the object (by itself) or the category (by itself) but the mapping is not subject to the data processing inequality for the data or the category individually.

16.05.2025 18:36 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
A Complete-ish Guide To Making Scientific Figures for Publication with Python and Matplotlib (Also Inkscape and MSWORD, unfortunately)

related:
dendwrite.substack.com/p/a-complete...

16.05.2025 10:43 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

GPT is already pretty good at this. Maybe not perfect, but possibly as good as the median academic.

16.05.2025 06:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

What do you mean by 'generate information'? What is an example of someone making this sort of claim?

15.05.2025 19:14 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Paying is best. Reviews should mostly be done by advanced grad students/postdocs who could use the cash.

13.05.2025 19:41 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Nature vs. Nurture vs. Putting in the Work When people discuss whether a particular trait is innate or environmental, it is usually assumed that each of these components is fixed.
12.05.2025 19:59 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Why wouldn't you want your papers to be LLM-readable?

07.05.2025 15:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

If such a value to society exists, it should not be difficult for the PhD student to figure out how to articulate it themselves. A lack of independence of thought when it comes to this sort of thing would be much more concerning.

04.05.2025 15:14 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Oh you were on that? Small world.

04.05.2025 07:44 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

But I do think in our efforts to engage with the previous work on this, we made this paper overly long and technical. We present the bottom-line formulation of the plasticity rule in the Calcitron paper.

03.05.2025 20:20 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
A generalized mathematical framework for the calcium control hypothesis describes weight-dependent synaptic plasticity - Journal of Computational Neuroscience The brain modifies synaptic strengths to store new information via long-term potentiation (LTP) and long-term depression (LTD). Evidence has mounted that long-term synaptic plasticity is controlled vi...

One of the reasons we wrote this paper is that calcium control is a great theory, but there were two semi-conflicting mathematical formulations of it, both of which had some inelegancies. I think we managed to clean them up and made it more 'theory'-like.

link.springer.com/article/10.1...

03.05.2025 20:20 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I know that e.g. Yuri Rodrigues has a paper that incorporates second messengers, but at that point it's not really parsimonious anymore.

03.05.2025 20:05 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The leading theory for plasticity is calcium control, which I've done some work on. I do think that I've contributed on that front with the Calcitron and the FPLR framework which came out in the past few months. Anything beyond calcium control gets into simulation territory.

03.05.2025 20:05 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

The reason it's less active now is that people kind of feel that single neuron theory has been solved. The LIF and cable-theory models are still pretty much accepted. Any additional work would almost necessarily add complexity, and that complexity is mostly not needed for 'theory' questions.

03.05.2025 16:22 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Hebbian learning? Associative attractor networks (e.g. Hopfield)? Calcium control hypothesis? Predictive coding? Efficient coding? There are textbooks about neuro theory.

03.05.2025 13:28 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I kind of like the size of the single neuron theory community, it's the right size. The network theory community is IMHO way too big, there are like thousands of papers about Hopfield networks, that's probably too much.

03.05.2025 13:23 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Not really true, there are a bunch of people doing work on e.g. single neuron biophysics, plasticity models, etc. Definitely not as big of a field but we exist.

03.05.2025 13:20 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

And 'because it's unethical in this situation' is not a valid response; the ethics are irrelevant to rigor and the epistemic question of the scientific approach to establishing truth in medicine.

02.05.2025 10:49 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Exactly. There's no difference between a 'peer reviewer' appointed by a journal and you, a scientific peer, evaluating the publication yourself.

01.05.2025 15:29 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Computational Neuroscience, Connectomics, and Consciousness Zappable Β· Episode

Ariel Krakowski interviews me on his podcast about brains, AI, plasticity, connectomics, consciousness, and everything in between.

open.spotify.com/episode/4m33...

28.04.2025 12:25 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

If they're actually that great there's no problem, the university will want to keep them. But if their job would be at risk from a younger competitor if not for tenure, that's evidence that they're not actually that great.

27.04.2025 17:54 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Also a lot of professors build their research careers on outdated scientific trends, in science especially you don't want something holding you to the past.

27.04.2025 07:30 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
