
Yash Mehta

@yashsmehta.bsky.social

Cognitive Science PhD student, Johns Hopkins 🧠 Previously: HHMI Janelia 🇺🇸, AutoML Lab 🇩🇪, Gatsby Unit UCL 🇬🇧 www.yashsmehta.com 🇮🇳

30 Followers  |  30 Following  |  7 Posts  |  Joined: 18.11.2024

Latest posts by yashsmehta.bsky.social on Bluesky

Our modeling framework offers a new avenue for understanding the computational principles of synaptic plasticity and learning in the brain. Research at HHMI Janelia, with fantastic collaborators Danil Tyulmankov, Adithya Rajagopalan, Glenn Turner, James Fitzgerald and @janfunkey.bsky.social!

18.11.2024 18:18 | 👍 1  🔁 0  💬 0  📌 0

We applied our technique to behavioral data from Drosophila in a probabilistic reward-learning experiment. Our findings reveal an active forgetting component in reward learning in flies 🪰, improving predictive accuracy over previous models. (4/5)

18.11.2024 18:18 | 👍 0  🔁 0  💬 1  📌 0
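To make the active forgetting idea above concrete, here is a deliberately oversimplified, hypothetical update, not the rule inferred from the fly data: a reward-association weight is potentiated on rewarded trials and decays a little on every trial, so it saturates instead of growing without bound. Dropping the decay term removes that saturation.

import numpy as np

rng = np.random.default_rng(0)

def run_block(alpha, gamma, p_reward, n_trials=200):
    # Hypothetical single-synapse reward-learning rule: potentiate by alpha on
    # rewarded trials, and let the weight decay by gamma on every trial
    # (gamma > 0 plays the role of "active forgetting").
    w = 0.0
    for _ in range(n_trials):
        rewarded = rng.random() < p_reward
        w += alpha * rewarded - gamma * w
    return w

# Same learning rate with and without the forgetting term.
print("final weight with forgetting:   ", round(run_block(alpha=0.1, gamma=0.05, p_reward=0.5), 2))
print("final weight without forgetting:", round(run_block(alpha=0.1, gamma=0.0, p_reward=0.5), 2))

With the decay term the weight settles near alpha * p_reward / gamma (here about 1.0); without it, the weight keeps growing with the number of rewarded trials, which is the qualitative difference a forgetting component introduces.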

This method uncovers complex rules with long, nonlinear time dependencies, involving factors like postsynaptic activity and current synaptic weights. We validate it through simulations, successfully recovering known rules like Oja's as well as more intricate ones. (3/5)

18.11.2024 18:18 | 👍 0  🔁 0  💬 1  📌 0
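For context on the recovery test above: Oja's rule updates a weight vector as delta_w = eta * y * (x - y * w), a Hebbian term plus a normalizing decay, and drives w toward the leading principal component of its inputs. The short simulation below is my own illustration of such a ground-truth rule, not code from the paper.

import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D inputs; the leading principal component lies along [1, 1] / sqrt(2).
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=5000)

w = 0.1 * rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                      # linear postsynaptic response
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian growth with normalizing decay

leading = np.linalg.eigh(cov)[1][:, -1]   # eigenvector of the largest eigenvalue
print("learned weights:    ", np.round(w, 3))
print("leading eigenvector:", np.round(leading, 3))   # agreement is up to sign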
NeurIPS 2024: Model-Based Inference of Synaptic Plasticity Rules | Inferring the synaptic plasticity rules that govern learning in the brain is a key challenge in neuroscience. We present a novel computational method to infer these rules from experimental data, appli...

website: yashsmehta.com/plasticity-p... Our approach approximates plasticity rules using parameterized functions: either truncated Taylor series for theoretical insights or multilayer perceptrons. We optimize these parameters via gradient descent over entire trajectories to match observed data. (2/5)

18.11.2024 18:18 | 👍 0  🔁 0  💬 1  📌 0
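A minimal sketch of the fitting loop described above, written as a toy and not taken from the paper's code: the candidate rule is a truncated Taylor series in presynaptic activity x, postsynaptic activity y, and the current weight w; the rule is rolled forward over a whole session, and its coefficients are adjusted by gradient descent on the mismatch between simulated and target trajectories. Finite differences stand in for automatic differentiation, and the target here is the weight trajectory itself rather than behavior, purely to keep the example small.

import numpy as np

rng = np.random.default_rng(2)

def delta_w(x, y, w, theta):
    # Plasticity rule as a truncated Taylor series; theta holds the coefficients to infer.
    feats = np.array([1.0, x, y, w, x * y])
    return feats @ theta

def simulate(x_seq, theta, w0=0.1):
    # Roll a candidate rule forward over a whole session and return the weight trajectory.
    w, traj = w0, []
    for x in x_seq:
        y = np.tanh(w * x)                                # toy postsynaptic response
        w = float(np.clip(w + delta_w(x, y, w, theta), -5.0, 5.0))
        traj.append(w)
    return np.array(traj)

def loss(theta, x_seq, target):
    # Mean squared error between simulated and target weight trajectories.
    return np.mean((simulate(x_seq, theta) - target) ** 2)

# Synthetic "experiment": a Hebbian term with weight decay generates the target trajectory.
theta_true = np.array([0.0, 0.0, 0.0, -0.1, 0.2])         # delta_w = -0.1*w + 0.2*x*y
x_seq = rng.normal(size=150)
target = simulate(x_seq, theta_true)

# Gradient descent on the whole-trajectory loss, using finite-difference gradients.
theta, lr, eps = np.zeros(5), 0.05, 1e-5
for step in range(400):
    grad = np.array([
        (loss(theta + eps * e, x_seq, target) - loss(theta - eps * e, x_seq, target)) / (2 * eps)
        for e in np.eye(5)
    ])
    grad /= max(1.0, np.linalg.norm(grad))                # crude step-size control
    theta -= lr * grad
    if step % 100 == 0:
        print(f"step {step:3d}  trajectory loss {loss(theta, x_seq, target):.4f}")

print("generating coefficients:", theta_true)
print("fitted coefficients:    ", np.round(theta, 3))

With a single short trajectory, several coefficient combinations can fit comparably well, so the fitted values need not match the generating ones exactly; a full implementation would use automatic differentiation and many trials or animals.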

🚀 Excited to share that our paper has been accepted at #NeurIPS! 🎉 We developed a deep learning framework that infers local learning algorithms in the brain by fitting behavioral or neural activity trajectories during learning. We validate it on synthetic data and test it on 🪰 behavioral data. (1/5 🧵)

18.11.2024 18:18 | 👍 12  🔁 2  💬 1  📌 1
NeurIPS 2024: Model-Based Inference of Synaptic Plasticity Rules | Inferring the synaptic plasticity rules that govern learning in the brain is a key challenge in neuroscience. We present a novel computational method to infer these rules from experimental data, appli...

Interesting approach to estimate the physiological update rules of synapses: yashsmehta.com/plasticity-p...

18.11.2024 13:53 | 👍 24  🔁 5  💬 2  📌 0

🙌🏼🙌🏼

18.11.2024 16:50 | 👍 1  🔁 0  💬 0  📌 0

Thank you, Konrad!

18.11.2024 16:35 | 👍 1  🔁 0  💬 0  📌 0
