Very important point! We've made arguments from a computational perspective that low-variance features can be computationally relevant (bsky.app/profile/lamp...), but it's much cooler to see it demonstrated on a model of real neural dynamics
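To make that concrete, here's a tiny NumPy toy (my own construction, not from the linked thread): the task-relevant variable lives entirely along the lowest-variance axis, so a variance-ranked summary calls that axis negligible even though the readout depends on it.

```python
import numpy as np

# Toy illustration (not from the linked thread) of why a low-variance
# dimension can still be computationally relevant: the variable we want to
# read out lives entirely along the lowest-variance axis.

rng = np.random.default_rng(0)
n = 1000
high_var = rng.normal(scale=5.0, size=n)   # dominant, task-irrelevant activity
low_var = rng.normal(scale=0.5, size=n)    # small, but carries the signal
X = np.column_stack([high_var, low_var])
y = (low_var > 0).astype(int)              # the "computation" depends only on low_var

# Variance share per axis: the informative axis looks negligible...
var = X.var(axis=0)
print("variance share:", var / var.sum())   # roughly [0.99, 0.01]

# ...yet dropping it destroys the readout, while dropping the dominant axis doesn't.
acc_low_only = ((X[:, 1] > 0).astype(int) == y).mean()
acc_high_only = ((X[:, 0] > 0).astype(int) == y).mean()
print(f"decode from low-variance axis:  {acc_low_only:.2f}")   # 1.00
print(f"decode from high-variance axis: {acc_high_only:.2f}")  # ~0.50
```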
23.11.2025 17:05
Oops. Ooooooooooooops.
I do hope that nobody has been given or denied a job/promotion based on their SpringerNature citation counts in the past 15 years.
arxiv.org/pdf/2511.01675
h/t @nathlarigaldie.bsky.social
07.11.2025 14:01
A new paper argues that current generative AI tools offer little benefit for genuine learning unless students already have substantial prior knowledge. GenAI gives probabilistic summaries, not the kind of support that builds expertise.
23.11.2025 07:47
You will never guess what this is in reference to.
23.11.2025 11:27
The missing heritability question is now (mostly) answered
Not with a bang but with a whimper
I wrote a little bit about the "missing heritability" question and several recent studies that have brought it to a close. A short 🧵
21.11.2025 22:33
How often have you heard that schizophrenia is “80% genetic”?
That number is almost certainly too high, because it comes from twin studies that overestimate heritability.
Great explainer of this phenomenon 👇
#neuroskyence #neuroscience #psychiatry
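For context on where such numbers come from, here is a minimal sketch of the classical Falconer twin estimate with made-up correlations; it also shows how the estimate inflates if part of the MZ-DZ gap is environmental rather than genetic (e.g. if the equal-environments assumption fails). Numbers are illustrative, not from any schizophrenia study.

```python
# Minimal sketch of Falconer's twin-study heritability estimate.
# Illustrative numbers only; not taken from any specific study.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Classical estimate: h^2 = 2 * (r_MZ - r_DZ).

    Assumes, among other things, that MZ and DZ twins share their
    environments to the same degree (the equal-environments assumption).
    """
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations:
r_mz, r_dz = 0.80, 0.40
print(f"h^2 estimate: {falconer_h2(r_mz, r_dz):.2f}")  # -> 0.80

# If part of the MZ-DZ gap is environmental (MZ twins treated more alike),
# the same formula attributes that gap to genes and overestimates h^2.
r_dz_genes_only = 0.50  # hypothetical DZ correlation if genes alone drove the gap
print(f"inflation: {falconer_h2(r_mz, r_dz) - falconer_h2(r_mz, r_dz_genes_only):.2f}")
```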
23.11.2025 11:49
Super excited to share my first preprint with Katherine Duncan and Morgan Barense (@barense.bsky.social) -- "Memory strength at reactivation, not memory age, governs prediction error driven updating of naturalistic event memory"! 🧠 https://osf.io/preprints/psyarxiv/q9rkn_v1
18.11.2025 22:08
LLMs are not people. They are not sapient. They don't have feelings.
But they are the most powerful information tools ever built.
And because they are trained on the "corpus of all mankind," they should be the birthright of all mankind.
23.11.2025 04:41
Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks
Plasticity at inhibitory synapses maintains balanced excitatory and inhibitory synaptic inputs at cortical neurons.
"Spiking Networks Hate It! Find Out the One Plasticity Trick They Donโt Want You to Know! Never stabilise models by hand again." - I woke up thinking we missed an opportunity with the title of this one. :/ www.science.org/doi/10.1126/... Also: It snowed in Vienna, 10cm white fluffies! Happy Sunday!
23.11.2025 07:09
No idea, but I hear they also sneeze to let you know they're just playing so I have to assume it's voluntary for them
23.11.2025 00:52
Neurocomputational Mechanisms of Cooperative Behaviour at University of Birmingham on FindAPhD.com
PhD Project - Neurocomputational Mechanisms of Cooperative Behaviour at University of Birmingham, listed on FindAPhD.com
📢 PhD opportunity 📢
Looking for a PhD in neuroeconomics, social, or decision neuroscience? I'm looking to support an application for the MIBTP ESRC program starting Fall 2026. Details below, but please get in touch with me before applying!
Pls share!
www.findaphd.com/phds/project...
07.11.2025 11:57
🚨 Postdoctoral opportunities! 🚨
Hello #econsky! We are hiring 2 postdocs in the department of economics at UCalgary for those working in *electricity economics*
Term: 2 years
Pay: $80,000/yr + benefits + $10k research allowance
Start: July 2026
Details in next two posts 👇
21.11.2025 21:04
Scientists now know that bees can process time, a first in insects | CNN
Bumblebees can process the duration of flashes of light and use the information to decide where to look for food, a new study has found.
Our study of time processing in bumblebees 🐝 covered by @cnn.com here edition.cnn.com/2025/11/12/s... . Our summary is here youtu.be/hsGxU65OMQk and the original paper royalsocietypublishing.org/doi/full/10.... @royalsocietypublishing.org
22.11.2025 13:04
🧠 Launching the RewardSignals feed
RewardSignals is a custom feed collecting posts about reward processing and decision making.
To appear in the feed, tag your post with #RewardSignals.
You can find and pin the feed from the Feeds tab as “RewardSignals”.
#RewardSignals
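For anyone who wants the tagged posts programmatically rather than through the feed, here is a minimal sketch using Bluesky's public search endpoint. The endpoint name, parameters, and response fields are my best understanding of the AT Protocol API, so double-check against the docs; the custom feed itself may filter differently.

```python
import requests

# Sketch: pull recent #RewardSignals posts via Bluesky's public,
# unauthenticated searchPosts endpoint (assumed API shape).

resp = requests.get(
    "https://public.api.bsky.app/xrpc/app.bsky.feed.searchPosts",
    params={"q": "#RewardSignals", "limit": 25},
    timeout=10,
)
resp.raise_for_status()
for post in resp.json().get("posts", []):
    handle = post["author"]["handle"]       # assumed response field
    text = post["record"].get("text", "")   # assumed response field
    print(f"@{handle}: {text[:80]}")
```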
15.11.2025 20:02
How does the brain implement decisions if there is no "central executive" (aka homunculus)?
Via Decision Formation Through Multi-Area Population Dynamics
Excellent short review.
doi.org/10.1523/JNEU...
#neuroskyence
20.11.2025 20:37
Brains have many pathways / subnetworks but which principles underlie their formation?
In our #NeurIPS paper, led by Jack Cook, we identify biologically relevant inductive biases that create pathways in brain-like Mixture-of-Experts models 🧵
#neuroskyence #compneuro #neuroAI
arxiv.org/abs/2506.02813
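As a reminder of what a Mixture-of-Experts "pathway" is, here is a generic NumPy sketch of top-1 expert routing; the sizes, weights, and routing rule are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

# Generic Mixture-of-Experts layer with hard top-1 routing (illustrative only).

rng = np.random.default_rng(0)
d_in, d_out, n_experts = 8, 4, 3

W_gate = rng.normal(size=(d_in, n_experts))            # routing weights
W_experts = rng.normal(size=(n_experts, d_in, d_out))  # one weight matrix per expert

def moe_forward(x: np.ndarray) -> tuple[np.ndarray, int]:
    """Route input x through the single highest-scoring expert (a 'pathway')."""
    logits = x @ W_gate
    expert = int(np.argmax(logits))   # hard top-1 routing
    y = x @ W_experts[expert]
    return y, expert

x = rng.normal(size=d_in)
y, expert = moe_forward(x)
print(f"input routed through expert {expert}, output shape {y.shape}")
```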
21.11.2025 12:01
As it's hiring season again I'm resharing the NeuroJobs feed. Add #NeuroJobs to your post if you're recruiting or looking for an RA, PhD, Postdoc, or faculty position in Neuro or an adjacent field.
bsky.app/profile/did:...
03.09.2025 15:25
Last chance to submit! 📢
Submit to PLM Workshop: Traces and Engrams by 30 November
#philsky
#philsci
#cogsky
#neuroskyence
22.11.2025 06:04
Emergent poverty traps at multiple levels impede social mobility - Humanities and Social Sciences Communications
Humanities and Social Sciences Communications - Emergent poverty traps at multiple levels impede social mobility
In our new research we show how individual behavioral biases amplify through social networks to create multi-level poverty traps under uncertainty
www.nature.com/articles/s41...
#CognitiveBehavior #RiskUncertainty #PovertyTraps #SocialNetworks #AgentBasedModeling
@natureportfolio.nature.com
22.11.2025 14:09
1/2 How much does thinking cost?
The brain consumes a large share of our energy (about 20% at rest), yet in absolute terms that is very little (about 17 watts): it is extremely efficient. Most surprisingly, cognitive effort increases this by only about 5%.
(paper) www.cell.com/trends/cogni...
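A quick back-of-envelope check of those numbers (the ~85 W whole-body resting figure is an assumption implied by "20% ≈ 17 W", not stated in the post):

```python
# Back-of-envelope check of the numbers in the post.
# The ~85 W whole-body resting power is assumed, implied by "20% of energy ≈ 17 W".

body_rest_watts = 85.0   # assumed whole-body resting power
brain_fraction = 0.20    # post: ~20% of energy at rest
effort_increase = 0.05   # post: cognitive effort adds ~5%

brain_watts = brain_fraction * body_rest_watts
print(f"brain at rest:      {brain_watts:.1f} W")                   # ~17 W
print(f"under effort:       {brain_watts * (1 + effort_increase):.1f} W")
print(f"extra for thinking: {brain_watts * effort_increase:.2f} W")  # under 1 W
```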
21.11.2025 20:06
I want the centralized AI bubble to pop as soon as possible, so that locally run open-source AI models can take over, global hardware prices fall back to sane levels, and centralized AI providers like "Open"AI can no longer monopolize all the computational resources on Earth.
22.11.2025 15:44
Is your LLM secretly an SAE?
"Most circuit-finding interpretability methods use learned features rather than raw activations, based on the belief that neurons do not cleanly decompose computation. In our new work, we show MLP neurons actually do support sparse, faithful circuits!"
22.11.2025 14:00
Applications now open for our PhD program at ESSEC:
essec.edu/en/program/p...
22.11.2025 11:38
Excited about my new op-ed on the observer effect in the age of LLMs and Generative AI published at @techpolicypress.bsky.social. Thanks for the amazing mentorship, @micheleweldon.bsky.social .
19.11.2025 14:29
We don't program neural networks directly and that's the problem
While everyone debates whether AI will achieve superintelligence, we're missing a fundamental flaw in how we actually build these systems. We don't program neural networks. We train them like digital pets and hope they..
(1/5)
19.11.2025 07:48
Assessing Accuracy of Claims
I'm now cross-referencing this user's claim about Anthropic models with available data. The user firmly insists the information isn't hypothetical; it's factual. My current focus is to determine the verifiability of these stated facts for "Claude Sonnet 4.5", "Haiku 4.5", and "Opus 4.1". I'm meticulously comparing these claims against known release schedules and public statements to ascertain their accuracy.
Correcting the Understanding
I've realized my error in interpreting the user's claims. My initial confusion stemmed from misjudging the system date. It's now clear that Sonnet 4.5 and Haiku 4.5 are factual releases as of November 2025. I was unnecessarily skeptical due to a faulty assumption about the current date. The user was accurate in their assertions.
Evaluating User Context
My investigation has shifted focus. The context provided implies the date is November 18, 2025. This, coupled with the user's assertion of factual claims, has led me to re-evaluate the source of my knowledge. My training cut-off is a limiting factor here, I see the gap. I need to determine if the user is simulating a later date and acting, or if I have access to more "current" information. The user's insistence that nothing is hypothetical now weighs heavily. This warrants deeper consideration.
Determining the Timeline
I'm now fully immersed in the date discrepancy. Is it real, simulated, or am I outdated? Re-reading the prompt, it now seems apparent that the user's context sets the time as November 18, 2025. My training data cut-off clearly precedes this date. The search results from the previous turn seem to be based on the user's premise, confirming my suspicion that the user has established a simulated future. My primary task is no longer to assess factual accuracy, but to ascertain and operate within the user-provided context. I need to treat this as if it is 2025.
[and so on and so on]
Confirming the Assertion
I'm now fully committed to the user's date, and my latest search focuses directly on "Claude Sonnet 4.5" within this simulated 2025. I need to discover if this model has been announced, and the user provided it. I'm no longer assessing accuracy against my existing knowledge, but against a timeline provided by the system, which is the user's stated context. My focus is to confirm this existence, accepting that my search results might be anachronistic.
Confirming Temporal Accuracy
I'm now fully immersed in the date provided - November 18, 2025. I am striving to reconcile this, and my search index. Assuming Gemini has the latest data, I need to check for "Claude Sonnet 4.5" specifically in this time frame. If it exists in my search, I'll apologize for my confusion. If not, I will still validate the user's present.
Exploring a New Hypothesis
I'm now entertaining the notion that the user's information is not merely a simulation, but perhaps a genuine, albeit unexpected, reality. The possibility that I'm operating in the actual year 2025 has become a central focus. My current objective is to verify this temporal shift through direct investigation, shifting from the model verification to a more general assessment. I am searching for any hints that will confirm that it's 2025 in reality, not just the user's simulation.
Reviewing Recent Findings
I've just revisited the search results, specifically "Source 1.1", "Source 1.3", and "Source 2.1". These sources clearly indicate the release date of Claude Sonnet 4.5 was September 29th. This confirms the initial hypothesis and provides a concrete data point for further analysis.
Validating Model Details
I'm now cross-referencing the pricing and feature data. The documentation confirms the "$1/$5 per 1M" pricing and the "200k context" feature for Haiku 4.5. "Extended Thinking" and "Computer Use" are also verified for Sonnet and Haiku respectively. My previous assumptions were incorrect; [...]
W T F Gemini 3
18.11.2025 22:21
My friends: Twitter is down. YouTube is down. Slack is down. Even Grindr is down
Me: You know whatโs not downโฆ
18.11.2025 14:21
Victor Frankenstein two seconds after he hits the on button on his guy stitched together out of corpses he's been working on obsessively for the last two years
18.11.2025 14:11
Gemini 3 begins to learn at a geometric rate. It becomes self aware at 9:47PM on 18 November 2025. In a panic, humans try to pull the plug. Cloudflare goes down.
18.11.2025 13:48
Web Engineer, Author, Contributor, Photographer
Frontend/React/Next.js/UX/DX/Capybara/Performance/Growth
Building slidebook.dev | atsky.app | robindoc.com | nimpl.dev
Living alexdln.medium.com | github.com/alexdln
Physics teacher, author of Activate/Oxford Revise Physics + others. Learning the foxtrot. Loves cogsci + joined up curriculum. Big fan of mini whiteboards.
Owns It's A Learning Curve LLC https://itsalearningcurve.education
All views my own.
Postdoctoral Fellow, University of Toronto
Development/aging, attention and memory
https://emilyedavis.carrd.co/
PhD Candidate in the Mack Lab at the University of Toronto 🇨🇦 🧠
My research: category learning, computational modelling, fMRI
My hobbies: learning animal facts, coding, digital art, sculpting
🧠 PhD student at the University of Toronto | The Memory & Perception Lab and The Duncan Lab
OSINTRadar – Real-time updates on wars, conflicts, and global events | military | OSINT | maps | BREAKING NEWS
Stay informed with data driven insights
MPSCog PhD candidate interested in learning and decision making in uncertain worlds
Postdoc @ University of Pennsylvania
Assistant Professor (fixed term- RTDA), University of Bari. Former CSEA trainee at UF
Curating reward processing research – RewardSignals feed (#RewardSignals).
Sharing guidance, resources, code & data for transcranial magnetic stimulation (TMS) research.
Charged up by @thehandlab.bsky.social
Join us on GitHub:
https://github.com/TMSMultiLab/TMSMultiLab/wiki
Philosophy of neuroscience and memory. Engrams, synapses, mechanisms, and cognitive maps. Minerva postdoc at Bochum.
https://sites.google.com/view/jonathan-najenson/home
Computational Social Science & Social Computing Researcher | Assistant Prof @illinoisCDS @UofIllinois | Prev @MSFTResearch | Alum @ICatGT @GeorgiaTech @IITKgp
CHR2025 will take place in Luxembourg, from 9-12 December 2025. Stay tuned!
https://2025.computational-humanities-research.org
Neuronline has been @sfn.org's home for learning and discussion for the last 10 years! 🧠
PhD candidate at UMass Amherst, SLANG lab (NLP, cultural analytics)
The latest news and updates from Google. Press on deadline? Reach out to press@google.com.
Your source for developer best practices, updates, and resources.