I made an exhaustive list of biologically implausible algorithms as a handy reference for reviewing and triaging papers:
Huge, huge bummer. This is one of the most promising technologies I've ever covered.
Ooh ohh but but but BUT... I think you'll find that correlation does NOT imply causation.
That's more of a comment than a question.
seeing as I'm not actually attending Cosyne, and given that by next year the AIs will have taken over, I feel quite safe sitting in an undisclosed location, shitposting about the future of comp neuro...
so not that different from actually attending, apart from the absence of delicious Portuguese cuisine
oooh... that is controversial!
Modish? I never saw any mopeds or flick-knife fights at Cosyne, but I've only been a few times.
well, minimally I think we can agree that Cosyne doesn't know where it is going!
Does anyone out there still do physiology? I have a plea. All experiments have limitations that any competent physiologist knows about but can't spend 28 pages detailing.
So if you ding a paper or grant on textbook issues that the authors likely considered, don't complain that you're a dying breed.
Yes, but it does sample it
A single plot shows computational neuroscience doesn't really know where it is heading:
finally!
Yeah, but there's a problem: AI is THE technology that lets small states level the playing field, especially now that they feel targeted. No matter how many pacts big nations pretend to make, open source/distilled AIs (no matter how crappy) are far cheaper and easier to conceal than WMD programs.
If it's in English you just converge to the letter e
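(For anyone who wants the quip spelled out: in typical English text, 'e' is the most frequent letter, so any frequency-chasing procedure tends to land on it. A minimal sketch, using an illustrative sentence of my own rather than anything from the thread:)

```python
from collections import Counter

# Illustrative sentence (my own), chosen to reflect ordinary English,
# where 'e' dominates the letter-frequency distribution.
text = "everyone expects the letter e to appear everywhere in english sentences"

# Count alphabetic characters only, case-insensitively.
counts = Counter(c for c in text.lower() if c.isalpha())

most_common_letter = counts.most_common(1)[0][0]
print(most_common_letter)  # 'e'
```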
The most important change at #NIH and to US science this year is bigger than grant cancellations: it's how the agency is governed.
For 75 years NIH has been largely independent of presidential control. That’s changed this year. New piece from me and @nataliebaviles.bsky.social in @nature.com
🧪
Low-D you say...
Two-photon calcium imaging at 24,000 lines/s, with the resonant axis spanning 4x what other systems can do. Inertia-free. Diffraction-limited. No tradeoffs. Che-Hang Yu developed a 4x angle multiplier for laser scanning. His paper is out today: opg.optica.org/optica/fullt... 1/n #fluorescenceFriday
Embedding unreliable AI deep into the source code of the military without human oversight is the single stupidest thing we could do as a species.
And that is *exactly* what the United States is about to do.
Yes! Some clear thinking. Once we move away from "species x is a good/bad model for y because of z handpicked reasons" we can do science.
In 2023 a bill to prevent AI from autonomously launching nuclear weapons *failed to pass*. This was apparently not newsworthy.
www.congress.gov/bill/118th-c...
The urgent plea of @garymarcus.bsky.social demands rapid action: the US Military, in the hands of fascists, seeks to force Anthropic to bend the knee, with the aim of incorporating AI into military weaponry. Gary asks us to call our political representatives right now. I just did. Now it's your turn.
A summary of the last decade's news:
2016: the cunts are in charge now
2026: documents reveal that the cunts were always in charge
Calcium spikes know which way the wind blows!
Lily Nguyen and I wrote a dispatch on this fascinating work led by Itzel Ishida+@sethisachin.bsky.social+Gaby Maimon!
authors.elsevier.com/a/1mfk53QW8S...
LLMs should be a private cognitive tool, like a calculator, which is not currently possible under our existing model of corporate AI. Crucially, they do not think and have no agency, again just like a calculator.
for those who are paywalled or want to read the original research instead of the headline: arxiv.org/pdf/2602.14740
Broader engineering principles than those most of us are familiar with, for sure!
Finally, the bug is back with a round of the Guinness.
No! It's the other way around. Engineering principles *generate* complex systems:
www.science.org/doi/10.1126/...