@profdata.bsky.social
Senior research scientist at Los Alamos National Laboratory. Former UCL, UTexas, Alan Turing Institute, ELLIS EU. CogSci, AI, Comp Neuro, AI for scientific discovery. https://bradlove.org

Michael X Cohen on why he left academia/neuroscience.
mikexcohen.substack.com/p/why-i-left...
moderation@blueskyweb.xyz, send to me, or send directly to the Met (London police), who are investigating: www.met.police.uk. I could see this being super distressing for a vulnerable person, so I hope this does not become more common. For me, it's been an exercise in rapidly learning to not care! 2/2
18.07.2025 22:14
Some UK dude is trying to extort me, demanding money to not spread made-up stories. I reported to the police after getting flooded with phone messages I never listen to, etc. @bsky.app has been good about deleting his posts and accounts. If contacted, don't interact, but instead report to... 1/2
18.07.2025 22:14
New blog w @ken-lxl.bsky.social, "Giving LLMs too much RoPE: A limit on Sutton's Bitter Lesson". The field has shifted from flexible data-driven position representations to fixed approaches following human intuitions. Here's why and what it means for model performance: bradlove.org/blog/positio...
13.06.2025 14:09
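For readers unfamiliar with the fixed scheme the post refers to, here is a minimal numpy sketch of rotary position embeddings (RoPE), in the split-half variant; the dimensions and toy usage are illustrative, not taken from the blog.

```python
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotary position embeddings (split-half variant) for x of shape
    (seq_len, d_model). Each feature pair is rotated by an angle that
    grows with position, so query-key dot products end up depending on
    relative position rather than absolute position."""
    seq_len, d = x.shape
    half = d // 2
    freqs = base ** (-2.0 * np.arange(half) / d)            # per-pair rotation frequencies
    angles = np.arange(seq_len)[:, None] * freqs[None, :]   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]                       # pair features across halves
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Toy usage: rotate queries and keys, then form attention logits.
q = rope(np.random.randn(8, 64))
k = rope(np.random.randn(8, 64))
logits = q @ k.T  # (8, 8)
```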
New blog, "Backwards Compatible: The Strange Math Behind Word Order in AI" w @ken-lxl.bsky.social. It turns out the language learning problem is the same for any word order, but is that true in practice for large language models? paper: arxiv.org/abs/2505.08739 BLOG: bradlove.org/blog/prob-llm-consistency
28.05.2025 14:15
Bonus: I found it counterintuitive that (in theory) the learning problem is the same for any word ordering. Aligning proof and simulation was key. Now, new avenues open to address positional biases, better training, and knowing when to trust LLMs. w @ken-lxl.bsky.social arxiv.org/abs/2505.08739
14.05.2025 15:02
When LLMs diverge from one another because of word order (data factorization), it indicates their probability distributions are inconsistent, which is a red flag (not trustworthy). We trace deviations to self-attention positional and locality biases. 2/2 arxiv.org/abs/2505.08739
14.05.2025 15:02
"Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies"
Oddly, we prove LLMs should be equivalent for any word ordering: forward, backward, scrambled. In practice, LLMs diverge from one another. Why? 1/2 arxiv.org/abs/2505.08739
with @ken-lxl.bsky.social, @robmok.bsky.social, Brett Roads
17.02.2025 15:23
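The equivalence claim is just the chain rule: any conditioning order factorizes the same joint distribution. A minimal sketch with a made-up toy joint (the distribution and helper names are illustrative, not from the paper):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Made-up joint distribution over 3-token sentences from a 4-token vocabulary.
joint = rng.random((4, 4, 4))
joint /= joint.sum()

def marginal(positions, sentence):
    """P(w_p = sentence[p] for every p in positions), summing out the rest."""
    other = tuple(ax for ax in range(3) if ax not in positions)
    table = joint.sum(axis=other)
    return table[tuple(sentence[p] for p in sorted(positions))]

def seq_prob(sentence, order):
    """Chain-rule probability of `sentence`, factorized in `order`."""
    p, seen = 1.0, []
    for pos in order:
        num = marginal(seen + [pos], sentence)
        den = marginal(seen, sentence) if seen else 1.0
        p *= num / den  # P(w_pos | words already conditioned on)
        seen.append(pos)
    return p

sentence = (1, 3, 0)
probs = [seq_prob(sentence, list(o)) for o in permutations(range(3))]
print(np.allclose(probs, probs[0]))  # True: forward, backward, scrambled all agree
```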
"Coordinating multiple mental faculties during learning" There's lots of good work in object recognition and learning, but how do we integrate the two? Here's a proposal and model that is more interactive than the view that perception simply provides the inputs to cognition. www.nature.com/articles/s41...
17.02.2025 15:23
Last year, we funded 250 authors and other contributors to attend #ICLR2024 in Vienna as part of this program. If you or your organization want to directly support contributors this year, please get in touch! Hope to see you in Singapore at #ICLR2025!
21.01.2025 15:52
Thanks @hossenfelder.bsky.social for covering our recent paper, doi.org/10.1038/s415... Also, I want to spotlight this excellent podcast (19 minutes long) with Nicky Cartridge covering how AI will impact science and healthcare in the coming years: touchneurology.com/podcast/brai...
13.12.2024 15:44
A 7B model is small enough to train efficiently on 4 A100s (thanks Microsoft), and at the time Mistral performed relatively well for its size.
27.11.2024 17:11
Yes, the model weights and all materials are openly available. We really want to offer easy-to-use tools people can use through the web without hassle. To do that, we need to do more work (will be announcing an open-source effort soon) and need some funding for hosting a model endpoint.
27.11.2024 17:09
While BrainBench focused on neuroscience, our approach is science-general, so others can adopt our template. Everything is open weight and open source. Thanks to the entire team and the expert participants. Sign up for news at braingpt.org 8/8
27.11.2024 14:13
Finally, LLMs can be augmented with neuroscience knowledge for better performance. We tuned Mistral on 20 years of the neuroscience literature using LoRA. The tuned model, which we refer to as BrainGPT, performed better on BrainBench. 7/8
27.11.2024 14:13
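For readers who want the general shape of the tuning step, here is a hedged sketch of LoRA fine-tuning with the HuggingFace peft library. The checkpoint name, target modules, and hyperparameters are illustrative assumptions, not the BrainGPT recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters on the attention projections; the frozen base
# weights stay untouched, so only a small fraction of parameters train.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
# From here, standard causal-LM training (e.g., transformers.Trainer)
# over the domain corpus yields the adapted model.
```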
Indeed, follow-up work on teaming finds that joint LLM and human teams outperform either alone, because LLMs and humans make different types of errors. We offer a simple method to combine confidence-weighted judgements.
arxiv.org/abs/2408.08083 6/8
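A minimal sketch of one way to combine confidence-weighted judgements; the weighting rule here is an illustration, not necessarily the exact method of the linked paper.

```python
def combine(p_llm: float, c_llm: float, p_human: float, c_human: float) -> float:
    """Return a team probability for option A.

    p_*: each judge's probability that option A is correct.
    c_*: each judge's confidence in [0, 1], used as a mixing weight.
    """
    w = c_llm / (c_llm + c_human)
    return w * p_llm + (1 - w) * p_human

# Example: a confident LLM paired with an unsure human expert.
print(combine(p_llm=0.9, c_llm=0.8, p_human=0.4, c_human=0.2))  # 0.8
```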
In the Nature HB paper, both human experts and LLMs were well calibrated: when they were more certain of their decisions, they were more likely to be correct. Calibration is beneficial for human-machine teaming. 5/8
27.11.2024 14:13
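Calibration here means stated confidence tracks accuracy. A quick way to check it, binning made-up decisions by confidence:

```python
import numpy as np

# Illustrative data: per-decision confidence and correctness (not from the paper).
confidence = np.array([0.55, 0.6, 0.7, 0.8, 0.9, 0.95, 0.65, 0.85])
correct = np.array([0, 1, 1, 1, 1, 1, 0, 1])

# Well-calibrated judges have per-bin accuracy close to mean confidence.
bins = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    if mask.any():
        print(f"conf {lo:.1f}-{hi:.1f}: "
              f"mean conf {confidence[mask].mean():.2f}, "
              f"accuracy {correct[mask].mean():.2f}")
```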
There were no signs of leakage from the training to test set. We performed standard checks. In follow-up work, we trained an LLM from scratch to rule out leakage; even this smaller model was superhuman on BrainBench: arxiv.org/abs/2405.09395 4/8
27.11.2024 14:13
All 15 LLMs considered crushed human experts at BrainBench's predictive task. LLMs correctly predicted neuroscience results (across all subareas) dramatically better than human experts, including those with decades of experience. 3/8
27.11.2024 14:13
To test, we created BrainBench, a forward-looking benchmark that stresses prediction over retrieval of facts, avoiding LLMs' "hallucination" issue. The task was to predict which version of a Journal of Neuroscience abstract gave the actual result. 2/8
27.11.2024 14:13
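The decision rule described in this thread (pick the abstract version the model finds less surprising) can be sketched as follows; the small stand-in model and helper names are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in; the paper evaluates much larger LLMs
tok = AutoTokenizer.from_pretrained(name)
lm = AutoModelForCausalLM.from_pretrained(name).eval()

def perplexity(text: str) -> float:
    """Exponentiated mean token negative log-likelihood under the LM."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return float(torch.exp(loss))

def choose(option_a: str, option_b: str) -> str:
    """Return the abstract version the model finds more plausible."""
    return option_a if perplexity(option_a) < perplexity(option_b) else option_b
```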
"Large language models surpass human experts in predicting neuroscience results" w @ken-lxl.bsky.social
and braingpt.org. LLMs integrate a noisy yet interrelated scientific literature to forecast outcomes. nature.com/articles/s41... 1/8
Thanks Gary! I have no idea, because I don't see how we get anyone to learn over more than a billion tokens. Maybe one could bootstrap some estimate from the perplexity difference between forward and backward, assuming we can get a sense of how that affects learning? Just off the top of my head...
20.11.2024 22:27
I am not seeing the issue. Every method is the same, but the text is reversed. We even tokenize separately for forward and backward to make them comparable. Perplexity is calculated over the entire option for the benchmark items. The difficulty doesn't have to be the same; it just turned out that way.
19.11.2024 17:44
For backward: everything is reversed at the character level, including the benchmark items. So the last character of the last word of each passage comes first, and the first character of the first word comes last. On the benchmark, as in the forward case, the option with lower perplexity is chosen.
19.11.2024 16:59
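The character-level reversal itself is one line; a tiny sketch with a made-up passage:

```python
def reverse_chars(text: str) -> str:
    """Reverse an entire passage character by character; the forward and
    backward corpora then each get their own tokenizer."""
    return text[::-1]

passage = "Neurons that fire together wire together."
print(reverse_chars(passage))  # ".rehtegot eriw rehtegot erif taht snorueN"
```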
Instead of viewing LLMs as models of humans or stochastic parrots, we view them as general and powerful pattern learners that can master a superset of what people can. arxiv.org/abs/2411.11061 2/2
19.11.2024 13:21
"Beyond Human-Like Processing: Large Language Models Perform Equivalently on Forward and Backward Scientific Text" Our take is that large language models (LLMs) are neither stochastic parrots nor faithful models of human language processing. arxiv.org/abs/2411.11061 1/2
19.11.2024 13:21
Has anyone tried this tool to follow back all of one's followers? github.com/jiftechnify/... It seems legit, but I'm wary of giving a password to a third-party website. So many people here so suddenly!
19.11.2024 13:06
I fully support the last sentence of this abstract from @profdata.bsky.social:
elifesciences.org/reviewed-pre...
"...the complexity of the brain should be respected and intuitive notions of cell type, which can be misleading and arise in any complex network, should be relegated to history."
Submissions for #CCN2024 are now open at ccneuro.org
We welcome submissions for 2-page papers (deadline: 12 April), as well as Generative Adversarial Collaborations (GACs), Keynote+Tutorials, and (new this year!) Community Events (deadline: 5 April).
Stay tuned: registration will open in early April!