
Bradley Love

@profdata.bsky.social

Senior research scientist at Los Alamos National Laboratory. Former UCL, UTexas, Alan Turing Institute, Ellis EU. CogSci, AI, Comp Neuro, AI for scientific discovery https://bradlove.org

4,202 Followers  |  764 Following  |  60 Posts  |  Joined: 05.10.2023

Latest posts by profdata.bsky.social on Bluesky

Personally, I will be looking to mentor projects with Mahindra Rautela on (1) Search and Evaluation for test-time AI Reasoning, and (2) model distillation to compress large physics foundation models.

Please feel free to get in touch with questions or to express interest.

28.01.2026 19:36 | 👍 0 | 🔁 1 | 💬 0 | 📌 0
Preview
Computing & Artificial Intelligence (CAI) Division Graduate Intern at Los Alamos National Laboratory Los Alamos National Laboratory is Hiring! Search available jobs or submit your resume now by visiting this link. Please share with anyone you feel would be a great fit.

Are you a graduate student interested in working at Los Alamos National Laboratory (LANL) this summer? LANL has student internships, apply here: lanl.jobs/search/jobde... Please apply ASAP and before February 13th (sorry for the rush) 1/2

28.01.2026 19:36 | 👍 1 | 🔁 0 | 💬 1 | 📌 0

with @robmok.bsky.social and Xiaoliang "Ken" Luo

25.11.2025 19:35 | 👍 5 | 🔁 1 | 💬 0 | 📌 0

Intuitive cell types don't necessarily play the ascribed functional role in the overall computation. This is not a message the field wants to hear, as it calls for better baselines, controls, and some reflection. elifesciences.org/reviewed-pre... 2/2

25.11.2025 19:29 | 👍 13 | 🔁 3 | 💬 1 | 📌 0

"The inevitability and superfluousness of cell types in spatial cognition". Intuitive cell types are found in random artificial networks using the same selection criteria neuroscientists use with actual data. elifesciences.org/reviewed-pre... 1/2

25.11.2025 19:29 | 👍 45 | 🔁 15 | 💬 4 | 📌 3
Preview
Adaptive stretching of representations across brain regions and deep learning model layers - Nature Communications How the brain adapts its representations to prioritize task-relevant information remains unclear. Here, the authors show that both monkey brains and deep learning models stretch neural representations...

Working with monkey data, we found neural representations stretched across brain regions to emphasize task-relevant features on a trial-by-trial basis. Spike timing mattered more than spike rate. Deep nets did the same. nature.com/articles/s41... 2/2

25.11.2025 19:19 | 👍 2 | 🔁 0 | 💬 0 | 📌 0
Preview
Adaptive stretching of representations across brain regions and deep learning model layers - Nature Communications How the brain adapts its representations to prioritize task-relevant information remains unclear. Here, the authors show that both monkey brains and deep learning models stretch neural representations...

Exciting "new" work illustrating our broken publishing system. Seb presented this work online at neuromatch 2.0 at the height of the pandemic. Then, Xin-Ya worked for years on addressing reviewer comments, which added some rigor but didn't change the message. 1/2

25.11.2025 19:19 | 👍 21 | 🔁 2 | 💬 1 | 📌 0
Post image

We developed a straightforward method of combining confidence-weighted judgments for any number of humans and AIs. w Felipe Yáñez, Omar Valerio Minero, @ken-lxl.bsky.social 2/2

25.11.2025 19:05 | 👍 0 | 🔁 0 | 💬 0 | 📌 0
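
A minimal sketch of how confidence-weighted combination can work on a two-alternative task: treat each judge's confidence as log-odds evidence and sum across any number of humans and AIs. The pooling rule, function name, and toy numbers below are illustrative assumptions, not necessarily the exact scheme in the paper.

```python
import numpy as np

def combine_judgments(probs):
    """Pool independent two-alternative judgments by summing log-odds.

    probs: each judge's probability that option A is correct
           (0.5 means the judge is indifferent).
    Returns the team's pooled probability that option A is correct.
    """
    probs = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
    log_odds = np.log(probs / (1 - probs))        # confidence becomes signed evidence
    return 1.0 / (1.0 + np.exp(-log_odds.sum()))  # more confident judges contribute more

# Two humans lean weakly toward A; one model leans strongly toward B.
team_prob_a = combine_judgments([0.6, 0.55, 0.2])
print(round(float(team_prob_a), 3))  # ~0.314, so the team picks option B
```
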
Preview
Confidence-weighted integration of human and machine judgments for superior decision-making When AI surpasses human performance, what can humans offer? We demonstrate that the performance of teams increases by integrating human judgments with those of machines. Integration is achieved by a s...

When AI surpasses human performance, what's left for humans? We find that human judgment boosts performance of human-AI teams because humans and machines make different errors. cell.com/patterns/ful... 1/2

25.11.2025 19:05 | 👍 5 | 🔁 1 | 💬 1 | 📌 0
Preview
How neuroscientists are using AI Eight researchers explain how they are using large language models to analyze the literature, brainstorm hypotheses and interact with complex datasets.

Researchers are using LLMs to analyze the literature, brainstorm hypotheses, build models and interact with complex datasets. Hear from @mschrimpf.bsky.social, @neurokim.bsky.social, @jeremymagland.bsky.social, @profdata.bsky.social and others.

#neuroskyence

www.thetransmitter.org/machine-lear...

04.11.2025 16:07 | 👍 26 | 🔁 9 | 💬 0 | 📌 2
Preview
Why I left academia and neuroscience Don't worry, this isn't yet another story of rage-quitting.

Michael X Cohen on why he left academia/neuroscience.
mikexcohen.substack.com/p/why-i-left...

06.10.2025 17:05 | 👍 95 | 🔁 36 | 💬 7 | 📌 14
Preview
Home Your local police force - online. Report a crime, contact us and other services, plus crime prevention advice, crime news, appeals and statistics.

moderation@blueskyweb.xyz, send to me, or send directly to the Met (London police) who are investigating www.met.police.uk. I could see this being super distressing for a vulnerable person, so hope this does not become more common. For me, it's been an exercise in rapidly learning to not care! 2/2

18.07.2025 22:14 | 👍 3 | 🔁 0 | 💬 0 | 📌 0

Some UK dude is trying to extort me, demanding money to not spread made-up stories. I reported it to the police after getting flooded with phone messages I never listen to, etc. @bsky.app has been good about deleting his posts and accounts. If contacted, don't interact, but instead report to...1/2

18.07.2025 22:14 | 👍 4 | 🔁 0 | 💬 2 | 📌 0
Preview
Giving LLMs too much RoPE: A limit on Sutton's Bitter Lesson - Bradley C. Love Introduction Sutton's Bitter Lesson (Sutton, 2019) argues that machine learning breakthroughs, like AlphaGo, BERT, and large-scale vision models, rely on general, computation-driven methods that prior...

New blog w @ken-lxl.bsky.social, "Giving LLMs too much RoPE: A limit on Sutton's Bitter Lesson". The field has shifted from flexible, data-driven position representations to fixed approaches following human intuitions. Here's why, and what it means for model performance: bradlove.org/blog/positio...

13.06.2025 14:09 | 👍 4 | 🔁 1 | 💬 3 | 📌 2
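
For context, RoPE (rotary position embeddings) is one such fixed, hand-designed scheme: it rotates query/key feature pairs by position-dependent angles so that attention scores depend only on relative position. Below is a generic half-split sketch for illustration; it is not code from the blog post, and the function and test values are my own.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply a rotary position embedding to vector x at integer position pos.

    Pairs dimension i with dimension i + d/2 and rotates each pair by the
    angle pos * base**(-2i/d), the half-split convention used by many open models.
    """
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)  # one rotation frequency per pair
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

# Attention scores under RoPE depend only on the relative offset between positions.
rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
s1 = rope(q, 5) @ rope(k, 3)        # positions 5 and 3 (offset 2)
s2 = rope(q, 105) @ rope(k, 103)    # positions 105 and 103 (same offset)
print(np.isclose(s1, s2))           # True
```
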
https://bradlove.org/blog/prob-llm-consistency

New blog, "Backwards Compatible: The Strange Math Behind Word Order in AI" w @ken-lxl.bsky.social. It turns out the language learning problem is the same for any word order, but is that true in practice for large language models? paper: arxiv.org/abs/2505.08739 BLOG: bradlove.org/blog/prob-ll...

28.05.2025 14:15 | 👍 4 | 🔁 1 | 💬 3 | 📌 0
Preview
Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies Can autoregressive large language models (LLMs) learn consistent probability distributions when trained on sequences in different token orders? We prove formally that for any well-defined probability ...

Bonus: I found it counterintuitive that (in theory) the learning problem is the same for any word ordering. Aligning proof and simulation was key. Now, new avenues open up for addressing positional biases, improving training, and knowing when to trust LLMs. w @ken-lxl.bsky.social arxiv.org/abs/2505.08739

14.05.2025 15:02 | 👍 1 | 🔁 0 | 💬 2 | 📌 0
Preview
Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies Can autoregressive large language models (LLMs) learn consistent probability distributions when trained on sequences in different token orders? We prove formally that for any well-defined probability ...

When LLMs diverge from one another because of word order (data factorization), it indicates their probability distributions are inconsistent, which is a red flag (not trustworthy). We trace deviations to self-attention positional and locality biases. 2/2 arxiv.org/abs/2505.08739

14.05.2025 15:02 | 👍 0 | 🔁 0 | 💬 3 | 📌 0
Preview
Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies Can autoregressive large language models (LLMs) learn consistent probability distributions when trained on sequences in different token orders? We prove formally that for any well-defined probability ...

"Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies"
Oddly, we prove LLMs should be equivalent for any word ordering: forward, backward, scrambled. In practice, LLMs diverge from one another. Why? 1/2 arxiv.org/abs/2505.08739

14.05.2025 15:02 | 👍 4 | 🔁 0 | 💬 3 | 📌 0
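
A toy numerical check of the claim (my own sketch, not the paper's code): for any well-defined joint distribution, chain-rule factorizations in different position orders recover exactly the same sequence probabilities, which is why, in theory, word order should not matter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "language": a joint distribution over 3-token sequences from a 4-word vocabulary.
V, N = 4, 3
joint = rng.random((V,) * N)
joint /= joint.sum()

def cond_prob(seq, pos, known):
    """P(x_pos = seq[pos] | x_d = seq[d] for d in known), read off the joint table."""
    den = tuple(seq[d] if d in known else slice(None) for d in range(N))
    num = tuple(seq[d] if (d in known or d == pos) else slice(None) for d in range(N))
    return joint[num].sum() / joint[den].sum()

def chain_prob(seq, order):
    """P(seq) via the chain rule, factorized in the given position order."""
    p, known = 1.0, set()
    for pos in order:
        p *= cond_prob(seq, pos, known)
        known.add(pos)
    return p

seq = (2, 0, 3)
forward = chain_prob(seq, (0, 1, 2))     # left-to-right
backward = chain_prob(seq, (2, 1, 0))    # right-to-left
scrambled = chain_prob(seq, (1, 2, 0))   # arbitrary order
print(np.allclose([forward, backward, scrambled], joint[seq]))  # True
```

So when trained LLMs disagree across orderings, the discrepancy reflects the models rather than the underlying math.
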

with @ken-lxl.bsky.social, @robmok.bsky.social, Brett Roads

17.02.2025 15:23 | 👍 1 | 🔁 0 | 💬 2 | 📌 0
Preview
Coordinating multiple mental faculties during learning - Scientific Reports

"Coordinating multiple mental faculties during learning" There's lots of good work in object recognition and learning, but how do we integrate the two? Here's a proposal and model that is more interactive than perception provides the inputs to cognition. www.nature.com/articles/s41...

17.02.2025 15:23 | 👍 31 | 🔁 9 | 💬 3 | 📌 1

Last year, we funded 250 authors and other contributors to attend #ICLR2024 in Vienna as part of this program. If you or your organization want to directly support contributors this year, please get in touch! Hope to see you in Singapore at #ICLR2025!

21.01.2025 15:52 | 👍 37 | 🔁 14 | 💬 1 | 📌 0

Thanks @hossenfelder.bsky.social for covering our recent paper, doi.org/10.1038/s415... Also, I want to spotlight this excellent podcast (19 minutes long) with Nicky Cartridge covering how AI will impact science and healthcare in the coming years, touchneurology.com/podcast/brai...

13.12.2024 15:44 | 👍 14 | 🔁 1 | 💬 2 | 📌 1

A 7B-parameter model is small enough to train efficiently on 4 A100s (thanks Microsoft), and at the time Mistral performed relatively well for its size.

27.11.2024 17:11 | 👍 7 | 🔁 0 | 💬 1 | 📌 0

Yes, the model weights and all materials are openly available. We really want to offer easy-to-use tools people can access through the web without hassle. To do that, we need to do more work (we will be announcing an open-source effort soon) and need some funding to host a model endpoint.

27.11.2024 17:09 | 👍 9 | 🔁 0 | 💬 1 | 📌 0
Preview
BrainGPT This is the homepage for BrainGPT, a Large Language Model tool to assist neuroscientific research.

While BrainBench focused on neuroscience, our approach is science general, so others can adopt our template. Everything is open weight and open source. Thanks to the entire team and the expert participants. Sign up for news at braingpt.org 8/8

27.11.2024 14:13 | 👍 11 | 🔁 0 | 💬 3 | 📌 0
Post image

Finally, LLMs can be augmented with neuroscience knowledge for better performance. We tuned Mistral on 20 years of the neuroscience literature using LoRA. The tuned model, which we refer to as BrainGPT, performed better on BrainBench. 7/8

27.11.2024 14:13 | 👍 7 | 🔁 0 | 💬 1 | 📌 0
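
A minimal sketch of this style of LoRA adaptation using the Hugging Face transformers and peft libraries. The base model ID, target modules, and rank settings here are illustrative assumptions, not the exact BrainGPT training recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative setup: model ID and hyperparameters are assumptions, not the paper's recipe.
base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=16,                                  # low-rank update dimension
    lora_alpha=32,                         # scaling of the low-rank update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trained
```
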
Preview
Confidence-weighted integration of human and machine judgments for superior decision-making Large language models (LLMs) have emerged as powerful tools in various domains. Recent studies have shown that LLMs can surpass humans in certain tasks, such as predicting the outcomes of neuroscience...

Indeed, follow-up work on teaming finds that joint LLM and human teams outperform either alone, because LLMs and humans make different types of errors. We offer a simple method to combine confidence-weighted judgements.
arxiv.org/abs/2408.08083 6/8

27.11.2024 14:13 | 👍 10 | 🔁 0 | 💬 2 | 📌 0
Post image

In the Nature HB paper, both human experts and LLMs were well calibrated: when they were more certain of their decisions, they were more likely to be correct. Calibration is beneficial for human-machine teaming. 5/8

27.11.2024 14:13 | 👍 6 | 🔁 0 | 💬 1 | 📌 0
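
A small sketch of how calibration can be checked (my own illustration, not the paper's analysis code): bin answers by stated confidence and compare mean confidence to accuracy within each bin; for a well-calibrated judge the two track each other.

```python
import numpy as np

def reliability_table(confidence, correct, n_bins=5):
    """Group answers by stated confidence and compare confidence to accuracy per bin.

    confidence: stated probabilities (in [0, 1]) that the chosen answer is right.
    correct: 0/1 outcomes. Returns rows of (mean confidence, accuracy, count).
    """
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.minimum((confidence * n_bins).astype(int), n_bins - 1)
    table = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            table.append((confidence[mask].mean(), correct[mask].mean(), int(mask.sum())))
    return table

# Toy usage: a simulated calibrated judge, so confidence and accuracy should match per bin.
rng = np.random.default_rng(1)
conf = rng.uniform(0.5, 1.0, size=1000)
outcomes = (rng.random(1000) < conf).astype(int)
for mean_conf, acc, n in reliability_table(conf, outcomes):
    print(f"conf={mean_conf:.2f}  acc={acc:.2f}  n={n}")
```
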
Preview
Matching domain experts by training from scratch on domain knowledge Recently, large language models (LLMs) have outperformed human experts in predicting the results of neuroscience experiments (Luo et al., 2024). What is the basis for this performance? One possibility...

There were no signs of leakage from the training set to the test set. We performed standard checks. In follow-up work, we trained an LLM from scratch to rule out leakage; even this smaller model was superhuman on BrainBench. arxiv.org/abs/2405.09395 4/8

27.11.2024 14:13 | 👍 7 | 🔁 0 | 💬 1 | 📌 0
Post image

All 15 LLMs we considered crushed human experts at BrainBench's predictive task. LLMs correctly predicted neuroscience results (across all sub-areas) dramatically better than human experts, including those with decades of experience. 3/8

27.11.2024 14:13 | 👍 10 | 🔁 0 | 💬 1 | 📌 0
