
Benjamin Lowe

@brainboyben.bsky.social

Cog neuro postdoc at Macquarie Uni, Sydney. Activist for a free Palestine 🇵🇸

66 Followers  |  56 Following  |  36 Posts  |  Joined: 01.12.2024

Latest posts by brainboyben.bsky.social on Bluesky

AI is not a peer, so it can’t do peer review If we still believe thatΒ science is a vocationΒ grounded in argument, curiosity and care, we can’t delegate judgement to machines, saysΒ Akhil Bhardwaj

'to treat peer review as a throughput problem is to misunderstand what is at stake. Review is not simply a production stage in the research pipeline; it is one of the few remaining spaces where the scientific community talks to itself.' 1/3

03.02.2026 08:17 β€” πŸ‘ 357    πŸ” 153    πŸ’¬ 6    πŸ“Œ 21

He’s simply a grifter

28.01.2026 18:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Interpreting EEG requires understanding how the skull smears electrical fields as they propagate from the cortex. I made a browser-based simulator for my EEG class to visualize how dipole depth/orientation change the topomap.
dbrang.github.io/EEG-Dipole-D...

Github page: github.com/dbrang/EEG-D...
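As a rough illustration of why depth matters (a toy sketch, not the simulator's actual code), even the potential of a current dipole in an unbounded homogeneous conductor shows the smearing effect: deeper sources produce weaker, spatially broader scalp patterns.

```python
import numpy as np

# Toy sketch: potential of a current dipole in an infinite homogeneous
# conductor, V = p.r / (4*pi*sigma*|r|^3). A deeper dipole yields a
# weaker peak and a flatter (more "smeared") spatial profile.

def dipole_potential(electrode_xy, dipole_pos, dipole_moment, sigma=0.33):
    """Potential at 2D electrode positions (on the z=0 plane) from one dipole."""
    electrodes = np.column_stack([electrode_xy, np.zeros(len(electrode_xy))])
    r = electrodes - dipole_pos                      # vectors dipole -> electrode
    dist = np.linalg.norm(r, axis=1)
    return (r @ dipole_moment) / (4 * np.pi * sigma * dist**3)

# A 1D row of "electrodes" across the scalp
xs = np.linspace(-0.1, 0.1, 21)
electrodes = np.column_stack([xs, np.zeros_like(xs)])

moment = np.array([0.0, 0.0, 1e-8])                  # radial dipole (A.m)
shallow = dipole_potential(electrodes, np.array([0, 0, -0.02]), moment)
deep = dipole_potential(electrodes, np.array([0, 0, -0.06]), moment)

# Deeper source: smaller peak, proportionally more spread toward the edges
print(shallow.max() > deep.max())
```

A real forward model would additionally account for the skull's low conductivity (e.g., via a multi-shell sphere or BEM head model), which broadens the pattern even further.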

20.01.2026 17:00 β€” πŸ‘ 122    πŸ” 49    πŸ’¬ 4    πŸ“Œ 1

Most popular decision-making models assume that cognitive processes are static over time. In our new paper in Psych Review, we offer a simple extension to evidence accumulation models that lets researchers account for systematic changes in parameters across time πŸ“ˆ

psycnet.apa.org/fulltext/202...
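The idea can be sketched in a few lines (a hypothetical illustration, not the paper's actual model): let a parameter of a standard evidence accumulator, such as the drift rate, vary systematically as a function of time within a trial.

```python
import numpy as np

# Hypothetical sketch: a drift-diffusion trial whose drift rate v(t)
# changes linearly over time, i.e. the parameter is no longer static.

def simulate_trial(v0=0.1, v_slope=0.002, bound=1.0, noise=0.1,
                   dt=1.0, max_steps=5000, rng=None):
    """Accumulate noisy evidence until a bound is hit; drift grows with time."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, t = 0.0, 0
    while abs(x) < bound and t < max_steps:
        v_t = v0 + v_slope * t            # time-varying drift rate
        x += v_t * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += 1
    return ("upper" if x >= bound else "lower"), t

choice, rt = simulate_trial()
print(choice, rt)
```

Setting `v_slope=0` recovers the usual static-parameter model, so time variation here is a strict extension rather than a different model class.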

20.01.2026 22:25 β€” πŸ‘ 28    πŸ” 9    πŸ’¬ 1    πŸ“Œ 2
a cartoon of Donald Duck saying "and a bah humbug" to you

Our publishing system does not prioritise or value the careful curation of research data to be FAIR nearly enough. I have been data editing for AP&P for a year now, and it is sad to see no reward for the clearly careful organisation of data and materials vs that which is thrown on OSF with no care!

18.01.2026 02:33 β€” πŸ‘ 24    πŸ” 5    πŸ’¬ 1    πŸ“Œ 1

Thanks Junjie!

28.12.2025 09:19 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Also, I apologise for the poor figure quality in the HTML version of the article. Elsevier’s typesetting team made some nonsense changes that I did not consent to, which have somehow proved to be frustrating to fix on their end.

The PDF version is fine though!

12.12.2025 08:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I’d like to thank my co-authors (particularly Naohide and Jonny) and reviewers for helping me elevate the quality of this work 😊

12.12.2025 08:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 1
Preview
Same but different: The latency of a shared expectation signal interacts with stimulus attributes Predictive coding theories assert that perceptual inference is a hierarchical process of belief updating, wherein the onset of unexpected sensory data…

This suggests that visual surprise may operate at the bound object level and/or is a domain-general response, which is identical to the conclusion drawn from our previous work! www.sciencedirect.com/science/arti...

12.12.2025 08:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Perhaps the coolest result was that these surprise signals were *shared across attributes*. That is, classifiers trained to decode surprise for shape could reliably do so for colour (and vice versa), after accounting for latency shifts.
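The logic of that cross-decoding test can be sketched with synthetic data (an illustration only, not the study's pipeline): if surprise for shape and colour share a scalp pattern, a classifier fit on one attribute should transfer to the other.

```python
import numpy as np

# Synthetic cross-decoding demo: a nearest-class-mean classifier fit on
# "shape" trials is tested on "colour" trials that share the same
# underlying surprise topography across 32 channels.

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 32
shared_pattern = rng.standard_normal(n_channels)   # common surprise pattern

def make_trials(n):
    labels = rng.integers(0, 2, n)                 # 0 = neutral, 1 = surprise
    data = rng.standard_normal((n, n_channels))
    data[labels == 1] += shared_pattern            # surprise adds the pattern
    return data, labels

def fit_means(X, y):
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(X, means):
    d0 = np.linalg.norm(X - means[0], axis=1)
    d1 = np.linalg.norm(X - means[1], axis=1)
    return (d1 < d0).astype(int)

X_shape, y_shape = make_trials(n_trials)
X_colour, y_colour = make_trials(n_trials)

means = fit_means(X_shape, y_shape)
cross_acc = (predict(X_colour, means) == y_colour).mean()
print(f"cross-attribute accuracy: {cross_acc:.2f}")
```

Transfer succeeds here only because the two attributes share a pattern; with independent patterns per attribute, cross-attribute accuracy would fall to chance.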

12.12.2025 08:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Interestingly, we were still able to decode multivariate whole-scalp representations of surprise (neutral vs. violation) separately for each attribute. Moreover, these signals were reliable from ~250 ms, suggesting that surprise is predominantly signalled after the initial feedforward sweep.
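Time-resolved decoding of this kind looks roughly as follows on synthetic data (an illustration, not the study's analysis): per-timepoint classification sits near chance until the latency at which the class difference appears.

```python
import numpy as np

# Synthetic time-resolved decoding demo: a class difference is injected
# only from 250 ms onward, so per-timepoint accuracy is near chance
# early in the epoch and rises after that "onset latency".

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 200, 32, 60
times_ms = np.arange(n_times) * 10                 # 0..590 ms in 10 ms steps

labels = rng.integers(0, 2, n_trials)              # 0 = neutral, 1 = violation
data = rng.standard_normal((n_trials, n_channels, n_times))
pattern = rng.standard_normal(n_channels)
late = times_ms >= 250                             # signal exists only here
data[labels == 1] += pattern[:, None] * late[None, :]

# Split trials, then decode at every timepoint with a nearest-mean rule
train, test = slice(0, 100), slice(100, None)
accs = []
for t in range(n_times):
    Xtr, ytr = data[train, :, t], labels[train]
    Xte, yte = data[test, :, t], labels[test]
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - m1, axis=1) < np.linalg.norm(Xte - m0, axis=1)
    accs.append((pred == yte).mean())
accs = np.array(accs)

print(f"pre-250 ms: {accs[~late].mean():.2f}, post-250 ms: {accs[late].mean():.2f}")
```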

12.12.2025 08:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We first looked at the evoked responses and found classic effects of adaptation via the constant vs. change sequence comparisons.

This said, we found no evidence for visual surprise after controlling for cortical adaptation (i.e., when comparing surprising changes to neutral changes).

12.12.2025 08:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Here, we recorded EEG from participants who viewed sequences of a bound object that changed in either colour or shape over four steps. Crucially, the contexts of these changes were designed to appear random (and unsurprising) or violate the established trajectory (and cause surprise).

12.12.2025 08:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

But when does the visual system signal surprise? And do the dynamics of a surprise signal depend on which attributes (features) violate a prediction? This is important to think about, given the functionally segregated organisation of the visual system.

12.12.2025 08:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Predictive coding theories assert that the brain uses prior knowledge when resolving percepts. Deviations between what is predicted and sensed generate surprise signals (so-called β€˜prediction errors’), which calibrate the relevant erroneous predictions.
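In its simplest form (a toy sketch, not the paper's model), that calibration is just an error-driven nudge of the prediction toward the input, so surprise shrinks over repeated exposures to the same stimulus.

```python
# Toy prediction-error update in the spirit of predictive coding:
# the prediction moves toward the observation by a fraction of the
# error, and the error (the "surprise" signal) decays across exposures.

def update(prediction, observation, learning_rate=0.3):
    error = observation - prediction        # prediction error / surprise
    return prediction + learning_rate * error, error

prediction = 0.0
errors = []
for _ in range(10):
    prediction, error = update(prediction, observation=1.0)
    errors.append(abs(error))

print(errors[0] > errors[-1])   # prints True: error decays as predictions calibrate
```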

12.12.2025 08:11 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

And it's out now in Cortex: www.sciencedirect.com/science/arti...

Summary below 🧡

12.12.2025 08:11 β€” πŸ‘ 18    πŸ” 5    πŸ’¬ 1    πŸ“Œ 0
Rapid computation of high-level visual surprise

High-level visual surprise is rapidly integrated during perceptual inference!

🚨 New paper 🚨 out now in @cp-iscience.bsky.social with @paulapena.bsky.social and @mruz.bsky.social

www.cell.com/iscience/ful...

Summary 🧡 below πŸ‘‡

05.12.2025 14:37 β€” πŸ‘ 34    πŸ” 17    πŸ’¬ 2    πŸ“Œ 0

And it was an absolute treat to run! Thanks everyone who attended :)
#ACNS2025

26.11.2025 05:58 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
A presentation slide displaying a triplet task structure for face recognition.

Tim Cottier @tvcottier.bsky.social introduces a novel face triad task to explore whether super-recognisers decipher the identity, valence or gaze of faces. When asked which face is distinct out of the three, super-recognisers prioritise identity information more than controls! #ASPP2025

24.11.2025 04:27 β€” πŸ‘ 11    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

Road trippin’ to ACNS 2025, Melbourne!

@matthewod.bsky.social
@tvcottier.bsky.social
(Plus Ella and Seri)
@acnsau.bsky.social

23.11.2025 03:15 β€” πŸ‘ 7    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Decades of neoliberalism have broken our universities
YouTube video by The Australia Institute

Maybe a bit of a downer, but I think this conversation may be of interest to a bunch of people on here: www.youtube.com/watch?v=dSbK...

11.11.2025 11:57 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Slide is titled: You don't need to use LLMs. 

Science is a process of collaborative meaning making, by which we try to understand the world

Even if AI were perfect, we rely on it at our peril β€” it is not science if we (i.e., humanity as a whole) do not understand and cannot recapitulate all parts of it

I very, very rarely use LLMs myself. You can give yourself permission not to. Don’t FOMO yourself into it


Conclusion: Don't rely on something you don't understand and can't control

If you must use LLMs:

1. Treat them like you would an intern: only use them for things you can easily and thoroughly check

2. Make your process as robust as possible

3. Be aware of your own (human) cognitive biases


Getting nervous for the talk I'm about to give at a workshop about "using AI to drive impact" which features slides such as these.

06.11.2025 20:41 β€” πŸ‘ 379    πŸ” 90    πŸ’¬ 26    πŸ“Œ 11
State–Space Trajectories and Traveling Waves Following Distraction Abstract. Cortical activity shows the ability to recover from distractions. We analyzed neural activity from the pFC of monkeys performing working memory tasks with mid-memory delay distractions (a cu...

New paper! After a distraction, rotating traveling waves steer brain processing back to where it should be.
State–Space Trajectories and Traveling Waves Following Distraction
direct.mit.edu/jocn/article...
#neuroscience

31.10.2025 12:50 β€” πŸ‘ 26    πŸ” 3    πŸ’¬ 0    πŸ“Œ 1
