Now out in @cp-trendscognsci.bsky.social: our short response to @neurosteven.bsky.social & Edward de Haan's recent paper on the binding problem. We argue that the binding problem arises because of tradeoffs faced by any information processing system, including the brain and DNNs. shorturl.at/RGXzt
16.01.2026 10:38
Any plans to post an expanded version on biorxiv.org?
10.09.2025 12:52
That is, deal with the statistical structure of the world on the basis of learning experience (sampling + evolution). Jumping to human behaviour would be, I think, mixing up 2, or potentially 3, levels of observation.
10.09.2025 12:50
DCNNs here confirm our ideas of vision at a neuroscience level and potentially expand them with a broader view of what these filters are and how they can emerge. Beyond that, this suggests a theory of what vision does at this stage.
10.09.2025 12:50
I guess the theory you would abstract from results such as these is that feedforward vision is dominated by features that naturally emerge through hierarchical processing, and that these features can emerge in a convolutional tree.
10.09.2025 12:50
A substantial amount of the explainable neural activity relates to these processes, and this part (my 5 cents, given these data) is the bulk of what DCNNs explain. But it is a necessary part of processing for vision outside the lab.
09.09.2025 08:02
For the model to classify objects it does a lot of 'stuff'. For instance, without a background a shallow network suffices (Seijdel et al., 2020, scholar.google.com/citations?vi...). Natural images force these networks to do a lot more than what we would label object recognition.
09.09.2025 08:02
It has surprised me over the last 7 years how easy it is to find signatures of texture processing (Loke et al., 2024) and scene segmentation (Seijdel et al., 2020, 2021) in DCNNs, and how little of it seems to relate to subsequent steps. We have really been trying.
08.09.2025 20:13
DNNs still capture an impressive amount of variance. The most parsimonious account, I think, is that DNNs model the initial encoding well but miss, or perform differently, the subsequent steps of object recognition.
08.09.2025 20:13
Great work from my PhD student Jessica Loke, together with @lynnkasorensen.bsky.social, @irisgroen.bsky.social and Nathalie Cappaert.
08.09.2025 18:32
Trajectories make it visual:
🔵 Texture path → high alignment, low object info (upper-left quadrant)
🔴/🟢 Natural & object-only paths → more object info but no extra alignment.
This explains why better object recognition ≠ better brain prediction.
08.09.2025 18:32
Cross-prediction: texture features predict brain responses to natural scenes almost as well as features from the originals themselves.
Local image statistics = the common representational currency between artificial and biological vision.
08.09.2025 18:32
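In case it helps to make the cross-prediction logic concrete, here is a minimal sketch of an encoding-model cross-prediction. This is not the paper's pipeline: all data are synthetic, the dimensions are invented, and a plain least-squares fit stands in for whatever regularized model the study actually used. The only point illustrated is that features of one stimulus set can predict responses to another when both share a common statistical code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_feat, n_chan = 200, 50, 10

# Hypothetical setup: a shared "local statistics" code drives both the DNN
# features of texture-synthesized images and the EEG response to the originals.
latent = rng.standard_normal((n_images, n_feat))
feat_texture = latent + 0.1 * rng.standard_normal((n_images, n_feat))
true_w = rng.standard_normal((n_feat, n_chan))
eeg_natural = latent @ true_w + 0.5 * rng.standard_normal((n_images, n_chan))

# Encoding model: fit weights on a training split, then cross-predict
# held-out responses to natural scenes from texture-image features.
train, test = slice(0, 150), slice(150, 200)
w, *_ = np.linalg.lstsq(feat_texture[train], eeg_natural[train], rcond=None)
pred = feat_texture[test] @ w
r = np.corrcoef(pred.ravel(), eeg_natural[test].ravel())[0, 1]
```

Because the shared latent code dominates both signals, the held-out correlation `r` is high even though the model never saw natural-image features.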
The key dissociation:
β’ EEG encodes object category across all conditions.
β’ But object info does not drive DNNβbrain alignment.
β’ Peak alignment occurs when object info is minimal (texture condition).
08.09.2025 18:32
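The dissociation pattern can be sketched with toy data: a simulated "brain" signal that carries both a category signal and texture statistics, and a "DNN" that tracks only the texture statistics. Everything below is an invented illustration (leave-one-out class-mean decoding, Pearson-correlated dissimilarity matrices), not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_stats = 40, 20
category = np.repeat([0, 1], n // 2)
texture_stats = rng.standard_normal((n, n_stats))

# Toy "brain": carries BOTH a category signal and a texture-statistics signal.
brain = np.hstack([texture_stats, 3.0 * category[:, None]])
brain += 0.1 * rng.standard_normal(brain.shape)
# Toy "DNN": tracks only the texture statistics.
dnn = texture_stats + 0.1 * rng.standard_normal((n, n_stats))

# (a) Category is decodable from the brain signal (leave-one-out, class means).
correct = 0
for i in range(n):
    keep = np.arange(n) != i
    m0 = brain[keep & (category == 0)].mean(axis=0)
    m1 = brain[keep & (category == 1)].mean(axis=0)
    pred = int(np.linalg.norm(brain[i] - m1) < np.linalg.norm(brain[i] - m0))
    correct += pred == category[i]
acc = correct / n

# (b) Yet brain-DNN alignment (correlating dissimilarity structure) is
# carried by the shared texture statistics, not by category information.
def rdm(x):
    c = np.corrcoef(x)
    return 1 - c[np.triu_indices(len(x), k=1)]

rho = np.corrcoef(rdm(brain), rdm(dnn))[0, 1]
```

Decoding accuracy is high and the alignment `rho` is substantial even though the DNN side contains no category signal at all, mirroring the dissociation in the thread.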
Three versions of each image:
🔴 Natural scenes
🔵 Texture-synthesized (global summaries of local stats only; no recognizable objects)
🟢 Object-only (objects without backgrounds).
Counterintuitive result: strongest DNN–brain alignment for texture-only images!
08.09.2025 18:32
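For readers unfamiliar with "texture-like local statistics": one common way to summarize them (used in style-transfer-style texture models; the study's stimuli presumably relied on an established texture-synthesis method, so this is only an illustration with made-up random filters) is the Gram matrix of filter responses, which keeps local structure while discarding object layout.

```python
import numpy as np

rng = np.random.default_rng(2)

def texture_statistics(image, filters):
    """Spatially averaged correlations of filter responses (a Gram matrix):
    summary statistics that keep local structure but discard object layout."""
    n_f, k, _ = filters.shape
    h, w = image.shape
    responses = np.empty((n_f, (h - k + 1) * (w - k + 1)))
    for fi, f in enumerate(filters):
        # Valid convolution of the image with filter f, flattened over locations
        responses[fi] = [np.sum(image[i:i + k, j:j + k] * f)
                         for i in range(h - k + 1) for j in range(w - k + 1)]
    return (responses @ responses.T) / responses.shape[1]

filters = rng.standard_normal((4, 3, 3))   # stand-ins for learned conv filters
img = rng.standard_normal((16, 16))
stats = texture_statistics(img, filters)   # 4x4 matrix of filter correlations
```

Because the statistics are averaged over all spatial locations, two images with the same local structure but different object arrangements yield similar Gram matrices.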
🧠 New preprint: Why do deep neural networks predict brain responses so well?
We find a striking dissociation: it's not shared object recognition. Alignment is driven by sensitivity to texture-like local statistics.
Study: n=57, 624k trials, 5 models doi.org/10.1101/2025...
08.09.2025 18:32
Our response is due in 3 weeks. Pondering.
07.09.2025 17:18
Centering cognitive neuroscience on task demands and generalization - Nature Neuroscience
Datasets like NSD & THINGS offer rich stimuli but often test a single task.
After great conversations at #CCN2025 on multi-task studies & generalization in brains & models, I thought I would repost our perspective for those interested in this topic. We need multiple tasks! doi.org/10.1038/s415...
18.08.2025 07:49
Thank you!!!!
17.08.2025 15:06
Lynn Flannery, Kerry Miller, Jeff Wilson, Kevin Koenrades, Brenda Klappe and our Volunteers: Nina Fitzmaurice, Ole JΓΌrgensen, Denise Kittelmann, Elif Ayten
Maithe van Noort, Mohanna Hoveyda, Caroline Harbison, Yamil Vidal, Sotirios Panagiotou, Sofie Wahlberg, Danting Meng, Mobina Tousian.
17.08.2025 15:06
@claires012345.bsky.social @mheilbron.bsky.social Angela Radulescu @neuroprinciplist.bsky.social @dotadotadota.bsky.social Tyler bonnen Sneha Aenugu @hannesmehrer.bsky.social @debyee.bsky.social Julian Kosciessa @anne-urai.bsky.social @mdhk.net @tdado.bsky.social Shauney Wilson, Shawna Lampkin,
17.08.2025 15:06
But could not have run without @lauragwilliams.bsky.social @jaspervdb.bsky.social @achterbrain.bsky.social @niklasmuller.bsky.social @eringrant.me @pebenjamters.bsky.social @shahabbakht.bsky.social @judithfan.bsky.social @jfeather.bsky.social Jiahui Guo @tknapen.bsky.social @lampinen.bsky.social
17.08.2025 15:06
a reception in Hotel Arena, an epic party in Ijver (with about 40% of attendees on the dance floor), and above all 929 community members who brought a lot of energy and hopefully had, on multiple dimensions, a fantastic conference.
So proud to be chair, together with @irisgroen.bsky.social.
17.08.2025 15:06
#CCN2025 is over. Over 5 days there were 6 fantastic keynotes, 550 posters, 3 community events, 3 keynote & tutorial sessions, 3 generative adversarial collaborations, 8 satellite events, 1 community lunch meeting, 1 cross-conference hackathon, 1 competition, coffee all day, stroopwafels on day 1,
17.08.2025 15:06
That's a wrap for CCN2025 -- and so planning for CCN2026 in New York is starting today! Safe travels to all participants, and remember to fill out the feedback survey sent via email!
15.08.2025 16:29
The moment we've all been waiting for: #CCN2025 is HERE! Today we start with satellite events, and tomorrow the main conference begins in Amsterdam! We'll be sharing daily updates about each day's program, so follow the CCN account to stay in the loop. Can't wait to see everyone!
11.08.2025 08:29
After preparing for a full year together with @neurosteven.bsky.social and all other amazing organizers
of @cogcompneuro.bsky.social, #CCN2025 is finally here!
While I'm proud of the entire program we put together, I'd now like to highlight my own lab's contributions, 6 posters total:
10.08.2025 15:20
CCN2025 kicks off this Tuesday in Amsterdam (with satellite events Monday)! Fun fact: We might be getting the best conference weather Amsterdam has ever seen ☀️
Can't wait to meet everyone and dive into the exciting program ahead. See you there!
06.08.2025 14:47
Representation of locomotive action affordances in human behavior, brains, and deep neural networks | PNAS
In these tumultuous times, still happy to report a scientific achievement: our preprint on affordance perception was just published in PNAS!
www.pnas.org/doi/10.1073/...
Using behavior, fMRI and deep network analyses, we report two key findings. To recapitulate (preprint 🧵 lost on the other place):
16.06.2025 11:33