
Rosanne Rademaker

@rademaker.bsky.social

Max Planck group leader at ESI Frankfurt | human cognition, fMRI, MEG, computation | sciences with the coolest (phd) students et al. | she/her

944 Followers  |  540 Following  |  86 Posts  |  Joined: 17.01.2024

Latest posts by rademaker.bsky.social on Bluesky

The best part though: Working with amazing graduate student *Maria Servetnik* from our lab, who did all the heavy lifting. Not to mention lots of inspiration & input from @mjwolff.bsky.social. I am one very lucky PI 😊 n/n

21.01.2026 12:47 โ€” ๐Ÿ‘ 5    ๐Ÿ” 1    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Visual representations in the human brain rely on a reference frame that is in between allocentric and retinocentric coordinates

Visual information in our everyday environment is anchored to an allocentric reference frame – a tall building remains upright even when you tilt your head, which changes the projection of the building on your retina from a vertical to a diagonal orientation. Does retinotopic cortex represent visual information in an allocentric or retinocentric reference frame? Here, we investigate which reference frame the brain uses by dissociating allocentric and retinocentric reference frames via a head tilt manipulation combined with electroencephalography (EEG).

Nineteen participants completed between 1728 and 2880 trials during which they briefly viewed (150 ms) and then remembered (1500 ms) a randomly oriented target grating. In interleaved blocks of trials, the participant's head was either kept upright, or tilted by 45º using a custom rotating chinrest. The target orientation could be decoded throughout the trial (using both voltage and alpha-band signals) when training and testing within head-upright blocks, and within head-tilted blocks.

Importantly, we directly addressed the question of reference frames via cross-generalized decoding: If target orientations are represented in a retinocentric reference frame, a decoder trained on head-upright trials would predict a 45º offset in decoded orientation when tested on head-tilted trials (after all, a vertical building becomes diagonal on the retina after head tilt). Conversely, if target representations are allocentric and anchored to the real world, no such offset should be observed.

Our analyses reveal that from the earliest stages of perceptual processing all the way throughout the delay, orientations are represented in between an allocentric and retinocentric reference frame. These results align with previous findings from physiology studies in non-human primates, and are the first to demonstrate that the human brain does not rely on a purely allocentric or retinocentric reference frame when representing visual information.

Competing Interest Statement: The authors have declared no competing interest.

Funding: NIH Common Fund, https://ror.org/001d55x84, NEI R01-EY025872, NIMH R01-MH087214

Check out our *preprint* for some cool correlations with behavior (for oblique effect fans). For now, I'm just happy that these fun data are out in the world. It's been a minute since Chaipat Chunharas & I ventured to dissociate allocentric and retinocentric reference frames (7+ years ago?! 🤫)... 10/n

21.01.2026 12:45 โ€” ๐Ÿ‘ 12    ๐Ÿ” 3    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
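The abstract above lays out the cross-generalization logic. Purely as an illustration, a minimal sketch of that idea could look like the following (synthetic placeholder data, a generic scikit-learn classifier, and an arbitrary bin size; this is not the preprint's actual analysis pipeline):

```python
# Sketch of cross-generalized orientation decoding (illustrative only).
# Train a decoder on head-upright trials, test it on head-tilted trials,
# and measure the systematic offset between decoded and true orientation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bin_orientations(theta_deg, n_bins=12):
    """Assign orientations (0-180 deg) to discrete bins for classification."""
    return (theta_deg // (180 / n_bins)).astype(int) % n_bins

def circular_offset(pred_deg, true_deg):
    """Mean signed offset between orientations, respecting 180-deg periodicity."""
    d = np.deg2rad(2 * (pred_deg - true_deg))  # map the 180-deg space onto 360 deg
    return np.rad2deg(np.angle(np.mean(np.exp(1j * d)))) / 2

# Placeholder data: trials x EEG features, orientation labels in degrees.
rng = np.random.default_rng(0)
X_upright, y_upright = rng.normal(size=(300, 64)), rng.uniform(0, 180, 300)
X_tilted,  y_tilted  = rng.normal(size=(300, 64)), rng.uniform(0, 180, 300)

n_bins = 12
decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_upright, bin_orientations(y_upright, n_bins))  # train on upright trials

pred_bins = decoder.predict(X_tilted)                        # test on tilted trials
pred_deg = (pred_bins + 0.5) * (180 / n_bins)                # bin centres in degrees

offset = circular_offset(pred_deg, y_tilted)
# |offset| near 0 deg  -> consistent with an allocentric reference frame
# |offset| near 45 deg -> consistent with a retinocentric frame (matching the head tilt)
print(f"cross-generalization offset: {offset:.1f} deg")
```

An offset that lands in between those two values would point to the intermediate reference frame the abstract describes.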
Post image

No matter the exact time point, no matter how we quantified the shift, no matter if we looked at decoding or at representational geometry – the reference frame used by the brain to represent orientations was always smack dab in between retinocentric and allocentric 9/n

21.01.2026 12:41 โ€” ๐Ÿ‘ 4    ๐Ÿ” 0    ๐Ÿ’ฌ 3    ๐Ÿ“Œ 0
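For a sense of what quantifying that shift from the representational-geometry side could look like, here is a hypothetical sketch (synthetic data; the binning, correlation measure, and alignment procedure are assumptions rather than the paper's exact analysis): average the patterns per orientation bin in each head condition, then find the orientation offset at which the tilted patterns best align with the upright ones.

```python
# Hypothetical geometry-based quantification of the reference-frame shift.
# Best alignment at 0 deg ~ allocentric; at 45 deg ~ retinocentric;
# anything in between ~ an intermediate reference frame.
import numpy as np

def mean_pattern_per_bin(X, theta_deg, n_bins=12):
    """Average trials into one mean activity pattern per orientation bin."""
    bins = (theta_deg // (180 / n_bins)).astype(int) % n_bins
    return np.stack([X[bins == b].mean(axis=0) for b in range(n_bins)])

def best_alignment_offset(P_upright, P_tilted):
    """Orientation offset (deg) at which tilted patterns best match upright ones."""
    n_bins = P_upright.shape[0]
    sims = [np.mean([np.corrcoef(a, b)[0, 1]
                     for a, b in zip(P_upright, np.roll(P_tilted, s, axis=0))])
            for s in range(n_bins)]
    return np.argmax(sims) * (180 / n_bins)

# Placeholder data: trials x EEG features, orientation labels in degrees.
rng = np.random.default_rng(1)
X_up,  y_up  = rng.normal(size=(600, 64)), rng.uniform(0, 180, 600)
X_tlt, y_tlt = rng.normal(size=(600, 64)), rng.uniform(0, 180, 600)

offset = best_alignment_offset(mean_pattern_per_bin(X_up, y_up),
                               mean_pattern_per_bin(X_tlt, y_tlt))
print(f"best-aligning orientation offset: {offset:.0f} deg")
```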
Post image

Well, throughout perception (when the orientation is on the screen) as well as the entire memory delay (when the orientation is held in mind), we discovered a reference frame that is in between retinocentric and allocentric coordinates! 8/n

21.01.2026 12:35 โ€” ๐Ÿ‘ 4    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Preview
ALT: two german shepherds are laying on the floor in front of a fireplace.

Conversely, if representations are allocentric and anchored to the real world, no such shift should be observed. In other words: Cross-generalized decoding to the rescue! If you had to guess… What reference frame do you think visual cortex uses for visual processing? 7/n

21.01.2026 12:35 โ€” ๐Ÿ‘ 2    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

The trick? If orientations are represented in a retinocentric reference frame, a decoder trained on head-upright trials would predict a 45º shift in decoded orientation when tested on head-tilted trials (after all, a vertical building becomes diagonal on the retina after head tilt). 6/n

21.01.2026 12:34 โ€” ๐Ÿ‘ 2    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
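As a toy illustration of the geometry behind that prediction (the sign convention is an assumption here, and this is not code from the preprint): an orientation that stays fixed in the world lands on the retina rotated by the head-tilt angle.

```python
# World-to-retina mapping for orientations (180-deg periodic), toy version.
def retinal_orientation(world_deg, head_tilt_deg):
    """Retinal orientation of a world-fixed orientation under a given head tilt."""
    return (world_deg - head_tilt_deg) % 180

print(retinal_orientation(90, 0))   # head upright: a vertical edge stays at 90 deg
print(retinal_orientation(90, 45))  # 45-deg head tilt: vertical becomes 45 deg (diagonal)
# So a purely retinocentric code would make an upright-trained decoder read out
# orientations ~45 deg off on tilted trials; a purely allocentric code would not shift.
```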
Post image

Now, even if the pattern *completely shifts* with head tilt, standard (within time point) decoding can only ever infer the exact same label! After all, we as researchers do not know the underlying shift, only the orientation (and hence the label) that was on the screen. 5/n

21.01.2026 12:33 โ€” ๐Ÿ‘ 2    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
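A quick way to see why within-condition decoding cannot expose such a shift (toy simulation with made-up 'tuning' patterns, same caveats as the sketches above): even when the patterns are driven purely by retinal orientation, a decoder trained and tested within the tilted condition simply learns the shifted patterns and returns the on-screen labels.

```python
# Toy simulation: patterns depend only on *retinal* orientation, yet decoding
# within the head-tilted condition still recovers the on-screen (world) labels,
# so the underlying 45-deg shift stays invisible.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
N_FEATURES = 32
BASIS = rng.standard_normal((2, N_FEATURES))   # fixed fake 'tuning' directions

def make_patterns(world_deg, head_tilt_deg, noise=0.3):
    """Fake activity patterns driven purely by retinal orientation (world - tilt)."""
    retinal = np.deg2rad(2 * ((world_deg - head_tilt_deg) % 180))
    clean = np.outer(np.cos(retinal), BASIS[0]) + np.outer(np.sin(retinal), BASIS[1])
    return clean + noise * rng.standard_normal((len(world_deg), N_FEATURES))

world = np.repeat([0, 45, 90, 135], 100)       # on-screen orientation labels (deg)

decoder = KNeighborsClassifier(n_neighbors=5).fit(make_patterns(world, 45), world)
acc = (decoder.predict(make_patterns(world, 45)) == world).mean()
print(f"within-tilt decoding accuracy: {acc:.2f}")  # high, despite the retinal shift
```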
Post image

We want to decode visual orientation from the EEG signal to uncover the reference frame used by the brain. But we have a problem… A decoder only learns the association between a label (e.g., 45º) and a pattern of brain activity. Presented with a new pattern of activity, the label is inferred. 4/n

21.01.2026 12:32 โ€” ๐Ÿ‘ 2    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
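That point in miniature (generic off-the-shelf classifier on synthetic 'patterns'; purely illustrative):

```python
# A decoder only learns label <-> pattern associations from training trials,
# then assigns one of those learned labels to any new pattern it is shown.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
labels = np.repeat([0, 45, 90, 135], 50)                        # orientation labels (deg)
patterns = rng.normal(size=(200, 32)) + labels[:, None] / 100   # fake activity patterns

decoder = KNeighborsClassifier(n_neighbors=5).fit(patterns, labels)

new_pattern = rng.normal(size=(1, 32)) + 0.45                   # an unseen trial
print(decoder.predict(new_pattern))                             # -> one of 0 / 45 / 90 / 135
```

The decoder can only ever hand back one of the labels it was trained on, which is why the thread turns to cross-generalization across head-tilt conditions.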
Post image

Do visual parts of the brain represent visual information in an allocentric or retinocentric reference frame? We used a simple orientation recall task while measuring electroencephalography (EEG) signals from human visual cortex. People had their head upright 😀 or tilted 🫠! 3/n

21.01.2026 12:31 โ€” ๐Ÿ‘ 2    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Post image

Visual information in our environment is anchored to an allocentric reference frame – a tall building remains upright even when you tilt your head. But head tilt changes the retinal projection of the building from vertical to diagonal. The building is diagonal in a retinocentric reference frame. 2/n

21.01.2026 12:29 โ€” ๐Ÿ‘ 3    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Preview
ALT: a husky puppy is laying on the floor with its tongue out and wearing a blue collar.

Here's a thought that might make you tilt your head in curiosity: With every movement of your eyes, head, or body, the visual input to your eyes shifts! Nevertheless, it doesn't feel like the world suddenly tilts sideways whenever you tilt your head. How can this be? TWEEPRINT ALERT! 🚨🧵 1/n

21.01.2026 12:28 โ€” ๐Ÿ‘ 46    ๐Ÿ” 18    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 3

As someone who once tried to recruit Natalie, I can of course only recommend hiring this extremely smart scientist!!

16.01.2026 11:44 โ€” ๐Ÿ‘ 2    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Post image

🌍 Applications are open! The IBRO Exchange Fellowships give early career #neuroscientists the opportunity to conduct lab visits, with several expenses covered during the exchange.

🗓 Apply by 15 Apr: https://ibro.org/grant/exchange-fellowships/

#grant #IBROinAsiaPacific #IBROinUSCanada #IBROinAfrica #IBROinLatAm

15.01.2026 12:01 โ€” ๐Ÿ‘ 6    ๐Ÿ” 6    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 1
Post image

#BrainMeeting 🧠 Alert! 🎺

This Friday, January 16th, the Brain Meeting speaker will be Janneke Jehee giving a talk entitled "Uncertainty in perceptual decision-making"

In person or online. For more information:
www.fil.ion.ucl.ac.uk/event

12.01.2026 09:14 โ€” ๐Ÿ‘ 17    ๐Ÿ” 8    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0

Please spread the word 🔊 My lab is looking to hire two international postdocs. If you want to do comp neuro, combine machine learning and awesome math to understand neural circuit activity, then come work with us! Bonn is such a cool place for neuroscience now, you don't want to miss out.

10.01.2026 17:39 โ€” ๐Ÿ‘ 32    ๐Ÿ” 35    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 1

New preprint from the lab!

06.01.2026 21:53 โ€” ๐Ÿ‘ 8    ๐Ÿ” 6    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0

What if we could tell you how well you'll remember your next visit to your local coffee shop? ☕️

In our new Nature Human Behaviour paper, we show that the *quality of a spatial representation* can be measured with neuroimaging – and *that score predicts how well new experiences will stick*.

05.01.2026 18:43 โ€” ๐Ÿ‘ 68    ๐Ÿ” 24    ๐Ÿ’ฌ 3    ๐Ÿ“Œ 2

This is very cool! The link between spikes and LFPs is something that comes up frequently in our (human neuroimaging) lab. Nice to learn more about it!

05.01.2026 20:47 โ€” ๐Ÿ‘ 5    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Preview
Research Specialist
The Attention, Distractions, and Memory (ADAM) Lab at Rice University is recruiting a full-time Research Specialist (Research Specialist I). The ADAM Lab (PI: Kirsten Adam) conducts cognitive neurosci...

The ADAM lab is hiring a Research Specialist to join us! This role involves conducting human subjects research (EEG experiments on attention + working memory) and assisting with the execution and administration of ongoing projects.

Job posting: emdz.fa.us2.oraclecloud.com/hcmUI/Candid...

02.01.2026 15:21 โ€” ๐Ÿ‘ 11    ๐Ÿ” 14    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Preview
Noise in Competing Representations Determines the Direction of Memory Biases
Our memories are reconstructions, prone to errors. Historically treated as a mere nuisance, memory errors have recently gained attention when found to be systematically shifted away from or towards no...

@shansmann-roth.bsky.social and I finally finished our paper confirming a unique prediction of the Demixing Model (DM): inter-item biases in #visualworkingmemory depend on the _relative_ noise of targets and non-targets, potentially going in opposing directions. 🧵 1/9
www.biorxiv.org/content/10.6...

26.12.2025 16:39 โ€” ๐Ÿ‘ 10    ๐Ÿ” 4    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 1

The neural basis of working memory has been debated. What we like to call "The Standard Model" of working memory posits that persistent discharges generated by neurons in the prefrontal cortex constitute the neural correlate of working memory (2/10)

29.12.2025 14:41 โ€” ๐Ÿ‘ 2    ๐Ÿ” 1    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Post image

🚨 New paper in @pnas.org to end 2025 with a bang! 🚨

Behavioral, experiential, and physiological signatures of mind blanking
www.pnas.org/doi/10.1073/...

with Esteban Munoz-Musat, @arthurlecoz.bsky.social @corcorana.bsky.social, Laouen Belloli and Lionel Naccache

Illustration: Ana Yael.

1/n

29.12.2025 10:10 โ€” ๐Ÿ‘ 47    ๐Ÿ” 18    ๐Ÿ’ฌ 3    ๐Ÿ“Œ 2
Post image

No exciting plans for year-end yet?

Why not gear up for your next grant proposal? 💸

Check out our website for recurring and one-time funding lines, awards, and programs! 👉 bernstein-network.de/en/newsroom/...

23.12.2025 08:01 โ€” ๐Ÿ‘ 7    ๐Ÿ” 1    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
PNAS
Proceedings of the National Academy of Sciences (PNAS), a peer reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...

We're very happy to share that our work on 3D spatial memory was published in PNAS just before the end of the year! 🎉
Link: www.pnas.org/doi/10.1073/...
(1/8)

22.12.2025 22:43 โ€” ๐Ÿ‘ 20    ๐Ÿ” 7    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Preview
various computational neuroscience / MEEG / LFP short courses and summer schools

📆 updated for 2026!

list of summer schools & short courses in the realm of (computational) neuroscience or data analysis of EEG / MEG / LFP: 🔗 docs.google.com/spreadsheets...

19.12.2025 16:37 โ€” ๐Ÿ‘ 99    ๐Ÿ” 60    ๐Ÿ’ฌ 3    ๐Ÿ“Œ 0

🚀 Excited to announce that I'm looking for people (PhD/Postdoc) to join my Cognitive Modelling group @uniosnabrueck.bsky.social.

If you want to join a genuinely curious, welcoming and inclusive community of Coxis, apply here:
tinyurl.com/coxijobs

Please RT - deadline is Jan 4‼️

18.12.2025 14:52 โ€” ๐Ÿ‘ 76    ๐Ÿ” 54    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 5

All in all, we characterize human memory for speed, showing that speed is better recalled for spatiotemporally bound than texture-like stimuli (the added dimension of space helps!). Thanks for reading, and stay tuned for Giuliana's next adventures linking speed memory to motion extrapolation! 9/9

17.12.2025 16:41 โ€” ๐Ÿ‘ 0    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Post image

We looked at hysteresis effects (yes they exist in these data!), the role of eye movements (no they can't explain these findings), and more. But importantly, people are MUCH BETTER at recalling the speed of a single dot moving around fixation than the speed of more texture-like dot motion!! 8/n

17.12.2025 16:40 โ€” ๐Ÿ‘ 0    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Post image

Another cool finding: The memory target and the probe could either move in congruent (e.g., both clockwise) or incongruent (e.g., target moved clockwise, the probe counterclockwise) directions. Speed recall was better for congruent motion! 7/n

17.12.2025 16:37 โ€” ๐Ÿ‘ 0    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Post image

The stimulus was presented for 4–6 seconds, and remembered for 1–8 seconds. This did not matter for dot motion (blue), but it *did* matter for the single dot (red): errors were lower when people had more time to encode the speed, and higher at longer delays. 6/n

17.12.2025 16:36 โ€” ๐Ÿ‘ 0    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
