Excited to share that our MEG project is now out in Current Biology! We show how visual content codes relate to motor oscillations in telling time.
Huge thanks to Quirin Gehmacher, Peter Kok, Matt Davis and Clare Press (bsky links below). 🧵
authors.elsevier.com/sd/article/S...
Content and time are inherently linked in our perceptual experience – so much so that we use changes in content, like the hands of a clock, to tell time. Here, we ask how the brain conjointly keeps track of content and time.
We presented visual sequences that were predictable in both time and content, leading up to a stimulus-empty window. There, we asked how the brain keeps track of the rhythmic temporal structure, both in overall neural activity and in activity tuned to the predicted content.
Following the stimulus-empty window, participants saw a probe which, varying by block, they had to judge in terms of its orientation or timing. When judging time, but not orientation, motor oscillations phase-couple (or ‘entrain’) to the tracked – but absent – rhythm, predicting task performance.
At the same time, visual regions represent the predicted content – the orientation that would have appeared if the sequence had continued – in a temporally precise way. Rather than overall visual activity entraining, it is specifically the activity encoding predicted content that rises and falls with the rhythm.
Interestingly, these temporally precise content predictions in visual areas appear regardless of task, unlike the motor signal. Given the two signatures’ alignment in time and strength, our predictions of time appear embedded in those concerning content …
… and when we explicitly judge time, the motor system reads out this temporally precise content signal – computing a change in sensory content as the basis for passing time, much like a tick of the minute hand telling us that 60 seconds have passed.
Many thanks again to my collaborators / supervisors @quiringehmacher.bsky.social, @peterkok.bsky.social, @drmattdavis.bsky.social, @clarepress.bsky.social for all their help with this project over the last two years. @uclpals.bsky.social, @imagingneuroucl.bsky.social, @mrccbu.bsky.social
Thanks Nadine!
Will I see something, and if so, what will it be? I am excited to share my very first paper, in collaboration with @jhaarsma.bsky.social, @smfleming.bsky.social and @peterkok.bsky.social. 🧵 https://www.biorxiv.org/content/10.1101/2024.02.22.581334v1
We investigated how expectations of presence and content combine to shape visual perception, and show that expectations about stimulus presence act as a volume knob for the effect of content expectations on low-level perception.
We asked participants to judge both the presence and the content (orientation) of noisy grating stimuli, while preceding compound cues predicted the likelihood of both the grating’s appearance and its orientation.
As expected, participants’ orientation responses were biased towards the content cue. Interestingly though, this bias was scaled by the accompanying presence cue: stronger content-cue effects were found when paired with a presence cue than with an absence cue.
This moderating role of presence cues on content cues was also observed when participants falsely perceived a grating with high confidence, even though none was actually displayed.
Presence cues and valid content cues independently affected participants’ confidence in having seen a grating. We replicated these effects and reproduced them in simulations using a post-hoc adapted version of the higher-order state space (HOSS) model.
Surprisingly, we also found higher sensitivity to sensory input following absence cues than presence cues, and we speculate about how an asymmetry between gathering evidence in favour of stimulus presence and absence may account for this result.
Finally, we related the distinct effects of presence and content cues to hallucination-like perception. While we initially found a relation between hallucination-proneness and the effect of presence priors, it did not replicate in a second sample...
...We show that this effect becomes stronger in both samples when we include participants that we had initially excluded for being minimally affected by sensory input.
I want to thank my supervisors Joost and Peter for their guidance and trust in what was an exciting Master’s project for me. Thanks to Steve for his valuable feedback as well as his computational and theoretical know-how.