
Felix Quirmbach

@fquirmbach.bsky.social

PhD Student (Cognitive Neuroscience) @TU Dresden in Jakub Limanowski's Lab. Studying the adaptation of own-body models via Neuroimaging & VR.

45 Followers  |  51 Following  |  2 Posts  |  Joined: 22.08.2023

Latest posts by fquirmbach.bsky.social on Bluesky

Join the lab – Prediction in Communication Lab

JOB ALERT: Are you interested in how communication priors shape speech and pain perception during treatment? We offer 2 PhD positions in Hamburg as part of the SFB/TRR289 (treatment-expectation.de). Join us and apply! predcommlab.eu/join-the-lab/

06.06.2024 07:42 — 👍 4    🔁 3    💬 0    📌 0

You may have also seen me present the project & our first results at the poster session of the #PuG2024 in Hamburg last week - which unfortunately I forgot to announce here in advance 😅

05.06.2024 18:32 — 👍 1    🔁 0    💬 0    📌 0
Task design for our study: Participants' real hand movements (hidden from view) were measured via a data glove and fed to a virtual hand model presented on screen. From a half-opened starting position, participants executed one of two hand movements (open/close) after a variable delay. During the delay phase, participants saw a visual cue providing information about the upcoming movement: cue shape predicted the movement type in half of the trials and left it uncued in the other half, while cue colour indicated the to-be-expected visuomotor mapping, i.e. whether the virtual hand would move congruently or incongruently with the real hand. After a GO signal, the virtual hand appeared on screen and participants executed the indicated hand movement. During execution, they had to follow the virtual hand movement (the tip of the virtual middle finger) with their gaze. In 25% of the trials, the cue colour predictions were invalid, so that visuomotor expectations were violated.
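To make the factorial structure of this design easier to follow, here is a minimal, hypothetical Python sketch of how a trial list with these conditions could be generated (trial count, delay range, and all function and field names are assumptions for illustration, not taken from the study's pre-registration).

```python
import random

def make_trials(n_trials=160, p_invalid=0.25, seed=0):
    """Hypothetical trial list for the design described above:
    movement (open/close), whether the shape cue predicts the movement
    (half of the trials), the colour-cued visuomotor mapping
    (congruent/incongruent), and whether that colour cue is valid
    (invalid on ~25% of trials, i.e. the actual mapping is flipped)."""
    rng = random.Random(seed)
    trials = []
    for i in range(n_trials):
        movement = rng.choice(["open", "close"])
        movement_cued = (i % 2 == 0)           # shape cue informative on half of the trials
        cued_mapping = rng.choice(["congruent", "incongruent"])
        cue_valid = rng.random() >= p_invalid  # colour cue invalid on ~25% of trials
        actual_mapping = cued_mapping if cue_valid else (
            "incongruent" if cued_mapping == "congruent" else "congruent")
        trials.append({
            "movement": movement,
            "movement_cued": movement_cued,
            "cued_mapping": cued_mapping,
            "cue_valid": cue_valid,
            "actual_mapping": actual_mapping,
            "delay_s": rng.uniform(1.0, 3.0),  # variable delay before the GO signal (assumed range)
        })
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    print(make_trials()[0])
```

In an actual experiment the factors would more likely be crossed and counterbalanced exactly rather than sampled independently; the sketch only illustrates the conditions named in the description.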


Image of a participant performing the task. They wear a data glove on their left hand (hidden from view) and perform a movement while observing the virtual hand model on a screen in front of them. An eye tracker records their eye movements; their right hand rests on a response box to report whether the cue predictions were correct.


In our current study, we combined eye tracking and virtual reality to examine how "unusual" visuomotor predictions are generated, and how this affects mental effort & performance in a delayed hand-eye movement task. You can find the pre-reg here; first results should follow soon! osf.io/6km9y

05.06.2024 18:24 — 👍 5    🔁 0    💬 1    📌 0
