
Yichen Yuan

@yichen-yuan.bsky.social

PhD candidate at Utrecht University • Interested in multisensory perception, working memory & attention

35 Followers  |  34 Following  |  9 Posts  |  Joined: 30.11.2024

Latest posts by yichen-yuan.bsky.social on Bluesky


Huge thanks to Surya and Nathan for their help😊

Open access preprint: osf.io/preprints/so...
All materials and data: osf.io/54vms/

Feel free to reach out if you have any questions! 9/9

05.12.2024 12:03 — 👍 2    🔁 0    💬 0    📌 0

(2) Observers can flexibly prioritize one sense over the other, in anticipation of modality-specific interference, and use only the most informative sensory modality to guide behavior, while nearly ignoring other modalities (even when they convey substantial information). 8/9

05.12.2024 12:03 — 👍 1    🔁 0    💬 1    📌 0

Across these four experiments, we conclude that (1) observers use both hearing and vision when localizing static objects, but use only unisensory input when localizing moving objects and predicting motion under occlusion. 7/9

05.12.2024 12:03 — 👍 1    🔁 0    💬 1    📌 0

In Exp. 3, the target did not move, but briefly appeared as a static stimulus at the exact same endpoints as in Exps. 1 and 2. Here, a substantial multisensory benefit was found when participants localized the static audiovisual targets, showing near-optimal (MLE) integration. 6/9

05.12.2024 12:03 — 👍 1    🔁 0    💬 1    📌 0

In Exp. 2, there was no occluder; participants simply reported where the moving target disappeared from the screen. Here, although localization estimates were in line with MLE predictions, again no multisensory precision benefit was found. 5/9

05.12.2024 12:03 — 👍 1    🔁 0    💬 1    📌 0

In these two experiments, we showed that participants do not seem to benefit from audiovisual information when tracking occluded objects, but flexibly prioritize one sense over the other (vision in Exp. 1A and audition in Exp. 1B), in anticipation of modality-specific interference. 4/9

05.12.2024 12:03 — 👍 1    🔁 0    💬 1    📌 0

We asked whether observers optimally weigh the auditory & visual components of audiovisual stimuli. We therefore compared the observed data to the predictions of a maximum likelihood estimation (MLE) model, which weights the unisensory inputs according to their uncertainty (variance). 3/9
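(For reference: the standard MLE cue-combination model takes a reliability-weighted average of the unisensory estimates, so the predicted audiovisual variance is never larger than the best unisensory variance; generic notation below, not the paper's own.)

$$\hat{S}_{AV} = w_A\,\hat{S}_A + w_V\,\hat{S}_V,\qquad w_A = \frac{\sigma_V^2}{\sigma_A^2+\sigma_V^2},\quad w_V = \frac{\sigma_A^2}{\sigma_A^2+\sigma_V^2}$$

$$\sigma_{AV}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2+\sigma_V^2} \le \min(\sigma_A^2,\ \sigma_V^2)$$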

05.12.2024 12:03 — 👍 1    🔁 0    💬 1    📌 0

In Exp. 1A, moving targets (auditory, visual, or audiovisual) were occluded by an audiovisual occluder, and their final locations had to be inferred from target speed and occlusion duration. Exp. 1B was identical to Exp. 1A, except that a visual-only occluder was used. 2/9

05.12.2024 12:03 — 👍 1    🔁 0    💬 1    📌 0

New paper accepted in JEP-General (preprint: osf.io/preprints/socarxiv/uvzdh) with @suryagayet.bsky.social & Nathan van der Stoep. We show that observers use both hearing & vision for localizing static objects, but rely on a single modality to report & predict the location of moving objects. 1/9

05.12.2024 12:03 — 👍 18    🔁 2    💬 1    📌 2
