
Yong Hoon Chung

@yonghoonchung.bsky.social

PhD student at Dartmouth

128 Followers  |  141 Following  |  37 Posts  |  Joined: 26.10.2024

Posts by Yong Hoon Chung (@yonghoonchung.bsky.social)

Thank you!

22.02.2026 18:15 — 👍 0    🔁 0    💬 0    📌 0

Here's my previous post for summary and preprint link: bsky.app/profile/yong...

22.02.2026 01:31 — 👍 0    🔁 0    💬 0    📌 0
Real-world Objects Scaffold Visual Working Memory for Features: Increased Neural Engagement When Colors Are Remembered as Part of Meaningful Objects Abstract. Visual working memory is a core cognitive function that allows active storage of task-relevant visual information. Contrary to the common assumption that the capacity of this system is fixed...

New paper with @timbrady.bsky.social and @violastoermer.bsky.social now out in JoCN! "Real-world Objects Scaffold Visual Working Memory for Features: Increased Neural Engagement When Colors Are Remembered as Part of Meaningful Objects" doi.org/10.1162/JOCN...

22.02.2026 01:29 — 👍 39    🔁 11    💬 2    📌 0

This highlights the role of semantics in VWM, challenging models that treat WM as primarily perceptual with fixed limits. Using naturalistic stimuli with careful evaluations may enable us to probe the rich structure of VWM, offering a deeper understanding of how VWM is used in the real world. 11/

09.02.2026 21:14 — 👍 1    🔁 0    💬 0    📌 0

How can real objects enhance VWM? We hypothesize that semantic distinctiveness is the key: beyond visual features, real objects engage conceptual knowledge, yielding more distinctive and potentially stable memory representations, ultimately aiding performance. 10/

09.02.2026 21:13 — 👍 3    🔁 0    💬 1    📌 0

Lastly, a large-sample (n=300) correlation analysis showed that subjective familiarity ratings predicted memory performance for real objects, while colourfulness scaled with memory performance for counterfeit objects, despite both stimulus sets being matched in colourfulness scores. 9/

09.02.2026 21:12 — 👍 0    🔁 0    💬 1    📌 0

ERP pattern similarity analysis showed that encoding and remembering real objects resulted in earlier and more robust pattern stability than counterfeit objects. This may suggest that real objects form quicker and more stable memory representations thanks to available visual-semantic templates. 8/

09.02.2026 21:10 — 👍 1    🔁 0    💬 1    📌 0
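As a rough illustration of how a pattern-stability analysis like this can be set up: correlate the scalp topography at each time point with the one that follows, so that sustained high correlations indicate a stable neural pattern. This is only a sketch under assumptions — the thread does not give the paper's exact similarity measure, windowing, or preprocessing, and the simulated data here stand in for real trial-averaged EEG.

```python
import numpy as np

def pattern_stability(erp: np.ndarray) -> np.ndarray:
    """Correlate the electrode pattern at each time point with the next one.

    erp: (n_electrodes, n_times) trial-averaged data.
    Returns (n_times - 1,) Pearson correlations between adjacent topographies.
    """
    # z-score each time point's topography across electrodes
    z = (erp - erp.mean(axis=0)) / erp.std(axis=0)
    n = erp.shape[0]
    # Pearson r between adjacent columns (time points)
    return np.sum(z[:, :-1] * z[:, 1:], axis=0) / n

rng = np.random.default_rng(1)
noise = rng.normal(size=(32, 200))      # 32 electrodes, 200 time points, no structure
template = rng.normal(size=(32, 1))     # a fixed topography shared across time
stable = noise * 0.3 + template         # a stable pattern embedded in noise

# stable data should show higher adjacent-time-point similarity than pure noise
print(pattern_stability(stable).mean() > pattern_stability(noise).mean())  # prints True
```

An "earlier and more robust" stability effect, as described in the post, would then show up as the stable-condition curve rising sooner and staying higher across the retention interval.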

CDA is one way to look at neural engagement. But this only captures an average amplitude over predefined time windows. Another way we can examine neural engagement is to look at how the patterns of neural activity dynamically change over time. 7/

09.02.2026 21:09 — 👍 1    🔁 0    💬 1    📌 0

We also looked at the underlying neural activity using EEG, specifically the contralateral delay activity (CDA). The results showed increased neural engagement when remembering real compared to counterfeit objects, reflected in both heightened and more broadly distributed lateralized ERP activity. 6/

09.02.2026 21:09 — 👍 0    🔁 0    💬 1    📌 0
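For readers unfamiliar with the CDA: it is conventionally computed as the mean contralateral-minus-ipsilateral ERP amplitude over a retention window at posterior electrodes. A minimal sketch of that computation follows — the electrodes, window, and simulated traces here are placeholders, not the paper's actual parameters.

```python
import numpy as np

def cda(contra: np.ndarray, ipsi: np.ndarray, times: np.ndarray,
        window=(0.4, 1.0)) -> float:
    """Mean contralateral-minus-ipsilateral amplitude (microvolts) in a window.

    contra/ipsi: trial-averaged ERP traces at posterior electrodes;
    window: retention interval in seconds (a placeholder choice here).
    """
    mask = (times >= window[0]) & (times <= window[1])
    return float(np.mean(contra[mask] - ipsi[mask]))

# Simulated traces: a sustained contralateral negativity after ~300 ms,
# the classic signature of items held in visual working memory.
times = np.linspace(-0.2, 1.2, 700)
ipsi = np.zeros_like(times)
contra = np.where(times > 0.3, -1.5, 0.0)

print(round(cda(contra, ipsi, times), 2))  # prints -1.5
```

A larger (more negative) CDA for real than counterfeit objects would correspond to the "increased neural engagement" described in the post.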

Strikingly, despite this match, VWM performance improved only for real objects, not counterfeit ones. Counterfeit objects yielded performance similar to fully scrambled shapes. 5/

09.02.2026 21:08 — 👍 0    🔁 0    💬 1    📌 0

First, we tested whether these counterfeit objects were well matched in perceptual similarity to real objects. Using CNNs, we extracted visual features of each object and quantified the similarities among them. This confirmed that counterfeit objects are indeed visually matched to real objects. 4/

09.02.2026 21:08 — 👍 0    🔁 0    💬 1    📌 0
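The similarity-quantification step can be sketched minimally. The thread does not name the network, layer, or metric, so the random "features" and cosine similarity below are assumptions standing in for real extracted CNN activations.

```python
import numpy as np

def pairwise_cosine_similarity(features: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors (n_objects x n_features)."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    return normed @ normed.T

rng = np.random.default_rng(0)
real = rng.normal(size=(5, 512))  # stand-in CNN features for 5 real objects
# counterfeit set simulated as small perturbations, i.e. visually matched
counterfeit = real + rng.normal(scale=0.1, size=real.shape)

sim = pairwise_cosine_similarity(np.vstack([real, counterfeit]))
print(sim.shape)  # prints (10, 10)
```

Comparing the within-set and between-set blocks of such a similarity matrix is one way to verify that the counterfeit items occupy the same visual feature space as their real counterparts.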

Here, we tackled this using β€œcounterfeit objects” (Cooper et al., 2023): GAN-generated images that match real objects in visual properties but are novel and unrecognizable. 3/

09.02.2026 21:07 — 👍 1    🔁 0    💬 1    📌 0

Oftentimes, memory performance is compared across drastically different-looking stimulus types, such as real objects versus colored circles or scrambled shapes. Specifically, real objects are visually more unique and discernible. Can these perceptual differences explain the memory benefit? 2/

09.02.2026 21:07 — 👍 0    🔁 0    💬 1    📌 0

New preprint with @SamJung @timbrady.bsky.social and @violastoermer.bsky.social: osf.io/preprints/ps.... Here we uncover what might be driving the β€œmeaningfulness benefit” in visual working memory. Studies show that real objects are remembered better in VWM tasks than abstract stimuli. But why? 1/

09.02.2026 21:06 — 👍 41    🔁 24    💬 1    📌 0

🚨 New paper alert 🚨

As a synthesis of my PhD research, we revisited the prevailing assumptions about the mechanisms underlying repetition learning and re-evaluated these assumptions in light of recent findings.

Now out in Perspectives on Psychological Science:
doi.org/10.1177/1745...

03.02.2026 18:01 — 👍 17    🔁 9    💬 0    📌 0
OpenWMData: A collection of publicly available working memory datasets

Make it your New Year resolution to add a #workingmemory dataset to OpenWMData so that we can curate our field's precious data, start testing theories and benchmarking models across datasets, conduct secondary analyses and meta-research using the data itself, and help me feel like I'm, like, alive.

02.01.2026 04:37 — 👍 28    🔁 15    💬 1    📌 0

Has anyone attended any pre-data-collection poster sessions (i.e., poster sessions where people present their plans for experiments before data collection in order to get feedback when it's most useful) at conferences other than VSS?

20.12.2025 00:55 — 👍 6    🔁 3    💬 2    📌 0

Broadly, this suggests that the FFDE is relatively location-specific. A fun and quite shocking illusion to look at!

14.11.2025 18:24 — 👍 0    🔁 0    💬 0    📌 0

Our results consistently showed that the illusion was significantly disrupted by location shifts of the faces. We also looked at how the illusion develops over time, something that hadn't been examined closely before.

14.11.2025 18:24 — 👍 0    🔁 0    💬 1    📌 0

FFDE is a fun illusion where peripherally presented faces start looking monstrous. Here we test whether this illusion transfers to new locations during the face streams. Importantly, we use joysticks so that people can continuously rate how weird the faces get, capturing the full dynamics.

14.11.2025 18:22 — 👍 0    🔁 0    💬 1    📌 0
Testing location invariance of the flashed face distortion effect - Yong Hoon Chung, Nicole C. Anaya Sosa, Viola S. Störmer, 2025. Spatially aligned faces presented in a continuous stream in the periphery appear distorted and grotesque. This flashed face distortion effect ("FFDE") was first...

New paper with Nicole Anaya Sosa and @violastoermer.bsky.social now out in Perception! "Testing location invariance of the flashed face distortion effect" journals.sagepub.com/eprint/WVUFW...

14.11.2025 16:48 — 👍 9    🔁 3    💬 1    📌 0

Tomorrow afternoon I'll be presenting my symposium talk at #ESCoP2025 titled "Meaningful and familiar stimuli support visual working memory for simple features"! See you there!

02.09.2025 20:53 — 👍 21    🔁 7    💬 0    📌 0

We are now recruiting STEM mentors for the 2025-2026 graduate school application cycle!

⏰ Mentor applications close on July 31st, 2025 ⏰

✨ APPLY HERE: dashboard.project-short.com ✨

Questions? Email us: contact@project-short.com

#gradschool #phd #phdapplication #gradadmissions

21.07.2025 15:43 — 👍 4    🔁 3    💬 0    📌 1

And here is a preprint link: osf.io/preprints/ps...

10.07.2025 00:57 — 👍 1    🔁 0    💬 0    📌 0

New paper with Lauren Williams, @timbrady.bsky.social, and @violastoermer.bsky.social now out in JEP: General! "Limits of verbal labels in cognition: Category labels do not improve visual working memory performance for obfuscated objects" psycnet.apa.org/record/2026-...

10.07.2025 00:53 — 👍 15    🔁 4    💬 1    📌 0

This afternoon I'll be presenting my poster at #vss2025 titled "Perceptual and conceptual contributions of the real-world object benefit in visual working memory: Is looking like an object good enough to enhance memory?" See you at the Pavilion!

18.05.2025 13:45 — 👍 3    🔁 1    💬 0    📌 0

Overall, our results add to the emerging evidence that meaningful and familiar stimuli can enhance visual working memory processes and also show that remembering meaningful stimuli and simple features share core active cognitive processes.

17.05.2025 20:37 — 👍 0    🔁 0    💬 0    📌 0

Additionally, the meaningfulness benefit showed up even in the first five trials of the task, showing how robust the effect is in visual working memory.

17.05.2025 20:35 — 👍 0    🔁 0    💬 1    📌 0

We also find that the size of the working memory benefit individuals gain from meaningful stimuli correlates with fluid intelligence scores, suggesting a link between the meaningfulness benefit and fluid intelligence abilities.

17.05.2025 20:33 — 👍 0    🔁 0    💬 1    📌 0

In this paper we show that working memory performance for both real-world objects and colored circles reliably correlates with individual differences in fluid intelligence but not with crystallized intelligence measures.

17.05.2025 20:33 — 👍 0    🔁 0    💬 1    📌 0