tal boger

@talboger.bsky.social

third-year phd student at jhu psych | perception + cognition https://talboger.github.io/

285 Followers  |  45 Following  |  53 Posts  |  Joined: 23.11.2024

Latest posts by talboger.bsky.social on Bluesky

(from lapidow & @ebonawitz.bsky.social's awesome 2023 explore-exploit paper)

14.10.2025 21:45 · 👍 2    🔁 0    💬 0    📌 0
methods from lapidow & bonawitz, 2023. children are "dropped"

a falling child

can't believe the IRB approved this part – hope the children are ok!

14.10.2025 21:44 · 👍 65    🔁 7    💬 2    📌 2

What a lovely 'spotlight' of @talboger.bsky.social's work on style perception! Written by @aennebrielmann.bsky.social in @cp-trendscognsci.bsky.social.

See Aenne's paper below, as well as Tal's original work here: www.nature.com/articles/s41...

08.10.2025 17:27 · 👍 28    🔁 4    💬 0    📌 0

When a butterfly becomes a bear, perception takes center stage.

Research from @talboger.bsky.social, @chazfirestone.bsky.social and the Perception & Mind Lab.

06.10.2025 20:02 · 👍 34    🔁 8    💬 2    📌 2

Out today!

www.cell.com/current-biol...

06.10.2025 14:56 · 👍 39    🔁 11    💬 1    📌 1

important question for dev people: when reporting demographics for a paper involving both kids and adults, we want some consistency in how we report that information. so do you call the kids "men" and "women", or do you call the adults "boys" and "girls"?

01.10.2025 15:33 · 👍 4    🔁 0    💬 1    📌 0

sami is such a creative, thoughtful, and fun mentor. anyone who gets to work with him is so lucky!

15.09.2025 18:23 · 👍 2    🔁 0    💬 0    📌 1
Preview: Can we "see" value? Spatiotopic "visual" adaptation to an imperceptible dimension – In much recent philosophy of mind and cognitive science, repulsive adaptation effects are considered a litmus test, a crucial marker that distinguis…

Visual adaptation is viewed as a test of whether a feature is represented by the visual system.

In a new paper, Sam Clarke and I push the limits of this test. We show spatially selective, putatively "visual" adaptation to a clearly non-visual dimension: Value!

www.sciencedirect.com/science/arti...

28.08.2025 20:18 · 👍 40    🔁 15    💬 2    📌 1

It's true: This is the first project from our lab that has a "Merch" page!

Get yours @ www.perceptionresearch.org/anagrams/mer...

19.08.2025 19:28 · 👍 33    🔁 4    💬 3    📌 1

The present work thus serves as a 'case study' of sorts. It yields concrete discoveries about real-world size, and it also validates a broadly applicable tool for psychology and neuroscience. We hope it catches on!

19.08.2025 16:39 · 👍 7    🔁 0    💬 1    📌 0

Though we manipulated real-world size, you could generate anagrams of happy faces and sad faces, tools and non-tools, or animate and inanimate objects, overcoming low-level confounds associated with such stimuli. Our approach is perfectly general.

19.08.2025 16:39 · 👍 5    🔁 0    💬 1    📌 0

Overall, our work confronts the longstanding challenge of disentangling high-level properties from their lower-level covariates. We found that, once you do so, most (but not all) of the relevant effects remain.

19.08.2025 16:39 · 👍 11    🔁 0    💬 1    📌 0

(Never fear, though: As we say in our paper, that last result is consistent with the original work, which suggested that mid-level features – the sort preserved in 'texform' stimuli – may well explain these search advantages.)

19.08.2025 16:39 · 👍 8    🔁 0    💬 1    📌 0
whereas previous work shows efficient visual search for real-world size, we did not find a similar effect with anagrams. our study included a successful replication of these previous findings with ordinary objects (i.e., non-anagram images).

Finally, visual search. Previous work shows targets are easier to find when they differ from distractors in their real-world size. However, in our experiments with anagrams, this was not the case (even though we easily replicated this effect with ordinary, non-anagram images).

19.08.2025 16:38 · 👍 11    🔁 0    💬 1    📌 0
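For readers outside the field: search efficiency is usually quantified as the slope of response time against display set size. Below is a minimal sketch of that standard analysis, written from scratch here rather than taken from the paper.

```python
import numpy as np

# Generic sketch of the standard visual-search analysis (not code from the
# paper): regress response time on display set size. A slope near 0 ms/item
# indicates efficient, "pop-out" search; steep slopes indicate inefficient
# search.
def search_slope(set_sizes, rts_ms):
    slope, _intercept = np.polyfit(set_sizes, rts_ms, 1)
    return slope  # ms per additional item in the display
```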
people prefer to view real-world large objects at larger displayed sizes than real-world small objects, even with visual anagrams.

Next, aesthetic preferences. People think real-world large objects look better when displayed large, and vice versa for small objects. Our experiments show that this is true with anagrams too!

19.08.2025 16:37 · 👍 14    🔁 2    💬 1    📌 0
results from the real-world size Stroop effect with anagrams. performance is better when displayed size is congruent with real-world size.

First, the "real-world size Stroop effect". If you have to say which of two images is larger (on the screen, not in real life), it's easier if displayed size is congruent with real-world size. We found this to be true even when the images were perfect anagrams of one another!

19.08.2025 16:36 · 👍 16    🔁 0    💬 1    📌 0
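If the congruency logic is unclear, here is an illustrative sketch of how such an effect is typically scored; variable names are hypothetical and this is not the authors' analysis code.

```python
import numpy as np

# Illustrative scoring of a real-world size Stroop effect (hypothetical
# variable names; not the authors' code). A trial is "congruent" when the
# image that is larger on screen also depicts the real-world larger object.
def stroop_effect_ms(rts_ms, is_congruent):
    rts_ms = np.asarray(rts_ms, dtype=float)
    is_congruent = np.asarray(is_congruent, dtype=bool)
    # Positive value = congruent trials were answered faster on average.
    return rts_ms[~is_congruent].mean() - rts_ms[is_congruent].mean()
```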

Then, we placed these images in classic experiments on real-world size to see whether the observed effects arise even under such highly controlled conditions.

(Spoiler: Most of these effects *did* arise with anagrams, confirming that real-world size per se drives many of these effects!)

19.08.2025 16:35 · 👍 13    🔁 0    💬 1    📌 0
anagrams we generated, where rotating the object changes its real-world size.

We generated images using this technique (see examples). The two images in each pair differ in real-world size but are otherwise identical* in lower-level features, because they're the same image down to the last pixel.

(*avg orientation, aspect ratio, etc. may still vary. ask me about this!)

19.08.2025 16:35 · 👍 30    🔁 2    💬 4    📌 1
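The "same image down to the last pixel" claim is easy to check directly. A sketch of such a check, with hypothetical file names and assuming the imageio package:

```python
import numpy as np
from imageio.v3 import imread  # assumes the imageio package is installed

# Hypothetical file names: the two views of one anagram should contain the
# same pixels, one view being the other rotated by a quarter turn
# (use k=-1 if the pair was rotated the other way).
small_view = imread("anagram_small_object.png")  # hypothetical path
large_view = imread("anagram_large_object.png")  # hypothetical path
assert np.array_equal(np.rot90(small_view, k=1), large_view)
```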
depiction of the "visual anagrams" model by Geng et al.

This challenge may seem insurmountable. But maybe it isn't! To overcome it, we used a new technique from Geng et al. called "visual anagrams", which allows you to generate images whose interpretations vary as a function of orientation.

19.08.2025 16:34 · 👍 23    🔁 0    💬 1    📌 1
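For the mechanically curious: Geng et al.'s method denoises a single image under several prompts at once, one per view. A rough sketch of the core step, with a stand-in noise predictor where the real method uses a pixel-space diffusion model (DeepFloyd IF):

```python
import numpy as np

# Rough sketch of the parallel-denoising step behind "visual anagrams"
# (after Geng et al.): at each diffusion step, predict noise for every
# transformed view under its own prompt, map each estimate back to a
# common frame, and average, so the final image satisfies all views.
def rot(x, k):
    return np.rot90(x, k, axes=(0, 1))

views         = [lambda x: x, lambda x: rot(x, 1)]   # identity, 90° rotation
inverse_views = [lambda x: x, lambda x: rot(x, -1)]
prompts       = ["a rabbit", "an elephant"]          # one prompt per view

def predict_noise(x_t, prompt):
    # Stand-in for a text-conditioned denoiser eps_theta(x_t, c);
    # the real method calls a pretrained diffusion model here.
    return np.zeros_like(x_t)

def anagram_noise_estimate(x_t):
    eps = [inv(predict_noise(view(x_t), p))
           for view, inv, p in zip(views, inverse_views, prompts)]
    return np.mean(eps, axis=0)  # combined estimate drives the update
```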
the mind encodes differences in real-world size. but differences in size also carry differences in shape, spatial frequency, and contrast.

Take real-world size. Tons of cool work shows that it’s encoded automatically, drives aesthetic judgments, and organizes neural responses. But there’s an interpretive challenge: Real-world size covaries with other features that may cause these effects independently.

19.08.2025 16:33 · 👍 17    🔁 0    💬 2    📌 1

The problem: We often study "high-level" image features (animacy, emotion, real-world size) and find cool effects. But high-level properties covary with lower-level features, like shape or spatial frequency. So what seem like high-level effects may have low-level explanations.

19.08.2025 16:33 · 👍 18    🔁 0    💬 2    📌 1

On the left is a rabbit. On the right is an elephant. But guess what: They're the *same image*, rotated 90°!

In @currentbiology.bsky.social, @chazfirestone.bsky.social & I show how these images, known as "visual anagrams", can help solve a longstanding problem in cognitive science. bit.ly/45BVnCZ

19.08.2025 16:32 · 👍 348    🔁 106    💬 19    📌 30

Out today! www.nature.com/articles/s41...

05.08.2025 21:58 · 👍 60    🔁 18    💬 4    📌 2

Lab-mate got my ass on the lab when2meet

24.07.2025 15:56 · 👍 200    🔁 10    💬 5    📌 2

Amazing new work from @gabrielwaterhouse.bsky.social and @samiyousif.bsky.social! I'm convinced the crowd size illusion is real, but the rooms full of people watching Gabe give awesome talks at @socphilpsych.bsky.social and @vssmtg.bsky.social were no illusion!

26.06.2025 16:37 · 👍 6    🔁 0    💬 0    📌 0

@sallyberson.bsky.social in action at @socphilpsych.bsky.social! #SPP2025

21.06.2025 00:10 · 👍 15    🔁 2    💬 0    📌 0

Susan Carey sitting in the front row of a grad student talk (by @talboger.bsky.social) and going back and forth during Q&A is what makes the @socphilpsych.bsky.social so special! Loved this interaction 🤗

19.06.2025 19:39 · 👍 35    🔁 1    💬 1    📌 1

Now officially out! psycnet.apa.org/record/2026-...

(Free version here: talboger.github.io/files/Boger_...)

03.06.2025 13:50 · 👍 8    🔁 3    💬 1    📌 0
A visual advertisement of VSS projects from the Firestone Lab

It's @vssmtg.bsky.social! So excited to share this year's projects from the lab, including brand new research directions and some deep dives on foundational issues.

More info @ perception.jhu.edu/vss/.

See you on the 🏖!

#VSS2025

17.05.2025 04:49 · 👍 44    🔁 5    💬 1    📌 6

Together, these results demonstrate multiple new phenomena of stylistic perception, and more generally introduce a scientific approach to the study of style. Stay tuned for more projects on this theme, including developmental work analyzing stylistic representation in kids!

14.05.2025 16:45 · 👍 5    🔁 0    💬 1    📌 0
