Christoph Strauch

@cstrauch.bsky.social

Assistant professor @utrechtuniversity.bsky.social studying spatial attention, eye-movements, pupillometry, and more. Co-PI @attentionlab.bsky.social

439 Followers  |  422 Following  |  71 Posts  |  Joined: 07.11.2023

Latest posts by cstrauch.bsky.social on Bluesky

I think there is a lot one doesn't think of intuitively. Lossy compression of audio files is directly built on psychophysics, for instance (no (hardcore experimental) psychology, no Spotify!). Or take all the work foundational to artificial neural networks that comes from cognitive psychology & modeling.

01.08.2025 20:12 - 👍 0    🔁 0    💬 0    📌 0

Together with @ajhoogerbrugge.bsky.social, Roy Hessels and Ignace Hooge - thanks all!

29.07.2025 07:39 - 👍 3    🔁 0    💬 0    📌 0
Data saturation for gaze heatmaps: initially, any additional participant brings the heatmap a lot closer to the full-sample heatmap (using NSS or AUC as measures of heatmap similarity). However, the returns diminish increasingly at higher n.

Gaze heatmaps are popular, especially among eye-tracking beginners and in many applied domains. How many participants should be tested?
It depends, of course, but our guidelines help navigate this in an informed way.

Out now in BRM (free) doi.org/10.3758/s134...
@psychonomicsociety.bsky.social

29.07.2025 07:37 - 👍 9    🔁 2    💬 1    📌 0
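The saturation idea above can be sketched in a few lines. This is a minimal illustration (not the paper's actual code): build gaze heatmaps from subsamples of increasing n, and score each against the full-sample fixations with NSS (normalized scanpath saliency). All data below are synthetic, and the `heatmap`/`nss` helpers and cluster parameters are my own assumptions.

```python
# Minimal sketch of gaze-heatmap data saturation via NSS.
import numpy as np

def heatmap(fixations, shape=(60, 80), sigma=2.0):
    """Sum of Gaussian bumps at each (row, col) fixation."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    m = np.zeros(shape)
    for r, c in fixations:
        m += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return m

def nss(saliency, fixations):
    """Mean z-scored saliency value at fixated locations."""
    z = (saliency - saliency.mean()) / saliency.std()
    return float(np.mean([z[r, c] for r, c in fixations]))

rng = np.random.default_rng(0)
# Synthetic "participants": 40 observers x 20 fixations, clustered
# around the image center to mimic a central viewing bias.
all_fix = [[(int(np.clip(rng.normal(30, 8), 0, 59)),
             int(np.clip(rng.normal(40, 10), 0, 79)))
            for _ in range(20)] for _ in range(40)]
full_fix = [f for p in all_fix for f in p]

for n in (2, 5, 10, 20, 40):
    sub = [f for p in all_fix[:n] for f in p]
    print(n, round(nss(heatmap(sub), full_fix), 2))
# NSS typically rises steeply at small n and then flattens out:
# each extra participant yields diminishing returns.
```

The same loop works with AUC in place of NSS; the diminishing-returns shape of the curve is what the guidelines exploit.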

good god

23.07.2025 10:56 - 👍 1    🔁 0    💬 1    📌 0
Vacancies at the RUG

Jelmer Borst and I are looking for a PhD candidate to build an EEG-based model of human working memory! This is a really cool project that I've wanted to kick off for a while, and I can't wait to see it happen. Please share and I'm happy to answer any Qs about the project!
www.rug.nl/about-ug/wor...

03.07.2025 13:29 - 👍 12    🔁 21    💬 0    📌 0
PhD position - Rademaker lab

Curious about the human visual brain, a vibrant and collaborative lab, and pursuing a PhD in the heart of Europe? My lab is recruiting for a 3-year PhD position. More details: www.rademakerlab.com/job-add

01.07.2025 06:43 - 👍 46    🔁 46    💬 1    📌 4

so nice, they are lucky to have you over there!

19.06.2025 10:57 - 👍 2    🔁 0    💬 1    📌 0

We had a splendid day: great weather, got to wear peculiar/special clothes, and then Alex even defended his PhD (and nailed it!).

Congratulations dr. Alex, super proud of your achievements!!!

18.06.2025 14:50 - 👍 4    🔁 0    💬 0    📌 0

Now published in Attention, Perception & Psychophysics @psychonomicsociety.bsky.social

Open Access link: doi.org/10.3758/s134...

12.06.2025 07:21 - 👍 14    🔁 8    💬 0    📌 0

@vssmtg.bsky.social
presentations today!

R2, 15:00
@chrispaffen.bsky.social:
Functional processing asymmetries between nasal and temporal hemifields during interocular conflict

R1, 17:15
@dkoevoet.bsky.social:
Sharper Spatially-Tuned Neural Activity in Preparatory Overt than in Covert Attention

18.05.2025 09:41 - 👍 6    🔁 4    💬 1    📌 0

Cool new preprint by Damian. Among other findings: pupillometry, EEG, and IEMs show that the premotor theory of attention can't be the full story: eye movements are associated with an additional, separable, spatially tuned process compared to covert attention, hundreds of milliseconds before shifts happen.

13.05.2025 08:17 - 👍 4    🔁 1    💬 1    📌 0
A move you can afford: Where a person will look next can be predicted based on how much it costs the brain to move the eyes in that direction.

04.05.2025 08:21 - 👍 15    🔁 1    💬 0    📌 0

Thanks!

26.04.2025 06:19 - 👍 0    🔁 0    💬 0    📌 0

Let me know if you're still unconvinced and, if so, why. I'm also happy to present it in more detail at a lab meeting or online.
Cheers!

25.04.2025 07:46 - 👍 0    🔁 0    💬 0    📌 0

Altogether, it's certainly correct that pupillometry requires care, as it reflects just two output systems, and in many (but not all) ways just one system with multiple inputs. But they are well understood (shameless plug for my TiNS papers here):
doi.org/10.1016/j.ti...
doi.org/10.1016/j.ti...

25.04.2025 07:34 - 👍 1    🔁 0    💬 1    📌 0
The Costs of Paying Overt and Covert Attention Assessed With Pupillometry - Damian Koevoet, Christoph Strauch, Marnix Naber, Stefan Van der Stigchel (2023). Attention can be shifted with or without an accompanying saccade (i.e., overtly or covertly, respectively). Thus far, it is unknown how cognitively costly these...

With pupil size you can also measure the costs of shifting covert attention, see our paper from 2023 (doi.org/10.1177/0956...).

25.04.2025 07:30 - 👍 1    🔁 0    💬 1    📌 0

Lastly, are there other physiological measures that point to similar effects? Yes. Saccade latencies show similar effects, providing convergent evidence for our bottom line of effort driving saccade selection. Latencies are just not as clean, as they are not separable from the movement itself.

25.04.2025 07:28 - 👍 0    🔁 0    💬 1    📌 0

There are very systematic investigations into pupil size across fixation locations for methodological reasons (the foreshortening of the pupil image relative to the eye tracker). Fixation location itself does not show up/down differences in pupil size.

25.04.2025 07:00 - 👍 1    🔁 0    💬 1    📌 0

The difference in pupil size when preparing upward vs. downward directed saccades (~0.1 z) is absolutely meaningless relative to the effect of a PLR (~3-4 z is not uncommon, depending on the luminance change). If there were a hard-coded map built in, it wouldn't serve any meaningful protective role.

25.04.2025 06:55 - 👍 0    🔁 0    💬 1    📌 0

Both the up/down and the cardinal/oblique differences are highly predictive of participants' free choices in the separate saccade-selection task (no delay here, no pupil measurement here). But your point would then be that cardinal/oblique indeed reflects effort minimization, while up/down doesn't?

25.04.2025 06:55 - 👍 1    🔁 0    💬 1    📌 0

It's important to realize that our pupil measures are taken when the eye is perfectly still in the center. The only difference is that participants know which direction they will have to make a saccade to later.
Are you on board with larger pupil size = more effort here when preparing diagonals?

25.04.2025 06:48 - 👍 2    🔁 0    💬 1    📌 0

The PLR has two primary functions: one is to protect the retina from excess luminance; the other is to optimize contrast perception. A hard-wired up/down map would jeopardize the latter in many cases. Given how important contrast perception is for vision, I highly doubt that would be smart.

25.04.2025 06:46 - 👍 1    🔁 0    💬 1    📌 0

Together, a luminance effect is quite unlikely in our controlled paradigm - but let me know your thoughts, fun to discuss/think about it!

24.04.2025 12:55 - 👍 0    🔁 0    💬 0    📌 0

Usually, presaccadic PLR effects get stronger the closer one gets to the eye movement. The up/down effect we observed actually got weaker closer to the eye movement (Figure 1, left, in the paper). This contrasts with the cardinal/oblique effects, which are much more likely due to motor coordination.

24.04.2025 12:54 - 👍 0    🔁 0    💬 1    📌 0

In natural viewing, to the best of our knowledge, presaccadic pupil changes depend only on the actual brightness at your saccade target, not on any built-in "up = bright" map. So it's highly unlikely your pupils would start to constrict before upward glances because you expect upper regions to be bright.

24.04.2025 12:51 - 👍 0    🔁 0    💬 1    📌 0

There are a couple of other effects that the effort account nicely predicted, including that people made fewer saccades when we induced an auditory dual task (as the primary task), and that they cut especially the directions that we found to be costly using our pupil measure.
Hope that addresses your point.

24.04.2025 12:47 - 👍 0    🔁 0    💬 1    📌 0

What is reflected in the pupil is presaccadic or covert attention to differently bright regions of the visual field (which 'prepares' the PLR). That's why it's of utmost importance to keep luminance similar across directions, which we did in the study.

24.04.2025 12:43 - 👍 1    🔁 0    💬 1    📌 0

There are a bunch of studies that tried Pavlovian conditioning of the pupil light response, but they consistently show that this is impossible, making this explanation quite unlikely in my opinion.

24.04.2025 12:42 - 👍 0    🔁 0    💬 2    📌 0

Now it could be, in principle, that luminance anisotropies in the environment let us build up a statistical model that influences the pupil because we know which directions are usually bright and which are not (note that this ignores body position, head position, rotation, etc.).

24.04.2025 12:42 - 👍 1    🔁 0    💬 1    📌 0

And we also found that participants selected eye movements in free choice exactly according to this, preferring upward-directed saccades (smaller pupil size during planning, less effort) over downward-directed ones (larger pupil size during planning, more effort).

24.04.2025 12:36 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
