
Sebastian Michelmann

@s-michelmann.bsky.social

Cognitive neuroscientist (Assistant Professor at NYU), human episodic memory, M/EEG, ECoG, and behavior. How do we reinstate temporally dynamic, information-rich memories?

766 Followers  |  397 Following  |  54 Posts  |  Joined: 08.09.2023

Latest posts by s-michelmann.bsky.social on Bluesky

Ai2 Asta

I don't know that it works perfectly, but I have to say that the Asta search tool from @ai2.bsky.social is exactly what I want from an AI-powered research search tool for scientists: Describe a style of experiment or work and see if there are papers that have done that.
asta.allen.ai/chat

15.09.2025 19:45 | 👍 56  🔁 10  💬 4  📌 1
The dependence of children's generalization on episodic memory varies with age and level of abstraction - Nature Communications. Children's ability to generalize from episodic memories varies by both age and the level of abstraction. Here, the authors show that lower level generalization increasingly depends on episodic memory with age, whereas higher level generalization shows no such relationship.

Thrilled to see this paper out! It's the culmination of a project begun in the depths of the pandemic with Sabrina Karjack and @zoengo.bsky.social. We continue our exploration of how children generalize when their episodic memory is not yet mature.
www.nature.com/articles/s41...

07.10.2025 13:47 | 👍 57  🔁 19  💬 1  📌 3

If you're interested in the cognitive neuroscience of memory, feel free to email me!

I do experimental psychology, brain imaging (fMRI and MEG), and a bit of modelling. The lab works on forgetting, aging, schemas, and event boundaries, but we're not limited to that.

#psychscisky #neuroskyence

06.10.2025 18:41 | 👍 44  🔁 26  💬 3  📌 0
codec lab

I'm recruiting grad students!! 🎓

The CoDec Lab @ NYU (codec-lab.github.io) is looking for PhD students (Fall 2026) interested in computational approaches to social cognition & problem solving 🧠

Applications through Psych (tinyurl.com/nyucp) are due Dec 1. Reach out with Qs & please repost! 🙏

06.10.2025 14:26 | 👍 44  🔁 38  💬 2  📌 2
Principles for proper peer review

doi.org/10.21428/8e6...

06.10.2025 23:20 | 👍 16  🔁 6  💬 0  📌 0

Job alert!🚨
Join us @uab.cat to investigate human memory representations with intracranial recordings, eye-tracking, immersive VR and deep learning. This is a fully funded, four-year PhD position at the Prediction and Memory Lab.
Feel free to reach out if you have any questions!

06.10.2025 11:05 | 👍 7  🔁 4  💬 1  📌 0

I'm recruiting PhD students to join my new lab in Fall 2026! The Shared Minds Lab at @usc.edu will combine deep learning and ecological human neuroscience to better understand how we communicate our thoughts from one brain to another.

01.10.2025 22:39 | 👍 106  🔁 65  💬 8  📌 3
Careers at Drexel - Human Resources

The MAC lab at Drexel is looking for a new post-doc to work on NIH-funded projects investigating the intersection of prior knowledge and long-term memory consolidation. Please pass along to any interested lab members! careers.drexel.edu/cw/en-us/job...

25.09.2025 15:58 | 👍 26  🔁 27  💬 1  📌 1

Congratulations, Josh!!! 🎉🎊🙌

24.09.2025 20:43 | 👍 0  🔁 0  💬 0  📌 0
8x8 grid depicting the approach to stimulus creation. Feature pairs are on the axes and images are in the cells. The x-axis represents the high-level feature pairs: setting (green) and object (teal). For example, the first column of images all depict “truck” (object) in “field” (setting) rendered in various textures and patterns. The y-axis represents low-level feature pairs: texture (blue) and pattern (purple). For example, the first row of images all depict different objects and settings rendered as if drawn with crayon (texture) and containing large horizontal edges (pattern).

Excited to release the SPOT grid: a new image set that factorially crosses scene-object & texture-pattern pairings.

We hope these stimuli will be useful to researchers aiming to (partially) disentangle the contributions of lower- and higher-level visual features to behavior & brain activity.

1/

22.09.2025 19:34 | 👍 64  🔁 17  💬 3  📌 1
Assistant Professor - Cognitive Sciences. University of California, Irvine is hiring. Apply now!

Come work with us! UC Irvine Cognitive Sciences is looking for a new Assistant Professor to join our team: recruit.ap.uci.edu/JPF09896

I'm not on the committee, but happy to talk if you're interested.

11.09.2025 18:35 | 👍 83  🔁 57  💬 1  📌 2

(end) Regarding (4), we already have a discussion point that speaks to this. You can constrain Dxy to a value other than 0, so if there is a known correlation r in your data, you can define your constraining space as wx'*Dxy*wy = r.

08.09.2025 18:43 | 👍 2  🔁 0  💬 0  📌 0
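Written out, the constrained problem sketched across this thread looks roughly as follows; this is a schematic assembled from the posts below, and the unit-norm normalization is an assumption, not taken from the paper:

```latex
% Schematic CRM-style objective as described in this thread (not the paper's exact formulation):
% maximize the projected cross-covariance of interest while pinning the projected
% confound cross-covariance at a chosen value r (r = 0 by default).
\begin{aligned}
\max_{w_x,\, w_y}\quad & w_x^{\top} C_{xy}\, w_y \\
\text{s.t.}\quad       & w_x^{\top} D_{xy}\, w_y = r, \qquad
                         \lVert w_x \rVert = \lVert w_y \rVert = 1 \quad \text{(normalization assumed)}
\end{aligned}
```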
Tensor Canonical Correlation Analysis for Multi-view Dimension Reduction. Canonical correlation analysis (CCA) has proven an effective tool for two-view dimension reduction due to its profound theoretical foundation and success in practical applications. In respect of multi...

(1) Hi @ar0mcintosh.bsky.social, I think a tensor version of CRM should be feasible; similar work has been done with CCA (arxiv.org/abs/1502.02330). @schottdorflab.bsky.social, what are your thoughts on this?

08.09.2025 18:43 | 👍 1  🔁 0  💬 3  📌 0

(7) I would say that partial correlation is a closer analogy to CRM than semi-partial correlation, because we find weights for both sides. Technically, CRM steers the solution away from variance shared with the confound rather than regressing it out.

06.09.2025 15:59 | 👍 3  🔁 0  💬 1  📌 0

(6) It is true that CRM would find a lower bound of the maximal correlation, because the unconstrained maximum includes the confound and should therefore be higher.

06.09.2025 15:59 | 👍 2  🔁 0  💬 1  📌 0

(5) There are many other situations where the strongest correlations are *not* of interest. E.g., ERPs have typical shapes; if you wanted to find a condition-specific waveform, you could estimate Cxy between ERPs from the same condition and compute Dxy from ERPs belonging to different conditions.

06.09.2025 15:59 | 👍 2  🔁 0  💬 1  📌 0

(4) Another straightforward choice for Dxy is to recompute the same cross-covariance matrix as in Cxy but on data bandpass filtered around the line-noise band (we use this in example 3 in the paper).

06.09.2025 15:59 | 👍 2  🔁 0  💬 1  📌 0
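A minimal Python sketch of the choice of Dxy described in the post above, with hypothetical variable names and scipy filtering (not the toolbox's own code): Cxy comes from the broadband signals, Dxy from the same signals band-passed narrowly around the line-noise frequency.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def cross_cov(a, b):
    """Cross-covariance between the columns of a and b (observations in rows)."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return a.T @ b / (len(a) - 1)

def line_noise_band(sig, fs, f0=60.0, half_width=2.0, order=4):
    """Band-pass the signal in a narrow band around the line-noise frequency f0."""
    b, a = butter(order, [f0 - half_width, f0 + half_width], btype="band", fs=fs)
    return filtfilt(b, a, sig, axis=0)

# Hypothetical stand-ins for recordings from two regions (samples x channels).
fs = 1000.0
rng = np.random.default_rng(0)
x = rng.standard_normal((5000, 8))
y = rng.standard_normal((5000, 8))

Cxy = cross_cov(x, y)                                            # cross-covariance of interest
Dxy = cross_cov(line_noise_band(x, fs), line_noise_band(y, fs))  # confound (line-noise band) cross-covariance
```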

(3) The key to using CRM is to find a useful Dxy. This can be different data (e.g., from a baseline period), or the same data pseudo-randomized so that only the association of interest is eliminated.

06.09.2025 15:59 | 👍 2  🔁 0  💬 1  📌 0
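One way to read the "pseudo-randomized data" option in the post above, as an illustration only with hypothetical trial-structured arrays (not necessarily the paper's procedure): break the trial pairing between the two views so the association of interest disappears, then compute Dxy from the shuffled pairing.

```python
import numpy as np

def cross_cov(a, b):
    """Cross-covariance between the columns of a and b (trials in rows)."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return a.T @ b / (len(a) - 1)

# Hypothetical data: rows of X and Y correspond trial-by-trial,
# and that trial correspondence is the association of interest.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30))
Y = rng.standard_normal((200, 30))

Cxy = cross_cov(X, Y)            # pairing intact: contains the association of interest
perm = rng.permutation(len(Y))
Dxy = cross_cov(X, Y[perm])      # pairing broken: trial-specific association eliminated
```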

(2) CRM is different because it adds the constraint that the projected cross-covariance wx'*Dxy*wy should remain zero. The motivating idea is that not all correlations are of interest, so if we can narrow down what the noise/confound signal looks like, we can meaningfully restrict our optimization.

06.09.2025 15:59 | 👍 2  🔁 0  💬 1  📌 0
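A toy numerical illustration of the constraint described in the post above (an SLSQP sketch with made-up matrices; the CRM toolbox at github.com/s-michelmann... may solve the problem differently): maximize wx'*Cxy*wy while holding wx'*Dxy*wy at zero, with unit-norm weights assumed.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up stand-ins for the cross-covariance of interest (Cxy) and the
# confound cross-covariance (Dxy); in practice these come from data.
rng = np.random.default_rng(0)
nx, ny = 6, 5
Cxy = rng.standard_normal((nx, ny))
Dxy = rng.standard_normal((nx, ny))

def split(w):
    return w[:nx], w[nx:]

def neg_objective(w):
    wx, wy = split(w)
    return -(wx @ Cxy @ wy)          # maximize projected cross-covariance of interest

constraints = [
    {"type": "eq", "fun": lambda w: split(w)[0] @ Dxy @ split(w)[1]},    # wx' Dxy wy = 0
    {"type": "eq", "fun": lambda w: np.linalg.norm(split(w)[0]) - 1.0},  # ||wx|| = 1 (assumed normalization)
    {"type": "eq", "fun": lambda w: np.linalg.norm(split(w)[1]) - 1.0},  # ||wy|| = 1 (assumed normalization)
]

w0 = rng.standard_normal(nx + ny)
res = minimize(neg_objective, w0, method="SLSQP", constraints=constraints)
wx, wy = split(res.x)
print("projected cross-covariance of interest:", wx @ Cxy @ wy)
print("projected confound cross-covariance (~0):", wx @ Dxy @ wy)
```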

(1) regarding the relationship to PLS: my understanding is that PLS and CCA can be viewed as very similar optimization problems and even as the same problem if the variance of the projected data is 1. Agoston Mihalik (Mihalik et al. 2022) has a great comparison of CCA vs. PLS.

06.09.2025 15:59 | 👍 3  🔁 0  💬 1  📌 0
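For reference, the textbook objectives being compared in the post above (standard forms, not quoted from Mihalik et al. 2022): PLS maximizes the covariance of the projections, CCA their correlation; fixing the projected variances at 1 makes the two coincide.

```latex
% PLS: maximize the covariance of the projections (unit-norm weights)
\max_{w_x, w_y}\; w_x^{\top} C_{xy} w_y
  \quad \text{s.t. } \lVert w_x \rVert = \lVert w_y \rVert = 1
% CCA: maximize the correlation of the projections
\max_{w_x, w_y}\;
  \frac{w_x^{\top} C_{xy} w_y}{\sqrt{\, w_x^{\top} C_{xx} w_x \;\cdot\; w_y^{\top} C_{yy} w_y \,}}
% If w_x' C_xx w_x = w_y' C_yy w_y = 1, the CCA objective reduces to the PLS numerator,
% i.e., the two become the same optimization problem.
```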

Hi @hritz.bsky.social, @ar0mcintosh.bsky.social, @pascualmarqui.bsky.social, and @martinhebart.bsky.social,
Thank you all for your interest and for these great comments! I will try to answer your questions below:

06.09.2025 15:59 | 👍 3  🔁 0  💬 1  📌 0

Our latest project to find shared representations while controlling for confounds is out: www.biorxiv.org/content/10.1... Check @s-michelmann.bsky.social's thread for the executive summary. Code in Python and MATLAB: github.com/s-michelmann... Now it's play time 👨‍💻

05.09.2025 17:37 | 👍 8  🔁 2  💬 0  📌 0

Canonical Representational Mapping for Cognitive Neuroscience https://www.biorxiv.org/content/10.1101/2025.09.01.673485v1

05.09.2025 15:15 | 👍 11  🔁 4  💬 0  📌 0

www.biorxiv.org/content/10.1...

05.09.2025 16:32 | 👍 1  🔁 0  💬 2  📌 0
GitHub - s-michelmann/crm

We believe that CRM can be broadly useful for everyone studying representations in cognitive neuroscience. Our code is openly available as a toolbox (github.com/s-michelmann...); it includes MATLAB and Python versions with examples and simulations.

05.09.2025 16:18 | 👍 3  🔁 0  💬 1  📌 0
(e) HPC CRM component plotted on a 2-D grid as a function of position in the maze. Warmer colors appear away from the home box of the animal (indicated by a black arrow); the home box is located at X position = 600 cm and Y position = 250 cm. (f) Same as e, but components computed with CCA. No relationship with the maze position is apparent, consistent with CCA components being dominated by 60 Hz noise.

3.3 - Only the CRM component displays a significant association with behavior: The shared spectral pattern is increased when the animal is close to a decision point in the maze and is lowest in the home box

05.09.2025 16:18 | 👍 0  🔁 0  💬 1  📌 0
Shown are spectrograms of LFPs recorded in the HPC (left, a) and mPFC (middle, b) from awake and behaving rats solving a spatial navigation task (compare Rosenblum 2025). The weights computed with CRM for the data shown in a/b are plotted in (blue/orange) as a function of frequency. Insets show the same weights computed with CCA. CCA weights peak at a shared noise frequency around 60 Hz; CRM weights have negative values in the lower theta band and positive values in the beta band. CRM weights have no peak around 60 Hz.

3.2 - With CRM, we maximize the correlation between power spectra while constraining correlations in the 60 Hz band to zero. This reveals shared frequency-coupled representations between the regions that could not be captured with CCA (which latches on to shared noise in the recording).

05.09.2025 16:18 | 👍 1  🔁 0  💬 1  📌 0

3.1 - Recordings from rodent mPFC and HPC may capture similar representations in their spectral activity patterns; however, shared line noise can be strongly correlated between the regions (data from Rosenblum et al. 2025)

05.09.2025 16:18 | 👍 1  🔁 0  💬 1  📌 0
Representational Similarity Matrix between all trials of watching videos in Reagh et al. (2023). Correlations across components (indicated by warm colors) are high between trials of the same context (e.g., Café 1 or supermarket 2), but not between trials of different contexts within the same schema (e.g., Café 1 and Café 2).

Representational Similarity Matrix between all trials of watching videos in Reagh et al. (2023). Correlations across components (indicated by warm colors) are high between trials of the same situation (e.g., videos of Tommy in Café 1), but not between trials of different content within the same context (e.g., videos of Tommy in Café 1 and Lisa in Café 1).

2.2 - CRM can maximize correlations between runs of the same context while keeping correlations between different contexts in the schema at zero (b). We can also maximize correlations between runs of the same situation while constraining within-context correlations (c). This effectively factorizes the neural representations.

05.09.2025 16:18 | 👍 1  🔁 0  💬 1  📌 0
Representational Similarity Matrix between all trials of watching videos in Reagh et al. (2023). Correlations (indicated by warm colors) are high between trials of the same schema (Café or supermarket).

2.1 - BOLD patterns in mPFC represent schematic information (see Reagh et al. 2023): During movie-viewing, runs from the same schema (café/grocery videos) are highly correlated (panel a). Is specific information (e.g., of each café) also represented?

05.09.2025 16:18 | 👍 1  🔁 0  💬 1  📌 0
