How does the brain generate predictive models of its own actions?
We will soon open a **Postdoc position** to address this question in my lab. If you are interested, please write to moritz.wurm@unitn.it.
More research is needed to show (1) what happens at the neural level with these representations, (2) which other factors can affect action spaces, and (3) which circumstances can lead to a change in representational structure, if any.
We think that being in a specific environment may modulate the action spaces of corresponding actions by increasing the robustness of represented actions, but only if agents are already familiar with the actions.
We observed higher inter-individual consistency when sorting kitchen actions while situated in virtual kitchens - but not the other way around. Similar effects have been observed when ornithologists, compared to novices, sort birds. However, we found no structural changes in action spaces.
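For readers curious what such a consistency measure can look like in practice, here is a minimal sketch (not our actual pipeline; the dissimilarity values are made up): inter-individual consistency as the mean pairwise Spearman correlation between participants' representational dissimilarity matrices (RDMs).

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def upper_tri(rdm):
    """Vectorize the upper triangle of a square RDM (excluding the diagonal)."""
    i, j = np.triu_indices_from(rdm, k=1)
    return rdm[i, j]

def inter_individual_consistency(rdms):
    """Mean pairwise Spearman correlation between participants' RDMs."""
    rhos = [spearmanr(upper_tri(a), upper_tri(b))[0]
            for a, b in combinations(rdms, 2)]
    return float(np.mean(rhos))

# Toy check: three participants with identical (hypothetical) RDMs
rdm = np.array([[0, 1, 2, 3],
                [1, 0, 4, 5],
                [2, 4, 0, 6],
                [3, 5, 6, 0]], float)
rdms = [rdm, rdm.copy(), rdm.copy()]
print(inter_individual_consistency(rdms))  # identical RDMs -> 1.0
```

Higher mean correlations indicate that participants agree more on how the actions relate to each other.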
Actions are thought to be represented in multi-dimensional action spaces. We asked participants to perform multi-arrangement tasks, sorting kitchen and workshop actions, while situated in virtual kitchens and workshops.
New preprint! My first time using #virtualreality: we investigate how scene context / being in a (virtual) environment affects action representations.
Together with @leonkroczek.bsky.social, Michael Roidl, and Angelika Lingnau @uniregensburg.bsky.social
Read here: doi.org/10.31234/osf...
We think our results support ideas suggesting that action recognition entails the integration of concrete contextual properties, and that the lateral visual pathway integrates information from a variety of sources to form an integrated representation of observed actions.
Next, we looked at spatiotemporal characteristics of these representations. fMRI-EEG fusion data suggest that striate and extrastriate areas along the lateral visual pathway encode lower-level visual and body-related properties, and that contextual and semantic information is integrated in the LOTC.
First, we examined the temporal order in which action-related features emerge using EEG-based representational similarity analysis. Results suggest a temporally ordered hierarchical buildup of neural representations related to visual, contextual, body-related, and semantic action information.
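To illustrate the general logic of this kind of regression-based RSA (a simplified sketch with hypothetical data shapes, not our exact analysis code): the vectorized neural RDM at each timepoint is regressed onto a set of z-scored model RDMs, and the resulting betas trace when each feature's representational structure emerges.

```python
import numpy as np

def regression_rsa(neural_rdms, model_rdms):
    """Multiple-regression RSA: regress the vectorized neural RDM at each
    timepoint onto a set of vectorized model RDMs; return per-model betas.
    neural_rdms: (n_times, n_pairs); model_rdms: (n_models, n_pairs)."""
    # Design matrix: z-scored model RDMs plus an intercept column
    Z = (model_rdms - model_rdms.mean(1, keepdims=True)) / model_rdms.std(1, keepdims=True)
    X = np.column_stack([np.ones(Z.shape[1]), Z.T])      # (n_pairs, n_models + 1)
    betas, *_ = np.linalg.lstsq(X, neural_rdms.T, rcond=None)
    return betas[1:].T                                   # (n_times, n_models)

# Toy example: neural RDM driven entirely by model 1 (made-up values)
model_rdms = np.array([[1, 2, 3, 4, 5, 6],
                       [2, 1, 4, 3, 6, 5]], float)
z1 = (model_rdms[0] - model_rdms[0].mean()) / model_rdms[0].std()
neural_rdms = np.tile(3.0 * z1, (2, 1))   # two identical timepoints
betas = regression_rsa(neural_rdms, model_rdms)
# each timepoint: beta for model 1 is ~3, beta for model 2 is ~0
```

In an actual analysis one would run this per timepoint across the whole epoch and test betas against zero across participants.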
Recognizing actions requires us to extract abstract information from highly individual and flexible features of concrete actions. Here we investigated the spatiotemporal dynamics of (some of) these features using EEG-based RSA and fMRI-EEG fusion.
Excited to share our new paper in Imaging Neuroscience!
doi.org/10.1162/IMAG...
#neuroscience
Large-scale similarity ratings of 768 short action videos uncover 28 interpretable dimensions—such as interaction, sport, and craft—offering a framework to quantify and compare human actions.
@martinhebart.bsky.social
www.nature.com/articles/s44...
“Revealing Key Dimensions Underlying the Recognition of Dynamic Human Actions”
New work led by Andre Bockes and Angelika Lingnau - with some small support from me - on dimensions underlying the mental representation of dynamic human actions.
www.nature.com/articles/s44...
5/5 These results provide important constraints for biologically plausible models of action recognition and evidence for a critical role of the third, lateral visual pathway in the recognition and representation of observed actions and associated action features.
4/n fMRI-EEG fusion analyses suggest that striate and extrastriate areas along the lateral visual pathway encode lower-level visual and body-related properties of actions, and that contextual and semantic information is subsequently integrated in the LOTC.
3/n Multiple-regression RSA suggests a temporally ordered hierarchical buildup of neural representations related to visual, contextual, body-related, and semantic information during the recognition of actions.
2/n We combined representational similarity analysis of new recordings of EEG data during an action recognition paradigm with recent fMRI recordings in an fMRI-EEG fusion approach. This allowed us to characterize the spatiotemporal dynamics of neural representations.
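The fusion logic in a nutshell (a toy sketch with made-up RDMs, not the actual analysis code): correlate the EEG RDM at each timepoint with an ROI's fMRI RDM; the resulting timecourse indicates when that region's representational structure emerges in the EEG signal.

```python
import numpy as np
from scipy.stats import spearmanr

def fusion_timecourse(eeg_rdms, fmri_rdm):
    """Correlate one ROI's (vectorized) fMRI RDM with the EEG RDM at each
    timepoint; peaks index when that ROI's representational geometry appears.
    eeg_rdms: (n_times, n_pairs); fmri_rdm: (n_pairs,)."""
    return np.array([spearmanr(t, fmri_rdm)[0] for t in eeg_rdms])

# Toy example (hypothetical values): one timepoint matches the ROI's RDM
# perfectly, the other is its exact reversal
fmri_rdm = np.array([1, 2, 3, 4, 5, 6], float)
eeg_rdms = np.array([fmri_rdm, fmri_rdm[::-1]])
tc = fusion_timecourse(eeg_rdms, fmri_rdm)  # -> [1.0, -1.0]
```

Repeating this for several ROIs yields one fusion timecourse per region, which is how spatial (fMRI) and temporal (EEG) information get combined.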
Delighted to share our new preprint where we provide a spatiotemporal characterization of the neural processes enabling us to recognize goal-directed actions.
🧵 1/n
With Angelika Lingnau @uniregensburg.bsky.social
Link: osf.io/preprints/os...
#neuroskyence
Come to Coimbra, Portugal, this September for SAW (Seeing and Acting Workshop). A wonderful meeting in a beautiful place. Submit your poster abstracts by July 31. Registration deadline is Aug 31. Check out the great lineup of speakers. Website here: www.uc.pt/cogbooster/s...
Please re-post.🧪🧠
Our "two-brain microstates" paper is out in Journal of Neuroscience Methods! 🎉
www.sciencedirect.com/science/arti...
With open access code: lab.compute.dtu.dk/glia/two-bra...
We present a new hyperscanning-EEG method to detect both synchronous and asymmetric inter-brain spatiotemporal dynamics.
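The core idea, in a deliberately simplified numpy sketch (not the paper's actual pipeline; data, initialization, and clustering are toy assumptions): stack both participants' channels at each timepoint and cluster the joint topographies, so that each microstate map spans the two brains at once.

```python
import numpy as np

def two_brain_microstates(eeg_a, eeg_b, n_states=2, n_iter=50):
    """Toy sketch: stack both participants' channels per timepoint and
    k-means-cluster the joint topographies into 'two-brain' microstate maps.
    eeg_a, eeg_b: (n_channels, n_times) hypothetical arrays."""
    joint = np.vstack([eeg_a, eeg_b]).T        # (n_times, 2 * n_channels)
    maps = joint[:n_states].copy()             # naive init: first timepoints
    for _ in range(n_iter):
        # assign each joint topography to its nearest map, then update maps
        labels = np.argmin(((joint[:, None] - maps[None]) ** 2).sum(-1), axis=1)
        maps = np.array([joint[labels == s].mean(0) for s in range(n_states)])
    return maps, labels

# Toy data: timepoints alternate between two joint topographies
eeg_a = np.array([[1, 0, 1, 0, 1, 0],
                  [0, 0, 0, 0, 0, 0],
                  [0, 1, 0, 1, 0, 1]], float)
eeg_b = np.array([[0, 1, 0, 1, 0, 1],
                  [1, 1, 1, 1, 1, 1],
                  [0, 1, 0, 1, 0, 1]], float)
maps, labels = two_brain_microstates(eeg_a, eeg_b)  # labels alternate 0,1,0,1,...
```

Because each map covers both participants' channels, shared and asymmetric inter-brain patterns show up directly in the cluster centers.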
A starter pack for people interested in inter-brain synchrony and simultaneous brain recording of all kinds!
Check if you're on it and, if not, let me know!
#hyperscanning #dualeeg
go.bsky.app/T1LTrQS
Could you add me as well please? Thanks!