
ERCbravenewword

@ercbravenewword.bsky.social

Exploring how new words convey novel meanings in ERC Consolidator project #BraveNewWord 🧠 Unveiling language and cognition insights 🔍 Join our research journey! https://bravenewword.unimib.it/

198 Followers  |  318 Following  |  45 Posts  |  Joined: 25.01.2025

Latest posts by ercbravenewword.bsky.social on Bluesky

Making sense from the parts: What Chinese compounds tell us about reading | Cheng-Yu Hsieh | Milan
YouTube video by Mbs Vector Space Lab

For those who couldn't attend, the recording of Hsieh Cheng-Yu's seminar is now available on our YouTube channel.

Watch the full presentation here: youtu.be/v7DHox_6duE

03.11.2025 08:29 — 👍 3    🔁 1    💬 0    📌 1
Preview
Compositionality in the semantic network: a model-driven representational similarity analysis Abstract. Semantic composition allows us to construct complex meanings (e.g., "dog house", "house dog") from simpler constituents ("dog", "house"). Neuroim

How does the brain handle semantic composition?

Our new Cerebral Cortex paper shows that the left inferior frontal gyrus (BA45) does so automatically, even when composition is task-irrelevant. We combined fMRI with computational models.

Congrats Marco Ciapparelli, Marco Marelli & team!

doi.org/10.1093/cerc...

31.10.2025 06:18 — 👍 9    🔁 2    💬 0    📌 0
Post image

🚨 New publication: How to improve conceptual clarity in psychological science?

Thrilled to see this article with @ruimata.bsky.social out. We discuss how LLMs can be leveraged to map, clarify, and generate psychological measures and constructs.

Open access article: doi.org/10.1177/0963...

23.10.2025 07:27 — 👍 43    🔁 18    💬 0    📌 2
Preview
Italian blasphemy and German ingenuity: how swear words differ around the world Once swearwords were dismissed as a sign of low intelligence, now researchers argue the 'power' of taboo words has been overlooked

A fascinating read in @theguardian.com on the psycholinguistics of swearing!

Did you know that Germans averaged 53 taboo words, while Brits and Spaniards listed only 16?
Great to see the work of our colleague Simone Sulpizio & Jon Andoni Duñabeitia highlighted! 👏

www.theguardian.com/science/2025...

23.10.2025 12:27 — 👍 4    🔁 2    💬 0    📌 0
Post image

Join us for our next seminar! We're excited to host Hsieh Cheng-Yu (University of London)

He'll discuss "Making sense from the parts: What Chinese compounds tell us about reading," exploring how we process ambiguity & meaning consistency

๐Ÿ—“๏ธ 27th Oct โฐ 2PM (CET)๐Ÿ“UniMiB ๐Ÿ’ป meet.google.com/zvk-owhv-tfw

19.10.2025 07:35 — 👍 7    🔁 2    💬 1    📌 1
Post image

I'm sharing a Colab notebook on using large language models for cognitive science! GitHub repo: github.com/MarcoCiappar...

It's geared toward psychologists & linguists and covers extracting embeddings, predictability measures, and comparing models across languages & modalities (vision). See examples 🧵
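For a flavor of what the notebook covers, here is a minimal sketch of extracting contextualized word embeddings with the Hugging Face transformers library; the model name and example sentence are illustrative, not necessarily what the notebook itself uses:

```python
# Minimal sketch: contextualized word embeddings from a pretrained model.
# Model choice and sentence are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The house dog slept in the dog house."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Last-layer hidden states: one vector per (sub)word token.
hidden_states = outputs.last_hidden_state.squeeze(0)  # shape: (seq_len, 768)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Embedding of the first occurrence of "dog" in this context.
dog_vec = hidden_states[tokens.index("dog")]
print(dog_vec.shape)  # torch.Size([768])
```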

18.07.2025 13:39 — 👍 11    🔁 4    💬 1    📌 0
Sigmoid function. Non-linearities in neural networks allow them to behave in distributed and near-symbolic fashions.

New paper! 🚨 I argue that LLMs represent a synthesis between distributed and symbolic approaches to language, because, when exposed to language, they develop highly symbolic representations and processing mechanisms in addition to distributed ones.
arxiv.org/abs/2502.11856
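A toy numeric illustration of the point in the figure (mine, not from the paper): pushing a sigmoid unit into its saturated regions turns graded, distributed-looking activations into near-binary, symbol-like ones:

```python
# Toy illustration (not from the paper): a saturating sigmoid driven into
# its flat regions yields near-binary, i.e. near-symbolic, activations.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.5, 2.0])
for gain in (1.0, 5.0, 25.0):
    print(gain, np.round(sigmoid(gain * x), 3))
# gain = 1  -> graded, distributed-looking values
# gain = 25 -> values hugging 0 and 1, nearly a discrete code
```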

30.09.2025 13:15 — 👍 27    🔁 11    💬 1    📌 0
Preview
Compositionality in the semantic network: a model-driven representational similarity analysis Abstract. Semantic composition allows us to construct complex meanings (e.g., "dog house", "house dog") from simpler constituents ("dog", "house"). Neuroim

Important fMRI/RSA study by @marcociapparelli.bsky.social et al. Compositional (multiplicative) representations of compounds/phrases in left IFG (BA45), mSTS, ATL; left AG encodes the constituents but not their composition, weighting the right element more heavily, whereas the IFG shows the reverse 🧠🧩
academic.oup.com/cercor/artic...
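For readers unfamiliar with model-driven RSA, here is a schematic sketch of the logic with made-up data (the study's actual stimuli, composition models, and statistics differ): build a model RDM from multiplicatively composed word vectors and correlate it with a neural RDM.

```python
# Schematic model-driven RSA with hypothetical data (not the paper's pipeline).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, dim, n_voxels = 20, 50, 100

# Hypothetical constituent embeddings for each two-word compound.
first = rng.normal(size=(n_items, dim))
second = rng.normal(size=(n_items, dim))

# Multiplicative composition model: elementwise product of constituents.
composed = first * second

# Hypothetical voxel patterns, one per item.
neural = rng.normal(size=(n_items, n_voxels))

# Representational dissimilarity matrices (condensed upper triangles).
model_rdm = pdist(composed, metric="correlation")
neural_rdm = pdist(neural, metric="correlation")

rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RSA: rho={rho:.3f}, p={p:.3f}")
```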

26.09.2025 09:29 — 👍 9    🔁 4    💬 0    📌 0
Post image Post image Post image Post image

Great week at #ESLP2025 in Aix-en-Provence! Huge congrats to our colleagues for their excellent talks on computational models, sound symbolism, and multimodal cognition. Proud of the team and the stimulating discussions!

25.09.2025 10:28 — 👍 5    🔁 2    💬 0    📌 0
Post image

📣 The chapter "Specificity: Metric and Norms"
w/ @mariannabolog.bsky.social is now online & forthcoming in the #ElsevierEncyclopedia of Language & Linguistics
🔍 Theoretical overview, quantification tools, and behavioral evidence on specificity.
👉 Read: urly.it/31c4nm
@abstractionerc.bsky.social

18.09.2025 08:57 — 👍 5    🔁 4    💬 1    📌 0
OSF

The dataset includes over 240K fixations and 150K word-level metrics, with saccade, fixation, and (word) interest area reports. Preprint osf.io/preprints/os..., data osf.io/hx2sj/. Work conducted with @davidecrepaldi.bsky.social and Maria Ktori. (2/2)

22.08.2025 18:49 — 👍 1    🔁 1    💬 0    📌 0
Post image

How can we reduce conceptual clutter in the psychological sciences?

@ruimata.bsky.social and I propose a solution based on a fine-tuned 🤖 LLM (bit.ly/mpnet-pers) and test it for 🎭 personality psychology.

The paper is finally out in @natrevpsych.bsky.social: go.nature.com/4bEaaja

11.03.2025 10:57 — 👍 52    🔁 29    💬 1    📌 5
Abhilasha Kumar, Beyond Arbitrariness: How a Word's Shape Influences Learning and Memory
YouTube video by Mbs Vector Space Lab

For those who couldn't attend, the recording of Abhilasha Kumar's seminar on exploring form-meaning interactions in novel word learning and memory search is now available on our YouTube channel!!

Watch the full presentation here:
www.youtube.com/watch?v=VJTs...

12.09.2025 11:42 — 👍 4    🔁 1    💬 0    📌 0

Happy to share that our work on semantic composition is out now -- open access -- in Cerebral Cortex!

With Marco Marelli (@ercbravenewword.bsky.social), @wwgraves.bsky.social & @carloreve.bsky.social.

doi.org/10.1093/cerc...

12.09.2025 09:15 — 👍 12    🔁 3    💬 0    📌 1
Post image

Great presentation by @fabiomarson.bsky.social last Saturday at #AMLAP2025! He shared his latest research using EEG to study how we integrate novel semantic representations, "linguistic chimeras", from context.

Congratulations on a fascinating talk!

09.09.2025 11:10 — 👍 4    🔁 1    💬 0    📌 0
The Computational Approach to Morphological Productivity | Harald Baayen at Bicocca
YouTube video by Mbs Vector Space Lab

For those who couldn't attend, the recording of Prof. Harald Baayen's seminar on morphological productivity and the Discriminative Lexicon Model is now available on our YouTube channel.

Watch the full presentation here:
www.youtube.com/watch?v=zN7G...

09.09.2025 10:45 — 👍 8    🔁 2    💬 0    📌 0
Post image

New seminar announcement!

Exploring form-meaning interactions in novel word learning and memory search
Abhilasha Kumar (Assistant Professor, Bowdoin College)

A fantastic opportunity to delve into how we learn new words and retrieve them from memory.

💻 Join remotely: meet.google.com/pay-qcpv-sbf

27.08.2025 11:06 — 👍 0    🔁 0    💬 0    📌 0
Post image

📢 Upcoming Seminar!

A computational approach to morphological productivity using the Discriminative Lexicon Model
Professor Harald Baayen (University of Tรผbingen, Germany)

๐Ÿ—“๏ธ September 8, 2025
2:00 PM - 3:30 PM
๐Ÿ“ UniMiB, Room U6-01C, Milan
๐Ÿ”— Join remotely: meet.google.com/dkj-kzmw-vzt

25.08.2025 12:52 — 👍 4    🔁 3    💬 0    📌 0
hidden state representation during training

I'd like to share some slides and code for a "Memory Model 101 workshop" I gave recently, which has some minimal examples to illustrate the Rumelhart network & catastrophic interference :)
slides: shorturl.at/q2iKq
code (with colab support!): github.com/qihongl/demo...
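As a taste of what such minimal examples look like, here is a tiny sketch of catastrophic interference (mine, not from the linked repo): a small linear network trained with the delta rule forgets task A once it is trained on task B.

```python
# Minimal catastrophic-interference sketch (not from the linked repo):
# sequential training on task B erases what the weights learned on task A.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 4))  # tiny linear model: 4 inputs -> 2 outputs

def train(W, X, Y, lr=0.5, epochs=200):
    for _ in range(epochs):
        pred = X @ W.T
        W += lr * (Y - pred).T @ X / len(X)  # delta rule (batch MSE gradient step)
    return W

def mse(W, X, Y):
    return float(np.mean((X @ W.T - Y) ** 2))

# Two tasks with conflicting input-output mappings over shared weights.
X_a, Y_a = rng.normal(size=(20, 4)), rng.normal(size=(20, 2))
X_b, Y_b = rng.normal(size=(20, 4)), rng.normal(size=(20, 2))

W = train(W, X_a, Y_a)
print("task A error after training on A:", mse(W, X_a, Y_a))  # low

W = train(W, X_b, Y_b)
print("task A error after training on B:", mse(W, X_a, Y_a))  # jumps back up
```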

26.05.2025 11:56 — 👍 31    🔁 10    💬 1    📌 0
Preview
ChiWUG: A Graph-based Evaluation Dataset for Chinese Lexical Semantic Change Detection

🎉 We're thrilled to welcome Jing Chen, PhD, to our team!
She investigates how meanings are encoded and evolve, combining linguistic and computational approaches.
Her work spans diachronic modeling of lexical change in Mandarin and semantic transparency in LLMs.
🔗 research.polyu.edu.hk/en/publicati...

08.07.2025 10:54 — 👍 1    🔁 0    💬 0    📌 0
Preview
Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English - Psychonomic Bulletin & Review Auditory iconic words display a phonological profile that imitates their referents' sounds. Traditionally, those words are thought to constitute a minor portion of the auditory lexicon. In this articl...

📢 New paper out! We show that auditory iconicity is not marginal in English: word sounds often resemble real-world sounds. Using neural networks and sound similarity measures, we crack the myth of arbitrariness.
Read more: link.springer.com/article/10.3...

@andreadevarda.bsky.social

04.07.2025 12:16 — 👍 4    🔁 1    💬 0    📌 0
Preview
Conceptual Combination in Large Language Models: Uncovering Implicit Relational Interpretations in Compound Words With Contextualized Word Embeddings Large language models (LLMs) have been proposed as candidate models of human semantics, and as such, they must be able to account for conceptual combination. This work explores the ability of two LLM...

1/n Happy to share a new paper with Calogero Zarbo & Marco Marelli! How well do LLMs represent the implicit meaning of familiar and novel compounds? How do they compare with simpler distributional semantics models (DSMs; i.e., word embeddings)?
doi.org/10.1111/cogs...
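One simple way to probe such implicit relational meaning (a sketch of the general idea, not the paper's method) is to compare a compound's contextualized embedding against paraphrases that spell out candidate relations:

```python
# Sketch (not the paper's method): probe a compound's implicit relation by
# comparing its embedding with paraphrases naming candidate relations.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def sentence_vec(text):
    """Mean-pooled last-layer embedding of a text."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs).last_hidden_state.squeeze(0)
    return out.mean(dim=0)

compound = sentence_vec("snow shovel")
paraphrases = {
    "FOR":     sentence_vec("a shovel for snow"),
    "MADE_OF": sentence_vec("a shovel made of snow"),
}
for relation, vec in paraphrases.items():
    cos = torch.cosine_similarity(compound, vec, dim=0).item()
    print(relation, round(cos, 3))
# Higher similarity for the FOR paraphrase would suggest the model encodes
# the plausible implicit relation rather than a surface blend.
```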

19.03.2025 14:09 — 👍 13    🔁 4    💬 1    📌 0
Words are weird? On the role of lexical ambiguity in language - Gemma Boleda - unimib
YouTube video by Mbs Vector Space Lab

Here's the video of the seminar for those who missed it. Enjoy!

youtu.be/p2YXb6WHCi4

18.03.2025 10:21 — 👍 2    🔁 0    💬 0    📌 0
Preview
Compositional processing in the recognition of Chinese compounds: Behavioural and computational studies - Psychonomic Bulletin & Review Recent research has shown that the compositional meaning of a compound is routinely constructed by combining meanings of constituents. However, this body of research has focused primarily on Germanic ...

1st post here! Excited to share this work with Marelli & @kathyrastle.bsky.social. We've found that readers "routinely" combine constituent meanings to build Chinese compound meanings, despite variability in constituent meaning and word structure, even when they're not asked to. See threads 👇 for more details:

10.03.2025 15:36 — 👍 7    🔁 4    💬 2    📌 0

Link to the seminar: meet.google.com/vwm-hsug-niv
📅 Don't miss it!

03.03.2025 13:43 — 👍 0    🔁 0    💬 0    📌 0
Post image

📢 Upcoming Seminar

Words are weird? On the role of lexical ambiguity in language
🗣 Gemma Boleda (Universitat Pompeu Fabra, Spain)
Why is language so ambiguous? Discover how ambiguity balances cognitive simplicity and communicative complexity through large-scale studies.
📍 UniMiB, Room U6-01C, Milan

03.03.2025 13:41 — 👍 13    🔁 6    💬 2    📌 0
Preview
(PDF) False memories from nowhere: humans falsely recognize words that are not attested in their vocabulary PDF | Semantic knowledge plays an active role in many well-known false memory phenomena, including those emerging from the Deese–Roediger–McDermott... | Find, read and cite all the research you need o...

โš ๏ธ New Study Alert โš ๏ธ

Humans can falsely recognize meaningless pseudowords when they resemble studied words. ๐Ÿง โœจ This research shows that our brains detect hidden patterns, even without prior knowledge, leading to false memories.

๐Ÿ”— Read more:
www.researchgate.net/publication/...

03.03.2025 12:18 — 👍 2    🔁 0    💬 0    📌 0
Post image

📢 Upcoming Seminar
The Power of Words: The contribution of co-occurrence regularities of word use to the development of semantic organization
🗣 Olivera Savic (BCBL)
How do children grasp deeper word connections beyond simple meanings? Discover how word co-occurrence shapes semantic development

18.02.2025 15:14 — 👍 3    🔁 1    💬 0    📌 0
Double descent of prediction error. Degree-one, degree-three, degree-twenty, and degree-one-thousand polynomial regression fits (magenta; from left to right) to data generated from a degree-three polynomial function (green). Low prediction error is achieved by both degree-three and degree-one-thousand models.

One of the most-viewed PNAS articles in the last week is "Is Ockham's razor losing its edge? New perspectives on the principle of model parsimony." Explore the article here: www.pnas.org/doi/10.1073/...

For more trending articles, visit ow.ly/Me2U50SkLRZ.
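The figure's setup is easy to reproduce in miniature (an illustrative sketch, not the paper's code): fit polynomials of increasing degree to noisy cubic data and compare held-out error. np.linalg.lstsq returns the minimum-norm solution in the overparameterized regime, the setting where the second descent can appear.

```python
# Illustrative sketch (not the paper's code): least-squares polynomial fits
# of increasing degree to data generated from a cubic.
import numpy as np

rng = np.random.default_rng(0)
n_train = 15
x_train = rng.uniform(-1, 1, n_train)
y_train = x_train**3 - x_train + rng.normal(scale=0.05, size=n_train)
x_test = np.linspace(-1, 1, 200)
y_test = x_test**3 - x_test

def test_mse(degree):
    # Monomial design matrices up to the given degree.
    A_train = np.vander(x_train, degree + 1)
    A_test = np.vander(x_test, degree + 1)
    # Minimum-norm least-squares fit when the system is underdetermined.
    coef, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)
    return np.mean((A_test @ coef - y_test) ** 2)

for d in (1, 3, 20, 50):
    print(f"degree {d:3d}: test MSE = {test_mse(d):.4f}")
# Test error typically drops by degree three, blows up near the
# interpolation threshold, and may come back down for heavily
# overparameterized minimum-norm fits.
```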

11.02.2025 19:36 — 👍 15    🔁 3    💬 0    📌 2

Our @RBonandrini received a "Giovani talenti" award for his studies on word processing.

Congrats Rolando for this achievement! https://x.com/unimib/status/1870046485947265302

21.12.2024 09:33 — 👍 1    🔁 0    💬 0    📌 0
