ERCbravenewword

@ercbravenewword.bsky.social

Exploring how new words convey novel meanings in ERC Consolidator project #BraveNewWord 🧠 Unveiling language and cognition insights 🔍 Join our research journey! https://bravenewword.unimib.it/

170 Followers  |  290 Following  |  35 Posts  |  Joined: 25.01.2025

Latest posts by ercbravenewword.bsky.social on Bluesky

hidden state representation during training

I'd like to share some slides and code for a "Memory Model 101" workshop I gave recently, which has some minimal examples to illustrate the Rumelhart network & catastrophic interference :)
slides: shorturl.at/q2iKq
code (with colab support!): github.com/qihongl/demo...

26.05.2025 11:56 — 👍 31    🔁 10    💬 1    📌 0
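The catastrophic-interference effect the workshop demonstrates can be sketched in a few lines of plain NumPy: train shared weights on one task, then on a second, and watch the first task's error shoot back up. This is an illustrative toy only, with made-up task sizes; the workshop's own demos live in the linked repo and are not reproduced here.

```python
# Toy sketch of catastrophic interference in a shared-weight linear network.
# All dimensions and the linear setup are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Two "tasks": different linear input->output mappings over the same weights.
W_true_a = rng.standard_normal((8, 4))
W_true_b = rng.standard_normal((8, 4))
X_a = rng.standard_normal((20, 8)); Y_a = X_a @ W_true_a
X_b = rng.standard_normal((20, 8)); Y_b = X_b @ W_true_b

def mse(X, Y, W):
    return float(np.mean((X @ W - Y) ** 2))

def train(X, Y, W, lr=0.1, steps=500):
    for _ in range(steps):
        W = W - lr * X.T @ (X @ W - Y) / len(X)  # gradient step on MSE
    return W

W = train(X_a, Y_a, np.zeros((8, 4)))
err_a_before = mse(X_a, Y_a, W)   # low: task A has been learned

W = train(X_b, Y_b, W)            # sequential training on task B only
err_a_after = mse(X_a, Y_a, W)    # task A error jumps back up: interference

print(err_a_before, err_a_after)
```

Because both tasks share one weight matrix and training on B never revisits A's data, the weights drift toward B's solution and overwrite A, which is the core of the interference story.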
ChiWUG: A Graph-based Evaluation Dataset for Chinese Lexical Semantic Change Detection

🎉 We're thrilled to welcome Jing Chen, PhD to our team!
She investigates how meanings are encoded and evolve, combining linguistic and computational approaches.
Her work spans diachronic modeling of lexical change in Mandarin and semantic transparency in LLMs.
🔗 research.polyu.edu.hk/en/publicati...

08.07.2025 10:54 — 👍 1    🔁 0    💬 0    📌 0
Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English - Psychonomic Bulletin & Review Auditory iconic words display a phonological profile that imitates their referents' sounds. Traditionally, those words are thought to constitute a minor portion of the auditory lexicon. In this articl...

📢 New paper out! We show that auditory iconicity is not marginal in English: word sounds often resemble real-world sounds. Using neural networks and sound similarity measures, we crack the myth of arbitrariness.
Read more: link.springer.com/article/10.3...

@andreadevarda.bsky.social

04.07.2025 12:16 — 👍 3    🔁 1    💬 0    📌 0
Conceptual Combination in Large Language Models: Uncovering Implicit Relational Interpretations in Compound Words With Contextualized Word Embeddings Large language models (LLMs) have been proposed as candidate models of human semantics, and as such, they must be able to account for conceptual combination. This work explores the ability of two LLM...

1/n Happy to share a new paper with Calogero Zarbo & Marco Marelli! How well do LLMs represent the implicit meaning of familiar and novel compounds? How do they compare with simpler distributional semantics models (DSMs; i.e., word embeddings)?
doi.org/10.1111/cogs...

19.03.2025 14:09 — 👍 13    🔁 4    💬 1    📌 0
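The "simpler DSMs" baseline mentioned in the post can be illustrated with additive composition, a standard composition method in distributional semantics: approximate a compound's vector by summing its constituents' vectors and score the fit with cosine similarity. The vectors below are made-up toy counts, not the paper's actual embeddings or its actual evaluation.

```python
# Toy sketch of DSM-style additive composition for compound words.
# "snow", "man", "snowman" vectors are fabricated toy co-occurrence counts.
import numpy as np

vec = {
    "snow":    np.array([4.0, 0.0, 1.0, 3.0]),
    "man":     np.array([0.0, 5.0, 2.0, 1.0]),
    "snowman": np.array([3.0, 4.0, 2.0, 3.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

composed = vec["snow"] + vec["man"]     # additive composition
sim = cosine(composed, vec["snowman"])  # how close is composed to observed?
print(round(sim, 3))
```

For familiar compounds one can compare the composed vector against the observed compound vector; for novel compounds the composed vector is all there is, which is exactly what makes them a useful testbed.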
Words are weird? On the role of lexical ambiguity in language - Gemma Boleda - unimib
YouTube video by Mbs Vector Space Lab

Here's the video of the seminar for those who missed it. Enjoy!

youtu.be/p2YXb6WHCi4

18.03.2025 10:21 — 👍 2    🔁 0    💬 0    📌 0
Compositional processing in the recognition of Chinese compounds: Behavioural and computational studies - Psychonomic Bulletin & Review Recent research has shown that the compositional meaning of a compound is routinely constructed by combining meanings of constituents. However, this body of research has focused primarily on Germanic ...

1st post here! Excited to share this work with Marelli & @kathyrastle.bsky.social. We've found readers "routinely" combine constituent meanings to construct Chinese compound meaning, despite variability in constituent meaning and word structure, even when they're not asked to. See thread 👇 for more details:

10.03.2025 15:36 — 👍 7    🔁 4    💬 2    📌 0

Link to the seminar: meet.google.com/vwm-hsug-niv
📅 Don't miss it!

03.03.2025 13:43 — 👍 0    🔁 0    💬 0    📌 0
Post image

📢 Upcoming Seminar

Words are weird? On the role of lexical ambiguity in language
🗣 Gemma Boleda (Universitat Pompeu Fabra, Spain)
Why is language so ambiguous? Discover how ambiguity balances cognitive simplicity and communicative complexity through large-scale studies.
📍 UniMiB, Room U6-01C, Milan

03.03.2025 13:41 — 👍 13    🔁 6    💬 2    📌 0
(PDF) False memories from nowhere: humans falsely recognize words that are not attested in their vocabulary PDF | Semantic knowledge plays an active role in many well-known false memory phenomena, including those emerging from the Deese–Roediger–McDermott... | Find, read and cite all the research you need o...

⚠️ New Study Alert ⚠️

Humans can falsely recognize meaningless pseudowords when they resemble studied words. 🧠✨ This research shows that our brains detect hidden patterns, even without prior knowledge, leading to false memories.

🔗 Read more:
www.researchgate.net/publication/...

03.03.2025 12:18 — 👍 2    🔁 0    💬 0    📌 0
Post image

📢 Upcoming Seminar
The Power of Words: The contribution of co-occurrence regularities of word use to the development of semantic organization
🗣 Olivera Savic (BCBL)
How do children grasp deeper word connections beyond simple meanings? Discover how word co-occurrence shapes semantic development.

18.02.2025 15:14 — 👍 3    🔁 1    💬 0    📌 0
Double descent of prediction error. Degree-one, degree-three, degree-twenty, and degree-one-thousand polynomial regression fits (magenta; from Left to Right) to data generated from a degree-three polynomial function (green). Low prediction error is achieved by both degree-three and degree-one-thousand models.

One of the most-viewed PNAS articles in the last week is "Is Ockham's razor losing its edge? New perspectives on the principle of model parsimony." Explore the article here: www.pnas.org/doi/10.1073/...

For more trending articles, visit ow.ly/Me2U50SkLRZ.

11.02.2025 19:36 — 👍 15    🔁 3    💬 0    📌 2

Our @RBonandrini received a "Giovani talenti" award for his studies on word processing.

Congrats Rolando on this achievement! https://x.com/unimib/status/1870046485947265302

21.12.2024 09:33 — 👍 1    🔁 0    💬 0    📌 0
Post image

🎤 Just presented at #AMLaP2024! "Be the wapple of my eye: Predicting the sensorimotor pattern of novel words from language-based representations" by @giulialoca_, Simona Amenta & @MarelliMar. 📈 The presentation was very well received. Thanks for the incredible feedback!

05.09.2024 20:27 — 👍 1    🔁 0    💬 0    📌 0

https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00527/123792/Meaning-beyond-lexicality-Capturing-Pseudoword

30.08.2024 14:40 — 👍 0    🔁 0    💬 0    📌 0

Can a word like "quocky" actually hold meaning, even if it's not in the dictionary? 🚀 Our Lab's new paper shows people might be better at defining these made-up words than you'd think, highlighting the surprising flexibility of language. #AI #Linguistics #LanguageModels 👇👇

30.08.2024 14:40 — 👍 0    🔁 0    💬 1    📌 0

New BraveNewWord paper! https://x.com/CognitionJourn/status/1817927823513825507

29.07.2024 20:07 — 👍 0    🔁 0    💬 0    📌 0
Automating Jingle-Jangle Detection in Psychology | Dirk U. Wulff | Psychology | Milan-Bicocca

Here's the video of the seminar for those who missed it. Enjoy!

https://www.youtube.com/watch?v=PgthqKS1M0g

23.07.2024 08:27 — 👍 0    🔁 0    💬 0    📌 0

Here's the link to the paper:
https://www.sciencedirect.com/science/article/pii/S0010027724001689?dgcid=coauthor

17.07.2024 11:10 — 👍 0    🔁 0    💬 0    📌 0

Join us as @dirkuwulff discusses using embeddings of psychometric items, scales, and construct labels to predict empirical relations, detect jingle-jangle fallacies, and refine psychological taxonomies. Don't miss insights on tackling taxonomic incommensurability in psychology.

08.07.2024 10:49 — 👍 0    🔁 0    💬 1    📌 0
Post image

New seminar by @ERCbravenewword on July 15th at 2:00 PM! Link to join: https://meet.google.com/hyr-habw-vnj

08.07.2024 10:49 — 👍 0    🔁 0    💬 1    📌 0
Post image

Over the past weekend at the International Word Processing Conference, we presented our findings on how adults process novel words, integrating psychology, linguistics, and neuroscience. @ERCbravenewword (@MarelliMar, @RBonandrini, Iva Saban, @fabio_marson, @giulialoca_)

08.07.2024 09:53 — 👍 0    🔁 0    💬 0    📌 0

Read more: https://psycnet.apa.org/record/2024-52125-001

01.07.2024 14:43 — 👍 0    🔁 0    💬 0    📌 0

Verb endings affect our mental timeline (past/right-hand, future/left-hand) with both tensed verbs and pseudo-verbs. Subtle cues shape temporal perception automatically, showing sublexical strings carry crucial semantic info in spatial-temporal associations. #CognitiveScience 👇

01.07.2024 14:43 — 👍 0    🔁 0    💬 1    📌 0

Link to the book of abstracts: https://moproc2024.net/wp-content/uploads/2024/06/woproc24_boa.pdf

Link to WoProc 2024: https://moproc2024.net/

27.06.2024 08:21 — 👍 0    🔁 0    💬 0    📌 0
Building foundation models of human cognition - Marcel Binz

If you were unable to attend the seminar, you can watch it at the following link: 🎥

https://www.youtube.com/watch?v=-SknOXBD5gQ

Enjoy! 😊

25.06.2024 20:08 — 👍 0    🔁 0    💬 0    📌 0

Link to the book of abstracts: https://t.co/i5DYDqsqA9

Link to WoProc 2024:
https://moproc2024.net/

21.06.2024 14:23 — 👍 0    🔁 0    💬 0    📌 0

Link to the book of abstracts: https://moproc2024.net/wp-content/uploads/2024/06/woproc24_boa.pdf

Link to WoProc 2024:
https://moproc2024.net/

18.06.2024 08:24 — 👍 0    🔁 0    💬 0    📌 0

Is semantic knowledge automatically extracted from nonwords? Do individual differences in language experience affect sensitivity to semantics?
We analyzed data from 300k participants to answer these questions. On 6th July @fabio_marson will talk about these topics in WoProc2024.

18.06.2024 08:22 — 👍 0    🔁 0    💬 1    📌 0
Post image

10.06.2024 21:48 — 👍 0    🔁 0    💬 1    📌 0

Building Foundation Models of Human Cognition
Thursday, June 20 · 2:00 (Europe/Rome)
🖥️ Google Meet: https://meet.google.com/zez-fvey-vvp

28.05.2024 07:03 — 👍 0    🔁 0    💬 1    📌 0
