The countless diversions are over and the preprint is out!
Experimental neuroscience has identified neural dynamics that may facilitate speech processing in humans.
We use computational modeling to ask *how* (learning mechanism) and *why* (domain specificity) some of these properties arise.
LLMs learn to use language from text. Children acquire language through speech.
Although we don’t quite have models that learn to use language from speech alone, there’s still a lot to learn from looking at what the existing models are and are not doing.
The Chinese ones also have very pretty covers.
Very excited for my first encounter with Eileen Chang in English!
Also my first NYRB Classics. The typesetting is very classy.
Hi Matt, the link is not working.
the feeling when I keep coming up with ideas for additional experiments while trying to write up a paper
I’m not satisfied with the translation but I guess that’s the point.
生命像圣经,从希伯来文译成希腊文,从希腊文译成拉丁文,从拉丁文译成英文,从英文译成国语。翠远读它的时候,国语又在她脑子里译成了上海话。那未免有点隔膜。
“Life is like the Bible, translated from Hebrew to Greek, from Greek to Latin, from Latin to English, from English to Mandarin. When Cuiyuan read it, it was translated in her mind into Shanghainese. Some things do not come through.”
I get the feeling that my cat is as concerned as my supervisors about how things are going
Oh what would I do without my cat? Who’s gonna keep an eye on my draft even after I’ve gone to bed?
genAI saves time alright, but at the cost of actual learning
I wish more aspiring academics really appreciated this aspect, that academia is dialogical. You have to be able to extemporaneously defend your ideas and discuss them in great detail, orally. You have to know it well enough to have it at your fingertips.
NEW PREPRINT!
Language is not just a formal system—it connects words to the world. But how do we measure this connection in a cross-linguistic, quantitative way?
🧵 Using multimodal models, we introduce a new approach: groundedness ⬇️