I thought it was very good! Some people strongly prefer Babel for its perspective (the POV character of BoBH is a white woman), but I had the same criticisms as you and I liked BoBH better, especially in terms of character development. It also talks a lot more about research as a career!
06.12.2025 23:08
Have you read Blood over Bright Haven? (No translation magic there, unfortunately, but much better on both other points IMO)
06.12.2025 14:26
Surprising to me that on the chart it's labelled as being darker than The Secret History!
06.12.2025 14:23
References to two papers next to one another in a bibliography section:
Making FETCH! happen: Finding emergent dog whistles through common habitats by Kuleen Sasse, Carlos Alejandro Aguirre, Isabel Cachola, Sharon Levy, and Mark Dredze. ACL 2025.
Making "fetch" happen: The influence of social and linguistic context on nonstandard word growth and decline by Ian Stewart and Jacob Eisenstein. EMNLP 2018.
Accidental bibliography achievement unlocked!
(I highly recommend checking out both papers)
04.12.2025 21:05
Congratulations!!!
08.11.2025 00:14
Current approaches conceptualize the alignment challenge as one of eliciting individual human preferences and training models to choose outputs that satisfy those preferences. To the extent…
Gillian Hadfield - Alignment is social: lessons from human alignment for AI
The recording of my keynote from #COLM2025 is now available!
06.11.2025 21:35
Btw the PI of this work, Dr Kelly Lambert, has a cool book called "The Lab Rat Chronicles" that describes lots of behavioral findings from rat experiments! (Written pre-driving rats, unfortunately)
06.11.2025 16:30
two rats in cars from the University of Richmond study where they trained rats to drive tiny cars to get to treats and concluded that the rats love driving so much they'll do it without any incentive
the only kind of Rat Race I'm down for
06.11.2025 14:43
Congratulations! Took me a second to understand you weren't talking about Lexical Functional Grammar though...
05.11.2025 13:22
Canadian researchers should be aware that there is a motion before the Parliamentary Standing Committee on Science and Research to force Tricouncils to hand over disaggregated peer review data on all applications:
Applicant names, profiles, demographics
Reviewers' names, profiles, comments, and scores
30.10.2025 20:33
Finally out in TACL:
EWoK (Elements of World Knowledge): A cognition-inspired framework for evaluating basic world knowledge in language models
tl;dr: LLMs learn basic social concepts way more easily than physical & spatial concepts
Paper: direct.mit.edu/tacl/article...
Website: ewok-core.github.io
20.10.2025 17:36
Excited to share a major update to our "Mixture of Cognitive Reasoners" (MiCRo) paper!
We ask: What benefits can we unlock by designing language models whose inner structure mirrors the brain's functional specialization?
More below
cognitive-reasoners.epfl.ch
20.10.2025 12:05
DM'd you, thanks!
19.10.2025 14:04
The organizers mentioned that the videos will be up a few weeks after the conference! I expect it'll be at www.youtube.com/@colm_conf
19.10.2025 00:19
I still have that card! Still working on that second ice cream
17.10.2025 17:59
It used to be 5 "no"s for ice cream/pizza! Has the exchange rate gone up?
17.10.2025 17:36
I'm on the job market looking for CS/iSchool faculty and related positions! I'm broadly interested in doing research with policymakers and communities impacted by AI to inform and develop mitigations to harms and risks. If you've included any of my work in syllabi or policy docs, please let me know!
16.10.2025 23:19
Grateful to keynote at #COLM2025. Here's what we're missing about AI alignment: Humans don't cooperate just by aggregating preferences; we build social processes and institutions to generate norms that make it safe to trade with strangers. AI needs to play by these same systems, not replace them.
15.10.2025 23:00
Inspired to share some papers that I found at #COLM2025!
"Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation" by Amanda Myntti et al. arxiv.org/abs/2504.01542
14.10.2025 18:16
Title: Large Language Models Assume People are More Rational than We Really are
Authors: Ryan Liu*, Jiayi Geng*, Joshua C. Peterson, Ilia Sucholutsky, Thomas L. Griffiths
Affiliations: Department of Computer Science & Department of Psychology, Princeton University; Computing & Data Sciences, Boston University; Center for Data Science, New York University
Email: ryanliu at princeton.edu and jiayig at princeton.edu
LLMs Assume People Are More Rational Than We Really Are by Ryan Liu* & Jiayi Geng* et al.:
LMs are bad (too rational) at predicting human behaviour, but aligned with humans in assuming rationality in others' choices.
arxiv.org/abs/2406.17055
14.10.2025 00:43
Title: Neologism Learning for Controllability and Self-Verbalization
Authors: John Hewitt, Oyvind Tafjord, Robert Geirhos, Been Kim
Affiliation: Google DeepMind
Email: {johnhew, oyvindt, geirhos, beenkim} at google.com
Neologism Learning by John Hewitt et al.:
Training new token embeddings on examples with a specific property (e.g., short answers) leads to finding "machine-only synonyms" for these tokens that elicit the same behaviour (short answers = "lack").
arxiv.org/abs/2510.08506
14.10.2025 00:43
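The core mechanic above can be sketched in a few lines. This is a toy, pure-Python illustration (the paper trains real embedding rows against a model's loss; the token names, dimensions, and target here are all invented): only the new token's vector is updated by gradient descent, while every existing row stays frozen.

```python
# Toy sketch of neologism learning: optimize ONLY the new token's embedding.
dim = 4
table = {"short": [1.0, 0.0, 0.0, 0.0],
         "long":  [0.0, 1.0, 0.0, 0.0],
         "lack":  [0.9, 0.1, 0.0, 0.0],   # hypothetical "machine-only synonym"
         "<neo>": [0.0, 0.0, 0.0, 0.0]}   # new token, zero-initialized

frozen = {k: v[:] for k, v in table.items() if k != "<neo>"}  # snapshot
target = table["short"]  # stand-in for "the behaviour we want the token to elicit"

for _ in range(200):                       # plain SGD on squared distance,
    row = table["<neo>"]                   # touching only the new row
    for i in range(dim):
        grad = 2 * (row[i] - target[i])
        row[i] -= 0.1 * grad

print(all(table[k] == frozen[k] for k in frozen))  # True: old rows untouched
print(max(abs(a - b) for a, b in zip(table["<neo>"], target)) < 1e-3)  # True
```

Each step shrinks the new row's error by a factor of 0.8, so after 200 steps it has effectively converged to the target while the rest of the table is byte-identical to its snapshot.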
Title: Hidden in plain sight: VLMs overlook their visual representations
Authors: Stephanie Fu, Tyler Bonnen, Devin Guillory, Trevor Darrell
Affiliation: UC Berkeley
Hidden in Plain Sight by Stephanie Fu et al. [Outstanding paper award]:
VLMs are worse than vision-only models on vision-only tasks: LMs are biased and underutilize their (easily accessible) visual representations!
hidden-plain-sight.github.io
14.10.2025 00:43
Title: UnveiLing: What Makes Linguistics Olympiad Puzzles Tricky for LLMs?
Authors: Mukund Choudhary*, KV Aditya Srivatsa*, Gaurja Aeron, Antara Raaghavi Bhattacharya, Dang Khoa Dang Dinh, Ikhlasul Akmal Hanif, Daria Kotova, Ekaterina Kochmar, Monojit Choudhury
Affiliations: Mohamed Bin Zayed University of Artificial Intelligence, IIT Gandhinagar, Harvard University, VinUniversity, Universitas Indonesia
UnveiLing by Mukund Choudhary* & KV Aditya Srivatsa* et al.:
Linguistics Olympiad problems about certain linguistic features (e.g., morphological ones) are tougher for LMs, but morphological pre-tokenization helps!
arxiv.org/abs/2508.11260
14.10.2025 00:43
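To make "morphological pre-tokenization" concrete, here is a minimal sketch, not the paper's pipeline: split words at known morpheme boundaries before any subword tokenizer runs, so merges never cross a morpheme boundary. The morpheme lexicon and greedy longest-match strategy below are invented for illustration.

```python
# Toy morphological pre-tokenizer: greedy longest-match over a tiny,
# hypothetical morpheme lexicon.
MORPHEMES = {"un", "break", "able", "walk", "ed", "s"}

def morph_split(word):
    """Segment a word into morphemes via greedy longest match;
    fall back to single characters for unknown material."""
    out, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try longest candidate first
            if word[i:j] in MORPHEMES:
                out.append(word[i:j])
                i = j
                break
        else:
            out.append(word[i])             # unknown character: emit as-is
            i += 1
    return out

print(morph_split("unbreakable"))  # ['un', 'break', 'able']
print(morph_split("walked"))       # ['walk', 'ed']
```

A subword tokenizer (e.g., BPE) would then run within each morpheme chunk instead of on the raw word.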
Title: A Taxonomy of Transcendence
Authors: Natalie Abreu, Edwin Zhang, Eran Malach, & Naomi Saphra
Affiliations: Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University
Email: {natalieabreu, ezhang} at g.harvard.edu and {emalach, nsaphra} at fas.harvard.edu
A Taxonomy of Transcendence by Natalie Abreu et al.:
LMs outperform the experts they are trained on through skill denoising (averaging out expertsβ errors), skill selection (relying on the most appropriate expert), and skill generalization (combining expertsβ knowledge).
arxiv.org/abs/2508.17669
14.10.2025 00:43
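The "skill denoising" mechanism has a simple numerical intuition: if experts make independent errors around the truth, averaging their predictions cancels noise. A toy sketch with made-up numbers (not the paper's formal setup):

```python
# Toy "skill denoising": the average of noisy experts beats every
# individual expert on this example.
truth = 10.0
experts = [9.1, 11.2, 8.7, 10.9, 10.4]  # hypothetical noisy expert predictions

ensemble = sum(experts) / len(experts)
single_errors = [abs(e - truth) for e in experts]
ensemble_error = abs(ensemble - truth)

print(round(ensemble_error, 2))          # 0.06
print(round(min(single_errors), 2))      # 0.4: even the best expert is worse
```

Here the experts' errors roughly cancel (three overshoot, two undershoot), so the ensemble lands closer to the truth than any single expert.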
Title: The Zero Body Problem: Probing LLM Use of Sensory Language
Authors: Rebecca M. M. Hicke, Sil Hamilton & David Mimno
Affiliations: Department of Computer Science & Department of Information Science, Cornell University, Ithaca, New York, USA
Email: {rmh327, srh255, mimno} at cornell.edu
The Zero Body Problem by @rmmhicke.bsky.social et al.:
LMs use sensory language (olfactory, auditory, …) differently from people + evidence that RLHF may discourage sensory language.
arxiv.org/abs/2504.06393
14.10.2025 00:43
Title: Readability ≠ Learnability: Rethinking the Role of Simplicity in Training Small Language Models
Authors: Ivan Lee & Taylor Berg-Kirkpatrick
Affiliation: UC San Diego
Email: {iylee, tberg} at ucsd.edu
Readability ≠ Learnability by Ivan Lee & Taylor Berg-Kirkpatrick:
Developmentally plausible LM training works not because of simpler language but because of lower n-gram diversity! Warning against anthropomorphizing / equating learning in LMs and in children.
openreview.net/pdf?id=AFMGb...
14.10.2025 00:43
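"n-gram diversity" can be operationalized as the ratio of distinct n-grams to total n-grams in a corpus (one common formulation; the paper's exact measure may differ). A quick sketch with toy sentences:

```python
# Sketch: n-gram diversity as distinct-n / total-n over a token sequence.
def ngram_diversity(tokens, n=2):
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(grams)) / len(grams)

# Repetitive, child-directed-style text vs. more varied adult text.
simple = "the dog ran . the dog sat . the dog ate .".split()
varied = "a fox darted past while crows argued over stale bread crumbs".split()

print(round(ngram_diversity(simple), 2))  # 0.73: many repeated bigrams
print(ngram_diversity(varied))            # 1.0: every bigram is unique
```

The repetitive text reuses bigrams like ("the", "dog") three times, pulling its diversity down; on the paper's account, that repetition, not simpler vocabulary per se, is what makes such data easier for small LMs to learn from.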
A thread for some cool recent work I learned about at #COLM2025, either from the paper presentations or from the keynotes!
14.10.2025 00:43
π
13.10.2025 17:43
Grad App Aid – Queer in AI
We are launching our Graduate School Application Financial Aid Program (www.queerinai.com/grad-app-aid) for 2025-2026. We'll give up to $750 per person to LGBTQIA+ STEM scholars applying to graduate programs. Apply at openreview.net/group?id=Que.... 1/5
09.10.2025 00:37
Computational neuroscientist. Building neural networks to reverse-engineer the brain and human intelligence. Assistant Professor @UofT Research Scientist @UHN #compneuro
https://kite-uhn.com/scientist/brokoslaw-laschowski
Mexican Historian & Philosopher of Biology • Postdoctoral Fellow at @theramseylab.bsky.social (@clpskuleuven.bsky.social) • Book Reviews Editor for @jgps.bsky.social • https://www.alejandrofabregastejeda.com • #PhilSci #HistSTM #philsky • I write and edit
Assistant Professor of Machine Learning, Carnegie Mellon University (CMU)
Building a Natural Science of Intelligence
Prev: ICoN Postdoctoral Fellow @MIT, PhD @Stanford NeuroAILab
Personal Website: https://cs.cmu.edu/~anayebi
How do we move? I study brains and machines at York University (Assistant Professor). Full-time human.
Neural Control & Computation Lab
www.ncclab.ca
The Centre for Logic and Philosophy of Science (CLPS) at the Institute of Philosophy (@kuleuvenuniversity.bsky.social) focuses on #logic and #philsci, with a concentration on the philosophies of the special sciences • https://hiw.kuleuven.be/clps #philsky
Cognitive Computational Neuroscientist in Training
Neuro + ML PhD @ CMU '29 | BS in Math and in CS @ MIT '23 & MEng '24
PhD student at Columbia University working on human-AI collaboration, AI creativity and explainability. prev. intern @GoogleDeepMind, @AmazonScience
asaakyan.github.io
Faculty at the Indian Institute of Science in Bangalore. Interested in LLMs, NLP, culture and people.
PhD grad from UofT CompLing. Interested in narrative understanding, affective computing, language variation and style, and generally using NLP technologies to understand humans and society.
priya22.github.io
Linguist, Cognitive Scientist, Occasional AI Researcher, Immigrant in NYC, Co-Author w/ Ingeborg Glimmer of 'Why We Fear AI' - out now: https://bookshop.org/a/114797/9781945335174
natural language processing and computational linguistics at google deepmind.
Asst prof @ University of Utah · NLP · she/her
I do research in social computing and LLMs at Northwestern with @robvoigt.bsky.social and Kaize Ding.
PhD student at @gesis.org & @hhu.de, computational linguist, researching linguistic factors in (annotation) disagreement and language model behavior.
CS PhD Student, Northeastern University - Machine Learning, Interpretability https://ericwtodd.github.io
assistant prof at umich-flint
nlp and computational social science
steverw.com
Interested in ML, comp bio, immunology, and just about anything one hop away from either.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). June 2026 in Montreal, Canada 🇨🇦 #FAccT2026
https://facctconference.org/
Uses machine learning to study literary imagination, and vice versa. Likely to share news about AI & computational social science / Sozialwissenschaft / 社会科学
Information Sciences and English, UIUC. Distant Horizons (Chicago, 2019). tedunderwood.com