Maria Ryskina

@mryskina.bsky.social

Postdoc @vectorinstitute.ai | organizer @queerinai.com | previously MIT, CMU LTI | πŸ€ rodent enthusiast | she/they 🌐 https://ryskina.github.io/

121 Followers  |  154 Following  |  35 Posts  |  Joined: 10.07.2025

Latest posts by mryskina.bsky.social on Bluesky

I thought it was very good! Some people strongly prefer Babel for its perspective (the POV character of BoBH is a white woman), but I had the same criticisms as you and I liked BoBH better, especially in terms of character development. It also talks a lot more about research as a career!

06.12.2025 23:08 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Have you read Blood over Bright Haven? (No translation magic there, unfortunately, but much better on both other points IMO)

06.12.2025 14:26 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Surprising to me that on the chart it's labelled as being darker than The Secret History!

06.12.2025 14:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
References to two papers next to one another in a bibliography section:

Making FETCH! happen: Finding emergent dog whistles through common habitats by Kuleen Sasse, Carlos Alejandro Aguirre, Isabel Cachola, Sharon Levy, and Mark Dredze. ACL 2025.

Making β€œfetch” happen: The influence of social and linguistic context on nonstandard word growth and decline by Ian Stewart and Jacob Eisenstein. EMNLP 2018.

Accidental bibliography achievement unlocked!
(I highly recommend checking out both papers)

04.12.2025 21:05 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Congratulations!!!

08.11.2025 00:14 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Gillian Hadfield - Alignment is social: lessons from human alignment for AI
Current approaches conceptualize the alignment challenge as one of eliciting individual human preferences and training models to choose outputs that satisfy those preferences. To the extent…

The recording of my keynote from #COLM2025 is now available!

06.11.2025 21:35 β€” πŸ‘ 10    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

Btw the PI of this work, Dr Kelly Lambert, has a cool book called "The Lab Rat Chronicles" that describes lots of behavioral findings from rat experiments! (Written pre-driving rats, unfortunately)

06.11.2025 16:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
two rats in cars from the University of Richmond study where they trained rats to drive tiny cars to get to treats and concluded that the rats love driving so much they'll do it without any incentive

the only kind of Rat Race I'm down for

06.11.2025 14:43 β€” πŸ‘ 18    πŸ” 1    πŸ’¬ 2    πŸ“Œ 0

Congratulations! Took me a second to understand you weren't talking about Lexical Functional Grammar though...

05.11.2025 13:22 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Canadian researchers should be aware that there is a motion before the Parliamentary Standing Committee on Science and Research to force the Tri-Councils to hand over disaggregated peer review data on all applications:
Applicant names, profiles, demographics
Reviewers' names, profiles, comments, and scores

30.10.2025 20:33 β€” πŸ‘ 144    πŸ” 170    πŸ’¬ 13    πŸ“Œ 50
Incomplete Contracting and AI Alignment We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a syste...

Isn't mis- (or at least under-)specification inevitable? (I'm thinking of arxiv.org/abs/1804.04268)

21.10.2025 19:22 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Finally out in TACL:
🌎EWoK (Elements of World Knowledge)🌎: A cognition-inspired framework for evaluating basic world knowledge in language models

tl;dr: LLMs learn basic social concepts way easier than physical&spatial concepts

Paper: direct.mit.edu/tacl/article...
Website: ewok-core.github.io

20.10.2025 17:36 β€” πŸ‘ 70    πŸ” 10    πŸ’¬ 1    πŸ“Œ 2

πŸš€ Excited to share a major update to our β€œMixture of Cognitive Reasoners” (MiCRo) paper!

We ask: What benefits can we unlock by designing language models whose inner structure mirrors the brain’s functional specialization?

More below πŸ§ πŸ‘‡
cognitive-reasoners.epfl.ch

20.10.2025 12:05 β€” πŸ‘ 29    πŸ” 9    πŸ’¬ 2    πŸ“Œ 1

DM'd you, thanks!

19.10.2025 14:04 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The organizers mentioned that the videos will be up a few weeks after the conference! I expect it'll be at www.youtube.com/@colm_conf

19.10.2025 00:19 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I still have that card! Still working on that second ice cream πŸ₯²

17.10.2025 17:59 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

It used to be 5 "no"s for ice cream/pizza! Has the exchange rate gone up?

17.10.2025 17:36 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I'm on the job market looking for CS/iSchool faculty and related positions! I'm broadly interested in doing research with policymakers and communities impacted by AI to inform and develop mitigations to harms and risks. If you've included any of my work in syllabi or policy docs please let me know!

16.10.2025 23:19 β€” πŸ‘ 7    πŸ” 6    πŸ’¬ 2    πŸ“Œ 0

Grateful to keynote at #COLM2025. Here's what we're missing about AI alignment: humans don't cooperate just by aggregating preferences; we build social processes and institutions to generate norms that make it safe to trade with strangers. AI needs to play by these same systems, not replace them.

15.10.2025 23:00 β€” πŸ‘ 15    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

Inspired to share some papers that I found at #COLM2025!

"Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation" by Amanda Myntti et al. arxiv.org/abs/2504.01542

14.10.2025 18:16 β€” πŸ‘ 26    πŸ” 8    πŸ’¬ 1    πŸ“Œ 0
Title: Large Language Models Assume People are More Rational than We Really are
Authors: Ryan Liu*, Jiayi Geng*, Joshua C. Peterson, Ilia Sucholutsky, Thomas L. Griffiths
Affiliations: Department of Computer Science & Department of Psychology, Princeton University; Computing & Data Sciences, Boston University; Center for Data Science, New York University
Email: ryanliu at princeton.edu and jiayig at princeton.edu

LLMs Assume People Are More Rational Than We Really Are by Ryan Liu* & Jiayi Geng* et al.:

LMs are bad (too rational) at predicting human behaviour, but aligned with humans in assuming rationality in others’ choices.

arxiv.org/abs/2406.17055

14.10.2025 00:43 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Title: Neologism Learning for Controllability and Self-Verbalization
Authors: John Hewitt, Oyvind Tafjord, Robert Geirhos, Been Kim
Affiliation: Google DeepMind
Email: {johnhew, oyvindt, geirhos, beenkim} at google.com

Neologism Learning by John Hewitt et al.:

Training new token embeddings on examples with a specific property (e.g., short answers) leads to finding β€œmachine-only synonyms” for these tokens that elicit the same behaviour (short answers=’lack’).

arxiv.org/abs/2510.08506

14.10.2025 00:43 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Title: Hidden in plain sight: VLMs overlook their visual representations
Authors: Stephanie Fu, Tyler Bonnen, Devin Guillory, Trevor Darrell
Affiliation: UC Berkeley

Hidden in Plain Sight by Stephanie Fu et al. [Outstanding paper award]:

VLMs are worse than vision-only models on vision-only tasks – LMs are biased and underutilize their (easily accessible) visual representations!

hidden-plain-sight.github.io

14.10.2025 00:43 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Title: UnveiLing: What Makes Linguistics Olympiad Puzzles Tricky for LLMs? 
Authors: Mukund Choudhary*, KV Aditya Srivatsa*, Gaurja Aeron, Antara Raaghavi Bhattacharya, Dang Khoa Dang Dinh, Ikhlasul Akmal Hanif, Daria Kotova, Ekaterina Kochmar, Monojit Choudhury
Affiliations: Mohamed Bin Zayed University of Artificial Intelligence, IIT Gandhinagar, Harvard University, VinUniversity, Universitas Indonesia

UnveiLing by Mukund Choudhary* & KV Aditya Srivatsa* et al.:

Linguistics Olympiad problems about certain linguistic features (e.g., morphological ones) are tougher for LMs, but morphological pre-tokenization helps!

arxiv.org/abs/2508.11260

14.10.2025 00:43 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Title: A Taxonomy of Transcendence
Authors: Natalie Abreu, Edwin Zhang, Eran Malach, & Naomi Saphra 
Affiliations: Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University 
Email: {natalieabreu, ezhang} at g.harvard.edu and {emalach, nsaphra} at fas.harvard.edu

A Taxonomy of Transcendence by Natalie Abreu et al.:

LMs outperform the experts they are trained on through skill denoising (averaging out experts’ errors), skill selection (relying on the most appropriate expert), and skill generalization (combining experts’ knowledge).

arxiv.org/abs/2508.17669

14.10.2025 00:43 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Title: The Zero Body Problem: Probing LLM Use of Sensory Language
Authors: Rebecca M. M. Hicke, Sil Hamilton & David Mimno
Affiliations: Department of Computer Science & Department of Information Science, Cornell University, Ithaca, New York, USA
Email: {rmh327, srh255, mimno} at cornell.edu

The Zero Body Problem by @rmmhicke.bsky.social et al.:

LMs use sensory language (olfactory, auditory, …) differently from people + evidence that RLHF may discourage sensory language.

arxiv.org/abs/2504.06393

14.10.2025 00:43 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Title: Readability β‰  Learnability: Rethinking the Role of Simplicity in Training Small Language Models 
Authors: Ivan Lee & Taylor Berg-Kirkpatrick
Affiliation: UC San Diego 
Email: {iylee, tberg} at ucsd.edu

Readability β‰  Learnability by Ivan Lee & Taylor Berg-Kirkpatrick:

Developmentally plausible LM training works not because of simpler language but because of lower n-gram diversity! Warning against anthropomorphizing / equating learning in LMs and in children.

openreview.net/pdf?id=AFMGb...

14.10.2025 00:43 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

⭐ A thread for some cool recent work I learned about at #COLM2025, either from the paper presentations or from the keynotes!

14.10.2025 00:43 β€” πŸ‘ 13    πŸ” 6    πŸ’¬ 1    πŸ“Œ 1

πŸ‘€

13.10.2025 17:43 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Grad App Aid β€” Queer in AI

We are launching our Graduate School Application Financial Aid Program (www.queerinai.com/grad-app-aid) for 2025-2026. We’ll give up to $750 per person to LGBTQIA+ STEM scholars applying to graduate programs. Apply at openreview.net/group?id=Que.... 1/5

09.10.2025 00:37 β€” πŸ‘ 7    πŸ” 9    πŸ’¬ 1    πŸ“Œ 0