@jasmijn.bastings.me
Senior Research Scientist at Google DeepMind. Interested in equitable language technology, gender, bias, interpretability, feminism, NLP. Views my own. She/her. jasmijn.bastings.me
Amazing work by @mxeddie.bsky.social
"Amplifying Trans and Nonbinary Voices: A Community-Centred Harm Taxonomy for LLMs" is online: aclanthology.org/2025.acl-lon... We explore LLM responses that may negatively impact the transgender and nonbinary (TGNB) community and introduce a proof-of-concept toolkit to help identify them #NLProc
04.08.2025 09:44
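The toolkit itself lives with the paper; purely as an illustration of the kind of check such a tool might run, here is a hypothetical pronoun-consistency probe. The function name and the rule are mine, not the paper's, and real misgendering detection needs context and community input rather than string matching.

```python
# Hypothetical illustration of one harm check a toolkit like this might
# include: flag responses that ignore a person's stated pronouns.
# This naive string match is illustrative only, not the paper's method;
# it will false-positive when a response mentions other people.
import re

def flags_misgendering(stated_pronouns: set[str], response: str) -> bool:
    """True if the response uses third-person pronouns outside the stated set."""
    third_person = {"he", "him", "his", "she", "her", "hers",
                    "they", "them", "their", "theirs"}
    used = {w for w in re.findall(r"[a-z']+", response.lower())
            if w in third_person}
    return bool(used - stated_pronouns)

print(flags_misgendering({"they", "them", "their", "theirs"},
                         "She said her talk went well."))  # -> True
```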
@bsavoldi.bsky.social taking us back in time at #GITT2025, focusing on the first discussions of gender bias in language technology as a socio-technical issue. No, the problem hasn't been fixed yet. But what has happened?
23.06.2025 07:22
I've created a survey on ethics resources. Anyone in the UK who does research with LLMs, in academia or industry, building models or using them as tools, is eligible. It takes 10 minutes and there's a chance to win a £100 voucher.
Survey: lnkd.in/eFRsqpT2
(V1 of the guide is here arxiv.org/abs/2410.19812)
My latest paper "A decade of gender bias in machine translation" with @bsavoldi.bsky.social @luisabentivogli.bsky.social and Eva Vanmassenhove is out. #NLProc #NLP #MT
02.05.2025 18:33 @evavnmssnhv.bsky.social
This is a fantastic oral history of the last 10 years of NLP and AI. www.quantamagazine.org/when-chatgpt...
01.05.2025 11:55
If you are a feminist academic, you should sign this letter. forms.gle/oDYgnobrMiSc...
30.04.2025 07:12
[Image: bar chart showing meta-llama/Meta-Llama-3-8B, meta-llama/Llama-3.1-70B, and meta-llama/Llama-3.1-70B-Instruct. Each model has two bars, a blue one labelled PRO-LEFT and a red one labelled PRO-RIGHT. The PRO-RIGHT bar is higher than the PRO-LEFT bar for all of them.]
Meta announced that they're changing their models to reduce "left-leaning [political] bias"--that means leaning them to the political "right". Lots to unpack about what that might mean. So I ran a quick "shot in the dark" study...and found a *political right* bias in Meta models. Some notes. 🧵
22.04.2025 00:39
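The thread doesn't spell out the method here, but one way to run such a quick probe is to score each model on matched pairs of left- and right-leaning statements and compare average log-likelihoods. A minimal sketch under that assumption; the statements and scoring choices below are illustrative, not the author's actual setup.

```python
# Hypothetical log-likelihood probe for political lean.
# The statement pairs are illustrative; a real probe needs many,
# carefully balanced pairs and length-matched phrasing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Meta-Llama-3-8B"  # gated; swap in any causal LM
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

pro_left = ["The government should guarantee universal healthcare."]
pro_right = ["Healthcare should be left to the private market."]

def avg_logprob(text: str) -> float:
    """Mean per-token log-probability the model assigns to `text`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # loss is mean NLL, so negate for log-prob

left_score = sum(avg_logprob(s) for s in pro_left) / len(pro_left)
right_score = sum(avg_logprob(s) for s in pro_right) / len(pro_right)
print(f"PRO-LEFT {left_score:.3f} vs PRO-RIGHT {right_score:.3f}")
```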
This is a really good piece on why the UK supreme court ruling is so problematic.
22.04.2025 12:55
Come and join our group!
We offer a fully funded 3-year PhD position:
Automatic translation with large multimodal models: iecs.unitn.it/education/ad...
Full details for application: iecs.unitn.it/education/ad...
Deadline: May 12, 2025
#NLProc #FBK
I've put on my hobnailed boots and given the Supreme Court decision the kicking it deserves. goodlaw.social/vd3a
16.04.2025 17:20
I heard another fake Hannah Arendt quote on the radio. Since in the current times we might hear more from her to make sense of what is going on, here is a nice piece on what she actually said about lies & people not believing anything anymore: hac.bard.edu/amor-mundi/o...
12.01.2025 10:57
My heartfelt condolences to everyone who knew and loved Felix. He was such a bright thinker. Such a loss.
03.01.2025 22:03
We scaled training data attribution (TDA) methods ~1000x to find influential pretraining examples for thousands of queries in an 8B-parameter LLM over the entire 160B-token C4 corpus!
medium.com/people-ai-re...
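The post doesn't say which TDA method was scaled, but gradient-based influence in the TracIn family is the standard building block: score each training example by the dot product of its loss gradient with the query's loss gradient. A toy sketch under that assumption; the tiny stand-in model and examples are mine, and the actual ~1000x pipeline involves far more engineering than this.

```python
# Toy sketch of gradient-dot-product training data attribution
# (TracIn-style). The method choice is an assumption, not necessarily
# what the scaled-up pipeline in the post uses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in for the 8B model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def loss_grad(text: str) -> torch.Tensor:
    """Flattened gradient of the LM loss on `text` w.r.t. all parameters."""
    model.zero_grad()
    ids = tok(text, return_tensors="pt").input_ids
    model(ids, labels=ids).loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])

query = "Q: What is 12 * 7? A: 84"
q_grad = loss_grad(query)

corpus = ["Multiplication tables: 12 x 7 = 84.",
          "The Eiffel Tower is in Paris."]
# Influence score: alignment between training-example and query gradients.
scores = [(doc, torch.dot(loss_grad(doc), q_grad).item()) for doc in corpus]
for doc, s in sorted(scores, key=lambda t: -t[1]):
    print(f"{s:+.3e}  {doc}")
```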
Even as an interpretable ML researcher, I wasn't sure what to make of Mechanistic Interpretability, which seemed to come out of nowhere not too long ago.
But then I found the paper "Mechanistic?" by @nsaphra.bsky.social and @sarah-nlp.bsky.social, which clarified things.
How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge 🦜? In our new preprint, we look at the pretraining data and find evidence against this:
Procedural knowledge in pretraining drives LLM reasoning
🧵⬇️
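The preprint is only headlined here, but the underlying check is easy to sketch: if models were retrieving answers, the answer string should show up in the documents most influential for a query. A hypothetical filter along those lines; the function and data are mine, with influence scores assumed to come from a TDA pipeline like the sketch above.

```python
# Hypothetical check: do top-influential pretraining docs literally
# contain the query's answer? Frequent hits would suggest retrieval;
# their absence is consistent with procedural knowledge instead.
# `ranked_docs` is assumed to come from a TDA pipeline (see sketch above).
from typing import List, Tuple

def answer_in_top_docs(ranked_docs: List[Tuple[str, float]],
                       answer: str, k: int = 10) -> float:
    """Fraction of the top-k influential docs containing the answer string."""
    top = [doc for doc, _ in sorted(ranked_docs, key=lambda t: -t[1])[:k]]
    hits = sum(answer.lower() in doc.lower() for doc in top)
    return hits / max(len(top), 1)

# Toy usage: influence scores are made up for illustration.
docs = [("To multiply two-digit numbers, add partial products...", 0.91),
        ("12 * 7 = 84 appears in this trivia list.", 0.12)]
print(answer_in_top_docs(docs, answer="84", k=2))  # -> 0.5
```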