
Dirk Hovy

@dirkhovy.bsky.social

Professor @milanlp.bsky.social for #NLProc, compsocsci, #ML. Also at http://dirkhovy.com/

536 Followers  |  317 Following  |  35 Posts  |  Joined: 29.11.2024

Latest posts by dirkhovy.bsky.social on Bluesky

Labeling Data with Language Models: Trick or Treat? Large language models are now labeling data for us.

See also @manoelhortaribeiro.bsky.social's post on this same topic: doomscrollingbabel.manoel.xyz/p/labeling-d...

19.11.2025 15:44 | 👍 2  🔁 1  💬 0  📌 0
Language Model Hacking - Granular Material

Trying an experiment in good old-fashioned blogging about papers: dallascard.github.io/granular-mat...

16.11.2025 19:52 | 👍 27  🔁 9  💬 3  📌 0

#TBT #NLProc Attanasio et al. ask 'Is It Worth the (Environmental) Cost?', analyzing continuous training for language models and weighing its benefits against its environmental impact for responsible use. #Sustainability

20.11.2025 16:02 | 👍 3  🔁 3  💬 0  📌 0
The State of Profanity Obfuscation in Natural Language Processing Scientific Publications Debora Nozza, Dirk Hovy. Findings of the Association for Computational Linguistics: ACL 2023. 2023.

#MemoryMonday #NLProc 'The State of Profanity Obfuscation in NLP Scientific Publications' by @deboranozza.bsky.social & @dirkhovy.bsky.social (2023) probes bias in non-English papers and proposes 'PrOf' to aid authors & improve access.

17.11.2025 16:04 | 👍 4  🔁 2  💬 0  📌 0
Wisdom of Instruction-Tuned Language Model Crowds. Exploring Model Label Variation Flor Miriam Plaza-del-Arco, Debora Nozza, Dirk Hovy. Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024. 2024.

#TBT #NLProc Explore 'Wisdom of Instruction-Tuned LLM Crowds' by Plaza-del-Arco et al.: aggregated LLM labels outperform single models across tasks & languages, few-shot prompting can't top zero-shot, and supervised models still rule.

30.10.2025 16:05 | 👍 2  🔁 2  💬 0  📌 0
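
A minimal sketch of the label aggregation a "model crowd" implies: majority voting over the labels that several instruction-tuned models assign to the same items. The model names and labels below are hypothetical toy values, not the paper's models or data.

```python
from collections import Counter

def aggregate_labels(model_labels: dict[str, list[str]]) -> list[str]:
    """Majority vote over per-item labels from several models."""
    # Group the i-th label of every model together, then keep the most common vote.
    per_item = zip(*model_labels.values())
    return [Counter(votes).most_common(1)[0][0] for votes in per_item]

# Toy example: three hypothetical models labelling four items.
crowd = {
    "model_a": ["pos", "neg", "pos", "neu"],
    "model_b": ["pos", "pos", "pos", "neu"],
    "model_c": ["neg", "neg", "pos", "pos"],
}
print(aggregate_labels(crowd))  # ['pos', 'neg', 'pos', 'neu']
```
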
Universal Joy A Data Set and Results for Classifying Emotions Across Languages Sotiris Lamprinidis, Federico Bianchi, Daniel Hardt, Dirk Hovy. Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. 2021.

#MemoryMonday #NLProc 'Universal Joy: A Data Set and Results for Classifying Emotions Across Languages' by Lamprinidis et al. (2021) introduces a multilingual emotion dataset and examines how well emotion classification transfers across languages.

03.11.2025 16:02 | 👍 6  🔁 2  💬 0  📌 0
Explaining Speech Classification Models via Word-Level Audio Segments and Paralinguistic Features Eliana Pastor, Alkis Koudounas, Giuseppe Attanasio, Dirk Hovy, Elena Baralis. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long...

#TBT #NLProc "Explaining Speech Classification Models" by Pastor et al. (2024) makes speech classification more transparent! 🔍 Their research reveals which words matter most and how tone and background noise impact decisions.

06.11.2025 16:04 | 👍 4  🔁 2  💬 0  📌 0
Measuring Harmful Representations in Scandinavian Language Models Samia Touileb, Debora Nozza. Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS). 2022.

#MemoryMonday #NLProc 'Measuring Harmful Representations in Scandinavian Language Models' uncovers gender bias, challenging Scandinavia's equity image.

10.11.2025 16:03 | 👍 4  🔁 2  💬 0  📌 0
Bridging Fairness and Environmental Sustainability in Natural Language Processing Marius Hessenthaler, Emma Strubell, Dirk Hovy, Anne Lauscher. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022.

#TBT #NLProc Hessenthaler et al.'s 2022 work examines the link between fairness & energy reduction in English NLP models, challenging assumptions about bias reduction. #AI #sustainability

13.11.2025 16:05 | 👍 5  🔁 2  💬 0  📌 0
An image of the best paper slide at the EMNLP2025 conference, with the audience in the background

🎉 Congratulations to all #EMNLP2025 award winners 🎉

Starting with the ✨ Best Paper award ✨:

"Infini-gram mini: Exact n-gram Search at the Internet Scale with FM-Index"
by Hao Xu, Jiacheng Liu, Yejin Choi, Noah A. Smith, and Hannaneh Hajishirzi
aclanthology.org/2025.emnlp-m...

1/n

07.11.2025 22:30 | 👍 36  🔁 5  💬 1  📌 0
Consistency is Key: Disentangling Label Variation in Natural Language Processing with Intra-Annotator Agreement Gavin Abercrombie, Tanvi Dinkar, Amanda Cercas Curry, Verena Rieser, Dirk Hovy. Proceedings of The 4th Workshop on Perspectivist Approaches to NLP. 2025.

Maybe it is time to report *intra*-annotator agreement?

aclanthology.org/2025.nlpersp...

11.11.2025 16:45 | 👍 4  🔁 2  💬 1  📌 0

Last week at @nlperspectives.bsky.social I presented work showing that annotators only provide the same label on ~75% of items across four NLP labelling tasks following a two-week gap.

11.11.2025 16:44 | 👍 2  🔁 1  💬 1  📌 0
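
A minimal sketch of the figure being reported: simple percent agreement between two labelling rounds by the same annotator (toy labels; not the paper's code or its exact metric).

```python
def intra_annotator_agreement(round1: list[str], round2: list[str]) -> float:
    """Share of items that get the same label in both rounds."""
    assert len(round1) == len(round2), "both rounds must cover the same items"
    matches = sum(a == b for a, b in zip(round1, round2))
    return matches / len(round1)

# Toy example: one annotator re-labels 8 items after a two-week gap.
first_pass  = ["hate", "ok", "ok", "hate", "ok", "hate", "ok", "ok"]
second_pass = ["hate", "ok", "hate", "hate", "ok", "ok", "ok", "ok"]
print(intra_annotator_agreement(first_pass, second_pass))  # 0.75
```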

You missed one: G. Abercrombie, T. Dinkar, A. Cercas Curry, V. Rieser & @dirkhovy.bsky.social Consistency is Key: Disentangling label variation in NLP with Intra-Annotator Agreement. @nlperspectives.bsky.social

03.11.2025 02:34 | 👍 1  🔁 1  💬 0  📌 0

Excited to head to Suzhou for the 30th edition of #EMNLP2025! 🎉 Had the great honor to serve as general chair this year. Looking forward to catching up with everyone and seeing some amazing #NLP research! 🤓📚

02.11.2025 05:54 | 👍 28  🔁 1  💬 0  📌 0

🗓️ Nov 5 – Main Conference Posters
Personalization up to a Point
🧠 In the context of content moderation, we show that fully personalized models can perpetuate hate speech, and propose a policy-based method to impose legal boundaries.
📍 Hall C | 11:00–12:30

31.10.2025 14:05 | 👍 2  🔁 2  💬 0  📌 0

🗓️ Nov 5 – Main Conference Posters
📘 Biased Tales
A dataset of 5k short LLM bedtime stories generated across sociocultural axes with an evaluation taxonomy for character-centric attributes and context-centric attributes.
📍 Hall C | 11:00–12:30

31.10.2025 14:05 | 👍 2  🔁 2  💬 0  📌 0

🗓️ Nov 5 – Demo
Co-DETECT: Collaborative Discovery of Edge Cases in Text Classification
🧩 Co-DETECT – an iterative, human-LLM collaboration framework for surfacing edge cases and refining annotation codebooks in text classification.
📍 Demo Session 2 – Hall C3 | 14:30–16:00

31.10.2025 14:06 | 👍 2  🔁 2  💬 0  📌 0

🗓️ Nov 6 – Findings Posters
The “r” in “woman” stands for rights.
💬 We propose a taxonomy of social dynamics in implicit misogyny (EN, IT), auditing 9 LLMs – and they consistently fail. The more social knowledge a message requires, the worse they perform.
📍 Hall C | 12:30–13:30

31.10.2025 14:06 | 👍 3  🔁 2  💬 0  📌 0

🗓️ Nov 7 – Main Conference Posters
Principled Personas: Defining and Measuring the Intended Effects of Persona Prompting on Task Performance
🧐 Discussing different applications for LLM persona prompting, and how to measure their success.
📍 Hall C | 10:30–12:00

31.10.2025 14:06 | 👍 2  🔁 2  💬 0  📌 0

🗓️ Nov 7 – Main Conference Posters
TrojanStego: Your Language Model Can Secretly Be a Steganographic Privacy-Leaking Agent
🔒 LLMs can be fine-tuned to leak secrets via token-based steganography!
📍 Hall C | 10:30–12:00

31.10.2025 14:06 | 👍 2  🔁 2  💬 0  📌 0

🗓️ Nov 8 – WiNLP Workshops
No for Some, Yes for Others
🤖 We investigate how sociodemographic persona prompts affect false refusal behaviors in LLMs. Model and task type are the dominant factors driving these refusals.

31.10.2025 14:06 | 👍 2  🔁 2  💬 0  📌 0

🗓️ Nov 8 – NLPerspectives Workshop
Balancing Quality and Variation
🧮 For datasets to represent diverse opinions, they must preserve variation while filtering out spam. We evaluate annotator filtering heuristics and show how they often remove genuine variation.

31.10.2025 14:07 | 👍 3  🔁 2  💬 0  📌 0

🗓️ Nov 8 – BabyLM Workshop
Teacher Demonstrations in a BabyLM's Zone of Proximal Development for Contingent Multi-Turn Interaction
👶 ContingentChat, a Teacher–Student framework that benchmarks and improves multi-turn contingency in a BabyLM trained on 100M words.

31.10.2025 14:07 | 👍 3  🔁 2  💬 0  📌 0

🗓️ Nov 8 – STARSEM Workshop
Generalizability of Media Frames: Corpus Creation and Analysis Across Countries
📰 We investigate how well media frames generalize across different media landscapes. The 15 MFC frames remain broadly applicable, with minor revisions of the guidelines.

31.10.2025 14:07 | 👍 2  🔁 2  💬 0  📌 0

🗓️ Nov 6 – Oral Presentation (TACL)
IssueBench: Millions of Realistic Prompts for Measuring Issue Bias in LLM Writing Assistance
⚖️ A foundation for measuring LLM political bias in realistic user conversations.
📍 A303 | 10:30–12:00

31.10.2025 14:07 | 👍 2  🔁 2  💬 0  📌 0

Proud to present our #EMNLP2025 papers!
Catch our team across Main, Findings, Workshops & Demos 👇

31.10.2025 14:04 | 👍 11  🔁 4  💬 12  📌 2

There's plenty of evidence for political bias in LLMs, but very few evals reflect realistic LLM use cases, which is where bias actually matters.

IssueBench, our attempt to fix this, is accepted at TACL, and I will be at #EMNLP2025 next week to talk about it!

New results 🧵

29.10.2025 16:11 | 👍 32  🔁 11  💬 1  📌 0

Can LLMs learn to simulate individuals' judgments based on their demographics?

Not quite! In our new paper, we found that LLMs do not learn information about demographics, but instead learn individual annotators' patterns based on unique combinations of attributes!

🧵

14.04.2025 13:18 | 👍 13  🔁 2  💬 1  📌 2

LLMs are good at simulating human behaviours, but they are not going to be great unless we train them to be.

We hope SimBench can be the foundation for more specialised development of LLM simulators.

I really enjoyed working on this with @tiancheng.bsky.social et al. Many fun results 👇

28.10.2025 17:58 | 👍 8  🔁 3  💬 0  📌 0

Check out the paper and data for details!
Paper: arxiv.org/abs/2510.17516
Data: huggingface.co/datasets/pit...
Website: simbench.tiancheng.hu (9/9)

28.10.2025 16:53 | 👍 4  🔁 1  💬 1  📌 0
