Congrats to Pingjun, @beiduo.bsky.social, Siyao, Marie, and @barbaraplank.bsky.social for receiving the SAC Highlights award!
13.11.2025 18:02

@beiduo.bsky.social
ELLIS PhD student in NLP @MaiNLPlab, @CisLmu, @LMU_Muenchen https://mckysse.github.io/
What an incredible EMNLP experience – truly the most fulfilling conference I've ever attended!
✓ Oral presentation
✓ SAC Highlights Award
✓ Panel discussion
Grateful to my amazing collaborators and to all the friends I had the chance to meet!
#EMNLP2025 #NLP
Detailed programme is now up on the website. Looking forward to 14 research papers, the results of the 3rd Shared Task on Learning with Disagreements (LeWiDi), a talk from @camachocollados.bsky.social, and a panel discussion feat. Jose, Eve Fleisig, and @beiduo.bsky.social. See you in Room A305 or online!
06.11.2025 19:00

Our paper: arxiv.org/pdf/2505.23368
Our code: github.com/mainlp/CoT2EL
Thank you to my wonderful co-authors @janetlauyeung.bsky.social, Anna Korhonen, and @barbaraplank.bsky.social. Also to @mainlp.bsky.social, @cislmu.bsky.social, and @munichcenterml.bsky.social.
See you in Suzhou!
#NLP #EMNLP2025
Matching exact probabilities for human label variation (HLV) is unstable, so we propose a more robust rank-based evaluation that checks preference order instead. Our combined method outperforms baselines on 3 datasets that exhibit human label variation, showing it aligns better with diverse human perspectives.
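To illustrate the rank-based idea (a minimal sketch, not the paper's exact metric): instead of penalizing a model for not matching exact probabilities, check whether it ranks the labels in the same preference order as the human distribution, e.g. via Kendall-style pairwise agreement.

```python
from itertools import combinations

def pairwise_agreement(human, model):
    """Fraction of label pairs ranked in the same relative order by the
    human distribution and the model distribution (Kendall-style).
    Insensitive to exact probability values, unlike KL divergence."""
    pairs = list(combinations(human, 2))
    agree = sum(
        1 for a, b in pairs
        if (human[a] - human[b]) * (model[a] - model[b]) > 0
    )
    return agree / len(pairs)

# Human annotators split 0.6/0.3/0.1 over NLI labels; the model's exact
# probabilities differ, but it gets the preference order right.
human = {"entailment": 0.6, "neutral": 0.3, "contradiction": 0.1}
model = {"entailment": 0.5, "neutral": 0.4, "contradiction": 0.1}
print(pairwise_agreement(human, model))  # 1.0
```

Under an exact-probability metric the model above would be penalized; under the rank-based view it scores perfectly, which is the robustness the post refers to.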
24.10.2025 13:37

Instead of unnatural post-hoc explanations, we look forward: a model's CoT already contains rationales for all options. We introduce CoT2EL, a pipeline that uses linguistic discourse segmenters to extract these high-quality, faithful units to explore human label variation.
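A toy illustration of the segmentation step (the paper uses trained linguistic discourse segmenters; this regex stand-in and the `segment_cot` helper are hypothetical): split a chain-of-thought into clause-like units, each of which may serve as a rationale for one label option.

```python
import re

def segment_cot(cot):
    """Naively split a chain-of-thought into clause-like units on sentence
    boundaries and a few discourse connectives (a crude stand-in for a
    trained discourse segmenter)."""
    parts = re.split(r"(?<=[.!?])\s+|,\s*(?=but |however |although )", cot)
    return [p.strip() for p in parts if p.strip()]

cot = ("The premise says the man is outdoors. The hypothesis says he is in a park, "
       "but a park is only one kind of outdoor place. So neutral is plausible, "
       "although some readers may infer entailment.")
for unit in segment_cot(cot):
    print(unit)
```

Note how the last two units support different labels (neutral vs. entailment): a single CoT already contains rationales for the diverging judgments that annotators give.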
24.10.2025 13:37

Our CoT2EL paper will be presented as an oral at #EMNLP2025 in Suzhou!
Humans often disagree on labels. Can a model's own reasoning (CoT) help us understand why? We developed a new method to extract these insights. Come join us!
Friday, Nov 7, 14:00-15:30
Room: A110
Broader impact:
Our approach makes capturing disagreement scalable, helping build datasets that reflect real-world ambiguity without requiring large amounts of human-written explanations.
Open-sourcing:
github.com/mainlp/MJD-E...
What's this about?
Human annotations often disagree. Instead of collapsing disagreement into a single label, we model Human Judgment Distributions: how likely humans are to choose each label in NLI tasks.
Capturing this is crucial for interpretability and uncertainty in NLP.
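One simple way to make this concrete (a generic sketch, not the paper's evaluation): treat the annotators' votes as a soft label over NLI classes, and score a model by how close its predicted distribution is, e.g. in total variation distance.

```python
def human_judgment_distribution(votes):
    """Turn raw annotator votes into a probability distribution over labels."""
    return {label: votes.count(label) / len(votes) for label in set(votes)}

def total_variation(p, q):
    """Total variation distance between two label distributions (0 = identical)."""
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0.0) - q.get(l, 0.0)) for l in labels)

# 10 annotators disagree on one NLI item: 6 entailment, 3 neutral, 1 contradiction.
votes = ["entailment"] * 6 + ["neutral"] * 3 + ["contradiction"]
hjd = human_judgment_distribution(votes)      # {e: 0.6, n: 0.3, c: 0.1}
model = {"entailment": 0.7, "neutral": 0.2, "contradiction": 0.1}
print(round(total_variation(hjd, model), 3))  # 0.1
```

Collapsing the votes to the majority label ("entailment") would discard the 40% of annotators who judged otherwise; the distributional view keeps that disagreement as signal.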
Paper link: arxiv.org/abs/2412.13942
Huge thanks to our collaborators Logan Siyao Peng, @barbaraplank.bsky.social, and Anna Korhonen from @mainlp.bsky.social, @lmumuenchen.bsky.social, and @cambridgeltl.bsky.social.
Can LLMs generate explanations that are as useful as human ones for modeling label distributions in NLI? "A Rose by Any Other Name" shows that they can.
We explore scalable, explanation-based annotation via LLMs.
Come find us in Vienna! (July 28, 18:00-19:30, Hall 4/5) #ACL2025NLP #acl2025
The hand-drawn sign from three years ago.
MaiNLP is turning 3 today! We've grown a lot since @barbaraplank.bsky.social started this group with nothing but three aspiring researchers and a hand-drawn sign on the door. Huge thanks to all the amazing people who have joined or visited us since. Here's to many more years of exciting research!
01.04.2025 10:40