@kaiserwholearns.bsky.social
Ph.D. student at @jhuclsp, human LM that hallucinates. Formerly @MetaAI, @uwnlp, and @AWS they/them 🏳️‍🌈 #NLProc #NLP Crossposting on X.
Congrats and welcome to the DMV area!!!
17.06.2025 02:45 — 👍 0 🔁 0 💬 1 📌 0

🛠️ Interested in how your LLM behaves in this setting? We released the code to generate the diagnostic data for your own LLM.
@mdredze @loadingfan
8/8
📌 Takeaways for practitioners
1. Check for knowledge conflict before prompting (see the sketch below).
2. Add further explanation to guide the model toward following the context.
3. Monitor hallucinations even when context is supplied.
7/8
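A minimal sketch of takeaway 1, assuming a hypothetical `ask` helper standing in for whatever LLM client you use: probe the model closed-book, then compare its parametric answer against the claim in your context before prompting.

def ask(model, prompt: str) -> str:
    # Hypothetical single-turn completion call; swap in your own LLM client.
    raise NotImplementedError

def has_knowledge_conflict(model, question: str, context_answer: str) -> bool:
    # Closed-book probe: no context given, so the answer reflects parametric memory.
    parametric_answer = ask(model, f"Answer briefly: {question}")
    # Naive string check; answer normalization or an NLI model is more robust.
    return context_answer.lower() not in parametric_answer.lower()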
💡 Implications:
⚡ When using an LLM as a judge, its parametric knowledge could lead to incorrect judgments :(
⚡ Retrieval systems need mechanisms to detect and resolve contradictions, not just shove text into the prompt. 6/8
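One hedged way to implement that detection step: score (retrieved passage, claim) pairs with an off-the-shelf NLI model before trusting the context. The sketch below assumes the public roberta-large-mnli checkpoint; any NLI model would do.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def contradicts(premise: str, hypothesis: str) -> bool:
    # Encode the sentence pair; the NLI head classifies the relation between them.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment.
    return logits.argmax(dim=-1).item() == 0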
🧠 Key finding #3:
"Just give them more explanation?" Providing rationales helps: it pushes models to lean more on the context, but it still can't fully silence the stubborn parametric knowledge. 5/8
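For illustration, a rationale-style prompt of the kind this finding points at; the wording below is an assumption, not the exact template from the paper.

# Ask the model to quote the document first, nudging it toward the context.
RATIONALE_PROMPT = (
    "Answer using ONLY the document below, even if it contradicts what you "
    "believe. First quote the supporting sentence, then give the answer.\n\n"
    "Document: {document}\n\nQuestion: {question}\nSupporting sentence:"
)

def build_prompt(document: str, question: str) -> str:
    return RATIONALE_PROMPT.format(document=document, question=question)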
⚖️ Key finding #2:
Unsurprisingly, LLMs prefer their own memories. Even when we explicitly instruct them to rely on the provided document, traces of the "wrong" internal belief keep leaking into answers. 4/8
⚠️ Key finding #1:
If the task doesn't require external knowledge (e.g., pure copying), conflict barely matters. However, as soon as knowledge is needed, accuracy tanks when context and memory disagree.
3/8
🛠️ We create diagnostic data that…
- Agrees with or contradicts the model's knowledge
- Includes contradictions at different levels of plausibility
- Covers tasks requiring different levels of knowledge
(A toy sketch of the construction follows below.)
2/8
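To make the construction concrete, a toy sketch of the idea (not the released code): start from a fact the model already knows, then pair the question with context that agrees, contradicts plausibly, or contradicts implausibly.

from dataclasses import dataclass

@dataclass
class DiagnosticExample:
    question: str
    context: str     # agrees with or contradicts the model's knowledge
    answer: str      # the answer the context supports
    plausible: bool  # whether the contradiction is believable

question = "What is the capital of France?"
examples = [
    DiagnosticExample(question, "The capital of France is Paris.", "Paris", True),
    # Plausible contradiction: a real French city, just the wrong one.
    DiagnosticExample(question, "The capital of France is Lyon.", "Lyon", True),
    # Implausible contradiction: easy for the model to dismiss.
    DiagnosticExample(question, "The capital of France is Tokyo.", "Tokyo", False),
]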
📄🔗 arxiv.org/abs/2506.06485
16.06.2025 12:02 — 👍 1 🔁 0 💬 1 📌 0

What happens when an LLM is asked to use information that contradicts its knowledge? We explore knowledge conflict in a new preprint 📄
TLDR: Performance drops, and this could affect the overall performance of LLMs in model-based evaluation. 📉🧵⬇️ 1/8
#NLProc #LLM #AIResearch
Paper Link: aclanthology.org/2025.repl4nl...
06.05.2025 23:27 — 👍 0 🔁 0 💬 0 📌 0

It was quite encouraging to find that many friends share my concern that "minor details" obstruct us from drawing reliable conclusions. I really hope we can all provide well-documented experimental details and value the so-called "engineering contributions" more.
06.05.2025 23:25 — 👍 0 🔁 0 💬 0 📌 0

Had so many fruitful discussions and made many friends this #NAACL2025 🌵🏜️ Thanks to everyone who came to my poster or listened to me talking about my audacious thoughts! 😄
(I should have printed more stickers, as they were more popular than I anticipated 😅)
Dialects lie on continua of (structured) linguistic variation, right? And we can't collect data for every point on the continuum... 🤔
📢 Check out DialUp, a technique to make your MT model robust to the dialect continua of its training languages, including unseen dialects.
arxiv.org/abs/2501.16581
The image is from a "Transparency Center" document and lists guidelines regarding acceptable and prohibited content for insults. It mentions: 1. Insults about: Character, such as cowardice, dishonesty, criminality, and sexual promiscuity or immorality. Mental characteristics, including but not limited to accusations of stupidity, intellectual capacity, and mental illness, as well as unsupported comparisons among protected characteristic (PC) groups based on inherent intellectual traits. 2. Highlighted section: The document allows allegations of mental illness or abnormality when tied to gender or sexual orientation, referencing political and religious discourse about transgenderism and homosexuality. It also acknowledges the non-serious use of terms like "weird."
Meta literally created an LGBTQ exception for calling someone mentally ill as an insult. You can't do it for any other group except LGBTQ people.
08.01.2025 01:51 — 👍 15428 🔁 6357 💬 651 📌 1608

with reasonable freedom, depending on the scale/focus of the business.
Case in point, we are looking to expand the research/foundation models team at Orby AI and are looking for highly motivated researchers and ML/Research engineers. Please reach out if you're interested in learning more!
/fin
Excited to start my #ARR #NLP reviews!
I'll try my best and see if I can get 100% of my reviews to be 'great' this round.
If you didn't see it already, ARR publishes how many of your reviews are considered to be 'great': stats.aclrollingreview.org
Join me for the challenge :)
🚨 I am on the faculty job market this year 🚨
I will be presenting at #NeurIPS2024 and am happy to chat in-person or digitally!
I work on developing AI agents that can collaborate and communicate robustly with us and each other.
More at: esteng.github.io and in thread below
🧵👇
Is MMLU Western-centric? 🤔
As part of a massive cross-institutional collaboration:
🔍 Find MMLU is heavily overfit to Western culture
📝 Professional annotation of cultural-sensitivity data
🌍 Release improved Global-MMLU in 42 languages
📄 Paper: arxiv.org/pdf/2412.03304
📊 Data: hf.co/datasets/Coh...
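A quick-start sketch for the dataset; since the link above is truncated, the repo ID (CohereForAI/Global-MMLU), the "en" config, and the "test" split are assumptions to verify against the dataset card.

from datasets import load_dataset

# Assumed ID/config/split; check the dataset card linked above.
global_mmlu = load_dataset("CohereForAI/Global-MMLU", "en", split="test")
print(global_mmlu[0])  # one multiple-choice question with its annotations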
🚨 I too am on the job market ‼️🤯
I'm searching for faculty positions/postdocs in multilingual/multicultural NLP, vision+language models, and eval for genAI!
I'll be at #NeurIPS2024 presenting our work on meta-evaluation for text-to-image faithfulness! Let's chat there!
Papers in 🧵, see more: saxon.me
A scatter plot comparing language models by performance (y-axis, measured in average performance on 10 benchmarks) versus training computational cost (x-axis, in approximate FLOPs). The plot shows OLMo 2 models (marked with stars) achieving Pareto-optimal efficiency among open models, with OLMo-2-13B and OLMo-2-7B sitting at the performance frontier relative to other open models like DCLM, Llama 3.1, StableLM 2, and Qwen 2.5. The x-axis ranges from 4x10^22 to 2x10^24 FLOPs, while the y-axis ranges from 35 to 70 benchmark points.
Excited to share OLMo 2!
📦 7B and 13B weights, trained up to 4-5T tokens, fully open data, code, etc
🚀 better architecture and recipe for training stability
💡 staged training, with new data mix Dolmino added during annealing
🦾 state-of-the-art OLMo 2 Instruct models
#nlp #mlsky
links below 👇
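A minimal usage sketch with Hugging Face transformers; the checkpoint ID below (allenai/OLMo-2-1124-7B) is an assumption, so check the official links for the released names.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Language modeling is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))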
Agree. OTOH it might be helpful as a way to receive reports and doubts. One user reported that the authors of a paper I was reviewing violated the anonymity policy by posting their submission in public.
20.11.2024 22:48 — 👍 0 🔁 0 💬 1 📌 0

🙏
20.11.2024 06:57 — 👍 1 🔁 0 💬 0 📌 0

Putting together a JHU Center for Language and Speech Processing starter pack!
Please reply or DM me if you're doing research at CLSP and would like to be added - I'm still trying to find out which of us are on here so far.
go.bsky.app/JtWKca2
A starter pack for #NLP #NLProc researchers! 👇
go.bsky.app/SngwGeS
Finally, use your app password (https://buff.ly/3WkpGuu) when using these tools, for better security. I have no strong opinions about social media; maintaining it sometimes annoys me, but I still want to stay connected with friends. Hope this can help save time.
18.11.2024 17:48 — 👍 0 🔁 0 💬 0 📌 0

Import previous tweets from X to 🦋: https://buff.ly/40SlAMZ
Cross-posting to X, 🦋, and other social media:
1. Buffer (https://buffer.com/)
2. TamperMonkey Script (https://buff.ly/48ZBrLU)
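If you script any of this yourself, here is a sketch using the third-party atproto Python SDK, logging in with an app password as advised above; the handle and password are placeholders.

from atproto import Client

client = Client()
# Use an app password (Settings -> App Passwords), never your main password.
client.login("you.bsky.social", "xxxx-xxxx-xxxx-xxxx")
client.send_post(text="Hello from the API!")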
Transfer your follow list from X to 🦋:
1. Sky Follower Bridge (Browser Plugin): https://buff.ly/40g5FaU
(Code): https://buff.ly/4eE1fOP
2. Starter Packs: QueerinAI @jasmijnbastings.bsky.social: go.bsky.app/RkBEqxz
NLP Researchers @mariaa.bsky.social: https://buff.ly/4fQvdQD
Dealing with a new social media account can be vexatious. Here I've compiled a thread of resources that might help with the transition to Bluesky 🦋. 🧵⬇️ Thread below
18.11.2024 17:48 — 👍 6 🔁 1 💬 1 📌 0