Calling for participants in our workshop on retrieving videos. Large, pre-trained models do poorly on this and there’s a lot of potential to push the field without needing a ton of GPUs.
29.04.2025 18:50 — 👍 0 🔁 0 💬 0 📌 0
@kentonmurray.bsky.social
Research Faculty at Johns Hopkins University. Focused on Multilingual NLP, Machine Translation, Low-Resource, Multimodal (videos)
#NAACL2025 30 April - 2pm, Hall 3, Special Theme
"Faux Polyglots: A study on Information Disparity in Multilingual Large Language Models".
Come visit and learn how multilingual RALMs fail to handle multilingual information conflicts.
Teaser: youtu.be/aPS2Ntav1FE
#LLM #AI #NLProc
There are a lot of PhD advisors neglecting key parts of their jobs. Your students should NEVER have 100s of words of text on a slide, nor a grainy image of a LaTeX table (bar charts please!). A good talk is now the exception, not the norm. It’s all conferences, not just #AAAI2025
28.02.2025 17:07 — 👍 0 🔁 0 💬 0 📌 0
Decided last minute to come to #AAAI2025 for the day. Hit me up if you are in Philly.
28.02.2025 14:55 — 👍 2 🔁 0 💬 0 📌 0
Dialects lie on continua of (structured) linguistic variation, right? And we can’t collect data for every point on the continuum...🤔
📢 Check out DialUp, a technique to make your MT model robust to the dialect continua of its training languages, including unseen dialects.
arxiv.org/abs/2501.16581
We now have a Slack channel for the 2025 Workshop on Multimodal Augmented Generation via Multimodal Retrieval!
This channel will serve as the primary means of communication among authors, participants, and organizers: join.slack.com/t/magmarshar...
New Workshop on Multimodal Augmented Generation via MultimodAl Retrieval (MAGMaR), to be held at ACL (@aclmeeting.bsky.social) in Vienna this summer. We have a new shared task that stumps most LLMs, including ones pretrained on our test collection. nlp.jhu.edu/magmar/
14.01.2025 19:05 — 👍 10 🔁 6 💬 0 📌 3