Congratulations again to John C. Malone Professor of Computer Science @mdredze.bsky.social on this accomplishment!
17.10.2025 17:10
@mdredze.bsky.social: John C. Malone Professor at Johns Hopkins Computer Science, Center for Language and Speech Processing, Malone Center for Engineering in Healthcare. Part-time: Bloomberg LP. #nlproc
[Image: Headshots of Mark Dredze, Jason Eisner, Peter Kazanzides, and Tom Lippincott.]
Congratulations to CS faculty @mdredze.bsky.social, Jason Eisner, Peter Kazanzides, and @tom-lippincott.bsky.social
on their @jhu.edu Nexus Awards! Learn more about their funded projects here: www.cs.jhu.edu/news/compute...
🚨 You are only evaluating a slice of your test-time scaling model's performance! 🚨
We consider how models' confidence in their answers changes as test-time compute increases. Reasoning longer helps models answer more confidently!
Paper: arxiv.org/abs/2502.13962
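A minimal sketch of the kind of evaluation the thread describes (not the paper's actual protocol; generate_with_budget is a made-up stand-in you would replace with a real reasoning-LLM call):

import random

def generate_with_budget(question, max_reasoning_tokens):
    # Hypothetical stand-in for a reasoning-LLM call: swap in your own
    # client. Here we fake an answer and a confidence so the loop runs.
    confidence = min(1.0, 0.5 + 0.1 * (max_reasoning_tokens / 1024)
                     + random.uniform(-0.05, 0.05))
    return "42", confidence

def confidence_curve(question, budgets=(256, 512, 1024, 2048, 4096)):
    # Record self-reported confidence at each test-time compute budget.
    return [(b, generate_with_budget(question, b)[1]) for b in budgets]

print(confidence_curve("What is 17 * 24?"))

If longer reasoning helps, confidence should rise with the budget; evaluating at a single budget only shows one point on this curve.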
Please read and share this excellent FAQ on University indirect costs by my friend @broniatowski.bsky.social
He explains why these funds are essential and a critical investment for research in the United States.
www.linkedin.com/posts/david-...
I know I can improve my ARR reviews, but there really is no need for name calling.
05.02.2025 14:13
Helpful
Insightful
Probing
Valuable
Thoughtful
Illuminating
Constructive
In author feedback, these are synonyms for "we hate your review."
Do reviewers purposely write confusing reviews with typos to demonstrate that the review wasn't written by an LLM?
27.01.2025 23:42
Golden idea for an NLP paper: a group of llamas is called a "cria herd".
That would make a great name for an LLM method, model, or paper.
Just remember to acknowledge me in your paper.
You're welcome.
Idea for GenAI app: rewrite click bait headlines to normal headlines in the browser.
Input: you'll never guess this one company organizing the best deals of the year
Output: Amazon has a modest sale on phone chargers
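A minimal sketch of how the app could work, assuming an OpenAI-style chat API; the model name and prompt are illustrative guesses, not a tested setup:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def dehype(headline: str) -> str:
    # Ask the model to strip the hype and name the subject directly.
    prompt = (
        "Rewrite this clickbait headline as a plain, factual headline "
        "that names the subject directly:\n\n" + headline
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(dehype("you'll never guess this one company organizing the best deals of the year"))

A browser extension would run something like this over each headline it finds in the page.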
Good idea!
20.01.2025 19:10
The ARR submission checklist is already pretty extensive, but I suggest we add an additional question:
"I certify that I know the difference between \citet and \citep."
ARR: Reviews are due today.
Me:
I feel seen. This is why I always access my API keys from my laptop.
17.01.2025 19:50
Do you have any of those fortune cookies that mock academics?
Sure!
Starting a new year and reflecting on how lucky I am to work at @hopkinsengineer.bsky.social with amazing people @jhucompsci.bsky.social @jhuclsp.bsky.social.
I was promoted to full professor in 2023, and my students presented me with this amazing poster of current and former PhD students.
Listen to @karaswisher.bsky.social's new podcast where she interviews @ruchowdh.bsky.social, @ghadfield.bsky.social and me about AI Ethics and Safety. The podcast was recorded before a live audience at @jhu.edu Bloomberg Center.
podcasts.apple.com/us/podcast/a...
Examining the generated QA pairs, you can really see the difference. Our generations (bottom) look harder and more interesting.
Want to try our strategy for your own synthetic generation task? Check out our paper, presented at #ML4H2024.
arxiv.org/abs/2412.04573
Training a Clinical QA system on our data gives big improvements, whether we generate data from Llama or GPT-4o. The gains show up both in F1 and in any-overlap between the extracted and true answers.
22.12.2024 16:01
The generated pair has a lot of advantages: it doesn't use the same language as the report, it includes harder questions, and the answers are sometimes not in the report (unanswerable questions). The result? Harder, more diverse, and more realistic QA pairs.
22.12.2024 16:01
Second, we use a summarize-then-generate strategy. The LLM first summarizes a given clinical record in a structured format. The summary keeps the key points but loses the details, such as specific terminology and content. We then use the summary to generate a new QA pair.
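A minimal sketch of the two-step idea (prompts paraphrased from the thread, not copied from the paper; llm is any text-in/text-out callable you supply):

def summarize_then_generate(record: str, llm) -> str:
    # Step 1: a structured summary keeps key points but drops the
    # record's exact wording and terminology.
    summary = llm(
        "Summarize this clinical record as a structured list of key "
        "findings, without copying its exact phrasing:\n\n" + record
    )
    # Step 2: generate the QA pair from the summary, not the record,
    # so the question can't just reuse the report's language.
    return llm(
        "Write one question-answer pair about the patient described "
        "in this summary:\n\n" + summary
    )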
22.12.2024 16:01
We explore two strategies. First, we craft instructions to encourage QA diversity. We formulate these as constraints on the answers to the questions. It helps, but we need more.
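A sketch of what such answer constraints might look like (these examples are illustrative, not the paper's actual instruction set):

# Rotating through constraints forces the generated answers, and
# hence the questions, to differ across pairs.
ANSWER_CONSTRAINTS = [
    "the answer must be a medication name",
    "the answer must be a date or a duration",
    "the answer must be a lab value with units",
    "the answer must not appear in the record (an unanswerable question)",
]

def diversity_prompt(record: str, constraint: str) -> str:
    return (
        "Write one question-answer pair about this clinical record, "
        "where " + constraint + ":\n\n" + record
    )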
22.12.2024 16:01
We can ask an LLM to write QA pairs, but they turn out to be too easy and repetitive. They don't come close to what you can get with real data. We need more diverse data! Typical methods (e.g. annealing) don't work. What can we do?
22.12.2024 16:01
Paper at #ML4H2024!
Clinical QA can help doctors find critical information in patient records. But where do we get training data for these systems? Generating this data from an LLM is hard. 🧵
Takeaways: If you can fine-tune a model on a specific clinical domain, that's great. If you can't, you should probably use models that are better overall, even if they aren't trained on clinical data.
Many more details in the paper!
arxiv.org/abs/2412.05845
It turns out that when you have just a little supervised data, the models trained on more data and tasks, even when out of domain, do BETTER on the new clinical domain.
22.12.2024 15:58
Maybe the real advantage for domain-tuned models lies in the low-resource setting. With lots of supervised data, an out-of-domain model can do well. What about with just a few training examples?
22.12.2024 15:58
We try a new clinical task and dataset/domain. In this case, the clinical T5 benefits disappear.
22.12.2024 15:58
Comparing 2 clinical with 3 general models on 6 clinical datasets, we find that some clinical models improve. However, these clinical test sets come from the same domain as the clinical training data. Maybe the clinical models are better on THIS clinical data, but not in general?
22.12.2024 15:58
T5 models are the workhorse of many clinical text applications (e.g., information extraction). Several clinical T5 models have been trained on clinical data to improve performance on these tasks. Do these models work better than general T5 models?
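A minimal sketch of the text-to-text interface these applications use, via Hugging Face transformers; the checkpoint and prompt are placeholders (an off-the-shelf t5-base won't do clinical extraction without fine-tuning):

from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Every task is framed as text in, text out.
note = "Patient started on metformin 500 mg for type 2 diabetes."
inputs = tokenizer("extract medication: " + note, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Swapping "t5-base" for a clinical checkpoint is the comparison the paper runs.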
22.12.2024 15:58
Are Clinical T5 Models Better for Clinical Text? That's the question we asked in our #ML4H2024 paper.
Turns out clinical models may not be worth it. 🧵
arxiv.org/abs/2412.05845