Mark Dredze's Avatar

Mark Dredze

@mdredze.bsky.social

John C. Malone Professor at Johns Hopkins Computer Science, Center for Language and Speech Processing, Malone Center for Engineering in Healthcare. Part-time: Bloomberg LP. #nlproc

2,738 Followers  |  377 Following  |  66 Posts  |  Joined: 09.07.2023

Latest posts by mdredze.bsky.social on Bluesky

Preview
Mark Dredze named director of Johns Hopkins Data Science and AI Institute Dredze, a member of JHU's faculty since 2009, has been selected to lead the university institute dedicated to harnessing the power of AI to translate data-driven discovery into real-world impact.

Congratulations again to John C. Malone Professor of Computer Science @mdredze.bsky.social on this accomplishment!

17.10.2025 17:10 — 👍 7  🔁 2  💬 1  📌 0
Headshots of Mark Dredze, Jason Eisner, Peter Kazanzides, and Tom Lippincott.

Congratulations to CS faculty @mdredze.bsky.social, Jason Eisner, Peter Kazanzides, and @tom-lippincott.bsky.social
on their @jhu.edu Nexus Awards! Learn more about their funded projects here: www.cs.jhu.edu/news/compute...

09.09.2025 16:23 — 👍 5  🔁 1  💬 0  📌 0
Post image

🚨 You are only evaluating a slice of your test-time scaling model's performance! 🚨

📈 We consider how models' confidence in their answers changes as test-time compute increases. Reasoning longer helps models answer more confidently!

📝: arxiv.org/abs/2502.13962

20.02.2025 15:14 — 👍 12  🔁 10  💬 1  📌 1
Preview
David Broniatowski on LinkedIn: NIH has just announced that it will save $4 billion by capping university… | 61 comments NIH has just announced that it will save $4 billion by capping university indirect costs on federal grants. Will this actually save money? Bottom line: No. It… | 61 comments on LinkedIn

Please read and share this excellent FAQ on University indirect costs by my friend @broniatowski.bsky.social

He explains why these funds are essential and a critical investment for research in the United States.

www.linkedin.com/posts/david-...

19.02.2025 18:02 — 👍 4  🔁 3  💬 0  📌 0
Post image

I know I can improve my ARR reviews, but there really is no need for name calling.

05.02.2025 14:13 — 👍 19  🔁 0  💬 1  📌 0

Helpful
Insightful
Probing
Valuable
Thoughtful
Illuminating
Constructive

In author feedback, these are synonyms for "we hate your review."

30.01.2025 07:24 — 👍 1  🔁 0  💬 0  📌 0

Do reviewers purposely write confusing reviews with typos to demonstrate that the review wasn't written by an LLM?

27.01.2025 23:42 — 👍 2  🔁 0  💬 0  📌 0

Golden idea for an NLP paper: a group of llamas is called a "cria herd".

That would make a great name for an LLM method, model, or paper.

Just remember to acknowledge me in your paper.

You're welcome.

24.01.2025 21:39 — 👍 9  🔁 0  💬 0  📌 0

Idea for GenAI app: rewrite click bait headlines to normal headlines in the browser.

Input: you'll never guess this one company organizing the best deals of the year

Output: Amazon has a modest sale on phone chargers

21.01.2025 19:41 — 👍 23  🔁 2  💬 2  📌 0
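The post's idea is a single prompt-driven rewrite step. A minimal sketch of how it could be wired up, where the prompt wording, the `rewrite_headline` helper, and the `complete` callable (standing in for any LLM completion API) are all illustrative assumptions, not anything from the post:

```python
# Hypothetical headline de-clickbaiter: build a prompt, delegate to an LLM.
# `complete` is any prompt -> text function; a stub is used for the demo.

PROMPT_TEMPLATE = (
    "Rewrite this clickbait headline as a plain, factual headline. "
    "State who, what, and how significant it actually is.\n"
    "Clickbait: {headline}\n"
    "Plain headline:"
)

def rewrite_headline(headline: str, complete) -> str:
    """Format the prompt for one headline and return the model's rewrite."""
    prompt = PROMPT_TEMPLATE.format(headline=headline)
    return complete(prompt).strip()

if __name__ == "__main__":
    # Stub model so the sketch runs without an API key.
    stub = lambda prompt: "Amazon has a modest sale on phone chargers"
    print(rewrite_headline(
        "you'll never guess this one company organizing the best deals of the year",
        complete=stub,
    ))
```

A browser-extension version would apply the same function to each headline element on the page; the interesting design question is caching, since rewriting every headline on every load would be slow and costly.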

Good idea!

20.01.2025 19:10 — 👍 0  🔁 0  💬 0  📌 0

The ARR submission checklist is already pretty extensive, but I suggest we add an additional question:

"I certify that I know the difference between \citet and \citep."

20.01.2025 14:49 — 👍 22  🔁 1  💬 1  📌 1
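For anyone who would have to tick that box honestly: with natbib, the distinction is textual versus parenthetical citations (the citation key below is made up):

```latex
% \citet -> textual:        "...as \citet{dredze2009} showed."  => Dredze (2009)
% \citep -> parenthetical:  "...improves F1 \citep{dredze2009}." => (Dredze, 2009)
\citet{dredze2009}
\citep{dredze2009}
```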
Post image

ARR: Reviews are due today.

Me:

20.01.2025 13:29 — 👍 1  🔁 0  💬 0  📌 0
Post image

I feel seen. This is why I always access my API keys from my laptop.

17.01.2025 19:50 — 👍 10  🔁 1  💬 0  📌 1
Post image

Do you have any of those fortune cookies that mock academics?

Sure!

14.01.2025 22:19 — 👍 6  🔁 0  💬 1  📌 0
Post image

Starting a new year and reflecting on how lucky I am to work at @hopkinsengineer.bsky.social with amazing people @jhucompsci.bsky.social @jhuclsp.bsky.social.

I was promoted to full professor in 2023, and my students presented me with this amazing poster of current and former PhD students.

02.01.2025 17:40 — 👍 12  🔁 2  💬 0  📌 0
Preview
AI Ethics and Safety — A Contradiction in Terms? Podcast Episode · On with Kara Swisher · 01/02/2025 · 53m

Listen to @karaswisher.bsky.social's new podcast where she interviews @ruchowdh.bsky.social, @ghadfield.bsky.social and me about AI Ethics and Safety. The podcast was recorded before a live audience at @jhu.edu Bloomberg Center.

podcasts.apple.com/us/podcast/a...

02.01.2025 17:38 — 👍 2  🔁 0  💬 0  📌 0
Post image

Examining the generated QA pairs, you can really see the difference. Our generations (bottom) look harder and more interesting.

Want to try our strategy for your own synthetic generation task? Check out our paper, presented at #ML4H2024.
arxiv.org/abs/2412.04573

22.12.2024 16:01 — 👍 2  🔁 0  💬 0  📌 0
Post image

Training a Clinical QA system on our data gives big improvements, whether we generate data from Llama or GPT-4o. The improvements hold both for F1 and for any-overlap between the extracted and true answers.

22.12.2024 16:01 — 👍 1  🔁 0  💬 1  📌 0
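The two evaluation measures mentioned, F1 and any-overlap, can be read as the standard SQuAD-style token-level metrics for extractive QA. A sketch under that assumption (whitespace tokenization is a simplification; the paper's exact evaluation may differ):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def any_overlap(prediction: str, gold: str) -> bool:
    """True if the extracted answer shares at least one token with the truth."""
    return token_f1(prediction, gold) > 0.0
```

For example, `token_f1("the cat sat", "the cat")` scores 0.8 (precision 2/3, recall 1), while any-overlap only asks whether that score is nonzero.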

The generated pairs have several advantages: they don't use the same language as the report, they include harder questions, and some answers are not in the report at all (unanswerable questions). The result? Harder, more diverse, and more realistic QA pairs.

22.12.2024 16:01 — 👍 1  🔁 0  💬 1  📌 0

Second, we use a summarize-then-generate strategy. The LLM first summarizes a given clinical record in a structured format. The summary keeps the key points but loses the surface details, such as specific terminology and phrasing. We then use the summary to generate a new QA pair.

22.12.2024 16:01 — 👍 0  🔁 0  💬 1  📌 0
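The two-step flow described in this post can be sketched as a small pipeline. The prompt wording and the `llm` callable are placeholders for illustration, not the paper's actual prompts or models:

```python
def summarize_then_generate(record: str, llm) -> dict:
    """Two-step synthetic QA generation: summarize the record, then generate
    the QA pair from the summary (so it can't copy the record's wording).

    `llm` is any prompt -> text completion function (placeholder).
    """
    # Step 1: a structured summary keeps key points, drops exact phrasing.
    summary = llm(
        "Summarize this clinical record as a short structured list of key findings:\n"
        + record
    )
    # Step 2: generate from the summary, not the original record.
    qa_text = llm(
        "Write one challenging question and its answer based on these findings. "
        "Format as 'Q: ...' then 'A: ...' on the next line:\n" + summary
    )
    question, _, answer = qa_text.partition("\nA:")
    return {
        "question": question.removeprefix("Q:").strip(),
        "answer": answer.strip(),
    }
```

Generating from the summary is the key design choice: the model cannot lift spans verbatim from the report, which is what made the naive single-step generations easy and repetitive.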

We explore two strategies. First, we craft instructions to encourage QA diversity. We formulate these as constraints on the answers to the questions. It helps, but we need more.

22.12.2024 16:01 — 👍 0  🔁 0  💬 1  📌 0

We can ask an LLM to write QA pairs, but they turn out to be too easy and repetitive. They don't come close to what you can get with real data. We need more diverse data! Typical methods (e.g. annealing) don't work. What can we do?

22.12.2024 16:01 — 👍 0  🔁 0  💬 1  📌 0
Post image

Paper at #ML4H2024!

Clinical QA can help doctors find critical information in patient records. But where do we get training data for these systems? Generating this data from an LLM is hard. 🧵

22.12.2024 16:01 — 👍 3  🔁 0  💬 1  📌 0
Preview
Are Clinical T5 Models Better for Clinical Text? Large language models with a transformer-based encoder/decoder architecture, such as T5, have become standard platforms for supervised tasks. To bring these technologies to the clinical domain, recent...

Takeaways: If you can fine-tune a model on a specific clinical domain, that's great. If you can't, you should probably use models that are better overall, even if they aren't trained on clinical data.

Many more details in the paper!
arxiv.org/abs/2412.05845

22.12.2024 15:58 — 👍 3  🔁 1  💬 0  📌 0
Post image

It turns out that when you have just a little supervised data, the models trained on more data and tasks, even when out of domain, do BETTER on the new clinical domain.

22.12.2024 15:58 — 👍 0  🔁 0  💬 1  📌 0

Maybe the real advantage of domain-tuned models lies in the low-resource setting. With lots of supervised data, an out-of-domain model can do well. What about with just a few training examples?

22.12.2024 15:58 — 👍 0  🔁 0  💬 1  📌 0
Post image

We try a new clinical task and dataset/domain. In this case, the clinical T5 benefits disappear.

22.12.2024 15:58 — 👍 0  🔁 0  💬 1  📌 0
Post image

Comparing 2 clinical with 3 general models on 6 clinical datasets, we find that some clinical models improve. However, these clinical test sets come from the same domain as the clinical training data. Maybe the clinical models are better on THIS clinical data, but not in general?

22.12.2024 15:58 — 👍 0  🔁 0  💬 1  📌 0

T5 models are the workhorse of many clinical text applications (e.g., information extraction). Several clinical T5 models have been trained on clinical data to improve performance on these tasks. Do these models work better than general T5 models?

22.12.2024 15:58 — 👍 0  🔁 0  💬 1  📌 0
Post image

Are Clinical T5 Models Better for Clinical Text? That's the question we asked in our #ML4H2024 paper.

Turns out clinical models may not be worth it. 🧵

arxiv.org/abs/2412.05845

22.12.2024 15:58 — 👍 3  🔁 0  💬 1  📌 0
