
Maciej A. Mazurowski

@mazurowski.bsky.social

Associate Professor at Duke | Director of Duke Spark | AI in Medical Imaging

57 Followers  |  183 Following  |  30 Posts  |  Joined: 14.07.2025

Latest posts by mazurowski.bsky.social on Bluesky

Paper: raw.githubusercontent.com/mlresearch/...

16.09.2025 17:14 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Congrats to Hanxue Gu, who is the first author, and the interdisciplinary team of co-authors!

16.09.2025 17:14 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Our method:
- automatically segments radius and ulna bones
- uses a pose estimation network to assess rotational parameters of the bones
- automatically detects fracture locations
- combines all the information to infer the 3D fracture angles

The paper has been published at MIDL.
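The geometric core of the last step can be sketched with a bit of vector algebra: if the pose network supplies each view's direction, the projection of a bone axis in each X-ray constrains the 3D axis to a plane, and two views pin it down. This is a minimal NumPy sketch of that idea under stated assumptions, not the paper's code; all function names are illustrative.

```python
import numpy as np

def recover_3d_direction(n1, d1, n2, d2):
    """Recover a 3D direction from its projections in two views.

    Each view i has a viewing normal n_i (assumed to come from the pose
    network) and a measured in-plane direction d_i of the projected bone
    axis. The 3D axis lies in the plane spanned by n_i and d_i, so it is
    perpendicular to n_i x d_i; intersecting the two constraint planes
    recovers the axis (up to sign).
    """
    c1 = np.cross(n1, d1)   # normal of the constraint plane from view 1
    c2 = np.cross(n2, d2)   # normal of the constraint plane from view 2
    v = np.cross(c1, c2)    # direction satisfying both constraints
    return v / np.linalg.norm(v)

def angle_deg(u, v):
    """Angle between two 3D directions in degrees, ignoring sign."""
    cos = np.clip(abs(np.dot(u, v)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```

With the proximal and distal bone-axis directions recovered this way, the 3D fracture angle is simply `angle_deg` between them.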

16.09.2025 17:14 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We propose a deep learning-based method for measuring 3D angles from standard non-orthogonal planar X-rays, which allows for patient movement between image acquisitions.

16.09.2025 17:14 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Precise 3D measurement of fracture angles would be of enormous help in orthopedics, and yet it's very challenging from standard X-rays. We have a solution!

16.09.2025 17:14 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Preview
GitHub - mazurowski-lab/ContourDiff: Contour-Guided Diffusion Models for Unpaired Image-to-Image Translation

Check out the arXiv here: arxiv.org/abs/2403.10786
And the code here: github.com/mazurowski-...

12.09.2025 16:16 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We addressed this by using contours from the image to guide the diffusion model, and the model showed strong performance!

Congrats to Yuwen Chen, who is the first author, and the other team members!
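The contour-guidance idea above can be sketched in a few lines: extract anatomy outlines from the source image and feed them to the denoiser as an extra conditioning channel, so the translation preserves structure rather than intensities. This is an illustrative stand-in, assuming a gradient-based contour extractor; ContourDiff's actual extractor and conditioning may differ.

```python
import numpy as np

def contour_map(img, thresh=0.1):
    """Crude contour extraction: gradient magnitude, thresholded.
    (A stand-in for whatever contour extractor the model uses.)"""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(np.float32)

def condition_input(noisy_target, source_img):
    """Stack the source-image contour map as an extra channel, so the
    denoising network is guided by anatomy outlines at every step."""
    c = contour_map(source_img)
    return np.stack([noisy_target, c], axis=0)  # (2, H, W) input to the UNet
```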

12.09.2025 16:16 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The issue for such translation is that for a given body part, the CT and MRI images often have a different field of view, resulting in different structures being portrayed in the image.

12.09.2025 16:16 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Want to make a CT out of an MRI? It's possible thanks to generative models, but it has issues which we're addressing in our ContourDiff model (code available)!

12.09.2025 16:16 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Code: github.com/mazurowski-...
Paper:
openaccess.thecvf.com/content/CVP...

01.09.2025 15:20 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

- we explored different ways of integrating adapted models
- we validated our method with 24 source domain-target domain splits for 3 medical imaging datasets
- our method outperforms SOTA by 2.9% on average in terms of Dice similarity coefficient
- published in a CVPR workshop
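The Dice similarity coefficient used for the comparison above measures the overlap between a predicted and a reference segmentation mask. A minimal NumPy version (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = none."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```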

01.09.2025 15:20 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Segmentation models may perform poorly when test images belong to a different domain (e.g., a different medical center). We developed a method of adapting the models using a single unlabeled image from the test domain!

01.09.2025 15:20 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Preview
GitHub - mazurowski-lab/SLM-SAM2: This is the official implementation of SLM-SAM 2.

arXiv paper: arxiv.org/pdf/2505.01854
code: github.com/mazurowski-...

25.08.2025 14:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Congrats to Yuwen Chen, the lead author of the paper for this terrific work!

25.08.2025 14:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Our evaluation on multiple tasks showed strong improvements over SAM 2 and promises to significantly speed up the annotation process.

25.08.2025 14:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Introducing Short-Long Memory SAM 2 (SLM-SAM 2) with a novel architecture combining short and long memory banks. The motivation was to reduce the propagation of error to slices far from the annotated ones.
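The short/long split described above can be pictured as two memory banks with different eviction policies. This is an illustrative data-structure sketch under that reading, not the official SLM-SAM 2 implementation; class and method names are hypothetical.

```python
from collections import deque

class ShortLongMemory:
    """Sketch of a short/long memory-bank split: a small FIFO of recent
    slice features keeps local continuity, while a long-term bank pinned
    to trusted (e.g., annotated) slices anchors propagation and limits
    error drift on slices far from the annotation."""

    def __init__(self, short_size=4):
        self.short = deque(maxlen=short_size)  # recent slices, oldest evicted
        self.long = []                         # trusted anchors, never evicted

    def add_anchor(self, feat):
        self.long.append(feat)                 # e.g., the annotated slice

    def add_recent(self, feat):
        self.short.append(feat)                # evicts the oldest when full

    def context(self):
        """Memory entries the decoder attends over for the next slice."""
        return self.long + list(self.short)
```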

25.08.2025 14:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The recently released Segment Anything Model 2 (SAM 2) allows for extending annotations from one frame of a video to other frames. We leveraged this ability but discovered that it doesn't translate well to medical imaging and had to make a few changes.

25.08.2025 14:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Detailed annotation of cross-sectional images is one of the main challenges in the development of segmentation models. It's very time-consuming and typically requires supervision by expert radiologists.

25.08.2025 14:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Tired of annotating entire volumes of medical images? We developed a model allowing for annotating one slice and extending the annotation to the entire volume.

25.08.2025 14:59 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

We shared the resources, including a fine-tuning guide here:
github.com/mazurowski-...
The paper is here:
www.melba-journal.org/pdf/2025:00...

14.08.2025 15:46 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

3. Bigger networks do not lead to large performance gains
4. Self-supervised learning can
5. Some popular methods lead to inferior performance.

14.08.2025 15:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Here is what we've learned:

1. Fine-tuning SAM is slightly better in terms of performance than nnUNet
2. The best fine-tuning methods use parameter-efficient learning and adapt both the encoder and decoder.
...
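Parameter-efficient fine-tuning of the kind mentioned in point 2 is commonly done with low-rank adapters. A minimal LoRA-style sketch of the general technique, as an illustration only (the paper evaluates several such methods; this is not its code):

```python
import numpy as np

class LoRALinear:
    """LoRA-style adapter sketch: the frozen weight W is augmented with a
    trainable low-rank update B @ A, so only r*(d_in + d_out) parameters
    are tuned instead of d_in*d_out."""

    def __init__(self, W, r=4, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                      # frozen pretrained weight
        self.A = rng.normal(0, 0.01, (r, W.shape[1]))   # trainable down-projection
        self.B = np.zeros((W.shape[0], r))              # trainable up-projection, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # Because B is zero-initialized, the adapter starts as an exact no-op
        # and fine-tuning only gradually perturbs the pretrained behavior.
        return x @ (self.W + self.scale * (self.B @ self.A)).T
```

The same adapter can be attached to both the encoder and the decoder, matching the finding that adapting both works best.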

14.08.2025 15:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Are vision foundation models worth using? If so, how should they be fine-tuned? We studied this extensively in our recent paper, now published in MELBA.

We evaluated various fine-tuning algorithms using 17 medical imaging datasets, with both task-specific fine-tuning and self-supervised learning.

14.08.2025 15:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

1/9
Introducing FrΓ©chet Radiomic Distance (FRD): A Versatile Metric for Comparing Medical Imaging Datasets, led by
@nickkonz.bsky.social and Richard Osuala.

Our paper can be found at arxiv.org/abs/2412.01496, and you can easily compute FRD yourself with our code at github.com/RichardObi/f...
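FRD builds on the classic Fréchet distance between two Gaussians fitted to feature sets, applied to radiomic rather than Inception features. A sketch of that underlying formula (see the linked repo for the official FRD implementation):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between Gaussians N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2*(cov1 @ cov2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):   # trim tiny numerical imaginary parts
        covmean = covmean.real
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)
```

Fitting the two Gaussians to radiomic feature vectors extracted from each dataset yields a single scalar comparing the datasets.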

01.08.2025 17:37 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

The model is publicly available here: github.com/mazurowski-...
The paper is here: arxiv.org/pdf/2502.09779

04.08.2025 12:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We announce BodyCompNet, a fully automated segmentation algorithm that identifies (1) muscles, (2) subcutaneous fat, (3) visceral fat, and (4) intermuscular fat. We evaluated our model with internal and external data and showed very strong performance for the primary metrics.

04.08.2025 12:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Body composition and frailty are emerging as very important predictors of health outcomes. They can be measured well in cross-sectional imaging, but such measurement requires accurate and consistent segmentation of different tissues, which can be a challenge.

04.08.2025 12:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Announcing another new public segmentation model from our team! It segments muscles and fat (different types) in CTs of the chest/abdomen/pelvis.

04.08.2025 12:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
