
Javid Dadashkarimi

@dadashkarimi.bsky.social

Postdoc at University of Pennsylvania, former developer at Martinos Center at MGH/Harvard, Yale '23, medical image analysis 🧠, deep learning, connectomics (he/him/his)

131 Followers  |  288 Following  |  13 Posts  |  Joined: 25.11.2024

Latest posts by dadashkarimi.bsky.social on Bluesky

Jensen Huang's Advice for CEOs and Students | NVIDIA (YouTube video by PricePros)

"People who can suffer are ultimately the ones who are the most successful." NVIDIA CEO Jensen Huang's advice to students: youtu.be/zqI-EWQG8ZI?...

24.01.2025 13:58 — 👍 1    🔁 0    💬 0    📌 0

I love this quote from Benjamin Franklin: 'Either write things worth reading, or do things worth writing.' @upenn.edu

27.12.2024 03:16 — 👍 4    🔁 1    💬 0    📌 0

Now that those #OHBM abstracts are done, think about submitting to this connectivity workshop. A stellar lineup of speakers is set! Register (and submit abstracts) here:
medicine.yale.edu/mrrc/about/s...
Deadline: Jan 10, 2025.

18.12.2024 12:20 — 👍 27    🔁 12    💬 1    📌 2

7/
We tested our method on two datasets:
- HASTE images.
- EPI scans.
We showed that it reaches state-of-the-art performance, especially in younger fetuses. Our model is also contrast-agnostic; it generalizes to various modalities. You can find our preprint at arxiv.org/pdf/2410.20532

29.11.2024 19:23 — 👍 0    🔁 0    💬 0    📌 0

6/
Testing: Step 2 (Fine-Level)
- Model B handles mid-sized patches (96³) on the cropped volume; model C does the same with 64³ windows.
- Majority voting across A, B, and C defines consistent regions likely containing the brain (see the sketch below).
- Model D refines the final binary mask to avoid edge effects.
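
For concreteness, here is a minimal Python sketch of the voting rule described in this step. The toy random masks stand in for sliding-window predictions from models A, B, and C on the cropped volume; the 2-of-3 threshold is the usual reading of "majority", and nothing here is the authors' released code.

```python
# Minimal sketch of the fine-level consensus step; the three input masks stand
# in for predictions of models A (128^3), B (96^3), and C (64^3).
import numpy as np

def majority_vote(mask_a: np.ndarray, mask_b: np.ndarray, mask_c: np.ndarray) -> np.ndarray:
    """A voxel is kept as brain if at least two of the three models agree."""
    votes = mask_a.astype(np.uint8) + mask_b.astype(np.uint8) + mask_c.astype(np.uint8)
    return votes >= 2

# Toy example with random masks in place of real model outputs.
rng = np.random.default_rng(0)
a, b, c = (rng.random((64, 64, 64)) > 0.5 for _ in range(3))
consensus = majority_vote(a, b, c)
print("consensus brain voxels:", int(consensus.sum()))
```

A final refinement model (D in the thread) would then operate on this consensus mask to clean up the boundary.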

29.11.2024 19:23 — 👍 0    🔁 0    💬 1    📌 0

5/
Testing: Step 1 (Breadth-Level)
Model A scans large patches (128³) for the brain.
Model D tests tiny patches (32³) to ensure fine-grained accuracy.
The combined masks crop the image to the area of interest for progressively finer refinement (a sketch of this cropping step follows below).
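
A minimal sketch of the cropping step, assuming the breadth pass yields binary candidate masks from models A and D. Combining the masks by intersection and the margin value are assumptions for illustration, not details from the paper.

```python
# Minimal sketch: combine the coarse (A) and tiny-patch (D) candidate masks and
# crop the volume to their bounding box with a safety margin.
import numpy as np

def crop_to_roi(volume, mask_a, mask_d, margin=8):
    combined = mask_a.astype(bool) & mask_d.astype(bool)
    if not combined.any():                 # fall back to whatever either model found
        combined = mask_a.astype(bool) | mask_d.astype(bool)
    if not combined.any():                 # nothing detected: keep the full volume
        return volume, tuple(slice(0, s) for s in volume.shape)
    coords = np.argwhere(combined)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[slices], slices          # cropped volume plus slices for mapping back
```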

29.11.2024 19:23 — 👍 0    🔁 0    💬 1    📌 0

4/
To tackle maternal tissues that usually confuse U-Nets, we train four U-Nets:
- Each is optimized for a different patch size.
- Synthetic training images include full, partial, and absent brains.
This multi-scale approach prepares us to handle complex scenarios during testing (see the patch-sampling sketch below).
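
As a rough illustration, sampling training patches at the four scales could look like the following. The model-to-size mapping comes from this thread; the random crops and stand-in data are assumptions made for the example.

```python
# Minimal sketch of drawing training patches at the four scales so that every
# U-Net sees crops containing a full brain, a partial brain, or no brain at all.
import numpy as np

PATCH_SIZES = {"A": 128, "B": 96, "C": 64, "D": 32}

def sample_patch(image, brain_mask, size, rng):
    """Random cube of side `size`; its brain content is whatever falls inside."""
    starts = [int(rng.integers(0, max(s - size, 0) + 1)) for s in image.shape]
    sl = tuple(slice(st, st + size) for st in starts)
    return image[sl], brain_mask[sl]

rng = np.random.default_rng(0)
image = rng.random((160, 160, 160), dtype=np.float32)   # stand-in synthetic image
brain = np.zeros(image.shape, dtype=bool)
brain[60:100, 60:100, 60:100] = True                    # stand-in brain mask
for model, size in PATCH_SIZES.items():
    patch, target = sample_patch(image, brain, size, rng)
    print(model, patch.shape, "brain voxels in patch:", int(target.sum()))
```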

29.11.2024 19:23 — 👍 0    🔁 0    💬 1    📌 0

3/
Our synthesizer has two components:
- One controls the shape of the brain (applied to labels 1 to 7).
- One manages the background (label 0 and labels 8 to 24).

Separate parameters for each category give us fine control over shape variability (e.g., warping, scaling, noise); a sketch of this per-category parameterization is below.
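
A minimal sketch of what "separate parameters per category" could look like in practice. The label ranges come from this thread; the parameter ranges and the simple per-label intensity model are assumptions for illustration only (the warp parameter is sampled but the spatial deformation itself is not shown).

```python
# Minimal sketch: brain labels (1-7) and background labels (0, 8-24) get their
# own warp/scale/noise parameters, then each label is filled with a random
# intensity plus category-specific noise.
import numpy as np

BRAIN_LABELS = set(range(1, 8))                # labels 1-7
BACKGROUND_LABELS = {0} | set(range(8, 25))    # label 0 and labels 8-24

def sample_params(rng, category):
    wide = category == "background"            # give the background more variability
    return {
        "warp_std": rng.uniform(2.0, 6.0) if wide else rng.uniform(1.0, 3.0),
        "scale":    rng.uniform(0.8, 1.2) if wide else rng.uniform(0.9, 1.1),
        "noise":    rng.uniform(0.05, 0.15) if wide else rng.uniform(0.01, 0.05),
    }

def synthesize_image(labels, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    params = {"brain": sample_params(rng, "brain"),
              "background": sample_params(rng, "background")}
    image = np.zeros(labels.shape, dtype=np.float32)
    for lab in np.unique(labels):
        cat = "brain" if int(lab) in BRAIN_LABELS else "background"
        voxels = labels == lab
        image[voxels] = rng.uniform(0.0, 1.0) * params[cat]["scale"]
        image[voxels] += rng.normal(0.0, params[cat]["noise"], int(voxels.sum()))
    return image, params
```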

29.11.2024 19:23 — 👍 0    🔁 0    💬 1    📌 0

2/
During training, we augment label maps with random background shapes:
- A big ellipse (womb-like).
- Contours inside/outside the ellipse.
- Synthetic "sticks" and "bones" mimicking maternal anatomy.
This creates diverse and realistic label maps (see the sketch below).
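
A minimal sketch of this augmentation, covering the womb-like ellipsoid and thin "stick"-like structures (the inside/outside contours are omitted for brevity). Label values and shape parameters are made up for illustration; this is not the paper's implementation.

```python
# Minimal sketch of augmenting a 3-D label map with random background shapes:
# a womb-like ellipsoid plus thin "sticks" mimicking elongated maternal anatomy.
import numpy as np

def add_background_shapes(labels, womb_label=8, stick_label=9, n_sticks=5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    out = labels.copy()
    zz, yy, xx = np.indices(out.shape)
    center = np.array(out.shape) / 2 + rng.normal(0, 5, size=3)
    radii = np.array(out.shape) * rng.uniform(0.30, 0.45, size=3)
    ellipsoid = (((zz - center[0]) / radii[0]) ** 2 +
                 ((yy - center[1]) / radii[1]) ** 2 +
                 ((xx - center[2]) / radii[2]) ** 2) <= 1.0
    out[(out == 0) & ellipsoid] = womb_label            # womb-like ellipse
    for _ in range(n_sticks):                           # thin random "sticks"
        p0 = rng.integers(0, out.shape).astype(float)
        p1 = rng.integers(0, out.shape).astype(float)
        for t in np.linspace(0.0, 1.0, 256):
            z, y, x = np.clip(np.round(p0 + t * (p1 - p0)).astype(int),
                              0, np.array(out.shape) - 1)
            if out[z, y, x] == 0:
                out[z, y, x] = stick_label
    return out
```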

29.11.2024 19:23 — 👍 0    🔁 0    💬 1    📌 0

🧵 1/ Do you have limited annotations and need a robust fetal brain extraction model with endless training data?
We introduce Breadth-Fine Search (BFS) and Deep Focused Sliding Window (DFS): a framework trained on infinite synthetic images derived from a small set of annotated seeds (label maps).
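
As a rough sketch of the "endless training data" idea: repeatedly pick one of the few annotated seed label maps and synthesize a fresh image from it, so the networks never see the same sample twice. The helper names here are placeholders, not the released API; `synthesize` stands in for a label-to-image synthesizer like the per-category sketch earlier in this thread.

```python
# Minimal sketch of generating unlimited training samples from a handful of
# annotated seed label maps.
import numpy as np

def endless_synthetic_samples(seed_label_maps, synthesize, rng=None):
    """Yield (image, brain_mask) pairs forever; labels 1-7 are taken as brain."""
    rng = np.random.default_rng() if rng is None else rng
    while True:
        labels = seed_label_maps[int(rng.integers(len(seed_label_maps)))]
        image = synthesize(labels, rng)
        yield image, np.isin(labels, np.arange(1, 8))
```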

29.11.2024 19:23 — 👍 1    🔁 0    💬 1    📌 0

Preview: Testing the Tests: Using Connectome-Based Predictive Models to Reveal the Systems Standardized Tests and Clinical Symptoms are Reflecting
Neuroimaging has achieved considerable success in elucidating the neurophysiological underpinnings of various brain functions. Tools such as standardized cognitive tests and symptom inventories have p...

Nice work from Anja Samardzija and team. Instead of using CPM to identify networks, networks are predefined and used to evaluate external measures. This provides a framework for the development of improved tests assessing specific brain networks.
www.biorxiv.org/content/10.1...

26.11.2024 12:53 — 👍 20    🔁 6    💬 5    📌 1
