I'm recruiting postdocs who are excited to work with real clinical data and partner closely with clinicians.
If you'll be at ML4H or the first day of NeurIPS, let's connect!
More about my work: web.stanford.edu/~jfries/
@jason-fries.bsky.social
Research scientist at Stanford University working on healthcare AI, foundation models, and data-centric AI. I focus on evaluating model reproducibility, training multimodal models with EHRs, and improving human-AI collaboration in medicine.
My lab will focus on multimodal foundation models for healthcare, combining computer science methods with close clinical collaboration to understand and treat complex diseases like cancer.
Core interests: representation learning, synthetic data generation, longitudinal benchmarks, and agentic clinical AI.
I'm excited to share that I'll be joining Stanford as a tenure-track Assistant Professor of Biomedical Data Science and of Medicine on Dec 1, 2025.
I'll hold a joint appointment in DBDS and the Division of Computational Medicine.
AI in Clinical Science - amazing data being presented today by @jason-fries.bsky.social Sylvia Plevritis @roxanadaneshjou.bsky.social @akshay-chaudhari.bsky.social, but still feel like we are just barely cracking the egg in this field. So impatient for the omelette…!
@stanford-cancer.bsky.social
Headed to MLHC 2025 this weekend?
Swing by Poster #154 (Session C) on Saturday, Aug 16 to check out FactEHR, our new benchmark for evaluating factuality in clinical notes. As LLMs enter the clinic, we need rigorous, source-grounded tools to measure what they get right (and wrong).
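For a sense of what a source-grounded factuality check involves, here is a minimal sketch of a decompose-then-verify loop: split a generated note into atomic facts, then test whether the source record entails each one. The `decompose` and `entails` callables (and the toy stand-ins) are hypothetical illustrations, not the FactEHR API.

```python
# Minimal sketch of source-grounded factuality scoring: decompose a
# generated note into atomic facts, then check whether the source record
# supports each one. `decompose` and `entails` are hypothetical stand-ins
# (e.g., an LLM prompt and an NLI model), NOT the FactEHR API.
from typing import Callable, List

def fact_precision(
    generated_note: str,
    source_record: str,
    decompose: Callable[[str], List[str]],  # note -> list of atomic facts
    entails: Callable[[str, str], bool],    # (source, fact) -> supported?
) -> float:
    """Fraction of facts in the generated note supported by the source."""
    facts = decompose(generated_note)
    if not facts:
        return 0.0
    supported = sum(entails(source_record, fact) for fact in facts)
    return supported / len(facts)

# Toy stand-ins so the sketch runs end to end; a real setup would use an
# LLM for decomposition and an entailment model for verification.
toy_decompose = lambda text: [s.strip() for s in text.split(".") if s.strip()]
toy_entails = lambda source, fact: fact.lower() in source.lower()

note = "Patient denies chest pain. Patient is on metformin."
record = "patient denies chest pain; metformin was discontinued in 2021"
print(f"fact precision: {fact_precision(note, record, toy_decompose, toy_entails):.2f}")
```

Recall-style variants (did the note cover the record's key facts?) follow the same pattern with the roles of note and source swapped.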
Excited to present our #ICLR2025 work: leveraging future medical outcomes to improve pretraining for prognostic vision models.
"Time-to-Event Pretraining for 3D Medical Imaging"
Hall 3+2B #23
Sat 26 Apr, 10 AM–12:30 PM
iclr.cc/virtual/2025...
[4/4] This was a massive collaboration involving multiple offices and champions across Stanford University and @stanfordmedicine.bsky.social
Thanks to our research team: Michael Wornow, Ethan Steinberg, Zepeng Frazier Huo, Hejie Cui, Suhana Bedi, Alyssa Unell, Nigam Shah and many others.
[3/4] Standardized Benchmarks
Each dataset includes a set of standardized tasks, each exploring a technical challenge area in AI:
• Few-shot Learning
• Multimodal Learning & Time-to-Event Modeling
• Long Context Instruction Following & Temporal Reasoning
[2/4] Dataset Summaries
3 longitudinal EHR datasets
• Scale: 25,991 patients | 441,680 visits | 295M clinical events (median: 4,882 events/patient)
• Timeframe: 1997–2023 (median: 10 years/patient)
• Multimodal: structured EHR data, 3D medical imaging, and clinical notes
[1/4] We're thrilled to announce the general release of three de-identified, longitudinal EHR datasets from Stanford Medicine, now freely available for non-commercial research use worldwide!
Learn more on our HAI blog:
hai.stanford.edu/news/advanci...
[1/4] Excited to share that our paper "Time-to-Event Pretraining for 3D Medical Imaging" has been accepted at ICLR 2025!
We introduce TTE pretraining, which uses EHR-linked imaging to improve AI-driven prognosis, essential for assessing disease progression.
Paper: arxiv.org/abs/2411.09361
Excited to share that our paper "Time-to-Event Pretraining for 3D Medical Imaging" has been accepted at ICLR 2025!
We introduce time-to-event pretraining for imaging, leveraging longitudinal EHRs to provide temporal supervision and enhance disease prognosis performance.
Paper: arxiv.org/abs/2411.09361
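For readers curious what temporal supervision from EHRs can look like in code, here is a minimal sketch assuming a discrete-time survival formulation: image embeddings feed a per-time-bin hazard head trained with a censoring-aware negative log-likelihood. The bin count, embedding size, and single-outcome head are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a time-to-event (TTE) pretraining objective under a
# discrete-time survival formulation with right-censoring. Bin count,
# embedding size, and the single-outcome head are illustrative assumptions.
import torch
import torch.nn as nn

class TTEHead(nn.Module):
    """Maps an image embedding to per-time-bin hazard logits for one outcome."""
    def __init__(self, embed_dim: int, n_bins: int):
        super().__init__()
        self.hazard_logits = nn.Linear(embed_dim, n_bins)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.hazard_logits(z)  # (batch, n_bins)

def tte_nll(logits: torch.Tensor, event_bin: torch.Tensor, observed: torch.Tensor) -> torch.Tensor:
    """Censoring-aware negative log-likelihood for discrete-time survival.
    event_bin: index of the event bin (or of last follow-up, if censored).
    observed:  1.0 if the event occurred, 0.0 if the patient was censored.
    """
    hazards = torch.sigmoid(logits)  # per-bin conditional event probability
    bins = torch.arange(hazards.shape[1], device=hazards.device)
    before = (bins[None, :] < event_bin[:, None]).float()  # bins survived
    at = (bins[None, :] == event_bin[:, None]).float()     # terminal bin
    eps = 1e-8
    # Survive every bin before the terminal one...
    ll = (before * torch.log(1 - hazards + eps)).sum(-1)
    # ...then either the event fires (observed) or follow-up ends (censored).
    ll = ll + observed * (at * torch.log(hazards + eps)).sum(-1)
    ll = ll + (1 - observed) * (at * torch.log(1 - hazards + eps)).sum(-1)
    return -ll.mean()

# Toy usage: embeddings stand in for the output of a 3D image encoder,
# and event_bin/observed stand in for EHR-derived follow-up labels.
z = torch.randn(4, 128)                        # batch of image embeddings
head = TTEHead(embed_dim=128, n_bins=8)
event_bin = torch.tensor([2, 5, 7, 0])         # time bin of event / censoring
observed = torch.tensor([1.0, 0.0, 1.0, 1.0])  # 0.0 = right-censored
loss = tte_nll(head(z), event_bin, observed)
loss.backward()
```

The censoring term is what lets incomplete follow-up still contribute signal: a censored patient tells the model only that no event occurred through their last observed bin.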