The result is a fair, end-to-end comparison that isolates what actually drives performance for radiology foundation models.
#AI #MedicalImaging #FoundationModels #ScalingLaws #Radiology
@maxilse.bsky.social
Working at Microsoft Research Health Futures. Interested in causal representation learning and generative modelling applied to medical data.
including not just findings but also lines & tubes classification/segmentation and report generation. We also test the effect of adding structured labels alongside reports during CLIP-style pretraining, and study scaling laws under these controlled conditions.
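The idea of adding structured labels alongside reports during CLIP-style pretraining can be sketched as a contrastive image-text loss plus an auxiliary multi-label term. This is a minimal NumPy sketch under my own assumptions (function names, the simple weighted sum, and the BCE auxiliary head are illustrative, not the paper's exact recipe):

```python
import numpy as np

def clip_with_labels_loss(img_emb, txt_emb, label_logits, labels,
                          temperature=0.07, label_weight=0.5):
    """Sketch: symmetric CLIP contrastive loss + auxiliary multi-label BCE
    on structured findings labels. All names/weights are assumptions."""
    # L2-normalize embeddings so similarities are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature

    # Cross-entropy with matched pairs on the diagonal, in both directions
    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)          # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    contrastive = 0.5 * (xent(logits) + xent(logits.T))

    # Auxiliary multi-label binary cross-entropy on structured labels
    p = 1.0 / (1.0 + np.exp(-label_logits))
    eps = 1e-12
    bce = -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

    return contrastive + label_weight * bce
```

The weighting between the contrastive and label terms is one of the knobs such an ablation would sweep; the sketch fixes it to a constant only for brevity.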
23.09.2025 08:34
That makes it hard to tell whether wins come from the model design or just from more data/compute or favorable benchmarks. We fix this by holding the pretraining dataset and compute constant and standardizing evaluation across tasks,
23.09.2025 08:34
Why this matters: Prior comparisons of radiology encoders have often been apples-to-oranges: models trained on different datasets, with different compute budgets, and evaluated mostly on small datasets of finding-only tasks.
23.09.2025 08:34
✅ Pretrained on 3.5M CXRs to study scaling laws for radiology models
✅ Compared MedImageInsight (CLIP-based) vs RAD-DINO (DINOv2-based)
✅ Found that structured labels + text can significantly boost performance
✅ Showed that as little as 30k in-domain samples can outperform public foundation models
🩻 Excited to share our latest preprint: "Data Scaling Laws for Radiology Foundation Models"
Foundation vision encoders like CLIP and DINOv2 have transformed general computer vision, but what happens when we scale them for medical imaging?
Read the full preprint here: arxiv.org/abs/2509.12818
What a damning abstract
30.04.2025 08:29
I want to reshare @brandfonbrener.bsky.social's @NeurIPSConf 2024 paper on CoLoR-Filter: a simple yet powerful method for selecting high-quality data for language model pre-training!
With @hlzhang109.bsky.social @schwarzjn.bsky.social @shamkakade.bsky.social
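As I understand it, CoLoR-Filter (Conditional Loss Reduction Filtering) scores each candidate document by how much more likely it is under a small model conditioned on high-quality data than under a small prior model, then keeps the top-scoring fraction. A toy sketch of that selection step, with hypothetical `logp_prior`/`logp_cond` scoring callables standing in for the two small language models:

```python
def color_filter_select(candidates, logp_prior, logp_cond, keep_fraction=0.25):
    """Sketch of CoLoR-Filter-style selection: rank documents by the gap
    between conditional and prior log-likelihood, keep the top fraction.
    The callables are hypothetical stand-ins for two small LMs."""
    # Higher score = bigger likelihood gain from conditioning on quality data
    scored = [(logp_cond(x) - logp_prior(x), x) for x in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return [x for _, x in scored[:k]]
```

The appeal of the method is that the expensive part (scoring with two small models) is embarrassingly parallel and done once, before any large-model training.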
Screenshot of 'SHADES: Towards a Multilingual Assessment of Stereotypes in Large Language Models.' SHADES is in multiple grey colors (shades).
♫♪ It's coming... SHADES. ♪♫
The first ever resource of multilingual, multicultural, and multigeographical stereotypes, built to support nuanced LLM evaluation and bias mitigation. We have been working on this around the world for almost **4 years** and I am thrilled to share it with you all soon.
We're looking for a motivated researcher to apply for a Marie Skłodowska-Curie postdoc with our Econometrics & Data Science group at SDU!
Focus: Causal Inference, Machine Learning, Big Data
Full support for promising projects
More info & apply:
www.sdu.dk/en/om-sdu/in...
Apply!
Assistant Professor (L/SL) in AI, including computer vision [DL 5 Mar] @BristolUni - awarded AI University of the Year in 2024.
DM me or @_SethBullock_ with inquiries (please don't send us a CV; apply directly)
www.bristol.ac.uk/jobs/find/de...
We are opening post-doc positions at the intersection of AI, data science, and medicine:
β’ Large Language Models for French medical texts
β’ Evaluating digital medical devices: statistics and causal inference
If you name your AI benchmark "Humanity's Last Exam" and get Kevin Roose to gush about it, you work in advertising, not in computer science.
www.nytimes.com/2025/01/23/t...
Happy to announce a collaboration with the Mayo Clinic to advance our research in radiology report generation!
newsnetwork.mayoclinic.org/discussion/m...
Tagging some of the core team: @valesalvatelli.bsky.social @fepegar.com @maxilse.bsky.social @sambondtaylor.bsky.social @anton-sc.bsky.social
The video of our talk "From Augustus to #Trump – Why #Disinformation Remains a Problem and What We Can Do About It Anyway" at #38c3, Europe's largest hacker conference, was published, including the German original and English and Spanish translations:
media.ccc.de/v/38c3-von-a...
Internship in our group at Mila in reinforcement learning + graphs for reducing energy use in buildings.
More info and submit an application by Jan 13 here:
forms.gle/TCChXnvSAHqz...
Questions? Email donna.vakalis@mila.quebec with [intern!] in the subject line.
Screenshot of Table of Contents (Part 1):
1 Introduction
2 Positionality
3 Overview of Risks and Harms Associated with Computer Vision Systems and Proposed Mitigation Strategies
  3.1 Representational Harms
  3.2 Quality-of-Service and Allocative Harms
  3.3 Interpersonal Harms
  3.4 Societal Harms: System Destabilization and Exacerbating Inequalities
4 Frameworks and Principles for Computer Vision Researchers
  4.1 Guidelines for Responsible Data and Model Development
  4.2 Measurement Modeling
  4.3 Reflexivity
5 Reorientations of Computer Vision Research
  5.1 Grounded in Historical Context and Considering Power Dynamics
  5.2 Small, Task Specific
  5.3 Community-Rooted
Screenshot of Table of Contents (Part 2):
6 Systemic Change
  6.1 Collective Action and Whistleblowing
  6.2 Refusal/The Right not to Build Something
  6.3 Independent Funding Outside of Military and Multinational Corporations
7 Conclusion
References
Dear computer vision researchers, students & practitioners,
Remi Denton & I have written what I consider to be a comprehensive paper on the harms of computer vision systems reported to date & how people have proposed addressing them, from different angles.
PDF: cdn.sanity.io/files/wc2kmx...
Posts, barbed wire fences and the main gate of the former Auschwitz II-Birkenau camp.
Help us commemorate victims, preserve memory & educate the world. Amplify our voice.
Your interaction here is more than just a click. It is an act of remembrance against forgetting. Like, share, or quote our posts.
Let people know that @auschwitzmemorial.bsky.social is present here.
New timeline, same problems, same solution
13.12.2024 14:22
Kudos to @blackhc.bsky.social for calling out this NeurIPS oral (openreview.net/forum?id=0NM...) for not giving the RHO-Loss paper (arxiv.org/abs/2206.07137) the recognition it deserves!
12.12.2024 10:14
In all of the reposts I see of articles criticising generative AI, I still don't see enough mention of work like Dr. Birhane's, which shows the biases against disadvantaged groups in the training datasets.
This is very good.
The next generation of probabilistic machine learning for weather, called GenCast, is published in @natureportfolio.bsky.social today 🥳. Amazing to see the collective progress in ML for weather as a field over the last 5 years. www.nature.com/articles/s41...
04.12.2024 19:30
One of the reasons the university sector has come so spectacularly off the rails is that it's so unfriendly to family life, people with caring responsibilities, and parents. The attitude is often: 'Not working 24/7? You're not fully committed!'
www.science.org/content/arti...
This is just sad
27.11.2024 19:38
Medically adapted foundation models (think Med-*) turn out to be more hot air than hot stuff. Correcting for fatal flaws in evaluation, the current crop are no better on balance than generic foundation models, even on the very tasks for which benefits are claimed.
arxiv.org/abs/2411.04118
What a surprise (not!). Yet again ... poor evaluations of specialized medical LLMs result in overhyped claims relative to the base LLMs. #bioMLeval
27.11.2024 02:16
Not sure about IRM though. They early-stopped their experiments to get the colored MNIST results.
22.11.2024 22:54
Good repost, I would have completely missed this :)
22.11.2024 14:12
Thanks for the pack! Can you please add me :)
21.11.2024 13:22
Screenshot of the paper.
Even as an interpretable ML researcher, I wasn't sure what to make of Mechanistic Interpretability, which seemed to come out of nowhere not too long ago.
But then I found the paper "Mechanistic?" by
@nsaphra.bsky.social and @sarah-nlp.bsky.social, which clarified things.