Stephanie Hyland

@hylandsl.bsky.social

machine learning for health at microsoft research, based in cambridge UK 🌻 she/her

2,310 Followers 888 Following 80 Posts Joined Nov 2024
1 month ago

the greatest joy of being a computational scientist is having the computer work for you while you do something else

13 1 0 1
3 months ago

“Interpretability plays a special role in machine learning because instead of focusing on making the AI smarter, we focus on improving human insight. I think this is the most important category of interpretability research, and we do not do enough of it.”

😎😎😎

4 0 0 0
3 months ago
A poster titled “a circular argument” which has been cut into a circular shape

It’s a CIRCULAR poster! #eurips presenters innovating in poster design / fine motor skills

0 0 0 0
3 months ago
a hand-written poster on a poster board, featuring a hand-drawn QR code (the code does not work)

remember to always include a QR code on your poster. spotted at #eurips

5 0 1 0
3 months ago
Video thumbnail

What coding with an LLM feels like sometimes.

267 64 10 6
3 months ago

when I ask candidates whether they've worked with "real medical data" this is the kind of thing that I mean

2 0 0 0
3 months ago

found a file from PhD days with the FORTY-EIGHT ways "ACE inhibitor" was encoded in the EHR system we were working with

5 0 1 0
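A toy sketch of the cleanup this kind of file implies (purely illustrative, not the actual PhD-era code): collapsing free-text spellings of "ACE inhibitor" from an EHR export onto one canonical label with simple string normalisation. All names here are invented.

```python
import re

CANONICAL = "ACE inhibitor"

def normalise_drug_class(raw: str) -> str:
    """Map free-text variants like 'A.C.E.-Inhibitor' onto one canonical label."""
    s = raw.lower()
    s = re.sub(r"[.\-_/]", " ", s)            # punctuation variants -> spaces
    s = re.sub(r"\s+", " ", s).strip()        # collapse whitespace
    s = s.replace("a c e", "ace")             # dotted initialism, post-cleanup
    s = s.replace("angiotensin converting enzyme", "ace")
    if re.fullmatch(r"ace ?(inhibitor|inhib|i)s?", s):
        return CANONICAL
    return raw                                # leave unrecognised strings alone

variants = ["ACE-inhibitor", "A.C.E. Inhibitor", "ace inhibitors",
            "Angiotensin-Converting Enzyme Inhibitor", "ACEI"]
print([normalise_drug_class(v) for v in variants])
```

In practice you'd want a curated mapping table (and a clinician to sign off on it) rather than regexes, but forty-eight variants is exactly the scale where ad hoc rules start to creak.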
3 months ago

finally got around to booking my travel for #EurIPS2025! Looking forward to connecting with the European ML scene in Copenhagen

4 0 0 0
5 months ago

uv is so good

6 0 0 0
6 months ago
Post image

Some papers really have a good intro

16 1 4 0
6 months ago

The more rigorous peer review happens in conversations and reading groups after the paper is out, with reputational costs for publishing bad work

49 5 2 3
6 months ago
Google's Gemini AI tells a Redditor it's 'cautiously optimistic' about fixing a coding bug, fails repeatedly, calls itself an embarrassment to 'all possible and impossible universes' before repeating 'I am a disgrace' 86 times in succession

I'll admit, I was skeptical when they said Gemini was just like a bunch of PhDs. But I gotta admit they nailed it.

7,254 1,657 70 161
6 months ago

what is the purpose of VQA datasets where text-only models do better than random?

1 0 0 0
7 months ago
Zotero screenshot showing four different papers with titles beginning with "MedAgent"

lads can we stop

4 0 0 0
7 months ago
diagram from Anthropic paper with an icon & label that says “subtract evil vector”

quick diagram of Bluesky’s architecture and why it’s nicer here

72 5 4 1
7 months ago

Emojis and massive try/except blocks. GitHub Copilot (at least Claude Sonnet 4) is very concerned about error handling.

2 0 1 0
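A caricature of the style described (invented, not real Copilot output): emoji-laden comments and one giant try/except swallowing everything, next to a narrower version that only catches the failure it can actually handle.

```python
def load_config(path):
    # 🚀 Load the config file! ✨
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except Exception as e:  # 😱 catch absolutely everything
        print(f"⚠️ Something went wrong: {e}")
        return {}

def load_config_strict(path):
    """Same parser, but only the expected failure mode is handled."""
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except FileNotFoundError:
        return {}
```

The broad `except Exception` version silently turns parse bugs into an empty config, which is usually worse than crashing.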
7 months ago

if OpenReview were a lot fancier, you could dynamically reallocate/cancel remaining reviews once a paper meets that expected minimum.

ideally you would mark these remaining reviews as optional rather than fully cancelled, in case that reviewer has already done work

3 0 0 1
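The idea in the two posts above can be sketched in a few lines (purely hypothetical, not an OpenReview feature; all names invented): once a paper has hit its minimum of completed reviews, its still-pending assignments get downgraded to optional rather than cancelled.

```python
MIN_REVIEWS = 3  # assumed target of completed reviews per paper

def update_assignments(completed, pending):
    """Return a status for each pending (paper, reviewer) assignment.

    completed: paper -> number of reviews already submitted
    pending:   paper -> list of reviewers with outstanding assignments
    """
    status = {}
    for paper, reviewers in pending.items():
        done = completed.get(paper, 0)
        for reviewer in reviewers:
            status[(paper, reviewer)] = (
                "optional" if done >= MIN_REVIEWS else "required"
            )
    return status
```

Keeping the downgraded assignments as "optional" preserves any work a reviewer has already started, which is the point of the second post.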
7 months ago

it's frustrating how inefficient review assignments are: we target a minimum number of completed reviews per paper, but because we over-assign to cover inevitable no-shows, some people end up doing technically unnecessary (if still beneficial) reviews

1 0 1 0
7 months ago

How many AI researchers fold their own laundry?

2 0 0 0
7 months ago
Post image

I am in the UK so feel free to discard, but I recently noticed Discord asking for age verification for some channels:

0 0 0 0
7 months ago
Preview
microsoft/maira-2-sae · Hugging Face We’re on a journey to advance and democratize artificial intelligence through open source and open science.

ALSO we have released the SAEs we trained, and the automated interp for all(!!)* features:
huggingface.co/microsoft/ma...

*all features for a subset of SAEs, we didn't run the full auto-interp pipeline on the widest SAE

4 0 0 0
7 months ago

We also found that the majority of the SAE features remained "uninterpretable", indicating room for improvement in automated interpretability (we focused primarily on textual features!), but perhaps also calling the SAE training and modelling assumptions into question. More work to be done here ✌️

2 0 1 0
7 months ago

... and in some cases we were able to steer MAIRA-2's generations, selectively introducing or removing concepts from its generated report.

But steering worked inconsistently! Sometimes it did nothing, or introduced off-target effects. We still don't fully understand when it will work.

1 0 1 0
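The steering operation itself is simple to illustrate (hypothetical sketch, not the MAIRA-2 code; dimensions and names invented): shift a model activation along one SAE feature's decoder direction, scaled by a steering coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 32                          # activation dim, number of SAE features
Wd = rng.normal(0, 0.1, (d, k))       # stand-in for a learned SAE decoder

def steer(activation, feature_idx, alpha):
    """Shift an activation along one SAE feature's decoder direction."""
    direction = Wd[:, feature_idx]
    return activation + alpha * direction

x = rng.normal(size=d)
x_steered = steer(x, feature_idx=3, alpha=5.0)
```

The inconsistency mentioned above is plausible with this recipe: the right `alpha` varies per feature, and decoder directions are not perfectly disentangled, so a large shift can drag in off-target concepts.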
7 months ago

We found interpretable and radiology-relevant concepts in MAIRA-2, like:
- "Aortic tortuosity or calcification"
- "Placement and position of PICC lines"
- "Presence of 'shortness of breath' in indication"
- "Describing findings without comparison to prior images"
- "Use of 'possible' or 'possibly'"

1 0 1 0
7 months ago
Post image

We performed the full pipeline of SAE training, automated interpretation with LLMs, steering, and automated steering evaluation.

1 0 1 0
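For readers unfamiliar with the first stage of that pipeline, here is a minimal sparse autoencoder in NumPy (illustrative only: the real work trains SAEs on MAIRA-2 activations at scale; all dimensions and hyperparameters here are invented): a ReLU encoder, linear decoder, and one manual gradient step on reconstruction error plus an L1 sparsity penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 32                # activation dim, dictionary size (k > d: overcomplete)
lam, lr = 1e-3, 1e-2        # L1 sparsity weight, learning rate

We = rng.normal(0, 0.1, (k, d)); be = np.zeros(k)
Wd = rng.normal(0, 0.1, (d, k)); bd = np.zeros(d)

def encode(x):
    return np.maximum(We @ x + be, 0.0)   # ReLU -> non-negative "features"

def decode(f):
    return Wd @ f + bd

def train_step(x):
    """One gradient step on ||x - xhat||^2 + lam * ||f||_1."""
    global We, be, Wd, bd
    f = encode(x)
    xhat = decode(f)
    dxhat = 2.0 * (xhat - x)                 # d(loss)/d(xhat)
    df = Wd.T @ dxhat + lam * np.sign(f)     # backprop through decoder + L1 term
    df *= (f > 0)                            # ReLU gradient mask
    Wd -= lr * np.outer(dxhat, f); bd -= lr * dxhat
    We -= lr * np.outer(df, x);    be -= lr * df
    return np.sum((x - xhat) ** 2)

x = rng.normal(size=d)
losses = [train_step(x) for _ in range(200)]
```

The interpretation and steering stages then work on the columns of `Wd`: each one is a candidate "concept" direction in activation space.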
7 months ago
Preview
Insights into a radiology-specialised multimodal large language model with sparse autoencoders Interpretability can improve the safety, transparency and trust of AI models, which is especially important in healthcare applications where decisions often carry significant consequences. Mechanistic...

New work from my team! arxiv.org/abs/2507.12950
Intersecting mechanistic interpretability and health AI 😎

We trained and interpreted sparse autoencoders on MAIRA-2, our radiology MLLM. We found a range of human-interpretable radiology reporting concepts, but also many uninterpretable SAE features.

11 4 1 0
7 months ago

Mexico is an *official* NeurIPS event, it’s an additional location for the conference and is different to the endorsement of EurIPS.

1 0 1 0
7 months ago

It’s an endorsed event but is not actually officially NeurIPS! Maybe if this experiment works well there will be more distributed (official) NeurIPS locations in future.

1 0 1 0
7 months ago

We're excited to announce a second physical location for NeurIPS 2025, in Mexico City, which we hope will address concerns around skyrocketing attendance and the difficulties obtaining travel visas that some attendees have experienced in previous years.

Read more in our blog:
blog.neurips.cc/2025/07/16/n...

46 21 1 2
8 months ago
Post image

During the last couple of years, we have read a lot of papers on explainability and often felt that something was fundamentally missing🤔

This led us to write a position paper (accepted at #ICML2025) that attempts to identify the problem and to propose a solution.

arxiv.org/abs/2402.02870
👇🧵

12 5 1 1