
Byron Wallace

@byron.bsky.social

Assoc. Prof in CS @ Northeastern, NLP/ML & health & etc. He/him.

2,474 Followers  |  329 Following  |  10 Posts  |  Joined: 03.05.2023

Latest posts by byron.bsky.social on Bluesky


Can you solve this algebra puzzle? 🧩

cb=c, ac=b, ab=?

A small transformer can learn to solve problems like this!

And since the letters don't have inherent meaning, this lets us study how context alone imparts meaning. Here's what we found: 🧵⬇️

22.01.2026 16:09 — 👍 48    🔁 10    💬 2    📌 2
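For anyone who wants to check the puzzle before reading the thread, here is a minimal sketch (mine, not from the post) under one natural reading: the letters name unknown elements of the cyclic group Z3 and juxtaposition is the group operation. Brute-forcing every assignment shows that cb = c forces b to act as the identity, so ab = a.

```python
from itertools import product

# Hedged sketch: assume "xy" means x + y in Z3 and each letter names an unknown
# element. Enumerate all assignments consistent with cb = c and ac = b, then
# check what ab must be under each of them.
solutions = []
for a, b, c in product(range(3), repeat=3):
    add = lambda x, y: (x + y) % 3
    if add(c, b) == c and add(a, c) == b:
        solutions.append((a, b, c, add(a, b)))

for a, b, c, ab in solutions:
    print(f"a={a} b={b} c={c} -> ab={ab} (ab equals a: {ab == a})")
# cb = c forces b = 0 (the identity), so ab = a in every consistent assignment.
```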

Hello world πŸ‘‹
My first paper at UT Austin!

We ask: what happens when medical “evidence” fed into an LLM is wrong? Should your AI stay faithful, or should it play it safe when the evidence is harmful?

We show that frontier LLMs accept counterfactual medical evidence at face value.🧡

21.01.2026 18:45 — 👍 14    🔁 6    💬 3    📌 2

Check out @hibaahsan.bsky.social's paper on spotting (problematic) racial biases in LLMs for healthcare applications 👇

05.11.2025 15:52 — 👍 2    🔁 0    💬 0    📌 0

3/ 🏥 A separate team at Northeastern located where certain signals live inside Olmo and made targeted edits that reduced biased clinical predictions. This kind of audit is only possible because Olmo exposes all its components.
→ buff.ly/HkChr4Q

24.10.2025 18:36 — 👍 0    🔁 1    💬 1    📌 1

Chantal (and Vinith) find that you can jailbreak LLMs with syntax! Some examples: cshaib.github.io/syntax_domai...

24.10.2025 16:26 — 👍 2    🔁 0    💬 0    📌 0

Now to appear at #EMNLP2025 (Findings). We've added more models and experiments: arxiv.org/abs/2502.13319

22.10.2025 12:24 — 👍 2    🔁 0    💬 0    📌 0

Can we distill *circuits* from teacher models into smaller students? 👇

30.09.2025 23:34 — 👍 1    🔁 0    💬 0    📌 0

Who is going to be at #COLM2025?

I want to draw your attention to a COLM paper by my student @sfeucht.bsky.social that has totally changed the way I think and teach about LLM representations. The work is worth knowing.

And you can meet Sheridan at COLM, Oct 7!
bsky.app/profile/sfe...

27.09.2025 20:54 — 👍 39    🔁 8    💬 1    📌 2

Can we quantify what makes some text read like AI "slop"? We tried 👇

24.09.2025 13:28 — 👍 8    🔁 1    💬 0    📌 0

Our new paper asks: what is the goal of “natural language verbalization” interpretability approaches? If a verbalizer is supposed to tell us something about what’s in the target LM and NOT just what’s in the verbalizer LM, how do we actually evaluate that?

17.09.2025 21:45 — 👍 13    🔁 3    💬 0    📌 0

Wouldn’t it be great to have questions about LM internals answered in plain English? That’s the promise of verbalization interpretability. Unfortunately, our new paper shows that evaluating these methods is nuanced—and verbalizers might not tell us what we hope they do. 🧵👇1/8

17.09.2025 19:19 — 👍 26    🔁 8    💬 1    📌 1
As AI expands into medicine, Northeastern study finds AI models influenced by medical bias (Khoury College of Computer Sciences): Humans can be easily influenced by language that is one-sided, especially in complex fields like medicine. But a new Khoury-led study shows that large language models, too, can be tricked […]

Thrilled to share that our research showing how LLMs can be influenced by bias from "spun" medical literature is now featured in Northeastern's Khoury news! This offers critical insights as AI enters healthcare.
The full paper can be found at arxiv.org/abs/2502.07963

25.08.2025 15:36 — 👍 3    🔁 1    💬 0    📌 0
New England Mechanistic Interpretability Workshop
About: The New England Mechanistic Interpretability (NEMI) workshop aims to bring together academic and industry researchers from the New England and surround...

This Friday, NEMI 2025 is at Northeastern in Boston: 8 talks, 24 roundtables, 90 posters, and 200+ attendees. Thanks to goodfire.ai/ for sponsoring! nemiconf.github.io/summer25/

If you can't make it in person, the livestream will be here:
www.youtube.com/live/4BJBis...

18.08.2025 18:06 — 👍 16    🔁 7    💬 1    📌 3

📒 How factual are LLMs in healthcare?
We’re excited to release FactEHR — a new benchmark to evaluate factuality in clinical notes. As generative AI enters the clinic, we need rigorous, source-grounded tools to measure what these models get right — and what they don’t. 🏥 🤖

11.08.2025 17:25 — 👍 3    🔁 1    💬 1    📌 2

Chatted with @byron.bsky.social at icml about my recent work, so look out for his upcoming "Tokenization is More Than More Than Compression".

19.07.2025 21:11 — 👍 13    🔁 1    💬 1    📌 0
An overview of our AI-in-the-loop expert study pipeline: given a claim from a subreddit, we extract the PIO elements and retrieve the evidence automatically. The evidence, its context, and the claim are then presented to a medical expert to provide a judgment and a rationale for the factuality of the claim.

Are we fact-checking medical claims the right way? 🩺🤔

Probably not. In our study, even experts struggled to verify Reddit health claims using end-to-end systems.

We show why—and argue fact-checking should be a dialogue, with patients in the loop

arxiv.org/abs/2506.20876

🧡1/

01.07.2025 17:10 — 👍 5    🔁 2    💬 1    📌 1

[📄] Are LLMs mindless token-shifters, or do they build meaningful representations of language? We study how LLMs copy text in-context, and physically separate out two types of induction heads: token heads, which copy literal tokens, and concept heads, which copy word meanings.

07.04.2025 13:54 — 👍 76    🔁 19    💬 1    📌 6
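This post doesn't give the token/concept separation itself, but the classic diagnostic for literal-token induction heads is easy to sketch: on a repeated random sequence, an induction head at a position in the second copy attends back to the token just after that token's first occurrence. A rough illustration (assuming GPT-2 small and the Hugging Face transformers API purely for convenience; not the paper's method):

```python
import torch
from transformers import GPT2LMHeadModel

# Rough sketch: score each attention head on the standard literal-token
# induction pattern using a repeated random token sequence.
torch.manual_seed(0)
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

seq_len = 50
half = torch.randint(100, 20000, (1, seq_len))      # random GPT-2 token ids
input_ids = torch.cat([half, half], dim=1)          # the sequence, repeated twice

with torch.no_grad():
    out = model(input_ids, output_attentions=True)  # per-layer (1, heads, seq, seq)

queries = torch.arange(seq_len + 1, 2 * seq_len)    # positions in the second copy
targets = queries - seq_len + 1                     # token after the first occurrence
scores = []
for layer, attn in enumerate(out.attentions):
    per_head = attn[0, :, queries, targets].mean(dim=-1)
    scores += [(s.item(), layer, head) for head, s in enumerate(per_head)]

# Heads with high scores behave like (token-level) induction heads.
for s, layer, head in sorted(scores, reverse=True)[:5]:
    print(f"layer {layer} head {head}: induction score {s:.2f}")
```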
Oxford Word of the Year 2024 (Oxford University Press): The Oxford Word of the Year 2024 is 'brain rot'. Discover more about the winner, our shortlist, and 20 years of words that reflect the world.

I'm searching for some comp/ling experts to provide a precise definition of “slop” as it refers to text (see: corp.oup.com/word-of-the-...)

I put together a google form that should take no longer than 10 minutes to complete: forms.gle/oWxsCScW3dJU...
If you can help, I'd appreciate your input! 🙏

10.03.2025 20:00 — 👍 10    🔁 8    💬 0    📌 0

🌟Job ad🌟 We (@gregdnlp.bsky.social, @mattlease.bsky.social and I) are hiring a postdoc fellow within the CosmicAI Institute, to do galactic work with LLMs and generative AI! If you would like to push the frontiers of foundation models to help solve mysteries of the universe, please apply!

25.02.2025 22:09 — 👍 13    🔁 7    💬 0    📌 3

LLMs are known to perpetuate social biases in clinical tasks. Can we locate and intervene upon LLM activations that encode patient demographics like gender and race? 🧡

Work w/ @arnabsensharma.bsky.social, @silvioamir.bsky.social, @davidbau.bsky.social, @byron.bsky.social

arxiv.org/abs/2502.13319

22.02.2025 04:17 — 👍 18    🔁 7    💬 3    📌 2
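The general recipe behind this line of work is worth sketching. A toy, self-contained version (synthetic activations and a generic linear-erasure step, not the paper's data or exact method): fit a linear probe to locate a direction that encodes a binary demographic attribute, then intervene by projecting that direction out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch of "locate, then intervene": synthetic activations stand in for a
# real LLM layer; in practice they would be hidden states from clinical prompts.
rng = np.random.default_rng(0)
n, d = 2000, 256
labels = rng.integers(0, 2, size=n)                 # hypothetical attribute labels
signal = rng.normal(size=d)
signal /= np.linalg.norm(signal)
acts = rng.normal(size=(n, d)) + 2.0 * np.outer(labels - 0.5, signal)

# Locate: a linear probe recovers the attribute from raw activations.
probe = LogisticRegression(max_iter=1000).fit(acts, labels)
print("probe accuracy before intervention:", probe.score(acts, labels))

# Intervene: project out the probe direction (a simple linear erasure; real work
# may use more surgical edits at specific layers or components).
w = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
acts_edited = acts - np.outer(acts @ w, w)

probe2 = LogisticRegression(max_iter=1000).fit(acts_edited, labels)
print("probe accuracy after intervention:", probe2.score(acts_edited, labels))
```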

🚨 Do LLMs fall for spin in medical literature? 🤔

In our new preprint, we find that LLMs are susceptible to biased reporting of clinical treatment benefits in abstracts—more so than human experts. 📄🔍 [1/7]

Full Paper: arxiv.org/abs/2502.07963

🧵👇

15.02.2025 02:34 — 👍 63    🔁 25    💬 3    📌 4
Who Taught You That? Tracing Teachers in Model Distillation: Model distillation -- using outputs from a large teacher model to teach a small student model -- is a practical means of creating efficient models for a particular task. We ask: Can we identify a stud...

📒 Can we trace a small distilled model back to its teacher? 🤔 New work (w/ @chantalsh.bsky.social, @silvioamir.bsky.social & @byron.bsky.social) finds some footprints left by LLMs in distillation! [1/6]

🔗 Full paper: arxiv.org/abs/2502.06659

11.02.2025 17:16 — 👍 8    🔁 2    💬 1    📌 0

DeepSeek R1 shows how important it is to be studying the internals of reasoning models. Try our code: here, @canrager.bsky.social shows a method for auditing AI bias by probing the internal monologue.

dsthoughts.baulab.info

I'd be interested in your thoughts.

31.01.2025 14:30 — 👍 28    🔁 9    💬 1    📌 1

📣 🌍 We're hiring for 2 Machine Learning researchers to join SOLACE-AI @kingscollegelondon.bsky.social, funded by @wellcometrust.bsky.social. This is your chance to develop cutting-edge AI to directly impact global health responses to climate emergencies. jobs.ac.uk/job/DLM377

27.01.2025 11:55 — 👍 2    🔁 3    💬 0    📌 0

OLMo 2 is out 🥳 7B and 13B trained on 5T tokens, and meticulously instruction tuned using the Tulu 3 recipe.

Simply the best fully open models yet.

Really proud of the work & the amazing team at
@ai2.bsky.social

26.11.2024 21:12 — 👍 260    🔁 44    💬 9    📌 2

And Sheridan Feucht investigates the "implicit vocabulary" of LLMs via token erasure: arxiv.org/abs/2406.20086 (w/David Atkinson and @davidbau.bsky.social)

09.11.2024 21:21 — 👍 2    🔁 0    💬 1    📌 0

Somin Wadhwa has some intriguing findings on distillation with "chain of thought" sequences (e.g., this works better when "reasoning" follows labels, and individual tokens seem to be sufficient): arxiv.org/abs/2406.14511 (w/@Silvio Amir)

09.11.2024 21:21 — 👍 1    🔁 0    💬 1    📌 0
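To make the ordering finding concrete, here is a tiny illustration (hypothetical example and field names, not the paper's code) of the two distillation target formats being compared: the conventional rationale-then-label target versus the label-then-rationale ordering the post describes as working better.

```python
# Hypothetical example; the paper's datasets and prompts differ.
example = {
    "question": "Is aspirin an NSAID?",
    "label": "yes",
    "rationale": "Aspirin inhibits COX enzymes, the defining mechanism of NSAIDs.",
}

def target_rationale_first(ex):
    # Conventional CoT distillation target: reason, then answer.
    return f"{ex['rationale']} So the answer is: {ex['label']}"

def target_label_first(ex):
    # Ordering the post reports works better: answer first, rationale after.
    return f"{ex['label']}. Because: {ex['rationale']}"

# The student is trained to map ex["question"] to one of these strings; with the
# label-first format the prediction can be read off from the first tokens.
print(target_rationale_first(example))
print(target_label_first(example))
```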

Chantal Shaib reports on syntactic "templates" that LLMs like to repeat: arxiv.org/abs/2407.00211 (w/@yanai.bsky.social and @jessyjli.bsky.social)

09.11.2024 21:21 — 👍 6    🔁 1    💬 1    📌 0

I'll be @ #EMNLP2024 if anyone wants to find snobby coffee / despair about election / or I guess talk research. Some work to be presented 👇

09.11.2024 21:21 — 👍 13    🔁 0    💬 1    📌 0

Our work on reducing diagnostic errors with interpretable risk prediction is now on arXiv!

We retrieve evidence from a patient’s record, visualize how it informs a prediction, and test it in a realistic setting. 👇 (1/6)

arxiv.org/abs/2402.10109
w/ @byron.bsky.social and @jwvdm.bsky.social

28.02.2024 18:52 — 👍 2    🔁 1    💬 1    📌 1
