
Rahul G. Krishnan

@rahulgk.bsky.social

Assistant Professor at the University of Toronto ⚒️ 🏥 Deep learning and causal inference for computational medicine

159 Followers  |  71 Following  |  23 Posts  |  Joined: 15.11.2024

Latest posts by rahulgk.bsky.social on Bluesky


🚨 Introducing CausalPFN, a foundation model trained on simulated data for in-context causal effect estimation, based on prior-fitted networks (PFNs). Joint work with Hamid Kamkari, Layer6AI & @rahulgk.bsky.social 🧵 [1/7]

📝 arxiv.org/abs/2506.07918
🔗 github.com/vdblm/Causal...
🗣️ Oral @ ICML SIM workshop

11.06.2025 13:13 — 👍 4    🔁 1    💬 1    📌 2
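
To make the PFN recipe concrete, here is a minimal toy sketch (my own illustration; the toy prior, hand-built summary features, and linear fit below stand in for the paper's simulator and transformer, and none of this is the CausalPFN code or API): simulate many datasets from a causal prior, fit a predictor that maps a whole dataset to its treatment effect, and estimation on a new dataset becomes a single forward pass.

```python
# Hypothetical sketch of the prior-fitted-network (PFN) idea: train on many
# datasets simulated from a causal prior, then estimate effects "in context".
import numpy as np

rng = np.random.default_rng(0)

def sample_dataset(n=200):
    """Draw one dataset from a toy causal prior: X -> T, (X, T) -> Y."""
    tau = rng.normal(0.0, 2.0)                  # latent true ATE for this task
    x = rng.normal(size=n)
    t = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)  # confounded T
    y = 0.5 * x + tau * t + rng.normal(0, 0.1, n)
    return np.column_stack([x, t, y]), tau

def featurize(data):
    """Fixed-size dataset summary (a stand-in for a transformer encoder)."""
    x, t, y = data.T
    return np.array([y[t == 1].mean(), y[t == 0].mean(),
                     x[t == 1].mean(), x[t == 0].mean(), np.cov(x, y)[0, 1]])

# "Prior fitting": regress the true effect on dataset summaries across many
# simulated tasks, so inference on a new dataset is one forward pass.
train = [sample_dataset() for _ in range(2000)]
Phi = np.stack([featurize(d) for d, _ in train])
taus = np.array([tau for _, tau in train])
w, *_ = np.linalg.lstsq(np.column_stack([Phi, np.ones(len(Phi))]), taus, rcond=None)

test_data, true_tau = sample_dataset()
pred = np.column_stack([featurize(test_data)[None, :], [[1.0]]]) @ w
print(f"true ATE {true_tau:.2f}, in-context estimate {pred[0]:.2f}")
```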

There's lots more to do to understand CFT better and to build on it to create better post-training methods for fine-tuning large language models.

Reach out to me or Ethan if you're interested in collaborating on this or pushing the idea to new domains and problems!

23.04.2025 22:44 — 👍 1    🔁 0    💬 0    📌 0

📖 We've also open-sourced OpenMedText, integrating 121K biomedical articles & 29 medical textbooks to support future research on domain-adaptive fine-tuning in biomedicine.

23.04.2025 22:44 — 👍 1    🔁 0    💬 1    📌 0

πŸ”§ We "negative" and "adaptive" prompts, confirming that the semantic content of prompts changes and impacts fine-tuning effectiveness.

23.04.2025 22:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

📊 Results: On medical benchmarks, CFT improves accuracy by ~2.25% over CPT; in finance, it boosts performance by ~4.32%! Importantly, these gains scale effectively with larger models. 📈

Check out Appendix E.1 for preliminary results on Gemini 1.5 Flash!

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0

πŸ₯ We tested this idea in biomedical (using newly curated OpenMedText dataset of journals & textbooks!) and financial dataβ€”CFT significantly outperforms continued pretraining (CPT) and instruction fine-tuning (IFT) in zero-shot settings.

23.04.2025 22:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🎓 Instead of using Q&A as in instruction tuning, CFT uses reflective instructions (e.g., "Reflect on how what you will see changes what you know...") motivated by how humans learn.

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0

💡 Contextual fine-tuning (CFT) adds contextual prompts during fine-tuning to steer the semantic understanding that LLMs leverage while learning new information.

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0
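
A minimal sketch of what a contextual prompt could look like mechanically during fine-tuning (my own illustration, not the paper's code; the tokenizer and prompt text below are placeholders): the prompt is prepended to each training document and excluded from the loss, so it shapes how the document is absorbed without being memorized itself.

```python
# Illustrative construction of one CFT training example. IGNORE follows the
# common causal-LM convention of a label id that contributes no loss.
CONTEXT_PROMPT = "Reflect on how what you will see changes what you know: "
IGNORE = -100

def tokenize(text):          # stand-in for a real tokenizer
    return text.split()

def make_cft_example(document):
    prompt_ids = tokenize(CONTEXT_PROMPT)
    doc_ids = tokenize(document)
    input_ids = prompt_ids + doc_ids
    # next-token labels: masked over the prompt, active over the document
    labels = [IGNORE] * len(prompt_ids) + doc_ids
    return input_ids, labels

ids, labels = make_cft_example("Metformin is a first-line therapy for type 2 diabetes.")
for tok, lab in zip(ids, labels):
    print(f"{tok:12s} -> {'(no loss)' if lab == IGNORE else lab}")
```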

🚀 Problem: Language models struggle with rapidly evolving info and context in fields like medicine & finance. We need ways to teach LLMs new information and control how they absorb this knowledge.

🔍 Insight: Why not explain and teach LLMs how to learn?

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0

My student, Ethan Choi, will be at #ICLR2025 presenting Contextual Finetuning (CFT) and teaching LLMs how to learn (joint work with Muhammad Adil Asif, Ziwen Han, John Willes @vectorinstitute.ai)

🌟 Project page: younwoochoi.github.io/cft-iclr/
Poster #239, April 26, 10:00-12:30 (Hall 3, 2B)

23.04.2025 22:44 — 👍 2    🔁 0    💬 1    📌 0

If it helps, I usually learn something new (either directly or from further digging) about the behavior of markets.

21.04.2025 21:23 — 👍 1    🔁 0    💬 2    📌 0
Rahul G. Krishnan | From associational to causal predictions with deep learning (YouTube video by the Schwartz Reisman Institute)

📣 T-CAIREM member @rahulgk.bsky.social's presentation is online! From Associational to Causal Predictions with #DeepLearning: An examination of recent advances in bridging the gap between associative #neuralnetworks and causal reasoning.
🎥 www.youtube.com/watch?v=yE6S...

24.02.2025 20:01 — 👍 1    🔁 2    💬 0    📌 0

Rocking that @ Gmail address!

31.01.2025 15:48 — 👍 2    🔁 0    💬 1    📌 0

Come by tomorrow to hear about what we have been up to!

28.01.2025 17:52 — 👍 2    🔁 0    💬 1    📌 0

I thought about this a bit; I think helping PhD students close the translational gap from research to deployment (in industry or their own startups), particularly if they don't want to go into academia, is one way forward.

21.12.2024 21:07 — 👍 4    🔁 0    💬 0    📌 1

o3 is incredible!

Since we've maxed out scale and $$$ on inference-time compute, I hope we now get back to thinking about the right combination of neural nets and algorithms to build performant models cheaper, faster, and more reliably.

21.12.2024 21:03 — 👍 1    🔁 1    💬 0    📌 0

1/6
Presenting "Unlearning Tabular Data without a 'Forget Set'"! We explore RELOAD, a new unlearning algorithm for tabular learning. Drop by the @neuripsconf.bsky.social Workshop on Table Representation Learning (@trl-research.bsky.social):
- SAT 14 Dec from 2:30pm-3:15pm!
- East Meeting Room 11-12

14.12.2024 22:00 — 👍 1    🔁 1    💬 5    📌 0
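
RELOAD's actual algorithm is in the paper; as a point of reference, here is a toy sketch (assumptions entirely mine, not RELOAD) of the exact-unlearning target that tabular unlearning methods are measured against: retraining on the retained rows after deletion, which approximate methods aim to match cheaply, and which RELOAD aims to match without a stored "forget set".

```python
# Toy tabular-unlearning setup: compare the model trained on all data with
# the exact-unlearning gold standard retrained after deleting some rows.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true + rng.normal(0, 0.1, 500)

def fit(X, y):                      # ridge regression, the model to "unlearn"
    lam = 1e-2
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_full = fit(X, y)
delete = rng.choice(500, size=50, replace=False)     # rows to forget
keep = np.setdiff1d(np.arange(500), delete)
w_retrain = fit(X[keep], y[keep])                    # exact unlearning target
print("param shift from deletion:", np.linalg.norm(w_full - w_retrain))
```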

Are you around at NeurIPS? Would love to say hi and catch up!

12.12.2024 18:10 — 👍 1    🔁 0    💬 1    📌 0

Come by our poster today to learn about decision making under unobserved confounding!

12.12.2024 16:35 — 👍 1    🔁 0    💬 1    📌 0
Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow

Finally, if you're interested in understanding how to leverage energy-based normalizing flows, check out Lance's work on MEow (chienfeng-hub.github.io/meow/)

He'll be presenting on Dec. 12, 11:00 AM-2:00 PM at West Ballroom A-D #6403

🧵 (7/7)

11.12.2024 00:20 — 👍 0    🔁 0    💬 1    📌 0
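
For readers new to the area, a toy illustration (assumptions mine, not MEow itself) of the maximum-entropy objective this line of work optimizes: for a single decision, the policy maximizing E[r] + α·H(π) is a softmax over rewards at temperature α; the work above obtains such soft-optimal policies via energy-based normalizing flows in continuous control.

```python
# Soft-optimal policies for a one-step maximum-entropy objective.
import numpy as np

r = np.array([1.0, 0.9, 0.2])          # rewards for three actions
for alpha in [0.01, 0.1, 1.0]:
    pi = np.exp(r / alpha); pi /= pi.sum()       # softmax over rewards
    entropy = -(pi * np.log(pi)).sum()
    J = pi @ r + alpha * entropy                 # E[r] + alpha * H(pi)
    print(f"alpha={alpha:4}: pi={np.round(pi, 3)}, J={J:.3f}")
```

As α grows, the policy spreads mass across near-optimal actions instead of collapsing onto the single best one, which is the exploration benefit max-ent RL is after.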
NATURAL

@nikitadhawan.bsky.social developed NATURAL (www.cs.toronto.edu/~nikita/natu...) with @cottascience.bsky.social, Karen & @cmaddis.bsky.social. It's an end-to-end pipeline that starts from raw-text data and ends with a causal (**) effect associated with an intervention.

(**) conditions apply
🧵 (6/7)

11.12.2024 00:20 — 👍 5    🔁 1    💬 1    📌 3
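
The extraction half of such a pipeline is the hard, novel part and is elided here; the estimation half rests on standard observational tools. A toy sketch (illustrative data and standard inverse-propensity weighting, not NATURAL's code) of going from extracted (covariate, treatment, outcome) records to an effect estimate:

```python
# IPW over records that, in a NATURAL-style pipeline, an LLM would have
# extracted from raw text (that step is skipped; data is simulated here).
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)                                  # extracted covariate
p = 1 / (1 + np.exp(-1.5 * x))                          # true propensity
t = (rng.random(n) < p).astype(float)                   # reported intervention
y = x + 1.0 * t + rng.normal(0, 0.5, n)                 # outcome, true ATE = 1

# fit a logistic-regression propensity model by Newton's method
w = np.zeros(2)
X = np.column_stack([np.ones(n), x])
for _ in range(25):
    e = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (t - e)
    H = X.T @ (X * (e * (1 - e))[:, None])
    w += np.linalg.solve(H, grad)
e = 1 / (1 + np.exp(-X @ w))

naive = y[t == 1].mean() - y[t == 0].mean()             # confounded contrast
ipw = np.mean(t * y / e - (1 - t) * y / (1 - e))        # Horvitz-Thompson ATE
print(f"naive: {naive:.2f}, IPW: {ipw:.2f} (truth: 1.00)")
```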

b] Billions of dollars are spent each year on trials to assess interventions.

Can we use crowdsourced data to know which intervention is likely to work ahead of time?

Doing so requires answering a causal question!

But the data to answer this question is locked in unstructured text.

🧵 (5/7)

11.12.2024 00:20 — 👍 0    🔁 1    💬 1    📌 0

Find Vahid to learn more about in-context causal inference and lots of other cool problems that he spends his time thinking about!

🧵 (4/7)

11.12.2024 00:20 — 👍 1    🔁 0    💬 1    📌 0
Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity We study the problem of online sequential decision-making given auxiliary demonstrations from experts who made their decisions based on unobserved contextual information. These demonstrations can be v...

In arxiv.org/abs/2404.07266, Vahid shows how to use offline expert data with unobserved confounding, via a nonparametric prior, to guide policy learning for bandits, MDPs, and POMDPs.

Thu 12 Dec, 4:30 - 7:30 pm PST 📷 West Ballroom A-D, Poster #6708

🧵 (3/7)

11.12.2024 00:20 — 👍 3    🔁 1    💬 1    📌 0
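
A heavily simplified toy of the core intuition (my own illustration; the paper uses a nonparametric prior, not the Beta warm-start below): expert demonstrations made with private context still carry signal, so they can seed the prior of an online learner such as a Thompson-sampling bandit.

```python
# Warm-starting a Bernoulli bandit's Beta priors from expert arm choices.
import numpy as np

rng = np.random.default_rng(3)
true_means = np.array([0.3, 0.7, 0.5])
expert_counts = np.array([1, 12, 3])        # experts mostly pulled arm 1

alpha = 1.0 + expert_counts                 # prior successes per arm
beta = 1.0 + expert_counts.sum() - expert_counts

total = 0.0
for step in range(500):
    arm = np.argmax(rng.beta(alpha, beta))  # Thompson sampling
    reward = float(rng.random() < true_means[arm])
    alpha[arm] += reward
    beta[arm] += 1 - reward
    total += reward
print(f"average reward: {total / 500:.2f} (best arm mean: 0.7)")
```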

a] Today, we learn from data and treat it as ground truth -- should we?

A doctor often knows more about their patient than is represented in electronic medical records.

A teacher knows more about their students than what their grades suggest.

🧵 (2/7)

11.12.2024 00:20 — 👍 1    🔁 0    💬 1    📌 0

First post! I'll be at @NeurIPSConf #NeurIPS2024 until Sunday. I'd love to chat about causality for medicine & science.

I'm also looking for a postdoc interested in experimental design for medicine; if that's you, send me a message.

I'll be presenting two papers at the main conference.

🧵 (1/7)

11.12.2024 00:20 — 👍 2    🔁 0    💬 1    📌 1
