
Rahul G. Krishnan

@rahulgk.bsky.social

Assistant Professor at the University of Toronto ⚒️ 🏥 Deep learning and causal inference for computational medicine

166 Followers  |  77 Following  |  30 Posts  |  Joined: 15.11.2024

Latest posts by rahulgk.bsky.social on Bluesky

Employment Opportunities — Department of Computer Science, University of Toronto
Are you looking for a thought-provoking and inventive career at a leading institution?

We're hiring tenure-stream faculty at all levels. web.cs.toronto.edu/employment-o...

If you'd like to learn more about what being faculty here is like, please do reach out!

29.11.2025 22:46 — 👍 2    🔁 0    💬 1    📌 0

Would love to connect with folks interested in automated, reliable decision-making across industries!

Finally, if you're on the job market this year, join us at the University of Toronto's Department of Computer Science in Canada.

29.11.2025 22:46 — 👍 2    🔁 0    💬 1    📌 0

Applying to graduate school next year?

We're hiring!

GPUs, coffee, an incredible city and mental space to do blue sky research
@uoftcompsci.bsky.social
@vectorinstitute.ai
@uoftmedicine.bsky.social
cs.toronto.edu/~rahulgk/lin...

29.11.2025 22:46 — 👍 2    🔁 1    💬 1    📌 0

(iii) D3M: a hypothesis test that leverages computation to detect deterioration of predictive models. @teivng.bsky.social cs.toronto.edu/~viet/d3m.html

29.11.2025 22:46 — 👍 0    🔁 0    💬 1    📌 0
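D3M's actual test statistic is defined in the paper (see the linked page), and every name below is mine, not from its codebase. Purely as an illustration of the underlying idea, flagging deterioration when per-example losses on newly collected data are systematically worse than on held-out source data, here is a generic one-sided permutation test on mean loss:

```python
# Toy illustration only: not the D3M statistic. Flags model deterioration when
# per-example losses on deployment data are systematically higher than on
# held-out source data, via a one-sided permutation test on the mean difference.
import random

def permutation_pvalue(source_losses, deploy_losses, n_perm=2000, seed=0):
    """One-sided p-value for H1: mean deployment loss > mean source loss."""
    rng = random.Random(seed)
    observed = (sum(deploy_losses) / len(deploy_losses)
                - sum(source_losses) / len(source_losses))
    pooled = list(source_losses) + list(deploy_losses)
    n_src = len(source_losses)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of losses under the null
        diff = (sum(pooled[n_src:]) / len(deploy_losses)
                - sum(pooled[:n_src]) / n_src)
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0
```

With clearly shifted losses the p-value drops toward 1/(n_perm+1); with exchangeable losses it stays large, so thresholding at, say, 0.05 turns loss monitoring into a test.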
Beyond Masked and Unmasked: Discrete Diffusion Models via Partial Masking (MDM-Prime)

(ii) MDM-Prime: a new class of discrete diffusion models that operate over subtokens.
chen-hao-chao.github.io/mdm-prime/

29.11.2025 22:46 — 👍 0    🔁 0    💬 1    📌 0
CausalPFN: Amortized Causal Effect Estimation via In-Context Learning Causal effect estimation from observational data is fundamental across various applications. However, selecting an appropriate estimator from dozens of specialized methods demands substantial manual e...

(i) Spotlight presentation on causal foundation models - CausalPFNs. @vahidbalazadeh.bsky.social
arxiv.org/abs/2506.07918

29.11.2025 22:46 — 👍 0    🔁 0    💬 1    📌 0
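As I understand the pitch, CausalPFN amortizes effect estimation: a network trained on simulated datasets maps a raw observational dataset directly to effect estimates in-context, in one forward pass, with no per-dataset fitting. A minimal sketch of that calling pattern, where a naive difference-of-means plug-in stands in for the learned network (the function names are mine, and the naive estimator is only valid under randomized treatment):

```python
# Sketch of an in-context (amortized) estimation interface. CausalPFN would
# replace `naive_ate` with a transformer trained on simulated causal worlds;
# this difference-of-means plug-in is a hypothetical stand-in, not the paper's API.

def naive_ate(treatments, outcomes):
    """Treated-minus-control mean outcome; unbiased only when treatment
    assignment is (as-if) randomized."""
    treated = [y for t, y in zip(treatments, outcomes) if t == 1]
    control = [y for t, y in zip(treatments, outcomes) if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

def estimate_effect_in_context(dataset):
    """Whole dataset in, effect estimate out, in a single pass: no per-dataset
    training loop, which is the point of amortization."""
    treatments = [row["t"] for row in dataset]
    outcomes = [row["y"] for row in dataset]
    return naive_ate(treatments, outcomes)
```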

My students and I will be at #NeurIPS2025 and EurIPS from Dec 2-8.

My students and I will be presenting three papers.

29.11.2025 22:46 — 👍 0    🔁 0    💬 1    📌 0

🚨 Introducing CausalPFN, a foundation model trained on simulated data for in-context causal effect estimation, based on prior-fitted networks (PFNs). Joint work with Hamid Kamkari, Layer6AI & @rahulgk.bsky.social 🧡[1/7]

πŸ“ arxiv.org/abs/2506.07918
πŸ”— github.com/vdblm/Causal...
πŸ—£οΈOral@ICML SIM workshop

11.06.2025 13:13 — 👍 4    🔁 1    💬 1    📌 2

There's lots more to do to understand CFT better, and to build on it to create better post-training methods for fine-tuning large language models.

Reach out to me or Ethan if you're interested in collaborating on this or pushing this idea to new domains and problems!

23.04.2025 22:44 — 👍 1    🔁 0    💬 0    📌 0

📖 We've also open-sourced OpenMedText, integrating 121K biomedical articles & 29 medical textbooks to push future research in domain-adaptive fine-tuning in biomedicine.

23.04.2025 22:44 — 👍 1    🔁 0    💬 1    📌 0

🔧 We test "negative" and "adaptive" prompts, confirming that the semantic content of a prompt impacts fine-tuning effectiveness.

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0

📊 Results: On medical benchmarks, CFT improves accuracy by ~2.25% over CPT; in finance, it boosts performance by ~4.32%! Importantly, these gains scale effectively with larger models. 📈

Check out Appendix E.1 for preliminary results on Gemini 1.5 Flash!

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0

πŸ₯ We tested this idea in biomedical (using newly curated OpenMedText dataset of journals & textbooks!) and financial dataβ€”CFT significantly outperforms continued pretraining (CPT) and instruction fine-tuning (IFT) in zero-shot settings.

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0

🎓 Instead of using Q&A as in instruction tuning, CFT uses reflective instructions (e.g., "Reflect on how what you will see changes what you know..."), motivated by how humans learn.

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0

💡 Contextual fine-tuning (CFT) uses contextual prompts during fine-tuning to adaptively shape the semantic understanding that LLMs leverage while learning new information.

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0
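A minimal sketch of what the thread describes, viewed as data preparation: prepend a reflective contextual prompt to each training document before the usual next-token loss. The prompt text echoes the example above, but the helper names and the convention of supervising only the document tokens are my assumptions, not the paper's code.

```python
# Minimal sketch of contextual fine-tuning (CFT) data preparation: each
# document is prefixed with a reflective "contextual prompt" before the usual
# next-token-prediction loss. Prompts and helper names here are illustrative.

CONTEXTUAL_PROMPTS = [
    "Reflect on how what you will see changes what you know.",
    "Consider how the following text relates to concepts you already understand.",
]

def make_cft_example(document: str, prompt: str) -> dict:
    """Pair a document with a contextual prompt; loss would typically be
    computed only on the document tokens, not the prompt tokens."""
    return {
        "input_text": f"{prompt}\n\n{document}",
        "loss_on": document,  # supervise only the new information
    }

def build_cft_dataset(documents: list[str]) -> list[dict]:
    # Cycle through the prompts so each document gets a reflective instruction.
    return [
        make_cft_example(doc, CONTEXTUAL_PROMPTS[i % len(CONTEXTUAL_PROMPTS)])
        for i, doc in enumerate(documents)
    ]
```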

🚀 Problem: Language models struggle with rapidly evolving info and context in fields like medicine & finance. We need ways to teach LLMs new information and control how they absorb this knowledge.

πŸ” Insight: Why not explain and teach LLMs how to learn?

23.04.2025 22:44 — 👍 0    🔁 0    💬 1    📌 0

My student, Ethan Choi, will be at #ICLR2025 presenting Contextual Finetuning (CFT) and teaching LLMs how to learn (joint work with Muhammad Adil Asif, Ziwen Han, John Willes @vectorinstitute.ai)

🌟Project page: younwoochoi.github.io/cft-iclr/
#239, April 26, 10:00-12:30 (Hall 3, 2B)

23.04.2025 22:44 — 👍 2    🔁 0    💬 1    📌 0

If it helps, I usually learn something new (either directly or from further digging) about the behavior of markets.

21.04.2025 21:23 — 👍 1    🔁 0    💬 2    📌 0
Rahul G. Krishnan | From associational to causal predictions with deep learning
YouTube video by Schwartz Reisman Institute

📣 T-CAIREM member @rahulgk.bsky.social's presentation is online! From Associational to Causal Predictions with #DeepLearning: an examination of recent advances in bridging the gap between associative #neuralnetworks and causal reasoning.
🎥 www.youtube.com/watch?v=yE6S...

24.02.2025 20:01 — 👍 1    🔁 2    💬 0    📌 0

Rocking that @ Gmail address!

31.01.2025 15:48 — 👍 2    🔁 0    💬 1    📌 0

Come by tomorrow to hear about what we have been up to!

28.01.2025 17:52 — 👍 2    🔁 0    💬 1    📌 0

I thought about this a bit, I think helping PhD students close the translational gap from research to deployment (in industry or their own startups), particularly if they don't want to go into academia, is one way forward.

21.12.2024 21:07 — 👍 4    🔁 0    💬 0    📌 1

o3 is incredible!

Since we've maxed out scale and $$$ on inference-time compute, I hope we now get back to thinking about the right combination of neural nets and algorithms to build performant models more cheaply, quickly, and reliably.

21.12.2024 21:03 — 👍 1    🔁 1    💬 0    📌 0

1/6
Presenting "Unlearning Tabular Data without a 'Forget Set'"! We explore RELOAD, a new unlearning algorithm for tabular learning. Drop by the @neuripsconf.bsky.social Workshop on Table Representation Learning (@trl-research.bsky.social):
- SAT 14 Dec from 2:30pm-3:15pm!
- East Meeting Room 11-12

14.12.2024 22:00 — 👍 1    🔁 1    💬 5    📌 0

Are you around at NeurIPS? Would love to say hi and catch up!

12.12.2024 18:10 — 👍 1    🔁 0    💬 1    📌 0

Come by our poster today to learn about decision making under unobserved confounding!

12.12.2024 16:35 — 👍 1    🔁 0    💬 1    📌 0
Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow

Finally, if you're interested in understanding how to leverage energy-based normalizing flows, check out Lance's work on Meow (chienfeng-hub.github.io/meow/)

He'll be presenting on Dec. 12, 11:00 AM–2:00 PM at West Ballroom A-D #6403

🧡(7/7)

11.12.2024 00:20 — 👍 0    🔁 0    💬 1    📌 0
NATURAL

@nikitadhawan.bsky.social developed NATURAL (www.cs.toronto.edu/~nikita/natu...) with @cottascience.bsky.social, Karen & @cmaddis.bsky.social. It's an end-to-end pipeline that starts from raw-text data and ends with a causal (**) effect associated with an intervention.

(**) conditions apply
🧡(6/7)

11.12.2024 00:20 — 👍 5    🔁 1    💬 1    📌 3

~Billions of dollars are spent each year on trials to assess interventions.

Can we use crowdsourced data to know which intervention is likely to work ahead of time?

Doing so requires answering a causal question!

But the data to answer this question is locked in unstructured text.

🧡(5/7)

11.12.2024 00:20 — 👍 0    🔁 1    💬 1    📌 0

Find Vahid to learn more about in-context causal inference and lots of other cool problems that he spends his time thinking about!

🧡(4/7)

11.12.2024 00:20 — 👍 2    🔁 0    💬 1    📌 0
