
David van Dijk

@vandijklab.bsky.social

Learning the rules of life. Assistant Professor of Medicine and Computer Science @ Yale

103 Followers  |  46 Following  |  51 Posts  |  Joined: 14.11.2024

Latest posts by vandijklab.bsky.social on Bluesky

Right. We have done something similar in our previous work (CINEMA-OT), where we validated causal inferences using synthetic data for which the ground truth is known.
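
For intuition, a minimal sketch of that validation idea (illustrative only, not the CINEMA-OT implementation): simulate control and treated cells with a known effect, then check that a simple estimator recovers it.

# Validate an effect estimator on synthetic data with known ground truth
# (toy example; not the CINEMA-OT method itself).
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 500, 100
true_effect = np.zeros(n_genes)
true_effect[:10] = 2.0  # ground-truth shift on the first 10 genes

control = rng.normal(0.0, 1.0, (n_cells, n_genes))
treated = rng.normal(0.0, 1.0, (n_cells, n_genes)) + true_effect

# Difference of means as a stand-in estimator; compare to the known effect.
estimated = treated.mean(axis=0) - control.mean(axis=0)
print(f"mean abs. error vs. ground truth: {np.abs(estimated - true_effect).mean():.3f}")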

19.04.2025 14:00 — 👍 1    🔁 0    💬 0    📌 0

Right, and I do believe this is possible based on other experiments we have done where we translate between biological language and natural language. Your proposed experiment may be more specific and I’m interested in trying it.

19.04.2025 13:58 — 👍 1    🔁 0    💬 0    📌 0

Zero-shot is possible but obviously much harder, and it also depends very much on the specific system.

19.04.2025 13:54 — 👍 1    🔁 0    💬 0    📌 0

We have focused on fine-tuning on one immune-cell cytokine-stimulation dataset and on (bulk) L1000. In both cases we show generalization by leaving out conditions (e.g., cytokine combos).
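
As a toy picture of that leave-out split (the column and condition names here are hypothetical): every cell from a held-out cytokine combination goes to the test set.

# Hold out a cytokine combination so it is never seen during fine-tuning.
import pandas as pd

cells = pd.DataFrame({
    "cell_id":   [f"c{i}" for i in range(6)],
    "condition": ["IFNg", "TNFa", "IFNg+TNFa", "IFNg", "IFNg+TNFa", "ctrl"],
})

held_out = {"IFNg+TNFa"}  # the unseen combo we evaluate generalization on
train = cells[~cells["condition"].isin(held_out)]
test = cells[cells["condition"].isin(held_out)]
print(len(train), "train cells;", len(test), "held-out cells")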

19.04.2025 13:53 — 👍 1    🔁 0    💬 0    📌 0

And the reasoning here is that if they improve, that shows our model generates meaningful data? That’s interesting. It’s a convenient way of validating without doing experiments, I guess.

19.04.2025 13:50 — 👍 1    🔁 0    💬 1    📌 0

I see. We haven’t done this specific experiment where we compare well-studied vs. poorly studied genes. It’s an interesting idea. We will look into it. I would expect that genes/cell types/tissues that have a lot of training data, both expression and metadata, generalize better.

19.04.2025 13:40 — 👍 1    🔁 0    💬 0    📌 0

Yes. We showed that natural-language pretraining, versus training on cell sentences from scratch, significantly boosts performance.
In addition, in the spatial reasoning task (Fig. 6) we did an ablation where we trained with and without metadata. Training with metadata performed significantly better.

19.04.2025 13:35 — 👍 1    🔁 0    💬 0    📌 0

Finally, asking a model to generate a “cell sentence” (e.g. for perturbation response prediction) is novel by design, since no LLM has encountered that representation in its training data.

18.04.2025 17:32 — 👍 1    🔁 0    💬 0    📌 0

Second, several test sets, such as Dataset Interpretation on held-out studies, use scRNA-seq datasets published after each model’s pretraining cutoff, giving us strong assurance that those examples weren’t seen during training.

18.04.2025 17:32 — 👍 1    🔁 0    💬 1    📌 0

We took several steps to ensure robust evaluation. First, we tested both open- and closed-source LLMs (GPT-4o, Gemini, LLaMA-3) on our benchmarks and found they perform poorly out of the box, indicating minimal overlap with pretraining corpora.

18.04.2025 17:32 — 👍 1    🔁 0    💬 2    📌 0

For this paper, we chose a prompt structure that helps the model learn perturbations effectively, but initial tests suggest the model handles prompt variations well as long as the data formatting is consistent, so we don't expect prompt engineering to be a major issue.

18.04.2025 17:19 — 👍 2    🔁 0    💬 0    📌 0

We'll formally test prompt robustness in future work, but from experience with earlier Cell2Sentence models, we've found minimal performance loss when using new or varied prompts. In general, we always train on a wide variety of prompts to avoid overfitting.
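
To make that concrete, here is a sketch of multi-prompt training data; the templates are invented for illustration and are not the actual C2S-Scale prompts.

# Train on varied prompt phrasings while keeping the data format fixed,
# so the model does not overfit to one template (templates are invented).
import random

TEMPLATES = [
    "Predict the response of this cell to {pert}: {cell}",
    "Given the cell {cell}, what is its response to {pert}?",
    "{cell}\nPerturbation: {pert}\nPredicted cell sentence:",
]

def make_example(cell_sentence: str, perturbation: str) -> str:
    # The wording varies; the cell-sentence format itself never does.
    return random.choice(TEMPLATES).format(cell=cell_sentence, pert=perturbation)

print(make_example("CD3D CD3E IL7R TRAC", "IFN-gamma"))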

18.04.2025 17:19 — 👍 1    🔁 0    💬 1    📌 0

Thank you!

18.04.2025 17:13 — 👍 1    🔁 0    💬 0    📌 0

- For dataset interpretation, we evaluate on scRNA-seq studies published after the model was pretrained.
Performance drops in these settings let us estimate generalization gaps, but we're also interested in developing confidence measures in future work.

18.04.2025 17:11 — 👍 1    🔁 0    💬 0    📌 0

This is still an open challenge: we don't yet have confidence estimation built into the model, but we do evaluate C2S-Scale in out-of-distribution regimes. For example:
- In perturbation prediction, we test on unseen cell type–drug combinations and combinatorial perturbations.

18.04.2025 17:11 — 👍 1    🔁 0    💬 2    📌 0

So performance likely reflects both mechanistic pattern recognition and domain transfer from literature and metadata. Our training corpus was intentionally multimodal to support this integration, letting the model ground textual knowledge in expression-level representations.

18.04.2025 17:10 — 👍 1    🔁 0    💬 1    📌 0

Great question; it might be a combination of both. For tasks like scQA, the model must (i) interpret gene expression patterns from cell sentences (e.g., identify marker genes or activation signatures), and (ii) relate those to biological concepts learned from the textual domain.

18.04.2025 17:10 — 👍 1    🔁 0    💬 1    📌 0

Many downstream tasks (e.g. scQA) require the model to reason jointly over cell sentences and biological text/metadata. We also explored this in our spatial reasoning ablation studies, where interleaving training with gene interaction data improved accuracy over training with expression alone.

18.04.2025 17:09 — 👍 1    🔁 0    💬 0    📌 0

C2S-Scale interleaves gene expression (as "cell sentences") with biological text during training to enable reasoning across both modalities. This multimodal integration is a key difference from expression-only models and is important for complex tasks.
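
As a toy picture of that interleaving (the exact format is an assumption, not taken from the paper): a single training example can pair descriptive text and metadata with a cell sentence.

# Build one interleaved training example mixing free text, metadata,
# and a cell sentence (format invented for illustration).
def make_interleaved_example(meta: dict, cell_sentence: str, caption: str) -> str:
    header = f"Tissue: {meta['tissue']}. Cell type: {meta['cell_type']}."
    return f"{caption}\n{header}\nCell sentence: {cell_sentence}"

print(make_interleaved_example(
    {"tissue": "blood", "cell_type": "CD8+ T cell"},
    "CD8A CD3D GZMK NKG7 IL7R",
    "The following profile comes from a cytotoxic lymphocyte study.",
))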

18.04.2025 17:09 — 👍 1    🔁 0    💬 1    📌 0
Post image

We thank our amazing team at Yale, Google Research, and Google DeepMind.

18.04.2025 14:13 — 👍 2    🔁 0    💬 0    📌 0
Preview
Scaling Large Language Models for Next-Generation Single-Cell Analysis Single-cell RNA sequencing has transformed our understanding of cellular diversity, yet current single-cell foundation models (scFMs) remain limited in their scalability, flexibility across diverse ta...

Dive into the details:
📄 Preprint: biorxiv.org/content/10.1...
📝 Google AI Blog: research.google/blog/teachin...
💻 Code/Models: huggingface.co/collections/... github.com/vandijklab/c...

18.04.2025 14:13 — 👍 2    🔁 0    💬 1    📌 0

What's next for C2S-Scale?
• True Multimodality: Integrating proteomics, epigenomics, imaging data 🖼️
• Deeper Biology: Modeling cell interactions, dynamics, & development ⏳
• Enhanced Trust: Improving interpretability & reliability ✅
• Community Tools: Building shared benchmarks & platforms 🏆

18.04.2025 14:13 — 👍 1    🔁 0    💬 1    📌 0
Preview
Cell2Sentence Models - a vandijklab Collection Cell2Sentence models trained for single-cell tasks

Let's build together! 🛠️ We're open-sourcing C2S-Scale to empower the community.
Models up to 1B parameters are already available on HF, and models up to 27B parameters will be released in the next few weeks!
huggingface.co/collections/... github.com/vandijklab/c...
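
A minimal loading sketch with Hugging Face transformers; the model ID below is a placeholder (the real IDs are in the linked collection), and the prompt is illustrative rather than the official format.

# Load a released C2S-Scale checkpoint from the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vandijklab/..."  # placeholder: pick a model ID from the collection
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt only; see the repo for the exact prompt formats.
prompt = "Generate a cell sentence for a human CD8+ T cell:"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))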

18.04.2025 14:13 — 👍 0    🔁 0    💬 1    📌 0
Post image

Beyond standard training, we used Reinforcement Learning (RL) 🤖 to fine-tune C2S-Scale.
Using GRPO + biological rewards, we specifically improved:
• Perturbation prediction accuracy 🧪
• Biological Q&A relevance ❓
Aligning LLMs with biological goals! ✅
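
For intuition, one plausible shape for such a biological reward (assumed here for illustration; the paper defines the actual rewards): top-k gene overlap between a generated and a measured cell sentence, which gives GRPO the scalar signal it needs.

# Reward a generation by how many of the reference's top-k genes it recovers.
def topk_overlap_reward(generated: str, reference: str, k: int = 50) -> float:
    # Cell sentences are gene symbols in rank order, so the first k tokens
    # are the k highest-expressed genes.
    gen_top = set(generated.split()[:k])
    ref_top = set(reference.split()[:k])
    return len(gen_top & ref_top) / k

print(topk_overlap_reward("CD3D CD3E IL7R", "CD3D IL7R GNLY", k=3))  # ~0.67

A scalar reward like this can be plugged into an RL fine-tuning loop; TRL's GRPOTrainer, for example, accepts custom reward functions.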

18.04.2025 14:13 — 👍 0    🔁 0    💬 1    📌 0
Post image

Size matters! 📈 We observed clear scaling laws: as model size increased from 410M → 27B parameters, performance consistently improved across tasks.
This confirms that LLMs learn better biological representations at scale using the C2S approach. Even works with efficient LoRA tuning! 💪

18.04.2025 14:13 — 👍 0    🔁 0    💬 1    📌 0
Post image Post image

And it works! 🎉 C2S-Scale achieves SOTA performance, surpassing specialized single-cell models AND general LLMs:
• 🎯 Cell type annotation
• 🧪 Predicting perturbation responses
• ✍️ Generating dataset summaries from cells
• 🗺️ Inferring spatial relationships
• ❓ Answering complex biological questions

18.04.2025 14:13 — 👍 0    🔁 0    💬 1    📌 0
Post image Post image

To truly "teach" biology to LLMs, we built a massive corpus: over 1 BILLION tokens! 📚
This wasn't just cell sentences – it included:
• 🧬 50M+ cell profiles (human/mouse)
• 🏷️ Annotations & Metadata
• 📄 Biological Text (abstracts, etc.)
Result? One model, many tasks!

18.04.2025 14:13 — 👍 0    🔁 0    💬 1    📌 0
Post image Post image

We enable LLMs to "read" biology via Cell2Sentence (C2S) 🧬➡️📝: ranking genes creates text.
This lets us leverage massive pre-trained models, unifying transcriptomic data with biological text (annotations, papers) for richer understanding.
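
Concretely, the transformation is a rank-order of the expression vector; a minimal sketch, with preprocessing details omitted:

# Cell2Sentence in miniature: sort genes by expression (descending) and
# emit their names as a space-separated "sentence", truncated to top-k.
import numpy as np

def cell_to_sentence(expr, gene_names, k=100):
    order = np.argsort(expr)[::-1]  # highest-expressed genes first
    return " ".join(gene_names[i] for i in order[:k] if expr[i] > 0)

expr = np.array([5.0, 0.0, 2.5, 9.1])
print(cell_to_sentence(expr, ["CD3D", "HBB", "IL7R", "MALAT1"], k=3))
# -> "MALAT1 CD3D IL7R"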

18.04.2025 14:13 — 👍 2    🔁 0    💬 1    📌 0
Preview
Teaching machines the language of biology: Scaling large language models for next-generation single-cell analysis

What if LLMs could “read” & “write” biology? 🤔
Introducing C2S-Scale, a Yale and Google collab: we scaled LLMs (up to 27B!) to analyze & generate single-cell data 🧬 ➡️ 📝
🔗 Blog: research.google/blog/teachin...
🔗 Preprint: biorxiv.org/content/10.1...

18.04.2025 14:13 — 👍 18    🔁 10    💬 2    📌 0

Huge thanks to the team: Zhikai Wu, Shiyang Zhang, Sizhuang He, Sifan Wang, Min Zhu, Anran Jiao, Lu Lu! Let us know what you think! #OperatorLearning #LLM #AI4Science

13.02.2025 19:23 — 👍 1    🔁 0    💬 0    📌 0
