
Austin Wang

@austintwang.bsky.social

Stanford CS PhD student working on ML/AI for genomics with @anshulkundaje.bsky.social austintwang.com

181 Followers  |  376 Following  |  12 Posts  |  Joined: 15.10.2023

Latest posts by austintwang.bsky.social on Bluesky


@saramostafavi.bsky.social (@Genentech) & I (@Stanford) r excited to announce co-advised postdoc positions for candidates with deep expertise in ML for bio (especially sequence to function models, causal perturbational models & single cell models). See details below. Pls RT 1/

19.06.2025 20:55 — 👍 55    🔁 40    💬 1    📌 3

Today was a big day for the lab. We had two back-to-back thesis defenses, and the defenders defended with great science and character.

Congrats to DR. Kelly Cochran & DR. @soumyakundu.bsky.social on this momentous achievement.

Brilliant scientists with brilliant futures ahead. 🎉🎉🎉

15.05.2025 05:19 — 👍 77    🔁 7    💬 2    📌 0
Dissecting regulatory syntax in human development with scalable multiomics and deep learning Transcription factors (TFs) establish cell identity during development by binding regulatory DNA in a sequence-specific manner, often promoting local chromatin accessibility, and regulating gene expre...

Delighted to share our latest work deciphering the landscape of chromatin accessibility and modeling the DNA sequence syntax rules underlying gene regulation during human fetal development! www.biorxiv.org/content/10.1... Read on for more: 🧵 1/16 #GeneReg 🧬🖥️

03.05.2025 18:27 — 👍 129    🔁 60    💬 2    📌 3
Programmatic design and editing of cis-regulatory elements The development of modern genome editing tools has enabled researchers to make such edits with high precision but has left unsolved the problem of designing these edits. As a solution, we propose Ledi...

Our preprint on designing and editing cis-regulatory elements using Ledidi is out! Ledidi turns *any* ML model (or set of models) into a designer of edits to DNA sequences that induce desired characteristics.

Preprint: www.biorxiv.org/content/10.1...
GitHub: github.com/jmschrei/led...
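The preprint frames Ledidi's edit design as an optimization over the input sequence; as a rough, model-agnostic sketch of the same idea (not Ledidi's actual algorithm, which is gradient-based), here is a greedy hill-climbing version where `score_fn` stands in for any predictive model, and all names and the toy motif objective are hypothetical:

```python
import itertools

def design_edits(seq, score_fn, max_edits=3, alphabet="ACGT"):
    """Greedily apply the single-base edit that most increases score_fn,
    stopping early when no edit helps. score_fn can wrap any model."""
    seq = list(seq)
    best = score_fn("".join(seq))
    edits = []
    for _ in range(max_edits):
        candidates = []
        for i, base in itertools.product(range(len(seq)), alphabet):
            if seq[i] == base:
                continue
            trial = seq.copy()
            trial[i] = base
            candidates.append((score_fn("".join(trial)), i, base))
        s, i, base = max(candidates)
        if s <= best:
            break  # no remaining single edit improves the objective
        best, seq[i] = s, base
        edits.append((i, base))
    return "".join(seq), edits

# Toy objective: count occurrences of a hypothetical TF motif "TGAC".
score = lambda s: sum(s[i:i + 4] == "TGAC" for i in range(len(s) - 3))
designed, edits = design_edits("TGACAATTGGCC", score, max_edits=2)
```

Because the scoring function is a black box here, the same loop works with any trained sequence-to-function model plugged in; the gradient-based formulation in the preprint makes this search far more efficient for real models.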

24.04.2025 12:59 — 👍 115    🔁 37    💬 2    📌 3
Single cell – ENCODE: Homo sapiens clickable body map

Very excited to announce that the single cell/nuc. RNA/ATAC/multi-ome resource from ENCODE4 is now officially public. This includes raw data, processed data, annotations and pseudobulk products. Covers many human & mouse tissues. 1/

www.encodeproject.org/single-cell/...

07.01.2025 21:29 — 👍 287    🔁 86    💬 6    📌 0

Our ChromBPNet preprint is out!

www.biorxiv.org/content/10.1...

Huge congrats to Anusri! This was quite a slog (for both of us) but we r very proud of this one! It is a long read but worth it IMHO. Methods r in the supp. materials. Bluetorial coming soon below 1/

25.12.2024 23:48 — 👍 231    🔁 89    💬 8    📌 5

I think that’ll be interesting to look more into! The profile information does not convey overall accessibility since it’s normalized, but maybe this sort of multitasking could help.

14.12.2024 15:24 — 👍 1    🔁 0    💬 0    📌 0

Thank you for the kind words! Yes, ChromBPNet uses unmodified models, which include a profile head and a bias model. However, these evaluations use only the count head.
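For readers unfamiliar with the two heads under discussion: BPNet-style models read out a shared representation through a position-normalized profile head and a separate scalar count head. A toy numpy illustration of that split, with hypothetical shapes and weights; the real ChromBPNet trunk uses dilated convolutions plus a bias model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def two_head_forward(seq_onehot, trunk_w, profile_w, count_w):
    """Toy two-headed readout: a shared per-position representation feeds
    (a) a profile head, softmax-normalized across positions so it carries
    shape but not scale, and (b) a scalar count head carrying magnitude."""
    h = seq_onehot @ trunk_w                     # (L, d) shared "trunk"
    profile = softmax(h @ profile_w)             # (L,) sums to 1 by construction
    log_count = float(h.mean(axis=0) @ count_w)  # scalar count readout
    return profile, log_count

rng = np.random.default_rng(0)
L, d = 12, 8
seq = np.eye(4)[rng.integers(0, 4, L)]           # (L, 4) one-hot DNA
profile, log_count = two_head_forward(seq, rng.normal(size=(4, d)),
                                      rng.normal(size=d), rng.normal(size=d))
```

Because the profile head is normalized, it conveys shape but not overall accessibility on its own, which is why the count head carries that signal.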

11.12.2024 06:14 — 👍 1    🔁 0    💬 1    📌 0

Excited to announce DART-Eval, our latest work on benchmarking DNALMs! Catch us at #NeurIPS!

11.12.2024 02:30 — 👍 8    🔁 5    💬 0    📌 1

New work! Come check out our poster tomorrow and take a look at the paper!

11.12.2024 02:30 — 👍 6    🔁 3    💬 0    📌 0
DART-Eval: A Comprehensive DNA Language Model Evaluation Benchmark on Regulatory DNA Recent advances in self-supervised models for natural language, vision, and protein sequences have inspired the development of large genomic DNA language models (DNALMs). These models aim to learn gen...

(10/10) Come check out our poster (tomorrow Dec 11 at 11 AM) or read the paper for more details!

arxiv.org/abs/2412.05430

github.com/kundajelab/D...

neurips.cc/virtual/2024...

#machinelearning #NeurIPS2024 #genomics

11.12.2024 02:30 — 👍 8    🔁 1    💬 0    📌 0

(9/10) How do we train more effective DNALMs? Use better data and objectives:
• Nailing short-context tasks before long-context
• Data sampling to account for class imbalance
• Conditioning on cell type context
These strategies use external annotations, which are plentiful!
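As one concrete illustration of the data-sampling point: functional elements are rare relative to background genome, so a sampler can draw classes uniformly rather than at their natural frequencies. A minimal sketch with hypothetical toy data, not the paper's actual pipeline:

```python
import random
from collections import defaultdict

def balanced_batches(windows, labels, batch_size, rng=None):
    """Yield batches in which each class is sampled uniformly,
    oversampling rare classes (e.g. enhancers) relative to background."""
    rng = rng or random.Random()
    by_class = defaultdict(list)
    for w, y in zip(windows, labels):
        by_class[y].append(w)
    classes = sorted(by_class)
    while True:
        yield [rng.choice(by_class[rng.choice(classes)])
               for _ in range(batch_size)]

# Hypothetical toy data: 10% enhancer windows, 90% background.
windows = [f"win{i}" for i in range(1000)]
labels = ["enhancer" if i < 100 else "background" for i in range(1000)]
batches = balanced_batches(windows, labels, batch_size=64,
                           rng=random.Random(0))
```

Sampling the class label first makes enhancers appear in roughly half of each batch despite being 10% of the data; in practice the per-class weights would be tuned rather than strictly uniform.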

11.12.2024 02:30 — 👍 7    🔁 1    💬 1    📌 0

(8/10) This indicates that DNALMs inconsistently learn functional DNA. We believe that the culprit is not architecture, but rather the sparse and imbalanced distribution of functional DNA elements.

Given their resource requirements, current DNALMs are a hard sell.

11.12.2024 02:30 — 👍 7    🔁 1    💬 1    📌 0

(7/10) DNALMs struggle with more difficult tasks.
Furthermore, small models trained from scratch (<10M params) routinely outperform much larger DNALMs (>1B params), even after LoRA fine-tuning!
Our results on the hardest task: counterfactual variant effect prediction.
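Since LoRA fine-tuning comes up here: LoRA freezes the pre-trained weight W and learns only a low-rank update BA, so a billion-parameter DNALM can be adapted by training comparatively few parameters. A minimal numpy sketch of the forward pass, with illustrative shapes:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """LoRA-style layer: y = x @ (W + alpha * B @ A).T.
    W is the frozen (d_out, d_in) weight; A (r, d_in) and B (d_out, r)
    form the trainable low-rank update, with r << min(d_in, d_out)."""
    return x @ (W + alpha * B @ A).T

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 8, 2
x = rng.normal(size=(4, d_in))
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))   # zero-initialized, so the update starts inert
y = lora_forward(x, W, A, B)
```

With B zero-initialized, the adapted layer initially reproduces the frozen model exactly; training then moves only A and B, which is why fine-tuning is cheap but cannot fix representations the base model never learned.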

11.12.2024 02:30 — 👍 6    🔁 1    💬 3    📌 0

(6/10) We introduce DART-Eval, a suite of five biologically informed DNALM evaluations focused on transcriptional regulatory DNA, ordered by increasing difficulty.

11.12.2024 02:30 — 👍 4    🔁 1    💬 1    📌 0


(5/10) Rigorous evaluations of DNALMs, though critical, are lacking. Existing benchmarks:
• Focus on surrogate tasks tenuously related to practical use cases
• Suffer from inadequate controls and other dataset design flaws
• Compare against outdated or inappropriate baselines

11.12.2024 02:30 — 👍 2    🔁 0    💬 1    📌 0

(4/10) An effective DNALM should:
• Learn representations that can accurately distinguish different types of functional DNA elements
• Serve as a foundation for downstream supervised models
• Outperform models trained from scratch
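The first criterion is usually tested by probing frozen embeddings with a simple classifier. As a stand-in for the linear probes commonly used (the data and class names here are hypothetical), a nearest-centroid probe in numpy:

```python
import numpy as np

def centroid_probe(train_emb, train_y, test_emb):
    """Classify held-out embeddings by nearest class centroid.
    If frozen representations separate element classes well, even
    this trivial probe should score highly."""
    classes = np.unique(train_y)
    centroids = np.stack([train_emb[train_y == c].mean(axis=0)
                          for c in classes])
    dists = ((test_emb[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Hypothetical embeddings: two well-separated element classes.
rng = np.random.default_rng(0)
emb = np.concatenate([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
y = np.array(["promoter"] * 50 + ["enhancer"] * 50)
preds = centroid_probe(emb, y, emb)
```

The probe deliberately adds almost no capacity of its own, so its accuracy reflects the quality of the frozen representations rather than the classifier.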

11.12.2024 02:30 — 👍 2    🔁 0    💬 1    📌 0

(3/10) However, DNA is vastly different from text, being much more heterogeneous, imbalanced, and sparse. Imagine a blend of several different languages interspersed with a load of gibberish.

11.12.2024 02:30 — 👍 8    🔁 3    💬 1    📌 0

(2/10) DNALMs are a new class of self-supervised models for DNA, inspired by the success of LLMs. These DNALMs are often pre-trained solely on genomic DNA without considering any external annotations.

11.12.2024 02:30 — 👍 3    🔁 0    💬 1    📌 0

(1/10) Excited to announce our latest work! @arpita-s.bsky.social, @amanpatel100.bsky.social, and I will be presenting DART-Eval, a rigorous suite of evals for DNA Language Models on transcriptional regulatory DNA, at #NeurIPS2024. Check it out! arxiv.org/abs/2412.05430

11.12.2024 02:30 — 👍 70    🔁 27    💬 1    📌 3
