@nathanielblalock.bsky.social
Graduate Research Assistant in Dr. Philip Romero's Lab at Duke/Wisconsin
Reinforcement and Deep Learning for Protein Redesign | He/him
Let me know if you'd like me to clarify anything. I'm happy to talk!
25.05.2025 20:54

Me too 🤪 It is really exciting to be submitting! We definitely learned a lot along the way
10.05.2025 05:27

Reinforcement learning with experimental feedback (RLXF) shifts protein language models so that they generate sequences with improved properties
@nathanielblalock.bsky.social @philromero.bsky.social
www.biorxiv.org/content/10.1...
Thank you for sharing our work @kevinkaichuang.bsky.social! It means a lot
10.05.2025 02:05

Thank you for posting about our preprint!
08.05.2025 18:03

and our open-source code at github.com/RomeroLab/RLXF
08.05.2025 18:02

Want to learn more? Check out our preprint at www.biorxiv.org/content/10.1...
08.05.2025 18:02

We apply RLXF across five diverse protein classes to demonstrate its generalizability and effectiveness at generating optimized sequences by learning functional constraints beyond those captured during pre-training
08.05.2025 18:02

Experimental validation reveals the RLXF-aligned model generates a higher fraction of functional sequences, a greater number of sequences more fluorescent than CreiLOV, and the brightest oxygen-independent fluorescent protein variant reported to date
08.05.2025 18:02

We align ESM-2 to experimental fluorescence data from the CreiLOV flavin-binding fluorescent protein. The aligned model learns to prioritize mutations that enhance fluorescence, many of which are missed by the base model
08.05.2025 18:02

RLXF follows a two-phase strategy inspired by RLHF. Supervised Fine-Tuning initializes the model in the right region of sequence space. Proximal Policy Optimization directly aligns sequence generation with feedback from a reward function like a sequence-function predictor
08.05.2025 18:02
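For intuition only, here is a minimal toy sketch of that two-phase recipe: supervised fine-tuning first, then PPO-style clipped updates against a reward. This is not the RLXF implementation from the preprint or the repo; the per-position categorical policy, the stand-in reward, and every hyperparameter below are assumptions made up purely for illustration.

```python
import torch
import torch.nn.functional as F

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
SEQ_LEN = 8  # toy sequence length (assumption)

# Toy "policy": an independent categorical distribution over amino acids at each position.
logits = torch.zeros(SEQ_LEN, len(AMINO_ACIDS), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.05)

def sample_sequences(n):
    """Sample n sequences; return indices (n, SEQ_LEN) and summed log-probs (n,)."""
    dist = torch.distributions.Categorical(logits=logits)
    idx = dist.sample((n,))
    return idx, dist.log_prob(idx).sum(dim=-1)

def toy_reward(idx):
    """Stand-in for a sequence-function predictor trained on experimental data:
    here it simply rewards alanine content (index 0). Purely illustrative."""
    return (idx == 0).float().mean(dim=-1)

# Phase 1: supervised fine-tuning on example sequences (random stand-ins here)
# to place the policy in a sensible region of sequence space.
sft_data = torch.randint(0, len(AMINO_ACIDS), (64, SEQ_LEN))
for _ in range(50):
    per_pos = logits.unsqueeze(0).expand(len(sft_data), -1, -1).reshape(-1, len(AMINO_ACIDS))
    loss = F.cross_entropy(per_pos, sft_data.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: PPO-style clipped policy updates toward higher reward.
for _ in range(200):
    with torch.no_grad():
        idx, logp_old = sample_sequences(32)
        reward = toy_reward(idx)
        advantage = reward - reward.mean()  # simple mean baseline
    dist = torch.distributions.Categorical(logits=logits)
    logp_new = dist.log_prob(idx).sum(dim=-1)
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 0.8, 1.2)  # PPO clip range (assumption)
    loss = -torch.min(ratio * advantage, clipped * advantage).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean toy reward after alignment:", toy_reward(sample_sequences(256)[0]).mean().item())
```

In the actual work the policy is a pLM such as ESM-2 and the reward comes from a sequence-function predictor trained on experimental fluorescence data; the sketch only mirrors the overall shape of the pipeline (imitate known sequences, then optimize a clipped PPO objective against the reward).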
Pre-trained pLMs generate highly diverse sequences mirroring statistical patterns from natural proteins. But here's the challenge: they lack an explicit understanding of function, often failing to generate proteins with enhanced or non-natural activities. RLXF bridges this gap!
08.05.2025 18:02

We in the @philromero.bsky.social lab are excited to share our new preprint introducing RLXF for the functional alignment of protein language models (pLMs) with experimentally derived notions of biomolecular function!
08.05.2025 18:02

Great article, simple reminder about the value of higher education! engineering.wisc.edu/blog/why-we-...
04.04.2025 15:41

Congrats to Chase on her new preprint! She developed OMEGA, a simple method for assembling custom gene panels for as little as $1.50 per gene. Big step forward for protein engineering and design! 🧬
www.biorxiv.org/content/10.1...
Post the amazing science things you have done with federal funding.
28.01.2025 20:51

It was a pleasure meeting you! Y'all are doing super interesting and relevant work. It will be cool to see how we can continue to interact and maybe collaborate in the future!
20.12.2024 20:50

Favorite foods! Tandoori chicken and chili momos: everestkitchen.ca. Onigiri! www.onigiriya.ca. Pho: www.viethouserestaurant.com.
20.12.2024 16:59

Paper #4: arxiv.org/abs/2406.17692 from the incredible
@gregdnlp.bsky.social. I really like how they explore what happens during the alignment of LLMs with RLHF. This was so cool to see, having observed similar outcomes in my research.
Papers #2-3: arxiv.org/abs/2402.10210 and arxiv.org/abs/2405.00675 from the incredible
@quanquangu.bsky.social. I really like how they explore new techniques for RLHF
Paper #1: arxiv.org/abs/2412.12979
Aligning autoregressive pLMs to generate EGFR binders via Direct Preference Optimization (DPO) from the incredible @noeliaferruz.bsky.social, who gave a great talk as part of the MLSB workshop
My 1st NeurIPS was a wonderful experience - incredible to see so much research in protein design and reinforcement learning. Here are my favorite papers (and favorite places I got food in Vancouver):
20.12.2024 16:42

Hey Kevin, could I be added? This is really helpful for joining Bluesky! Thank you for doing it
17.12.2024 18:38

Three BioML starter packs now!
Pack 1: go.bsky.app/2VWBcCd
Pack 2: go.bsky.app/Bw84Hmc
Pack 3: go.bsky.app/NAKYUok
DM if you want to be included (or nominate people who should be!)