
Gwen Cheni

@gwencheni.bsky.social

Building stealth AI+bio. Prev @KhoslaVentures @indbio @sosv🧬💻 @ucsf🌉 @jpmorgan @GoldmanSachs @yale @UChicago @LMU_Muenchen

33 Followers  |  21 Following  |  79 Posts  |  Joined: 01.11.2024

Latest posts by gwencheni.bsky.social on Bluesky

"The science of today is the technology of tomorrow."
— Edward Teller

09.02.2025 17:19 — 👍 12    🔁 3    💬 0    📌 1
GitHub - deepseek-ai/DeepSeek-R1

Code and paper on GitHub: github.com/deepseek-ai/...

21.01.2025 02:20 — 👍 0    🔁 0    💬 0    📌 0

Emergent properties:

Thinking time steadily improved throughout the training process 😳

21.01.2025 02:19 — 👍 0    🔁 0    💬 1    📌 0

Uses Group Relative Policy Optimization (GRPO) instead of Proximal Policy Optimization (PPO): it forgoes the critic model (which would be the same size as the policy model) and instead estimates the baseline from group scores, using the average reward of multiple sampled outputs to reduce memory use.

21.01.2025 02:19 — 👍 0    🔁 0    💬 1    📌 0
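
A minimal sketch of the group-baseline idea in the post above, assuming one prompt with several sampled completions scored by a rule-based checker; the function name and the std-normalization guard are illustrative, not taken from the DeepSeek paper.

```python
import torch

def grpo_advantages(group_rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantages: instead of a learned critic the size of
    the policy model, the baseline is the mean reward of the sampled group
    (normalized by the group's std), so only the policy sits in memory."""
    mean = group_rewards.mean(dim=-1, keepdim=True)
    std = group_rewards.std(dim=-1, keepdim=True)
    return (group_rewards - mean) / (std + 1e-8)

# e.g. 4 sampled answers to one prompt, scored 0/1 by a rule-based checker
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0]])
print(grpo_advantages(rewards))  # positive for correct samples, negative for the rest
```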

The secret sauce is the rewards: ground truth computed by hardcoded rules. Learned reward models can easily be hacked by RL.

21.01.2025 02:18 — 👍 0    🔁 0    💬 1    📌 0
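
A toy illustration of what a hardcoded, ground-truth reward can look like; the <think>/<answer> tag format and the weights are assumptions for the sketch, not DeepSeek's exact rules.

```python
import re

def rule_based_reward(completion: str, gold_answer: str) -> float:
    """Reward computed purely by rules: a small format reward plus an
    exact-match accuracy reward. There is no learned reward model for
    the policy to exploit (reward-hack)."""
    reward = 0.0
    if re.search(r"<think>.*</think>\s*<answer>.*</answer>", completion, re.DOTALL):
        reward += 0.1  # format reward (weight is illustrative)
    m = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if m and m.group(1).strip() == gold_answer.strip():
        reward += 1.0  # accuracy reward against ground truth
    return reward

print(rule_based_reward("<think>2 + 2 = 4</think> <answer>4</answer>", "4"))  # 1.1
```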

In addition to being open source, DeepSeek-R1 is significant because it's pure reinforcement learning (RL) with no supervised fine-tuning (SFT) "cold start". Reminiscent of AlphaZero (which mastered Go, Shogi, and Chess from scratch, without learning from human grandmaster games).

21.01.2025 02:18 — 👍 1    🔁 0    💬 1    📌 0

DeepSeek-R1: pure reinforcement learning (RL), no supervised fine-tuning (SFT), no supervised chain-of-thought (CoT) data #1minPapers 🧵👇

21.01.2025 02:18 — 👍 1    🔁 0    💬 1    📌 0

13. Janet Woodcock (former FDA): potential to look at prospective studies for certain rare indications, instead of only randomized controlled trials.

17.01.2025 02:05 — 👍 0    🔁 0    💬 0    📌 0

12. Scott Gottlieb @scottgottliebmd.bsky.social: 50% of oncology INDs at the FDA are from China.

17.01.2025 02:04 — 👍 0    🔁 0    💬 1    📌 0

11. Bob Nelson: the market is provisionally open. If you already have a strong shareholder base and the book is ready, the market's open. Biotech IPOs are funding events: ARCH doesn't view IPOs as exits and will stay past the IPO for 3–4 yrs until a clinical milestone.

17.01.2025 02:03 — 👍 1    🔁 0    💬 1    📌 0

10. Big pharmas acquiring model teams is rare (Prescient Design was a one-off); expect more partnerships instead, and pharmas claim to have their own teams.

17.01.2025 02:03 — 👍 1    🔁 0    💬 1    📌 0

9. Org structure matters in decision making, e.g. Merck is organized into Research vs. Development, while J&J is organized along indication areas. Do you invest on risk, or on inflection points?

17.01.2025 02:03 — 👍 0    🔁 0    💬 1    📌 0

8. Every pharma's interested in obesity, but also careful because there are already 3 players and it's hard to differentiate.

17.01.2025 02:03 — 👍 0    🔁 0    💬 1    📌 0

7. Pharmas need to look over their shoulders prior to billion-dollar acquisitions in case generics come out of China in a few years with the same MOA. One pharma CEO: "we have to get the cost of R&D down to be competitive."

17.01.2025 02:03 — 👍 0    🔁 0    💬 1    📌 0

6. Large deals tend to result in cost cuts, not topline growth, and this industry trades on topline growth rate. Bolt-ons plus the occasional mega-billion-dollar deal (a barbell strategy) may be the pattern in 2025.

17.01.2025 02:03 — 👍 0    🔁 0    💬 1    📌 0

5. 2023 was a record M&A year, $130bn. 2024 was a digestion year: not horrible for the number of deals, but mostly private deals because the capital markets were closed. Scale is important in pharma and drives how much R&D is allocated. The previous admin was against large deals; the new admin is not.

17.01.2025 02:02 — 👍 0    🔁 0    💬 1    📌 0

4. The IRA shifted focus to bigger cancers, and that may be here to stay. Biologics and small-molecule timelines may not be aligned: 13 vs. 9 years. Small molecules have challenges with tox and only have 9 years to recoup investment. There could hopefully be bipartisan support to even out this 9 vs. 13.

17.01.2025 02:02 — 👍 0    🔁 0    💬 1    📌 0

3. Saw a lot of fast-following the last few years: with 3–4 drugs on the same MOA it's hard to get a return. Do VCs shift to lower-risk, lower-reward investments instead?

17.01.2025 02:02 — 👍 0    🔁 0    💬 1    📌 0

2. Of last year's IPOs, 80% are below water, so capitalize your company such that you aren't dependent on an IPO. Have optionality. Is M&A the goal? If you are taking a drug to market, you may have no option but to IPO.

17.01.2025 02:02 — 👍 0    🔁 0    💬 1    📌 0

Takeaways from JPM Healthcare Conf 2025 #JPM2025: having survived the past two years of biotech winter and the current political uncertainties, the crowd was cautiously optimistic that dealflow will recover. And yes, the phrase "agentic AI" should have been a drinking game. 🧵👇

17.01.2025 02:01 — 👍 3    🔁 0    💬 1    📌 0
rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking. We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models.

Paper on arXiv: arxiv.org/abs/2501.04519

12.01.2025 16:48 — 👍 0    🔁 0    💬 0    📌 0

The SLM is also used as a process preference model (PPM) to predict reward labels for each reasoning step. Q-values can reliably distinguish positive (correct) steps from negative ones. Using preference pairs and a pairwise ranking loss, instead of the Q-values directly, eliminates the inherent noise. 6/n

12.01.2025 16:47 — 👍 0    🔁 0    💬 1    📌 0
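
A minimal sketch of the pairwise ranking loss the post above describes, assuming the PPM has already scored a correct and an incorrect step for the same prefix; this Bradley-Terry-style form is an illustration, not the exact loss from the paper.

```python
import torch
import torch.nn.functional as F

def ppm_pairwise_loss(score_pos: torch.Tensor, score_neg: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss: only the ordering of step scores is trained,
    so the noisy absolute Q-values from MCTS never become regression targets."""
    return -F.logsigmoid(score_pos - score_neg).mean()

# PPM scores for a preferred (correct) step and a rejected step of the same prefix
pos = torch.tensor([1.3, 0.2])
neg = torch.tensor([-0.4, 0.1])
print(ppm_pairwise_loss(pos, neg))
```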

The SLM samples candidate nodes, each generating a CoT step and corresponding Python code; only nodes whose code executes successfully are retained. MCTS automatically assigns (self-annotates) a Q-value to each intermediate step based on its contribution: steps that appear in more successful trajectories get a higher Q. 5/n

12.01.2025 16:47 — 👍 0    🔁 0    💬 1    📌 0
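
A toy version of the self-annotation rule described above: a step's Q-value is estimated as the fraction of rollouts through it that end in a verified-correct answer, so steps on more successful trajectories get higher Q. The data structures are illustrative.

```python
from collections import defaultdict

def self_annotate_q(trajectories):
    """trajectories: list of (step_ids_along_path, reached_correct_answer).
    Returns a Q-value per step with no human step-level labels."""
    visits = defaultdict(int)
    wins = defaultdict(int)
    for steps, is_correct in trajectories:
        for step in steps:
            visits[step] += 1
            wins[step] += int(is_correct)
    return {step: wins[step] / visits[step] for step in visits}

rollouts = [(["s1", "s2"], True), (["s1", "s3"], False), (["s1", "s2"], True)]
print(self_annotate_q(rollouts))  # {'s1': ~0.67, 's2': 1.0, 's3': 0.0}
```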

Process reward modeling (PRM) provides fine-grained feedback on intermediate steps because incorrect intermediate steps significantly decrease data quality in math. 4/n

12.01.2025 16:47 — 👍 0    🔁 0    💬 1    📌 0

Result: "4 rounds of self-evolution with millions of synthesized solutions for 747k math problems … it improves Qwen2.5-Math-7B from 58.8% to 90.0% and Phi3-mini-3.8B from 41.4% to 86.4%, surpassing o1-preview by +4.5% and +0.9%." 3/n

12.01.2025 16:47 — 👍 0    🔁 0    💬 1    📌 0

"Unlike solutions relying on superior LLMs for data synthesis, rStar-Math leverages smaller language models (SLMs) with Monte Carlo Tree Search (MCTS) to establish a self-evolutionary process, iteratively generating higher-quality training data." 2/n

12.01.2025 16:46 — 👍 0    🔁 0    💬 1    📌 0

#1minPapers MSFT's rStar-Math small language model self-improves and generates its own training data - the second time in recent months that a small model has performed as well as (or better than) billion-parameter large models. 🧵👇

12.01.2025 16:46 — 👍 0    🔁 0    💬 1    📌 0
Chollet -- "o-models FAR beyond classical DL" (YouTube video by Machine Learning Street Talk)

Full interview here: www.youtube.com/watch?v=w9WE...

10.01.2025 03:02 — 👍 0    🔁 0    💬 0    📌 0

Speculating on how o1 works: it searches over possible chains of thought. By backtracking and keeping the branches that work better, it ends up with a natural-language program that adapts to novelty. It is clearly doing search in chain-of-thought space at test time: the telltale sign is that compute and latency go ⬆️ 11/n

10.01.2025 03:02 — 👍 0    🔁 0    💬 1    📌 0
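
Purely illustrative pseudocode for the mechanism speculated above: sample several candidate continuations of the chain of thought, keep the best-scoring branch, and repeat; compute and latency grow with the number of branches and steps. Nothing here reflects OpenAI's actual implementation; the proposer and scorer are stand-ins.

```python
import random

def toy_cot_search(prompt, propose, score, width=4, depth=3):
    """Greedy search over candidate chains of thought: at each step sample
    `width` continuations, keep the highest-scoring one, and extend it.
    Cost scales with width * depth, hence the extra compute and latency."""
    chain = prompt
    for _ in range(depth):
        candidates = [chain + "\n" + propose(chain) for _ in range(width)]
        chain = max(candidates, key=score)
    return chain

# stand-in proposer and scorer so the sketch runs without an LLM
propose = lambda chain: f"try adding {random.randint(0, 9)}"
score = lambda chain: -abs(sum(int(c) for c in chain if c.isdigit()) - 7)
print(toy_cot_search("target sum is 7:", propose, score))
```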

Some recombination patterns of the building blocks will occur more often in certain contexts; extract this as a reservoir form (a higher-level abstraction fitted to the problem) and add it back to the building blocks, so that next time you solve the problem in fewer steps. 10/n

10.01.2025 03:02 — 👍 0    🔁 0    💬 1    📌 0
