
@sophie-xhonneux.bsky.social

229 Followers  |  159 Following  |  8 Posts  |  Joined: 18.11.2024

Latest posts by sophie-xhonneux.bsky.social on Bluesky

about | ICLR Blogposts 2026

Call for Blog Posts: submission deadline Dec. 1st, 2025, 23:59 AoE. All information is now available: iclr-blogposts.github.io/2026/about/ Please RT!
Organizers:

@schwinnl.bsky.social @busycalibrating.bsky.social @jonkhler.argmin.xyz @n-gao.bsky.social @mhrnz.bsky.social & myself

22.09.2025 07:47 — 👍 2   🔁 1   💬 1   📌 0
blog | ICLR Blogposts 2025 Home to the 2025 ICLR Blogposts track

See last year's accepted blog posts: iclr-blogposts.github.io/2025/blog/in...

22.09.2025 07:45 — 👍 2   🔁 1   💬 1   📌 0

Blog posts are a great medium for sharing ML research. If you have new intuitions on past work, have noticed key implementation details for reproducibility, have insights into the societal implications of AI, or have an interesting negative result, consider writing and submitting a blog post.

22.09.2025 07:45 — 👍 2   🔁 1   💬 1   📌 0

📣 Call for Blog Posts at #ICLR2026 @iclr_conf

Following the success of the past iterations, we are opening the Call for Blog Posts 2026!

iclr-blogposts.github.io/2026/about/#...

Please retweet!

22.09.2025 07:44 — 👍 14   🔁 8   💬 1   📌 1

Fantastic opportunity to join our team at the European Centre for Medium-Range Weather Forecasts (ECMWF) as an ML Scientist working on Atmospheric Composition/Air Quality Forecasting: <https://jobs.ecmwf.int/Job/JobDetail?JobId=10318>
Write me if you have any questions!

16.06.2025 12:42 — 👍 4   🔁 3   💬 0   📌 0

If you are at @iclr-conf.bsky.social and are interested in making your RLHF really fast, come find @mnoukhov.bsky.social and me at poster #582.

25.04.2025 07:58 — 👍 4   🔁 0   💬 1   📌 0

I am at ICLR this year; please reach out if you would like to have a chat.

24.04.2025 04:00 — 👍 4   🔁 0   💬 0   📌 0
A circular diagram with a blue whale icon at the center. The diagram shows 8 interconnected research areas around LLM reasoning represented as colored rectangular boxes arranged in a circular pattern. The areas include: Β§3 Analysis of Reasoning Chains (central cloud), Β§4 Scaling of Thoughts (discussing thought length and performance metrics), Β§5 Long Context Evaluation (focusing on information recall), Β§6 Faithfulness to Context (examining question answering accuracy), Β§7 Safety Evaluation (assessing harmful content generation and jailbreak resistance), Β§8 Language & Culture (exploring moral reasoning and language effects), Β§9 Relation to Human Processing (comparing cognitive processes), Β§10 Visual Reasoning (covering ASCII generation capabilities), and Β§11 Following Token Budget (investigating direct prompting techniques). Arrows connect the sections in a clockwise flow, suggesting an iterative research methodology.

Models like DeepSeek-R1 🐋 mark a fundamental shift in how LLMs approach complex problems. In our preprint on R1 Thoughtology, we study R1's reasoning chains across a variety of tasks, investigating its capabilities, limitations, and behaviour.
🔗: mcgill-nlp.github.io/thoughtology/

01.04.2025 20:06 — 👍 52   🔁 16   💬 1   📌 9

Our work on Asynchronous RLHF was accepted to #ICLR2025! (I was so excited to announce it, I forgot to say I was excited.)

Used by @ai2.bsky.social for OLMo-2 32B 🔥
New results show ~70% speedups for LLM + RL math and reasoning 🧠

🧵 below, or hear my DLCT talk online on March 28!

18.03.2025 20:45 — 👍 13   🔁 3   💬 1   📌 1

Thanks again to my collaborators:
@vwxyzjn.bsky.social
@sophie-xhonneux.bsky.social
@arianh.bsky.social
Rishabh and Aaron, who have not yet migrated 🦋

DMs open 📲 let's chat about everything LLM + RL @ ICLR, and check out
Paper 📰 arxiv.org/abs/2410.18252
Code 🧑‍💻 github.com/mnoukhov/asy...

18.03.2025 20:45 — 👍 2   🔁 1   💬 0   📌 0

Come to our Spotlight Poster #4702!

East Exhibition Hall A-C

12.12.2024 19:02 — 👍 17   🔁 5   💬 0   📌 0

Here is a one-minute summary of the paper by @sophie-xhonneux.bsky.social, "Efficient Adversarial Training in LLMs with Continuous Attacks". Come see the spotlight poster at
@neuripsconf.bsky.social today: Poster Session 3 East, #4702.

12.12.2024 18:58 — 👍 7   🔁 2   💬 0   📌 0

I will be at NeurIPS! Would love to chat about research!

Especially about fine-tuning LLMs, generative models more generally, and reasoning!

I will be presenting "Efficient Adversarial Training in LLMs with Continuous Attacks" (spotlight) at the morning poster session on Thursday!

10.12.2024 00:20 — 👍 8   🔁 1   💬 0   📌 0
