Nikolai Rozanov

@ai-nikolai.bsky.social

CS PhD Candidate in LLM Agents @ImperialCollegeLondon || ex tech founder nikolairozanov.com

495 Followers  |  206 Following  |  18 Posts  |  Joined: 18.11.2024

Latest posts by ai-nikolai.bsky.social on Bluesky

Do LLMs need rationales for learning from mistakes? πŸ€”
When LLMs learn from previous incorrect answers, they typically observe corrective feedback in the form of rationales explaining each mistake. In our new preprint, we find that these rationales do not help; in fact, they hurt performance!

🧡

13.02.2025 15:38 β€” πŸ‘ 21    πŸ” 9    πŸ’¬ 1    πŸ“Œ 3

Are you guys interested in research interns at this stage as well?

24.01.2025 00:39 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Not that you need another thread on Deepseek's R1, but I really enjoy these models, and it's great to see an *open*, MIT-licensed reasoner that's ~as good as OpenAI o1.

A blog post: itcanthink.substack.com/p/deepseek-r...

It's really very good at ARC-AGI for example:

22.01.2025 22:01 β€” πŸ‘ 40    πŸ” 7    πŸ’¬ 2    πŸ“Œ 0

LLM360 gets way less recognition relative to the quality of their totally open outputs in the last year+. They dropped a 60+ page technical report last week and I don't know if I saw anyone talking about it. Along with OLMo, it's the other up-to-date open-source LM.

Paper: https://buff.ly/40I6s4d

23.01.2025 02:37 β€” πŸ‘ 45    πŸ” 5    πŸ’¬ 1    πŸ“Œ 0

Thank you for posting this work.

We are finding very similar results in our LLM agent research.

Would anyone be interested in a collaboration on reproducibility in this area?

22.01.2025 10:36 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

#NLP #LLMAgents Community, I have a question:

I have been running Webshop with older GPTs (e.g. gpt-3.5-turbo-1106 / -0125 / -instruct). On 5 different code repos (ReAct, Reflexion, ADaPT, StateAct) I am getting scores of 0%, while previously the scores were at ~15%.

Any thoughts anyone?

21.01.2025 10:36 β€” πŸ‘ 0    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0


Hey,

I work on LLM agents, if that qualifies. Please add me as well. Thanks.

28.11.2024 01:57 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Posting a call for help: does anyone know of a good way to simultaneously treat both POTS and MΓ©niΓ¨re’s disease? Please contact me if you’re either a clinician with experience doing this or a patient who has found a good solution. Context in thread

24.11.2024 16:34 β€” πŸ‘ 128    πŸ” 72    πŸ’¬ 15    πŸ“Œ 6
The OLMo 2 models sit at the Pareto frontier of training FLOPs vs model average performance.

Meet OLMo 2, the best fully open language model to date, including a family of 7B and 13B models trained up to 5T tokens. OLMo 2 outperforms other fully open models and competes with open-weight models like Llama 3.1 8B β€” As always, we released our data, code, recipes and more 🎁

26.11.2024 20:51 β€” πŸ‘ 151    πŸ” 36    πŸ’¬ 5    πŸ“Œ 12

Hi there,

Please add me as well. I'm a PhD student working on LLM agents at Imperial College London

25.11.2024 23:30 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

That's really interesting, and perhaps it has some roots in the fact that our numerals come from Arabic script?

What are your thoughts on that?

24.11.2024 11:37 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Hey, I would love to be added too. I work on LLM Agents, and worked on Bayesian Exploration in RL.

24.11.2024 10:21 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Hey, thanks for the group. I would love to be added too. I'm a PhD student working on LLM Agents at Imperial College London.

24.11.2024 10:15 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I would love to be in that one too :))

23.11.2024 19:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Pretty cool people are being added to the LLM Agent & LLM Reasoning group. Thanks @lisaalaz.bsky.social for suggesting @jhamrick.bsky.social @gabepsilon.bsky.social and others.

Feel free to mention yourself and others. :)

go.bsky.app/LUrLWXe

#LLMAgents #LLMReasoning

23.11.2024 19:36 β€” πŸ‘ 10    πŸ” 1    πŸ’¬ 9    πŸ“Œ 0

πŸ‘ ;)

23.11.2024 19:33 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Definitely, done.

23.11.2024 19:33 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Done :)

23.11.2024 19:32 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Sure thing.

21.11.2024 19:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Thanks, done.

21.11.2024 19:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Done :)

21.11.2024 19:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

#EMNLP2024 was a fun time to reconnect with old friends and meet new ones! Reflecting on the conference program and in-person discussions, I believe we're seeing the "Google Moment" of #IR research play out in #NLProc.
1/n

21.11.2024 13:38 β€” πŸ‘ 15    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

I thought I'd create a Starter Pack for people working on LLM Agents. Please feel free to self-refer as well.

go.bsky.app/LUrLWXe

#LLMAgents #LLMReasoning

20.11.2024 14:08 β€” πŸ‘ 15    πŸ” 5    πŸ’¬ 11    πŸ“Œ 0

Meta-Reasoning Improves Tool Use in Large Language Models: External tools help large language models (LLMs) succeed at tasks where they would otherwise typically fail. In existing frameworks, LLMs learn tool use either by in-context demonstrations or via full...

Hi Bluesky, I would like to introduce myself 🙂

I am PhD-ing at Imperial College under @marekrei.bsky.social’s supervision. I am broadly interested in LLM/LVLM reasoning & planning πŸ€– (here’s our latest work arxiv.org/abs/2411.04535)

Do reach out if you are interested in these (or related) topics!

20.11.2024 11:26 β€” πŸ‘ 40    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
StateAct: State Tracking and Reasoning for Acting and Planning with Large Language Models. Planning and acting to solve 'real' tasks using large language models (LLMs) in interactive environments has become a new frontier for AI methods. While recent advances allowed LLMs to interact with o...

Quick intro to myself.

I am a CS PhD candidate working on LLM Agents at @imperial-nlp.bsky.social with @marekrei.bsky.social.

This is our latest work on LLM Agents:

StateAct: arxiv.org/abs/2410.02810 (outperforming ReAct by ~10%).

Feel free to reach out for collaboration.

20.11.2024 10:35 β€” πŸ‘ 9    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

Welcome to Bluesky to more of our NLP researchers at Imperial!! Looking forward to following everyone's work on here.

To follow us all click 'follow all' in the starter pack below

go.bsky.app/Bv5thAb

20.11.2024 08:35 β€” πŸ‘ 20    πŸ” 7    πŸ’¬ 3    πŸ“Œ 0
