
Chris Paxton

@cpaxton.bsky.social

Writing about robots: https://itcanthink.substack.com/  |  RoboPapers podcast: https://robopapers.substack.com/  |  All opinions my own

6,430 Followers  |  1,568 Following  |  4,574 Posts  |  Joined: 25.02.2024

Posts by Chris Paxton (@cpaxton.bsky.social)

This is a nothing claim then. Who cares? Those are the kind of papers that literally no one reads.

04.03.2026 05:42 — 👍 1    🔁 0    💬 1    📌 0

How much of this is because social science research is terrible though

04.03.2026 05:24 — 👍 2    🔁 0    💬 1    📌 0

thank GOD, these emp grenades have been taking up inventory space my entire playthrough

04.03.2026 03:33 — 👍 18    🔁 2    💬 0    📌 0

oh no

04.03.2026 03:32 — 👍 3    🔁 0    💬 0    📌 0

A strong guy

04.03.2026 02:02 — 👍 12    🔁 1    💬 1    📌 0

Classic big company move

04.03.2026 02:00 — 👍 0    🔁 0    💬 0    📌 0

Lots of core team members of Alibaba Qwen are resigning publicly on X.

The gaping hole that Qwen imploding would leave in the open research ecosystem will be hard to fill. The small models are irreplaceable.

I’ll do my best to keep carrying that torch. Every bit matters.

03.03.2026 18:10 — 👍 87    🔁 10    💬 3    📌 2
Max Schwarzer
@max_a_schwa... • 1h · X
I've decided to leave OpenAI. I'm incredibly proud of all the work I've been part of here, from helping create the reasoning paradigm with @MillionInt, scaling up test-time compute with @polynoamial, working on RL algorithms with my fellow strawberries, shipping o1-preview (which started life as one of my derisking runs), to post-training o1 and o3 with @ericmitchellai, @yanndubs and many others.
I'm most proud of having led the post-training team here for the last year -- the team has done incredible work and shipped some really smart models, including GPT-5, 5.1, 5.2, and 5.3-Codex. OpenAI has genuinely some of the most talented researchers I have ever met, and I have learned more than I could have imagined since I joined as a new grad.
I want to thank @markchen90 @FidjiSimo @sama @merettm for all their support over my time here, and too many collaborators to name for the insights, ideas, and just plain fun we have had working together. After leading post-training for a year, though, I'm longing to start fresh and return to IC research work. I've been thinking about going back to technical research for quite some time, and I genuinely believe my colleagues and team here are set up to succeed going forward without me.


I'm personally very excited for my next chapter -- I'm proud to be joining @AnthropicAI to get back into the weeds in RL research, and I'm looking forward to supporting my friends there at this important time. Many of the people I most trust and respect have joined Anthropic over the last couple of years, and I'm excited to work with them again. I have also been very impressed with Anthropic's talent, research taste, and values, and I'm excited to be part of what the company does next!


OpenAI’s head of post-training is leaving for Anthropic (oai is known for post-training, among the big labs)

03.03.2026 22:07 — 👍 85    🔁 8    💬 5    📌 2

If there's a "permanent underclass" it's not having written enough on the open web to be immortalized in the weights.

03.03.2026 21:15 — 👍 56    🔁 1    💬 7    📌 1

A humanoid robot that actually looks like an industrial machine. Launching in 18 months from Noble Machines

03.03.2026 20:55 — 👍 47    🔁 6    💬 5    📌 6

Well, I can't exactly argue with that

03.03.2026 20:23 — 👍 30    🔁 3    💬 2    📌 0

Archax is building a large pilotable robot for heavy labor

03.03.2026 20:23 — 👍 3    🔁 0    💬 0    📌 0

half power armor, half sentry bot

03.03.2026 20:02 — 👍 7    🔁 2    💬 1    📌 0

"pilot-optional"

Like Severian climbing into Sidero in Urth of the New Sun

'you have no right in me!'

03.03.2026 20:02 — 👍 10    🔁 1    💬 0    📌 1

Capabilities keep increasing and costs keep falling. Very good for accessibility of AI models.

03.03.2026 18:34 — 👍 3    🔁 1    💬 0    📌 0

I saw this; hope they were able to find better roles

03.03.2026 19:14 — 👍 1    🔁 0    💬 1    📌 0

Tsubame Heavy Industries is building the Archax, a 4.5 meter tall pilot-optional humanoid robot for heavy labor: tsubame-hi.com/en/the-archax/

03.03.2026 19:12 — 👍 72    🔁 9    💬 15    📌 32

ah, OpenAI is entirely stopping DoW deployment for now

that was not clear to me from sama’s post. also, i’m very glad to see Noam getting directly involved in policy. i realize he’s just a researcher, but it’s great to have important people deeply invested in this

03.03.2026 16:24 — 👍 133    🔁 23    💬 14    📌 7

Department of Intensive Combat Operations

03.03.2026 17:31 — 👍 6    🔁 0    💬 0    📌 0
Academics Need to Wake Up on AI: Ten theses for folks who haven't noticed the ground shifting under their feet

Sorry, Bluesky, but I have to say it: AI can already do social science research better than most professors with PhDs. And, for the first time in my life, I really have no idea what happens in five years.

Things are changing already, we just need to wake up.

03.03.2026 00:08 — 👍 152    🔁 20    💬 270    📌 247

i have nothing to add to this but it generates funny replies

03.03.2026 06:49 — 👍 93    🔁 3    💬 8    📌 0

OmniExtreme from BIGAI: a general-purpose tracking policy that handles extreme motions and complex contacts. It works by pre-training a general-purpose flow policy, then learning an actuation-aware residual policy to handle complex physical dynamics.

03.03.2026 04:48 — 👍 44    🔁 6    💬 5    📌 1
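The base-plus-residual structure described in the post can be sketched in a few lines. Everything here is invented for illustration (the dimensions, the random linear "policies", the names `base_policy`, `residual_policy`, `act`); in the actual OmniExtreme work the base is a pre-trained flow policy and the residual is learned to be actuation-aware, neither of which this toy reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: two random linear maps play the roles of the
# pre-trained base policy and the learned residual correction.
OBS_DIM, ACT_DIM = 8, 4
W_base = rng.normal(size=(ACT_DIM, OBS_DIM))
W_res = 0.1 * rng.normal(size=(ACT_DIM, OBS_DIM))  # residual is a small correction

def base_policy(obs):
    """General-purpose tracking policy: produces the coarse action."""
    return W_base @ obs

def residual_policy(obs):
    """Small learned correction for hard physical dynamics."""
    return W_res @ obs

def act(obs):
    # Final command = base action + residual correction.
    # Only the residual would be trained for the hard regime;
    # the pre-trained base stays frozen.
    return base_policy(obs) + residual_policy(obs)
```

The appeal of this decomposition is that the residual only has to learn the *difference* between what the general policy outputs and what extreme motions actually require, which is a much smaller learning problem than training a policy for extreme motion from scratch.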

I do think this is just them goofing off, the military versions are less colorful and way scarier

03.03.2026 04:19 — 👍 2    🔁 0    💬 0    📌 0

Drone and dog shooting fireworks at each other

Video from 阿阳 / Shenzhen media group

03.03.2026 04:04 — 👍 21    🔁 0    💬 1    📌 1

Xiaomi humanoids installing self-tapping nuts: 90% success rate, 3-hour deployment, using tactile sensing

03.03.2026 02:27 — 👍 18    🔁 4    💬 2    📌 0

Next on RoboPapers: VLam4VLA -- what makes a vision-language model good for robotics

03.03.2026 02:24 — 👍 8    🔁 0    💬 0    📌 0
Now let me design the plan with a Plan agent.

[...]

Excellent analysis from the Plan agent.


claude giving claude a medal never gets old

02.03.2026 22:59 — 👍 126    🔁 10    💬 6    📌 0

"bullshit bench but for people"

02.03.2026 20:57 — 👍 44    🔁 1    💬 5    📌 0

Alibaba Qwen remain the champions of open-source AI

02.03.2026 21:04 — 👍 18    🔁 0    💬 1    📌 0

Qwen releases 4 new Qwen3.5 Small models!

Qwen3.5: 0.8B β€’ 2B β€’ 4B β€’ 9B

Run Qwen3.5-0.8B, 2B and 4B on your phone. Run 9B on 6GB RAM.

The vision reasoning LLMs perform better than models 4x their size.

GGUFs: huggingface.co/collections/...
Guide: unsloth.ai/docs/models/...

02.03.2026 13:39 — 👍 52    🔁 4    💬 0    📌 3