
Jason Rute

@jasonrute.bsky.social

AI Researcher @ Mistral AI | Formerly IBM Research | Former Mathematician/Logician/Data scientist | Building AI for math and reasoning

85 Followers  |  21 Following  |  50 Posts  |  Joined: 23.11.2024

Latest posts by jasonrute.bsky.social on Bluesky

(A) Requires the least hand-holding, but it could misformalize. (B) Makes sure the formalization fits your model’s constraints, but leaves you open to criticism. (C) Joseph has thought more about IMO Lean formalization than anyone I know, but his formalizations may not fit your model’s requirements.

16.07.2025 11:49 — 👍 0    🔁 0    💬 0    📌 0
Preview
GitHub - jsm28/IMOLean: Suggested conventions and examples for Lean formalization of IMO problem statements

For those using @leanprover AI models to “compete” in #imo2025 (@ProjectNumina, @KaiyuYang4), what Lean formalizations will you use? (A) autoformalized statements, (B) manually formalized statements, or (C) Joseph Myers’s IMOLean formalizations (github.com/jsm28/IMOLean)?

16.07.2025 11:49 — 👍 1    🔁 0    💬 1    📌 0
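For readers unfamiliar with the convention, here is a minimal hypothetical sketch in plain Lean 4 (invented for illustration, not taken from IMOLean or any actual IMO problem) of how “determine the answer” problems are typically stated: the answer lives in a separate definition that a solver must fill in and then prove correct. It assumes a recent Lean 4 toolchain where the decide and omega tactics are available without Mathlib; the names toyAnswer and toy_statement are made up.

-- Hypothetical toy statement, not an IMOLean file or a real IMO problem.
-- The answer is exposed as a definition the prover must supply and justify.
def toyAnswer : Nat := 2

theorem toy_statement :
    toyAnswer + toyAnswer = 4 ∧ ∀ n : Nat, n + n = 4 → n = toyAnswer := by
  constructor
  · decide          -- the supplied answer satisfies the equation
  · intro n h
    show n = 2      -- restate the goal with the answer unfolded
    omega           -- linear arithmetic gives uniqueness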

Oh, you want a tool which annotates papers (maybe on the fly) where, if you hover over, say, “inaccessible cardinal”, it gives you the definition needed for the paper (or a link)?

02.02.2025 17:21 — 👍 1    🔁 0    💬 1    📌 0

PDFs? You mean if the Lean code is in a paper you can’t mouse over it? If that is what you mean, what do PDFs have to do with LLMs, or are those two separate examples of things which don’t have access to language server information?

02.02.2025 08:29 — 👍 1    🔁 0    💬 1    📌 0

I have no idea if LLMs are any good as math collaborators for bouncing around ideas and brainstorming, but if those who do get this to work keep it a secret, we just won’t know. We should be rewarding those who use them wisely for their research.

14.01.2025 16:00 — 👍 1    🔁 0    💬 0    📌 0

If we aren’t willing to talk about when tools are helpful, it will lead to an asymmetry where those in the know will use hidden methods that they keep secret out of either embarrassment or competition. IMHO, this isn’t in the spirit of mathematics research.

14.01.2025 16:00 — 👍 1    🔁 0    💬 1    📌 0

Mathematics isn’t chess. Using tools isn’t cheating. It is research, and knowing which tools to use is an important part of research. But as these tools start to feel more human, they do bring up tricky questions of attribution and citation.

14.01.2025 16:00 — 👍 0    🔁 0    💬 1    📌 0

At #JMM2025 last week, a mathematician told me he used ChatGPT (forgot which version) to get a major insight into solving his math problem. But he is reluctant to share publicly (e.g. in his paper) that this happened, since it could make him look less bright. I find this sad. 🧡

14.01.2025 16:00 — 👍 1    🔁 0    💬 1    📌 0

I last attended JMM in Seattle in 2017. It was also the year I left mathematics. Like many math postdocs, I couldn’t find a suitable job and went into industry, first data science and later AI research. I’m so excited to be back talking about AI for Math!

09.01.2025 12:01 — 👍 1    🔁 0    💬 0    📌 0
Applications of AI to formal theorem proving: As with many other scientific and professional areas, artificial intelligence i...

The talk is Friday 8am (bright and early!) in the session “AI for the working mathematician”: meetings.ams.org/math/jmm2025...

09.01.2025 12:01 — 👍 0    🔁 0    💬 1    📌 0
Applications of AI to formal theorem proving: As with many other scientific and professional areas, artificial intelligence i...

I’m really excited to announce that I’m going to be at the Joint Math Meetings #JMM2025 to give a talk on AI for Formal Theorem Proving on Friday. Reach out if you want to talk about AI for Math or for all my math friends, if you just want to catch up!

09.01.2025 12:01 — 👍 1    🔁 0    💬 1    📌 0

I think the long-term goal of Lean4Lean is (1) a full implementation of Lean’s kernel in Lean, (2) a formalization of Lean’s type theory LeanTT as in Mario’s thesis, (3) a proof that the kernel faithfully implements LeanTT, and (4) a formalization of Mario’s proof that LeanTT is consistent/sound.

05.01.2025 19:24 — 👍 1    🔁 0    💬 0    📌 0
Preview
GitHub - digama0/lean4lean: Lean 4 kernel / 'external checker' written in Lean 4

github.com/digama0/lean...

05.01.2025 19:08 — 👍 1    🔁 0    💬 1    📌 0
Preview
GitHub - digama0/mm0: Metamath Zero specification language

github.com/digama0/mm0

05.01.2025 19:07 — 👍 1    🔁 0    💬 0    📌 0

I’m not sure exactly what you mean here, but would Metamath Zero (if finished) be what you are looking for? It would be a checker verified up to x86 byte code that could check a trace from another theorem prover (if the kernel rules were given to MM0 and the trace contained all the proof steps).

05.01.2025 19:04 — 👍 1    🔁 0    💬 1    📌 0

If you are just worried about Lean’s kernel, I think Lean4Lean could help verify that it is both correct and sound. If you are worried about the full Lean code base, that indeed gets a bit complicated, since one could do weird stuff which bypasses the kernel. I’m not sure what the full correct solution is.

05.01.2025 18:56 — 👍 2    🔁 0    💬 1    📌 0

But just to be clear, I have no current plans to write any proof checking language (except maybe some toy external type checkers for some existing languages like Lean or Metamath).

04.01.2025 16:23 — 👍 1    🔁 0    💬 1    📌 0

Or better yet, meta proof checking languages like Metamath Zero, so that I already know the kernel is correct and all I have to do is get the logic specification right (and maybe that language would also have good libraries and tools for proving that the specification matches a desired semantics).

04.01.2025 16:18 — 👍 2    🔁 0    💬 1    📌 0

If I wrote a real proof checker now that I expected to be performant, I think I would use Rust. But in the next 100 years I would hope to have performant languages whose correctness we could also prove, so that I could prove my checker is correct.

04.01.2025 16:08 — 👍 2    🔁 0    💬 1    📌 0

I don’t understand this question. Do you mean the language the checker is expected to be implemented in, or the name of the expected yet-to-be-developed proof checking language?

04.01.2025 16:06 — 👍 1    🔁 0    💬 1    📌 0

When I compare o1 pro to AlphaProof, I lean towards thinking o1 pro is better (much faster to the right ideas), but it obviously didn’t sufficiently prove stuff. I think something like o1/o3 plus access to Lean would be really powerful. It now seems to me that AlphaProof is too constrained w/o CoT.

24.12.2024 00:17 — 👍 1    🔁 0    💬 1    📌 0

Is it different in other functional languages? But I can see how it is quite confusing if you aren’t prepared for this.

23.12.2024 23:59 — 👍 0    🔁 0    💬 1    📌 0

Are you friends with any of the xAI folks?

23.12.2024 13:42 — 👍 1    🔁 0    💬 1    📌 0

Caveat 3: I’m talking about really cutting-edge theorem proving here. There is still great value in ATPs for day-to-day formalization. We don’t need o3-like models for that. We need solvers at the right level of speed/cost/usability (tactics, hammers, chatbots, copilots, etc.).

23.12.2024 13:42 — 👍 1    🔁 0    💬 0    📌 0

Caveat 2: Verification (including formalization) should be incorporated in the middle of reasoning models, and not just at the end. An AI should be checking its results as it goes, and adjusting its informal thinking process accordingly. (And for code too, e.g. unit tests!)

23.12.2024 13:42 — 👍 2    🔁 0    💬 1    📌 0

Caveat 1: Auto-formalization itself will require ATP, at various levels of power & speed (simple tactics up to reasoning models) to fill in proof gaps, verify semantic correctness, and RL-train the formalizers. But I claim the most important missing piece is auto-formalization.

23.12.2024 13:42 — 👍 2    🔁 0    💬 1    📌 0

Hot take: For automatic theorem proving (ATP) to solve unsolved math problems, the most important need is auto-formalization. We already have general-purpose reasoning models (o3 and more to come). We need to verify the correctness of these models with absolute certainty. 🧡

23.12.2024 13:42 — 👍 13    🔁 3    💬 1    📌 0
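As a concrete, hypothetical illustration of what statement-level autoformalization produces (invented here, not the output of any particular system): the informal claim “the sum of two even numbers is even” rendered as a Lean 4 statement, which a downstream prover, or the reasoning model itself, would then have to verify. The name sum_of_evens_even is made up, and the proof below is filled in by hand; only the core omega tactic is assumed.

-- Hypothetical autoformalization of the informal claim
-- "the sum of two even numbers is even"; names are invented.
theorem sum_of_evens_even (a b : Nat)
    (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k :=
  -- eliminate the two existentials and combine the witnesses
  match ha, hb with
  | ⟨m, hm⟩, ⟨n, hn⟩ => ⟨m + n, by omega⟩  -- omega uses hm, hn to check a + b = 2 * (m + n)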

We need so much more work on autoformalization (well that and practical code writing, like where the AI has to write a whole project, since I think they share many of the same difficulties like writing APIs, using libraries, and breaking down problems into sub-problems).

23.12.2024 01:52 — 👍 3    🔁 0    💬 1    📌 0

It could be that the answer is no, but I wanted to get your thoughts because I think it often got the right answer even if the proof had mistakes.

23.12.2024 01:17 — 👍 0    🔁 0    💬 1    📌 0

What I’m trying to get at: Even in o1 pro, was the intuition and skill there but not the discipline and follow-through to finish an airtight proof?

23.12.2024 01:11 — 👍 0    🔁 0    💬 1    📌 0
