Stanislav Fort

@stanislavfort.bsky.social

AI + security at AISLE | Stanford PhD in AI & Cambridge physics | scientific progress

2,183 Followers 617 Following 103 Posts Joined Oct 2023
3 weeks ago
AISLE Becomes the #1 Source for OpenClaw Security Disclosures AISLE is the largest source of security findings in OpenClaw, exposing risks in AI agents with shell access, file system control, and API keys to y...

OpenClaw has 200k stars and gives AI agents shell access, API keys, and code execution. 42,000 exposed instances on the public internet.

Everyone is asking what AI agents can do. Almost nobody is asking who secures them. We at AISLE do.

Blog post: aisle.com/blog/aisle-t...

3 weeks ago

AISLE is now the #1 source of accepted security findings in OpenClaw, the fastest-growing AI agent framework. Our AI discovered 15 vulnerabilities: 1 Critical (CVSS 9.4), 9 High, 5 Moderate. 21% of all OpenClaw security advisories globally are from us, more than anyone else ⏬

1 month ago
What AI Security Research Looks Like When It Works What a year of finding zero-days in OpenSSL, curl, and the Linux kernel taught us about AI-driven security research done right.

Daniel Stenberg of curl now invokes our Analyzer on his PRs. His reaction to the OpenSSL news:

"I'm a little amazed.. 12(!) of them were reported by people at Aisle... if you are curious what AI can do for Open Source security when used for good"

Blog: aisle.com/blog/what-ai...

1 month ago

New post on what AI cybersecurity research looks like when it actually works! I wrote up what we've learned discovering 12 of 12 new OpenSSL zero-days, 5 CVEs in curl, and an additional 100+ validated CVEs across critical open source infrastructure, middleware, and secure apps πŸ”—β¬

1 month ago

Thanks for sharing! Happy to answer any questions you / your followers might have. The full thread with more details is here: bsky.app/profile/stan... I'm genuinely excited and slightly worried about how far we've managed to push our AI system. A CVE is one thing, but we've now industrialized the process.

1 month ago
AISLE Discovered 12 out of 12 OpenSSL Vulnerabilities AISLE's autonomous analyzer found all 12 CVEs in the January 2026 coordinated release of OpenSSL, the open-source cryptographic library that underp...

At AISLE (aisle.com) we're creating AI to secure the world's most critical software infrastructure. We've discovered & fixed 100s of zero-days in some of the most important code on the planet.

We're hiring people who share our mission!

OpenSSL 12/12 CVEs post: aisle.com/blog/aisle-d... 6/6

1 month ago

Their bug bounty just died from AI slop, while we reported >30 real, fixed vulns. The ceiling is rising as fast as the median is collapsing. The whole distribution is bifurcating, and any single point estimate is bound to be an incomplete narrative. 5/6

1 month ago
AI found 12 of 12 OpenSSL zero-days (while curl cancelled its bug bounty) β€” LessWrong This is a partial follow-up to AISLE discovered three new OpenSSL vulnerabilities from October 2025. …

Full blog post here, including how this connects to our 5 new CVEs in another pillar of OSS infrastructure, curl (thanks go to @bagder.mastodon.social.ap.brid.gy for his excellent job building and maintaining it):
lesswrong.com/posts/7aJwgb...

4/6

1 month ago

HIGH severity CVEs in OpenSSL average less than 1 per year. This release includes one & we discovered it:
CVE-2025-15467

A stack buffer overflow that requires no authentication: anything parsing untrusted CMS content could be vulnerable. Pre-auth remote code execution in OpenSSL is wild! 3/6
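As a purely hypothetical illustration of the bug class (not the actual CVE-2025-15467 code, which lives in OpenSSL's C parser): parsers that trust an attacker-supplied length field can copy past a fixed-size buffer. In memory-safe pseudocode, the missing check looks like this:

```python
def parse_field(data: bytes) -> bytes:
    """Parse a length-prefixed field from untrusted input.

    Hypothetical sketch of the bug class only. The fix pattern is
    validating the declared length against the bytes actually present
    before copying anything.
    """
    declared_len = int.from_bytes(data[:2], "big")  # attacker-controlled
    payload = data[2:]
    # In C, copying `declared_len` bytes into a fixed-size stack buffer
    # without this check is exactly a stack buffer overflow.
    if declared_len > len(payload):
        raise ValueError("declared length exceeds available data")
    return payload[:declared_len]

print(parse_field(b"\x00\x03abcdef"))  # b'abc'
```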

1 month ago

In Fall 2025, we announced that our AI system discovered 3 of 4 OpenSSL CVEs that year. Already historically unprecedented, but we kept pushing. Three months later, the final count is 13 of 14 OpenSSL CVEs in 2025 discovered by our AI 2/6

1 month ago

OpenSSL secures most of the internet's encryption. They just patched 12 new zero-day vulnerabilities. Our AI system, developed at AISLE, is responsible for discovering all 12/12, every single one of them. This includes a pre-auth HIGH severity one & 3 that lurked there for >25 years! 1/6

1 month ago

I can help out! Do you want to dm me?

1 month ago

What would you like to know?

1 month ago

We discovered them. Neither the OpenSSL maintainers nor anyone else in the world likely knew they existed.

4 months ago

What are they worried about?

5 months ago

πŸ’― this

7 months ago

I doubt the AI overviews are a big deal in the total number, tbh. Gemini is extremely useful, and e.g. I'm personally running over 1B tokens a day through it for sure.

7 months ago

I totally disagree. Bluesky has an unproductive anti-AI mindset, often propagated by people who are nominally experts (e.g. professors) but who have not kept up with the pace of change in AI and are therefore practically useless in judging its potential. It's surprisingly bad on here re AI

8 months ago

This level of ignorance is surprising but legitimately dangerous, giving readers the pleasant but ultimately false idea that AI is just not that good. One doesn't have to rely on academic experts here -- just trying LLMs out clearly shows that they are *extremely* useful

10 months ago

This is obviously not correct. "The wealthy" are not responsible for climate change. Industrial civilization as a whole is, but because it also produces so much net positive value for humans, it was a good trade-off to have made. The zero-sum mindset you're displaying misdiagnoses the issue.

1 year ago

This is a very weak argument, likely based on vibes. SpaceX is both very efficient (price per ton to orbit is very low => demand from customers) & does things no other company or government is able to do (massive reusability of orbital rockets). You should check out the Falcon 9 track record.

1 year ago

In a narrow subfield it generally correlates with that, yes. But that's off topic, you should address the point about functional equivalence I made if you want to continue the discussion.

1 year ago

Successfully acting as if it had knowledge is functionally equivalent to having knowledge. The distinction you are making is a selective call for rigor that even humans would have a hard time passing.

1 year ago

Yet you misread a simple plot, drew an obviously wrong conclusion, and ran with it because it supported your biases.

1 year ago
Language Models (Mostly) Know What They Know We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated o...

I disagree and happen to be a co-author on an early paper addressing this very question: arxiv.org/abs/2207.05221
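"Well-calibrated" has a precise meaning here: when a model states 80% confidence, it should be right about 80% of the time. A toy sketch of checking this on made-up (confidence, correct) pairs (illustrative data only, not from the paper):

```python
from collections import defaultdict

# Made-up (stated confidence, answer was correct) pairs.
predictions = [
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

# Group outcomes by stated confidence.
buckets = defaultdict(list)
for conf, correct in predictions:
    buckets[conf].append(correct)

# A calibrated model's empirical accuracy tracks its stated confidence.
report = {}
for conf, outcomes in sorted(buckets.items()):
    report[conf] = sum(outcomes) / len(outcomes)
    print(f"stated {conf:.0%} -> empirical {report[conf]:.0%}")
```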

1 year ago

I think you are confusing knowing things and being sentient. These are very different concepts. In the end I do not practically care whether the LLM has qualia, as long as it performs as if it knew things functionally (and it does exactly that).

1 year ago
o1 pro - Marginal REVOLUTION Often I don’t write particular posts because I feel it is obvious to everybody.Β  Yet it rarely is. So here is my post on o1 pro, soon to be followed by o3 pro, and Deep Research is being distributed, ...

I literally use AI (mainly o1 pro) daily in my research. It is genuinely helpful on the level of a graduate student research assistant. Many highly technical people agree, see for example: marginalrevolution.com/marginalrevo...

1 year ago

In practice it could just use a calculator or a Python interpreter and get 100%. Here they were just testing how well it can do math in its head. The fact that it struggles with 10-digit numbers and above is no surprise -- humans are even weaker at this.
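The "Python interpreter" point is easy to demonstrate: Python integers are arbitrary precision, so a single tool call returns an exact product no matter how many digits are involved (the numbers below are arbitrary examples):

```python
# Python integers are arbitrary precision, so multiplying two
# 10+ digit numbers is exact -- no rounding, no overflow.
a = 9_876_543_210    # 10 digits
b = 12_345_678_901   # 11 digits
product = a * b
print(product)             # exact 21-digit result
print(len(str(product)))   # digit count of the product
```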

1 year ago

You're reading the graph wrong. These are the **numbers of digits** in the numbers. It's multiplying two numbers each of which has more than 10 digits. Can you do that in your head?

1 year ago

Can you multiply 10-digit numbers in your head while also having PhD-level knowledge in basically any field? If anything, this mind seems superior to essentially any human in almost anything, including mental math. And of course it can always make a tool call to a calculator and get 100% accuracy.
