OpenClaw has 200k stars and gives AI agents shell access, API keys, and code execution. 42,000 exposed instances on the public internet.
Everyone is asking what AI agents can do. Almost nobody is asking who secures them. We at AISLE do.
Blog post: aisle.com/blog/aisle-t...
AISLE is now the #1 source of accepted security findings in OpenClaw, the fastest-growing AI agent framework. Our AI discovered 15 vulnerabilities: 1 Critical (CVSS 9.4), 9 High, 5 Moderate. 21% of all OpenClaw security advisories globally are from us, more than anyone else.
Daniel Stenberg of curl now invokes our Analyzer on his PRs. His reaction to the OpenSSL news:
"I'm a little amazed.. 12(!) of them were reported by people at Aisle... if you are curious what AI can do for Open Source security when used for good"
Blog: aisle.com/blog/what-ai...
New post on what AI cybersecurity research looks like when it actually works! I wrote up what we've learned discovering 12 of 12 new OpenSSL zero-days, 5 CVEs in curl, and 100+ additional validated CVEs across critical open source infrastructure, middleware, and secure apps.
Thanks for sharing! Happy to answer any questions you or your followers might have. The full thread with more details is here: bsky.app/profile/stan... I'm genuinely excited and slightly worried about how far we've managed to push our AI system. A CVE is one thing, but we've now industrialized the process.
At AISLE (aisle.com) we're creating AI to secure the world's most critical software infrastructure. We've discovered & fixed hundreds of zero-days in some of the most important code on the planet.
We're hiring people who share our mission!
OpenSSL 12/12 CVEs post: aisle.com/blog/aisle-d... 6/6
Their bug bounty just died from AI slop, while we reported >30 real, fixed vulnerabilities. The ceiling is rising as fast as the median is collapsing. The whole distribution is bifurcating, and any single point estimate of it is bound to be an incomplete narrative. 5/6
Full blog post here, including how this connects to our 5 new CVEs in another pillar of OSS infrastructure, curl (thanks to @bagder.mastodon.social.ap.brid.gy for his excellent work building and maintaining it):
lesswrong.com/posts/7aJwgb...
4/6
HIGH severity CVEs in OpenSSL average fewer than one per year. This release includes one, and we discovered it:
CVE-2025-15467
A stack buffer overflow that requires no authentication. Code that parses untrusted CMS content could be vulnerable. Pre-auth remote code execution in OpenSSL is wild! 3/6
In Fall 2025, we announced that our AI system had discovered 3 of the 4 OpenSSL CVEs that year. Already historically unprecedented, but we kept pushing. Three months later, the final count is 13 of 14 OpenSSL CVEs in 2025, discovered by our AI. 2/6
OpenSSL secures most of the internet's encryption. They just patched 12 new zero-day vulnerabilities, and our AI system, developed by AISLE, is responsible for discovering all 12 of them, every single one. This includes a pre-auth HIGH severity one & 3 that lurked there for >25 years! 1/6
I can help out! Do you want to dm me?
What would you like to know?
We discovered them. Neither the OpenSSL maintainers nor anyone else in the world likely knew they existed.
What are they worried about?
💯 this
I doubt the AI overviews are a big deal in the total number, tbh. Gemini is extremely useful; e.g. I'm running >1B tokens a day through it for sure.
I totally disagree. Bluesky has an unproductive anti-AI mindset, often propagated by people who are nominally experts (e.g. professors) but who have not kept up with the pace of change in AI and are therefore practically useless in judging its potential. The discourse here on AI is surprisingly bad.
This level of ignorance is surprising but legitimately dangerous: it gives readers the pleasant but ultimately false idea that AI is just not that good. One doesn't have to rely on academic experts here; simply trying out LLMs clearly shows that they are *extremely* useful.
This is obviously not correct. "The wealthy" are not responsible for climate change; industrial civilization as a whole is, but because it also produces so much net positive value for humans, it's a good trade-off to have made. The zero-sum mindset you're displaying misdiagnoses the issue.
This is a very weak argument, likely based on vibes. SpaceX is both very efficient (its price per ton to orbit is very low, hence the customer demand) and does things no other company or government can (mass reusability of orbital rockets). You should check out the Falcon 9 track record.
In a narrow subfield it generally correlates with that, yes. But that's off topic, you should address the point about functional equivalence I made if you want to continue the discussion.
Successfully acting as if it had knowledge is functionally equivalent to having knowledge. The distinction you are making is a selective call for rigor that even humans would have a hard time passing.
Yet you misread a simple plot, drew an obviously wrong conclusion, and ran with it because it supported your biases.
I disagree and happen to be a co-author on an early paper addressing this very question: arxiv.org/abs/2207.05221
I think you are confusing knowing things with being sentient. These are very different concepts. In the end, I don't practically care whether the LLM has qualia, as long as it functionally performs as if it knew things (and it does exactly that).
I literally use AI (mainly o1 pro) daily in my research. It is genuinely helpful on the level of a graduate student research assistant. Many highly technical people agree, see for example: marginalrevolution.com/marginalrevo...
It can, and in practice would, just use a calculator or a Python interpreter and get 100%. Here they were just testing how well it can do math in its head. The fact that it struggles with 10-digit numbers and above is no surprise; humans are even weaker at this.
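To make the tool-call point concrete (the operands below are made up, not taken from the benchmark): Python integers are arbitrary-precision, so an agent that delegates the multiplication to an interpreter gets an exact answer every time, no matter how many digits are involved.

```python
# Two 13-digit operands; doing this in your head is hard,
# but an interpreter's arbitrary-precision ints are exact.
a = 9_876_543_210_123
b = 1_234_567_890_987

product = a * b  # exact, no overflow or rounding
print(product)

# Cross-check with a digit-by-digit schoolbook multiply:
# sum x * digit * 10^place over the digits of y.
def schoolbook_multiply(x: int, y: int) -> int:
    result = 0
    for place, digit in enumerate(reversed(str(y))):
        result += x * int(digit) * 10**place
    return result

assert schoolbook_multiply(a, b) == product
```

The same one-line tool call scales to 100-digit operands, which is exactly why "can it do it in its head" and "can it get the right answer" are different questions.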
You're reading the graph wrong. These are the **numbers of digits** in the numbers. It's multiplying two numbers each of which has more than 10 digits. Can you do that in your head?
Can you multiply 10-digit numbers in your head while also having PhD-level knowledge in basically any field? If anything, this mind seems superior to essentially any human in almost anything, including mental math. And of course it can always make a tool call to a calculator and get 100% accuracy.