Netsec Explained

@gtklondike.bsky.social

Netsec Explained is a passion project dedicated to the research, learning, and sharing of intermediate and advanced network security topics. https://www.youtube.com/c/NetsecExplained

957 Followers 35 Following 91 Posts Joined Sep 2023
6 months ago

Thanks, I'll take a look

6 months ago

I'm curious if they have patterns or common use-case examples for their platform. I use n8n for some automation, but I feel like we're all independently reinventing useful patterns and keeping them to ourselves.

6 months ago
Most Brutual Self-Driving Test by Chinese Company #tesla #huawei #china #byd #car #fsd YouTube video by virtual savage

Wow, this is incredible! We need governments or large orgs to do broad research tests like these.

youtube.com/shorts/GAdhl...

7 months ago

Exactly! LLMs will not lead to AGI.
www.instagram.com/reel/DMVO5nT...

Study referenced:
arxiv.org/abs/2507.06952

7 months ago

To be clear, I'm not saying this to name and shame. It's common enough that I hope people will learn from real-life examples. I also just really wish people would stop doing it!

Come on all, let's be smart about this.

7 months ago
When AI Has Root: Lessons from the Supabase MCP Data Leak

Last year, I gave one of the top presentations on AI security at RSAC 2024.

In it, I explicitly said, "Do not give your AI root access. It will be a confused deputy, and I will add you to my list of examples."

Well, guess who got added to the list?

www.pomerium.com/blog/when-ai...

7 months ago
Prompt Engineering and AI Red Teaming — Sander Schulhoff, HackAPrompt/LearnPrompting YouTube video by AI Engineer

Awesome presentation from HackAPrompt.
youtu.be/_BRhRh7mOX0

9 months ago
From the ChatGPT community on Reddit: ChatGPT vision of users treating it. Prompt inside come show yours! Explore this post and more from the ChatGPT community

These are incredible, and not creepy at all.
www.reddit.com/r/ChatGPT/s/...

9 months ago
ChatGPT Adding Watermarks to Text Output? #ai #chatgpt YouTube video by Will Francis

Think something was written with ChatGPT? Turns out the latest models have an unintentional watermark.
youtube.com/shorts/qt4r_...

9 months ago

I've always been a fan of building a yardstick and then seeing how you and your organization measure up against it. My question is: what yardsticks do you use to measure how well a security team is doing?

So glad you take write-ins.

9 months ago
GitHub - microsoft/AI-Red-Teaming-Playground-Labs: AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure. AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure. - microsoft/AI-Red-Teaming-Playground-Labs

Big news, Microsoft just open sourced their AI red team labs.
aka.ms/AIRTlabs

10 months ago

Quick poll for a security friend. If you are a dev:

Do you know what threat modeling is?
Do you do it?
Why or why not?
If so what does that look like for you?

10 months ago

He must not have very many friends IRL, because I can't think of a single one I'd replace with a chatbot.

10 months ago
AI Money Glitch YouTube video by ThePrimeTime

Someone mentioned this in my comments the other day, but I hadn't even considered the possibility of a deluge of bad/false AI-generated bug reports becoming a problem in AppSec. And yet, here we are.

youtube.com/shorts/BInml...

10 months ago

I think it will primarily settle into being a copilot, a quick research and problem solving tool for technical issues.

What I do worry about is when a new technology comes out (like Rust), the AI won't have the millions of Stack Overflow posts to pull from.

10 months ago

Sending you good vibes!

10 months ago
AI Agents Fail in Novel Ways, Put Businesses at Risk Microsoft researchers identify 10 new potential pitfalls for companies that are developing or deploying agentic AI systems, with failures potentially leading to the AI becoming a malicious insider.

This is a very interesting read on new and unique ways that AI agentic systems fail. What are your thoughts?

www.darkreading.com/vulnerabilit...

10 months ago

That's the ideal, but I'm worried about rent-seeking behavior: gatekeeping information and capital to charge a premium. There's been a lot of talk about technofeudalism lately, and there are even prominent figures in the administration who have stated they want to use AI and automation to replace labor.

10 months ago
AI Red Teaming: Breaking AI to Build a Secure Future YouTube video by TrojAI

About a month ago, I was asked to hop on a panel with some very talented people to discuss our thoughts on the state of AI security and red teaming. Check it out!

www.youtube.com/watch?v=HzqK...

10 months ago

Congrats! What did you do to monetize your skills for those 10 days? Bug bounty, speaking, social media, etc.?

10 months ago
Get Started in AI CTFs YouTube video by Netsec Explained

AI isn't just LLMs. Here are all the places to go to learn how to hack more traditional AI/ML. Inspired by the AI Village challenges at Defcon.

www.youtube.com/watch?v=hnNZ...

10 months ago
Real-world Attacks on LLM Applications YouTube video by Netsec Explained

If you want to learn how to hack AI, I have a video for that. Check it out!

www.youtube.com/watch?v=_4Q9...

10 months ago

No friends? No problem.

10 months ago
The Leaderboard Illusion Measuring progress is fundamental to the advancement of any scientific field. As benchmarks play an increasingly central role, they also grow more susceptible to distortion. Chatbot Arena has emerged ...

"When a metric becomes a target it ceases to be a useful metric."

arxiv.org/abs/2504.20879

10 months ago

This made me feel good! The perfect compliment from someone about my talks:

"You've made a difficult topic interesting, and explained it in a way that's memorable"

10 months ago

I'm curious. How would you define or describe the following?
* AI red teaming
* AI pentesting
* jailbreaks vs prompt injections
* AI agents

With all the semantic games in the AI+security space, let's settle on some common definitions and descriptions.

10 months ago

Going to #RSA? I’ll be speaking at Aegis of Tomorrow: An AI & Security Summit on Monday, April 28 from 3–5pm.

I’ll be sharing a framework for cutting through AI hype and prioritizing cybersecurity investments based on how attacker capabilities are actually evolving.

👉 Register here: lu.ma/9j1p8ixj

10 months ago

Holy shit, holy shit, holy shit.

11 months ago
New Research Reveals How AI “Thinks” (It Doesn’t) YouTube video by Sabine Hossenfelder

You've heard of "Vibe Coding", now let me introduce you to "Vibe mathematics"!

Some think that in the next two years we'll have AGI. I think it'll discover astrology instead. Do you think it's a Cancer or a Sagittarius?

youtu.be/-wzOetb-D3w?...

11 months ago
Web Application Pentesting and the Importance of Specialization with Tib3rius by Phillip Wylie Show

About the guest: Tib3rius is a penetration tester with over ten years of experience, specializing in web application security. He is the creator of the popular tool AutoRecon, which is widely used for enumeration in the OSCP exam and CTF challenges. Tib3rius also offers courses on Udemy and Hackers Academy, focusing on privilege escalation techniques for Windows and Linux.

Summary: Tib3rius joins Phillip Wylie on The Phillip Wylie Show to discuss his background in penetration testing and his specialization in web application security. He shares insights into the development of his tool AutoRecon, which was initially created for the OSCP exam but gained popularity in the community. Tib3rius also talks about the importance of specialization in offensive security and offers advice for those looking to start a career in penetration testing. He highlights the value of bug bounty hunting as a way to gain practical experience and shares his thoughts on the OWASP Top Ten and the future of web application security tools.

Key takeaways:
* AutoRecon, a tool created by Tib3rius, is widely used for enumeration in the OSCP exam and CTF challenges.
* Specializing in a specific area of penetration testing, such as web application security, can lead to becoming a subject matter expert and increase your value to a company.
* Bug bounty hunting can provide practical experience and counts as valuable experience in the field of penetration testing.
* The OWASP Top Ten has evolved from a list of the top ten vulnerabilities into a list of categories covering a wide range of web application security issues.
* The future of web application security tools, such as Caido, remains to be seen, but competition in the field can lead to improvements and alternatives to existing tools.

Quotes:
* "I think specialize in something and learn that thing well, and you'll be fine." - Tib3rius
* "Bug bounty hunting is a great thing to go into because you'll get some experience actually testing real applications." - Tib3rius
* "The OWASP Top Ten has become a catch-all category that covers almost every vulnerability." - Tib3rius

Socials and resources: https://twitter.com/0xTib3rius http://youtube.com/Tib3rius https://tib3rius.com/ https://courses.tib3rius.com/ https://linktr.ee/tib3rius

Web Application Pentesting and the Importance of Specialization with Tib3rius podcasters.spotify.c...
