
Leo Meyerovich

@lmeyerov.bsky.social

Makes: graphistry.com/get-started / louie.ai / graphtheplanet.com
OSS: pygraphistry, GFQL graph lang, Apache Arrow, GPU dataframes
Before: web FRP, socio-plt, parallel browsers, Project Domino
Data-intensive investigations with LLMs, GPUs, & graphs

85 Followers  |  71 Following  |  77 Posts  |  Joined: 05.08.2023

Latest posts by lmeyerov.bsky.social on Bluesky

My litmus test for where an engineer or analyst is on their agentic automation journey: Are they maxing out their quota every month, week, day, ... or not at all?

30.07.2025 02:17 - 👍 0    🔁 0    💬 0    📌 0

5/5

... But fundamentally, as an AI team, it's hard to get excited by steam engine vendors & their VCs when our day-to-day is about electricity. Our team has been enjoying the fun interfaces, but abstaining: that's not where our work is.

27.04.2025 03:24 - 👍 0    🔁 0    💬 0    📌 0

4/

The Python-only-era tools are flat, with occasional step improvements when OpenAI releases something 10% better

AI-native teams think in learning loops that compound over users and time

These look similar early on, and building loops is hard, so probably <5% of LLM devs do them today

27.04.2025 03:22 - 👍 0    🔁 0    💬 1    📌 0

3/

Today's AI-native teams: We think about learning. If an agent is doing some MCP flow today, will it work better tomorrow? And how much better next week? The month after?

27.04.2025 03:21 - 👍 0    🔁 0    💬 1    📌 0

2/

Before: Python frameworks competed on being thin LLM + RAG API wrappers. That meant minimizing # lines of code for RAG/chat/CoT demos, and maximizing # of connectors. Adding "agents/workflows" is checkboxing a few more patterns that, largely, look the same across them.

27.04.2025 03:20 - 👍 0    🔁 0    💬 1    📌 0

1/

As folks get excited by the new batch of agent frameworks, IMO, almost all are increments on the Pythonic steam engine era of the last 2 years, while 'adult' AI teams work with electricity

27.04.2025 03:20 - 👍 0    🔁 0    💬 1    📌 0

Big welcome to Semih of Kuzu DB fame for Graph the Planet on Monday. If you like @duckdb.org / @arrow.apache.org <> graphs, Kuzu is *very* interesting!

25.04.2025 20:51 - 👍 0    🔁 0    💬 0    📌 0

5/

Curious what others are seeing and thinking about here!

(+, DM if at #RSAC / graph the planet next week!)

21.04.2025 18:36 - 👍 0    🔁 0    💬 0    📌 0

4/

I’ve been surprisingly OK with the AI messing up

As we build louie.ai and I use it for my own work, I'm thinking a lot more about Vibes Investigating:

- what's working
- differences between software-centric and data-centric vibes flows
- dovetailing with automation as we bring AI to operations

21.04.2025 18:35 - 👍 0    🔁 0    💬 1    📌 0

3/

Recent examples:

- Pairing on a big lawsuit. Live-editing viz + stats skipped a week of back-and-forth

- Identifying stats for that case. Ex: Median absolute deviation instead of stdev

- Mapping cyber pen test team logs + repurposing as a dashboard. Joins are 💩 to code but easy to describe!
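The second bullet swaps stdev for median absolute deviation (MAD). A minimal sketch of why that matters for messy investigation data; the numbers and the 1.4826 normal-consistency scale factor are illustrative, not from the case:

```python
# Why MAD instead of stdev for investigations: one extreme outlier
# inflates stdev but barely moves MAD. Data below is made up.
import statistics

def mad(xs, scale=1.4826):
    """Median absolute deviation, scaled to match stdev on normal data."""
    m = statistics.median(xs)
    return scale * statistics.median(abs(x - m) for x in xs)

latencies = [10, 11, 9, 10, 12, 11, 10, 500]  # one outlier event
print(statistics.stdev(latencies))  # blown up by the single outlier
print(mad(latencies))               # stays near the typical spread
```

With the outlier, stdev lands in the hundreds while MAD stays under 1, so MAD still describes the typical behavior.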

21.04.2025 18:34 - 👍 0    🔁 0    💬 1    📌 0

2/

In Vibes Investigating, I make requests, see results, adjust, and repeat

No manual DB querying, data wrangling, or plotting API fiddling

At the end, I trash it as I would a Splunk/Google search result, or I share my AI notebook just like a regular Google document or Python notebook

(cont)

21.04.2025 18:27 - 👍 0    🔁 0    💬 1    📌 0
Vibes Investigating: Analyzing data with AI
YouTube video by Graphistry

1/ I'm increasingly doing something we call "Vibes Investigating" in the louie.ai team

This rethinks Karpathy's "vibe coding", now for data - think operations, incident response, fraud, SRE, logs, data science

Quick demo of analyzing some logs: youtu.be/rpCWyC4MFM0

In Vibes Investigating (cont)

21.04.2025 18:25 - 👍 0    🔁 0    💬 1    📌 0

Excited for Alex to share one of our biggest focuses for a while now, including details behind a SOTA result for the #security #genai community we've been keeping under wraps!

Fun to be finally going public on this stuff 🏁🏁🏁

19.04.2025 21:38 - 👍 0    🔁 0    💬 0    📌 0

Victor's team is always early to deploying new AI methods to their many crypto investigations, so excited to learn what is happening in the graph + #LLM world here!

PS: Fun fact, he led data science at Mandiant, so "he has seen some things" and is always a fun speaker 🤠

16.04.2025 19:50 - 👍 1    🔁 0    💬 0    📌 0

The genAI session by #BAML's creator Vaibhav is one of the most important at graphtheplanet.com at RSA week this month in SF:

Basically, how are teams advancing from AI vibes to AI automation & AI autonomy, and especially in sensitive or investigative scenarios?

Super excited for this track!

12.04.2025 22:56 - 👍 1    🔁 0    💬 0    📌 0

the louie slack this week

11.04.2025 00:45 - 👍 0    🔁 0    💬 0    📌 0

Finally announced: John Lambert is keynoting Graph the Planet 2025! (#RSAC week in SF)

John's perspective has been burned into my brain for years: "Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win". Graphs go all the way up to @microsoft.com 's CISO office 🤯

10.04.2025 16:27 - 👍 1    🔁 0    💬 0    📌 0

Hello Silicon Valley, 2025 vintage: GPU resellers advertising on billboards

22.03.2025 00:59 - 👍 2    🔁 0    💬 0    📌 0

Curious how startup folks approach release velocity nowadays. We're revisiting gitflow assumptions:

- localdev
- dev branch <> server
- big changes on long branches
- SaaS: most devs deploy 1-3x/week
- Enterprise: 2w-4w cycle, with customers updating quarterly

keep in mind we're < 20 ~senior devs

21.02.2025 19:58 - 👍 0    🔁 0    💬 0    📌 0
repo_stringify.sh: Crawl git repo for text files and dump them into stdout in a prompting-friendly format

gist: gist.github.com/lmeyerov/6bc...

16.02.2025 19:30 - 👍 0    🔁 0    💬 0    📌 0

Sunday funday: Make git repos friendly for LLM use with a tiny bash script -

(sharing from some internal louie_ai tooling)

- filters & excludes for desired text files
- convert to LLM-friendly markdown that won't be confused with instruction prompts
- special casing of ipynb
- multicore

gist =>
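The bullets above can be sketched in a few lines. Here is a hypothetical Python analogue of the bash gist's idea (the filter sets, fence style, and `stringify_repo` name are my assumptions; the ipynb special-casing and multicore parts are omitted):

```python
# Sketch: dump a repo's text files to stdout in an LLM-friendly format.
# Filters and naming are illustrative, not the gist's actual rules.
import os

EXCLUDE_DIRS = {".git", "node_modules", "__pycache__"}
TEXT_EXTS = {".py", ".md", ".sh", ".txt", ".toml", ".yml"}

def stringify_repo(root="."):
    chunks = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so os.walk skips them
        dirnames[:] = [d for d in dirnames if d not in EXCLUDE_DIRS]
        for name in sorted(filenames):
            if os.path.splitext(name)[1] not in TEXT_EXTS:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            # Quadruple-backtick fence so embedded ``` blocks and prompt-like
            # file contents stay clearly delimited as data, not instructions.
            chunks.append(f"## FILE: {path}\n````\n{text}\n````\n")
    return "\n".join(chunks)

if __name__ == "__main__":
    print(stringify_repo())
```

The longer-than-usual fence is the "won't be confused with instruction prompts" trick: a fence of four backticks survives files that themselves contain three-backtick blocks.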

16.02.2025 19:30 - 👍 0    🔁 0    💬 1    📌 0

Graphistry won the US Cyber Command AI competition by auto-correlating alerts into incidents and kill chains.

Our graph ML clusters alerts fast and links them into a timeline. CEO Leo Meyerovich demos our one-node GPU solution processing over a billion alerts.

#cybersecurity #hunt #SOC

07.02.2025 19:14 - 👍 2    🔁 1    💬 0    📌 0

Following your analogy, it sounds more like either we solve RL fine-tuning so we can do end-to-end DL, or we find a better path, e.g., QLoRA-like RL patches learned from CoT traces

05.02.2025 04:02 - 👍 1    🔁 0    💬 0    📌 0

O-series models still do not have any fine-tuning documented. I am seeing teams gravitate to manual staging: a reasoner for initial planning, feeding the rest to GPT-4. We are looking at R1 - interesting times!

04.02.2025 21:53 - 👍 0    🔁 0    💬 1    📌 0

Technically, I don't know if it's even viable via current tools like QLoRA. I'm guessing CoT will be fine since base layers can be taught the common case, but truly new reasoning may be harder?

04.02.2025 21:53 - 👍 0    🔁 0    💬 1    📌 0

Something the louie team is running into that I haven't seen discussed: reasoning LLM fine-tuning for agents & reasoners

* Ex: Multiturn quality drops when a reasoning LLM's effective CoT nests with CoT agents and gets confused

* Ex: Domain-specific scenarios like legal & medical that need specific guidance

04.02.2025 21:52 - 👍 0    🔁 0    💬 1    📌 0

I have been puzzling on efficiency notions, like revenue per engineer going up due to these tech x market improvements, but also going down due to easier competition. That's part of why I think small team specialization wins.

31.01.2025 19:16 - 👍 0    🔁 0    💬 0    📌 0

- ... quality / community / data matters. Don't get distracted by today's AI consultants and weekend AI apps: similar to mobile apps racing to the bottom with offshore clones, they're not the important part. Instead, thoughtful teams that give a damn are the difference, both for consumer and B2B

31.01.2025 19:16 - 👍 0    🔁 0    💬 1    📌 0

- It's becoming an even more amazing time for individuals and small businesses where a few people who give a damn about specific problems can more easily turn that into deliverable quality, e.g., $1M-20M/yr businesses...

31.01.2025 19:15 - 👍 0    🔁 0    💬 1    📌 0

Good tech ideas used to take 10-100X more time and capital to cover markets that were 10-100X smaller

That means:

- VCs: They care about a few lottery winners, and it is indeed an amazing time for both infra + app builders to profitably multiply YoY: <3 team #LOUIE_AI

31.01.2025 19:15 - 👍 0    🔁 0    💬 1    📌 0
