@tomjohnson3.bsky.social
CTO at Multiplayer.app: full-stack session recordings that seamlessly capture the context you need to resolve issues or develop new features. Also: 🤖 robot builder, 🏃 runner, 🎸 guitar player
‣ Shape your data early.
‣ Prioritize security.
‣ Be deliberate with receivers.
‣ Export with efficiency.
‣ Monitor the Collector itself.
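The five points above map naturally onto a single Collector config. A minimal sketch, assuming an OTLP-only setup; the endpoint URL and the `user.email` attribute are purely illustrative, not our actual configuration:

```yaml
receivers:
  otlp:                     # be deliberate: only the receiver you need
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  memory_limiter:           # protect the Collector itself
    check_interval: 1s
    limit_mib: 512
  attributes:               # shape data early + security: drop PII at the edge
    actions:
      - key: user.email
        action: delete
  batch:                    # export with efficiency
    send_batch_size: 512
    timeout: 5s
exporters:
  otlphttp:
    endpoint: https://collector.example.com   # illustrative endpoint
service:
  telemetry:
    metrics:
      level: basic          # monitor the Collector's own health
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, attributes, batch]
      exporters: [otlphttp]
```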
The lesson I keep coming back to is simple: an observability framework is only as strong as its Collector configuration.
I've spent the better part of the past year working with the OpenTelemetry Collector to ensure our full-stack session recordings include automatically correlated backend traces.
Here are the lessons I've learned (sometimes the hard way) about configuring the Collector.
He's giving a talk at ZurichJS next week.
If you're in town, I recommend checking out their EOY meetup on Thursday 13 Nov, 18:00 CET.
zurichjs.com/events/zuric...
@farisaziz12.bsky.social describes the pain of customer support perfectly.
It's always exciting to see Multiplayer show up in real-world stories like this, as part of how engineers actually solve problems. Seeing it used to cut through the "screenshot chaos" is exactly why we built it.
Full article: mayanksharmasharma77.substack.com/p/how-i-fina...
23.10.2025 07:52
"Effective debugging isn't about speed. It's about visibility and understanding. When AI has access to complete context, it becomes a real collaborator in that process."
This is exactly why we built Multiplayer. 🤩
Full write-up: dzone.com/articles/fiv...
22.10.2025 08:39
Optimizations don't have to be flashy or complex: a 5-minute fix to our CI/CD pipeline saved us 5 hours a day.
This is a reminder that even the most obvious optimizations can hide in plain sight when you're heads down building the next big thing.
Save your spot: luma.com/joouzfzw
21.10.2025 14:27
A sneak peek of my presentation for tomorrow's MCP demo night.
If you're in New York, come say hi!
I bet a Multiplayer full-stack session recording + Claude Code would have caught that.
16.10.2025 17:26
Claude Code error or human error?
From the latest Anthropic blog post: variable names don't match (red) … misspelling of "urgent" (blue) … unnecessary second check (second yellow line)
What about 'workslop'? That one would count too.
15.10.2025 11:55
🧵 What is an "AI Engineer," really?
It's one of the hottest job titles of 2025 but also one of the most misunderstood. Let's unpack what the role actually means (and why it matters).
When your AI confidently "fixes" production.
Don't worry, I'm sure it learned from this.
Sometimes the frontend data isn't enough.
Sometimes (okay, always) you also want to know what happened in the backend.
How much time do you have? I feel like the risk zone is anywhere between 3-6 hours.
08.10.2025 12:20
This is a good time to remind everyone of the AI Darwin Awards.
08.10.2025 12:15
Devs: I'll just make a small change.
QA tickets: …
Start simple, release gradually, and let user feedback guide you. Less is more when it comes to MCP.
Curious what others are seeing: what's the most *useful* MCP tool you've come across so far?
For us, that meant focusing on two high-value use cases:
1️⃣ Fixing bugs (where we can pipe full-stack session data directly into an AI tool)
2️⃣ Building features (where annotations/sketches from a session replay add the needed context to AI prompts).
Don't just map your API 1:1 into MCP tools. That creates context bloat, and LLMs aren't great at wiring together dozens of endpoints. Instead, scope tools tightly around developer intent.
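To make "scoped around developer intent" concrete, here is a minimal, hypothetical sketch in plain Python: a single `debug_session` tool does the cross-endpoint aggregation itself, so the model calls one tool instead of wiring together dozens of raw API endpoints. All names here (`ToolRegistry`, `debug_session`) are illustrative, not Multiplayer's actual MCP server.

```python
# Hypothetical sketch of intent-scoped tool registration.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ToolRegistry:
    """Holds a small number of intent-scoped tools, not a 1:1 API mirror."""
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def tool(self, name: str):
        """Decorator that registers a function under a tool name."""
        def register(fn):
            self.tools[name] = fn
            return fn
        return register


registry = ToolRegistry()


# One tool per developer intent: the tool gathers and trims the
# cross-endpoint data itself, so the LLM never has to orchestrate
# separate trace, log, and replay endpoints.
@registry.tool("debug_session")
def debug_session(session_id: str) -> str:
    # A real server would fetch traces, logs, and frontend events
    # for this session and return a compact, prompt-ready summary.
    return f"summary of session {session_id}: traces + logs + replay"
```

The design choice is the point: fewer tools with richer outputs keep the model's context small and its decisions simple.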
01.10.2025 12:19
MCP servers are everywhere right now. But most are collecting dust.
The key lesson we've learned at Multiplayer: scope matters. 🧵
I repeat. DON'T UPVOTE. I don't care about that.
I just want to hear your feedback:
👉 Would you use this mainly for debugging, testing, or feature development?
👉 Have you tried session replays before? What worked, what didn't?
But I also know that "I promise it's better" isn't always enough when you're busy and already juggling priorities.
So I'd love to hear: what's made *you* drop a tool that was working and try a new one? And what made the switch worth it?
Is it word of mouth, seeing a demo, hitting a pain point one too many times, or just plain curiosity?
From my side: I genuinely believe we're building something that saves time, reduces context switching, and brings all your data into one place.
I build developer tools for a living, and I've been wondering about this a lot: once you have a workflow that "works well enough," what's the trigger to get you to switch to something different and/or (possibly) better?
29.09.2025 09:59