Just recently I have been getting a lot of scam alerts from New Zealand businesses.
But if I were a scammer, I would pretend to be sending a scam alert email - or calling someone to alert them about scams.
@jmoonz.bsky.social
keep on building also at https://moo.nz/@j previously @ferrouswheel on twitter
The Claude Code Android app silently eating your prompt if you don't wait for the session to initialize before navigating away is such a pain.
While I prefer Anthropic models, the general quality of their software releases feels less polished vs OpenAI.
Built with the help of claude/codex, but I started this repo 10+ years ago. LLMs are letting me focus on the interesting distributed problems instead of spending a week getting an Android client working.
27.02.2026 04:27
Don't use tenet yet, it is still research mode with breaking changes and no public relays.
27.02.2026 04:25
So I'd heard of Nostr before, but didn't realize I was essentially building something quite similar with tenet.
github.com/ferrouswheel...
oh excellent news - thanks for the link!
26.02.2026 20:27
I think my biggest reservation with atproto is I don't actually want most of my social data public and permanently on record.
Until there are good stories for data privacy and private groups, I'm going to keep pursuing p2p where "forgetting" and transient data is an inherent feature.
I just did a bunch of major refactors in 2 days that would have taken me weeks otherwise.
Lots of complex integration. The agent didn't know how to do it all, but I could break it down into solvable pieces. A side benefit of all this is that I don't struggle with wrist pain anymore.
*rubber duck 🦆
But maybe I should go with runner duck because it's pretty quick 🦆💨
I get to focus on the what and how, instead of minutiae.
And if I do need to focus on minutiae, it's a lot less painful to debug. Because either I can runner duck with an agent or I can more quickly make tools to test hypotheses about it.
And on one point I'd agree; but absent unreasonable time demands from management, I actually find making software more enjoyable now.
I can focus on architecture, aesthetics and the hundreds of quality of life improvements that there is usually no time for.
Although I see mention of it also increasing subscription costs a similar amount. :-(
I miss when these were one off purchases and they didn't feel the need to constantly change their products.
I'm looking for an alternative now and Bitwarden has been mentioned in the past. Thanks!
24.02.2026 17:48
Have you decided what to switch to? I'm considering migrating.
24.02.2026 17:47
*frustrated
And in this way it's building "intuition" for what a user actually wants by skipping some of the extra back and forth.
I mean just interacting with humans en masse is a training signal.
You could structure unsupervised learning based on skipping straight to the correct answer after several rounds of a frustrating user trying to get the LLM to give the right answer.
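The idea above could be sketched as a distillation pass over chat transcripts: pair the user's first request with the final answer they eventually accepted, skipping the frustrating rounds in between. The transcript shape and the assumption that the last assistant turn is the accepted answer are mine, not anything a provider actually does:

```python
def distill_transcript(turns):
    """Given alternating (role, text) turns, pair the user's original
    request with the assistant's final answer, dropping the
    back-and-forth correction rounds in between."""
    user_turns = [text for role, text in turns if role == "user"]
    assistant_turns = [text for role, text in turns if role == "assistant"]
    if not user_turns or not assistant_turns:
        return None  # nothing to learn from an empty or one-sided chat
    return {"prompt": user_turns[0], "target": assistant_turns[-1]}

# A toy "frustrated user" session:
example = [
    ("user", "Write a regex for ISO dates"),
    ("assistant", r"\d+-\d+-\d+"),
    ("user", "No, months and days are two digits"),
    ("assistant", r"\d{4}-\d{2}-\d{2}"),
]
pair = distill_transcript(example)
```

The resulting `pair` maps the original prompt straight to the corrected answer, which is the training signal the post describes.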
plus human brains are running at an efficient 20W.
using published numbers for GPT-3 training over 15 days - a human would have to live ~480 years to burn that much energy with their brain alone.
I wish I had your optimism.
I worry they will centralise control, kill the personal computer market, then once people are reliant on it... increase prices to recoup investment and capex.
You can - e.g. switching floating point representation, which some architectural transitions have enabled, though you can't just flip representations without assessing the impact on training.
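A minimal illustration (mine, not from this thread) of the kind of impact that needs assessing when you flip representations: a gradient that is perfectly representable in float32 underflows to exactly zero in float16.

```python
import numpy as np

# float32 can represent values down to ~1.2e-38 (normal range),
# but float16's smallest subnormal is ~6e-8, so a small gradient
# silently vanishes when cast down.
grad = np.float32(1e-8)
grad16 = np.float16(grad)

assert grad != 0.0     # fine in float32
assert grad16 == 0.0   # underflows to zero in float16
```

This is exactly why naive representation switches can break training, and why techniques like loss scaling exist.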
I guess my broader point is that new generation hardware takes time to fully utilise vs naive approaches.
yup - but that means they are not maxing out the capability of the new hardware until that work is done. Are they hitting close to 100% utilization of cores/memory bandwidth straight away? Maybe, but burning watts is not the same as capability and throughput.
21.02.2026 18:53
Sorry that's a bit harsh. Yes, models are deployed quickly to take advantage of new hardware. But usually there are new features that take a little while to understand the best way to fully utilize them.
21.02.2026 18:49
This is blatantly untrue. It takes time to optimize and understand how best to push a new CUDA arch to the limit.
21.02.2026 18:43
If you follow the whole "if all you have is a hammer everything looks like a nail" sure.
But that doesn't mean a hammer isn't a useful tool in some situations.
But I get it. I take great satisfaction in well crafted software.
Unfortunately very few companies are configured to let you build things well rather than accumulate tech debt.
I'm losing time. I can now do things that took months in weeks or days.
20.02.2026 18:03
But there will be fuck ups by people who don't know better or don't have the software experience to temper and review what is created.
20.02.2026 17:59
As someone with similar experience: LLM-assisted coding lets you do that, try something out much quicker. Didn't work? Throw it out, try something else.
Much more expensive to do that if every ASCII character is artisanally crafted with human fingers.
Has anyone built a "weather forecast" for LLM providers?
Like "today claude is acting more sycophantic than usual" and "chatgpt is being unnecessarily 70% more verbose"
claude sonnet feels particularly stupidified today.
20.02.2026 09:06