I built a fully-functioning in-browser mobile Go Game for local play in a couple hours yesterday, or should I say, an AI Agent did.
chrispenner.ca/rugo
I'll summarize my experience with the tool:
@rntz.net.bsky.social
Michael Arntzenius irl. Postdoc at UC Berkeley doing PL + DB + incremental computation. PL design, math, calligraphy, idle musings, &c. rntz.net · @rntz@recurse.social · @arntzenius. Attempting to use bsky more now that people are showing up.
Ultimately, the takeaway for me is:
* It's a great tool for getting a small artifact like this. It fulfills a need I had, and was very low investment.
* I didn't learn a damn thing. I don't know any more rust, I don't understand WebGPU, I didn't learn how to structure a game.
just finished playing What Remains of Edith Finch
I have a strong urge to drop everything and move to the Pacific Northwest
AIUI leapfrog-via-galloping-search is pretty close to the state-of-the-art way to do database joins on sorted data structures (eg BTrees). but, it doesn't parallelize. the divide-and-conquer algo seems very parallelizable. you could switch to leapfrog intersection once parallelism is exhausted.
09.08.2025 09:42

btw, the "better version" of the linear merge-intersection is leapfrog intersection: keep pointers into each list, starting at front. repeatedly "leapfrog" the pointer whose element is smaller by searching it forward toward the larger element. binary search will work, but galloping search is better.
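A sketch of leapfrog intersection with galloping search in Rust (my own illustration, not the code from the gist):

```rust
// Find the first index i with xs[i] >= target: gallop forward in steps
// of 1, 2, 4, ..., then binary search inside the last doubling window.
fn gallop<X: Ord>(xs: &[X], target: &X) -> usize {
    let mut step = 1;
    let mut lo = 0;
    while lo < xs.len() && xs[lo] < *target {
        lo += step;
        step *= 2;
    }
    let hi = lo.min(xs.len());               // overshoot, clamped to len
    let lo = lo.saturating_sub(step / 2);    // last position known to be < target
    lo + xs[lo..hi].partition_point(|x| x < target)
}

fn leapfrog_intersect<X: Ord + Copy>(a: &[X], b: &[X]) -> Vec<X> {
    let (mut i, mut j, mut out) = (0, 0, Vec::new());
    while i < a.len() && j < b.len() {
        if a[i] == b[j] {
            out.push(a[i]);
            i += 1;
            j += 1;
        } else if a[i] < b[j] {
            i += gallop(&a[i..], &b[j]); // leapfrog the smaller pointer
        } else {
            j += gallop(&b[j..], &a[i]);
        }
    }
    out
}
```

Galloping keeps each leapfrog proportional to the log of the distance actually jumped, which is what makes the whole intersection adapt to skewed inputs.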
09.08.2025 09:37

the linear-merge intersection can be inefficient if one list is small. eg [0..1000] intersect [1001..1016] is empty; divide-and-conquer discovers this in 4 splittings; linear merge takes 1000 steps. linear in the sum of the input sizes, but unnecessarily slow.
09.08.2025 09:34

Intersect 2 sorted lists A, B: WLOG |A| <= |B|. Let x = the median of A = A[|A|/2]. Binary search for x in B. This splits A, B each into two parts (below/above x). Recursively intersect A1,B1 and A2,B2.
Is this a well-known sorted list intersection algorithm? What is its worst-case complexity?
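The divide-and-conquer algorithm described above could be sketched in Rust like this (my own translation of the description; `partition_point` plays the role of the binary search):

```rust
// Intersect two sorted lists by recursing on the smaller list's median.
fn dc_intersect<X: Ord + Copy>(a: &[X], b: &[X], out: &mut Vec<X>) {
    // WLOG |a| <= |b|.
    if a.len() > b.len() {
        return dc_intersect(b, a, out);
    }
    if a.is_empty() {
        return;
    }
    let mid = a.len() / 2;
    let x = a[mid]; // median of the smaller list
    // Binary search for x in b: index of the first element >= x.
    let split = b.partition_point(|y| *y < x);
    // Recurse on the below-x halves (A1, B1).
    dc_intersect(&a[..mid], &b[..split], out);
    // Emit x if present, then recurse on the above-x halves (A2, B2).
    if b.get(split) == Some(&x) {
        out.push(x);
        dc_intersect(&a[mid + 1..], &b[split + 1..], out);
    } else {
        dc_intersect(&a[mid + 1..], &b[split..], out);
    }
}
```

Because the below-x results are emitted before x and the above-x results after, the output comes out sorted.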
someplace where the light is strong, the air is cool, and the nights are quiet
08.08.2025 10:32

I haven't. what's the easiest way to do that?
I previously (although for different Q) tried godbolt / looking at asm, but was completely overwhelmed by quantity of asm generated and could not parse what was going on at all.
I do not understand inlining in Rust. Even with #[inline(always)], I get worse performance than if I inline manually. Wat?
07.08.2025 20:43

full code, including the seekable iterators with worst-case optimal/fair intersection, here: gist.github.com/rntz/9c10db3...
07.08.2025 16:18

yeah, it turned out to be that an assert!(thing >= smaller_thing) was making a subsequent max(smaller_thing, thing) get optimized away. the assert! and max were in completely different functions, though; this only happened after inlining.
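A toy reconstruction of the pattern being described (all names here are my guesses, not the original code): once the assert! is inlined, the optimizer can prove `thing >= smaller_thing` and fold the later `max` away; deleting the assert keeps the comparison, which would explain why removing asserts made the code slower.

```rust
#[inline(always)]
fn checked(thing: usize, smaller_thing: usize) -> usize {
    // After inlining, this assert teaches the optimizer the invariant
    // thing >= smaller_thing at the caller's site...
    assert!(thing >= smaller_thing);
    thing
}

fn seek_to(thing: usize, smaller_thing: usize) -> usize {
    let i = checked(thing, smaller_thing);
    // ...so this max() can compile down to just `i`. Without the assert,
    // the comparison has to stay.
    smaller_thing.max(i)
}
```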
07.08.2025 15:55

I tried optimizing my worst-case optimal seekable iterators in Rust and thought I did pretty good. Then I tried hand-optimizing a little "count the intersection" loop and discovered it's 2x as fast as the iterators. Feh!
07.08.2025 15:54

why is removing assert!s making my Rust code run _slower_?
07.08.2025 08:09

We compare the two algorithms on a significant fragment of web layout that includes line breaking, flex-box, intrinsic sizes, and many other features, benchmarking 50 real-world web pages like Twitter, Discord, Github, and Lichess. Across 2216 frames, Spineless Traversal is 1.80× faster on average. Speedups are concentrated in the most latency-critical frames: on the 65.6% of …
the four genders: Twitter, Discord, Github, Lichess
27.07.2025 15:27

This should allow e.g. parallel or, (por bottom true == por true bottom == true).
How hard would it be to make a dependently typed language runtime do (1)?
I want a PL for "deterministic" "concurrency" where:
1. I can run two exprs in parallel, receiving whichever evaluates "first", first. I have to prove this nondeterminism doesn't affect the result. To make this feasible:
2. I can quotient types. I must prove functions respect the equalities I add.
uploading this classic clip for posterity buttondown.com/jaffray/arch...
12.07.2025 14:15

you, a set-theoretic plebeian:
> data Bool = True | False
me, a domain-theoretic sophisticate:
> data Bool = True | Later Bool
gonna write a paper with exactly two (2) citations
07.07.2025 04:18

White mountain ermine hopping in the snow... Enjoy #bluesky
06.07.2025 08:57

databases are about "and"
Prolog is about "or"
what if different delimiters determined whether space was application or pipeline?
[xs (filter even) (map square) sum]
brackets = pipeline/Forthish, parens = application/Lispish
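For comparison, the bracketed pipeline above written with Rust's method chaining (my example): space-as-pipeline reads like this chain.

```rust
// [xs (filter even) (map square) sum], as a Rust iterator chain.
fn pipeline(xs: Vec<i32>) -> i32 {
    xs.into_iter()
        .filter(|x| x % 2 == 0) // (filter even)
        .map(|x| x * x)         // (map square)
        .sum()                  // sum
}
```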
the right notion of equality on existential types should be something like bisimulation, right? is there a standard reference for this?
25.06.2025 04:58

I'm more interested in the big-O of "memory access" than I am in hashing specifically.
also, someone on fediverse linked me to this blog post series, which not only suggests this holds empirically but also makes exactly your sqrt(n) via black holes argument! :) www.ilikebigbits.com/2014_04_21_m...
I've heard the argument made that hashtable lookup is only constant-time if memory access is O(1), but it's actually O(log n).
But isn't it actually O(√n)? In a given time t, I can reach at most O(t³) space, limited by the speed of light.
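Spelling out the two bounds being contrasted in this thread (my reconstruction): the speed of light alone bounds reachable space by volume, giving n^{1/3}; the black-hole/holographic argument bounds information by surface area, which is what tightens it to √n.

```latex
\text{volume bound: } n \le O\!\left((ct)^3\right) \;\Rightarrow\; t \ge \Omega\!\left(n^{1/3}\right)
\qquad
\text{area bound: } n \le O(r^2) \;\Rightarrow\; t = r/c \ge \Omega\!\left(\sqrt{n}\right)
```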
🔥 take: Functional programming isn't really about mathematical functions, but input-output black-box procedures. It can't answer perfectly cogent questions like "for which x values is f(x) = 3?"
🌶️ take: Logic programming _can_ answer these questions. LP is the real FP.
rust gives me the same feeling as dependent types
"ooh! a fun puzzle: how do I convince the compiler my code is legit?"
(3 days of hyperfocus later)
"whew! Now that I've proven 1 + 1 = 2, I can get on to the thing I started out trying to do..."
signature of elements for completeness:
fn elements<X: Ord + Copy>(elems: &[X]) -> Elements<X>
I figured it out by blindly stumbling around until I added the right lifetime annotations to seek().
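A minimal sketch of what a seekable iterator with that signature might look like (my guess at the shape; the actual implementation is in the linked gist, and uses galloping rather than one binary search):

```rust
// A seekable cursor over a sorted slice.
struct Elements<'a, X> {
    elems: &'a [X],
    pos: usize,
}

fn elements<X: Ord + Copy>(elems: &[X]) -> Elements<X> {
    Elements { elems, pos: 0 }
}

impl<'a, X: Ord + Copy> Elements<'a, X> {
    fn current(&self) -> Option<X> {
        self.elems.get(self.pos).copied()
    }

    // Advance to the first element >= target (binary search here; the
    // real version would gallop).
    fn seek(&mut self, target: X) {
        self.pos += self.elems[self.pos..].partition_point(|x| *x < target);
    }
}
```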
17.06.2025 00:38