Whoops, I missed that they have multiple shots. The gate overhead is ~5600x not 256x.
19.02.2026 18:18
@craiggidney.bsky.social
Research scientist on Google's quantum team, working on reducing the cost of quantum error correction. Useful tools I've made:
- Quirk: https://algassert.com/quirk
- Stim: https://github.com/quantumlib/stim
- Crumble: https://algassert.com/crumble
Chevignard et al show residues also reduce the qubit cost of quantum attacks on elliptic curves: eprint.iacr.org/2026/280
The space savings are less dramatic than for factoring (1.6x instead of 6x), and they again pay a big gate-count penalty (256x), but it's very interesting.
Made a video tutorial of using crumble to create a quantum error correction circuit: youtu.be/SnpLSvyyEx8
16.02.2026 04:01
And, of course, if you're in charge of some security thing that's vulnerable to quantum attacks, you should be *assuming progress*. The strategy "we'll start our 3 year plan when it looks 4 years away" is just guaranteeing an "oh shit" moment when progress subtracts 2 years.
14.02.2026 22:25
The cost of holding assumptions constant is they go stale. In 2012, demanding 1e-3 noise was audacious. Now it's conservative. Locality? The mechanisms for long-range connections are multiplying and improving. Frankly, if you assume *progress*, 100k qubits starts looking high.
14.02.2026 22:14
This might be unappreciated outside the field, but it's easy to juice numbers by demanding more from hardware. It's a common type of paper (e.g. arxiv.org/abs/2103.06159 and arxiv.org/abs/2302.06639). To avoid this confounder, I've tried to hold my assumptions constant over time.
14.02.2026 22:13
I've been asked several times to comment on arxiv.org/abs/2602.11457, which claims to reduce the qubit cost of factoring by 10x.
My take is that they demand a *lot* more qubit connectivity for that number. Your mileage depends entirely on how plausible you find those demands.
From having seen Austin explain the surface code many times: this makes sense to me as the progression he'd choose. He's known surface codes so long that the complications become hard to perceive. And with prerecorded lectures the students can't stop the runaway train with "wait what?" questions.
04.02.2026 23:23
Yup, it's a bit like zooming way in on the boundary of a hyperbolic code. Except I think those don't have a boundary at the edge, but instead have non-local links to other sections of the edge. Otherwise you can't get the constant coding rates they advertise.
04.02.2026 15:33
I use *programming* to generate images. I like SVG as a target format because it's so easy to generate programmatically from any language. In this case I was calling stuff I'd previously written to draw the stabilizers of a code. So I just picked qubit locations and stabilizer bases.
04.02.2026 15:30
A surface code that goes from a normal tiling to one where each row uses half as many qubits as the last.
I can't think of a reason to do this, but it's visually interesting.
Or someone has to endorse you. TBH I thought this was already how it worked. For my first submission nine years ago I had to get an endorsement. I got it from Dave Bacon.
It looks like the main change is that previously endorsement was skipped if your email address was associated with a university.
In other words: Early Fault Tolerance starts when quality transitions from an impassable barrier to a mere tradeoff.
Concretely⦠I arbitrarily declare EFT begins once it's demonstrably possible to do T gates with an infidelity of 10^-10. You mostly won't do T gates that way... but you *could*.
To me it refers to the quality problem being solved while the quantity problem remains.
So you *could* perform universal logical gates far better than you could ever need... but you can't hold a lot of qubits at once. Sacrificing quality to gain a bit of quantity is then a natural tradeoff.
Another paper related to simulating magic state cultivation. This time it's algorithmic insights, rather than hulk-smashing at the circuit with GPUs: scirate.com/arxiv/2512.2...
(...why not both?)
Stim pushed the field forward... but I worry it may also hold it back. For example, Stim doesn't understand adaptive strategies like "measure again if you see a suspicious thing". If too many researchers rely on stim, adaptivity will be under-explored!
02.01.2026 10:53
Papers citing stim doubled again in 2025.
Stim is a Python package for analyzing/simulating quantum error correction circuits. It was thousands of times faster than prior tools and popularized modelling fault tolerance in terms of "detectors" (see quantum-journal.org/papers/q-202...).
Better simulations of magic state cultivation: scirate.com/arxiv/2512.2... .
In my initial paper, I approximated T with S by using a 2x safety factor measured at d=3. IMO this paper suggests another 2x at d=5. They *claim* another 7.65x; an attempts-vs-error curve would help settle it.
I always pictured it as more of a bullet-holes-filling-in type of thing, because the stabilizers are being constantly recreated.
29.12.2025 01:39
It's a good example of the type of growth to expect, qualitatively speaking.
If the same thing does play out for full quantum codes over the next few years, then the better story would be that. But that hasn't happened yet.
An upbeat blog post for Christmas: Quantum Error Correction goes FOOM
algassert.com/post/2503
Gah, it should be 1% not 0.1% in 2014.
24.12.2025 05:30
This pattern is why I expect logical qubits (in full quantum codes not just rep codes) to go from barely better than the raw physical qubits to ridiculously amazing within years, despite it taking decades to get to the current level.
24.12.2025 03:44
The error rate doesn't change smoothly; it's driven by crossing barriers. Are you below threshold? Are you removing leakage? Are you mitigating cosmic rays?
Even ignoring barriers, qubits growing exponentially vs time would imply super-exponential growth in suppression (i.e. "foom").
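A toy model makes that implication concrete. Using the standard below-threshold scaling p_L ~ A * (p/p_th)^((d+1)/2) (the physical error rate p, threshold p_th, and prefactor A below are made-up illustrative numbers, not values from these posts):

```python
# Toy model: logical error rate vs code distance, below threshold.
# Illustrative numbers only: p = 1e-3 physical error rate,
# p_th = 1e-2 threshold, A = 0.1 prefactor.
p, p_th, A = 1e-3, 1e-2, 0.1

def logical_error_rate(d):
    # Standard suppression scaling: p_L ~ A * (p/p_th)^((d+1)/2)
    return A * (p / p_th) ** ((d + 1) / 2)

for d in [3, 5, 7, 9]:
    qubits = 2 * d * d  # rough surface-code footprint per logical qubit
    print(f"d={d}: ~{qubits} qubits, p_L ~ {logical_error_rate(d):.0e}")
```

Each +2 in distance buys a constant *factor* of suppression, so if qubit counts (hence achievable d²) grow exponentially in time, the suppression factor grows super-exponentially.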
History of quantum rep code error rates from the UCSB-Google team
2014: 0.1% arxiv.org/abs/1411.7403
2015:
2016:
2017:
2018:
2019:
2020:
2021: 0.01% arxiv.org/abs/2102.06132
2022:
2023: 0.0001% arxiv.org/abs/2207.06431
2024: 0.00000001% arxiv.org/abs/2408.13687
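The acceleration in that list can be checked directly (rates copied from the years above that have entries):

```python
# Rep-code error rates from the history above (as fractions).
rates = {2014: 1e-3, 2021: 1e-4, 2023: 1e-6, 2024: 1e-8}

years = sorted(rates)
for a, b in zip(years, years[1:]):
    factor = rates[a] / rates[b]
    print(f"{a} -> {b}: {factor:.0f}x better over {b - a} year(s)")
```

The improvement factors are 10x over seven years, then 100x over two years, then 100x in a single year: the rate of improvement is itself speeding up, which is the "foom".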
I call this pattern "QEC goes foom"
Something interesting we checked is that, if you skip memory rounds during the cultivation, the end-to-end infidelity gets worse. This is a major difference from memory experiments, where fewer rounds is always better. In more complex QEC, the optimal number of rounds is non-zero.
17.12.2025 06:59
The 1e-4 infidelity and 8% retention is what you'd expect, given the chip's current gate error rates. Great numbers for today, but it's likely to look quaint in a couple years. Cultivation improves *dramatically* vs gate error rate. We're not yet in the regime where it sings.
17.12.2025 06:54
We did some experimental testing of magic state cultivation! arxiv.org/abs/2512.13908
A ~1e-4 end-to-end infidelity is tricky to measure with tomography, so we checked it against more cultivation. The full escape stage is too wide for the chip, but we did try ending in the grafted code.
Hahaha, I don't know if I'd go *that* far! ...Nature abhors computation in the same way it abhors aluminum foil. Gotta do lots of work convincing it to allow those things to exist.
25.11.2025 22:55
Magic and entanglement are both common in nature, but not in the purified forms required for big quantum computations.
25.11.2025 19:53