You mean, interventionist causation in general? That's a big blind spot for many physicists, sadly. We all "know" about it at a gut level, like when we tell students that external forces *cause* accelerations, rather than the other way around. But many physicists probably couldn't articulate why.
27.02.2026 00:22
If there is a law-like restriction instead of an uncertainty principle, that’s like taking the retrocausal model and reinterpreting the hidden variables as a gauge, where the gauge itself seems to prevent signaling faster than light. But that would defeat the purpose of the paper, wouldn’t it?
26.02.2026 18:42
The first model (2 posts back) is forward-causal, but with a strange restriction on what I’m allowed to do in the future. The last model is retrocausal, but with an epistemic restriction on the initial state, tuned just right to prevent me from being able to send a known signal faster than light.
26.02.2026 18:39
If a different model allows those future experiments, it must in turn forbid my complete causal control of the initial state. Maybe, as you say, with some law-like restriction on the initial state itself, or maybe with some uncertainty principle forbidding paradox-sufficient access.
26.02.2026 18:37
If I had control of a system that could signal faster than light, and Lorentz transformations are correct when boosting frames, then it’s pretty clear what sort of experiment I could set up to make a paradox. But if a model forbids my causal control of those future experiments, no more paradoxes.
26.02.2026 18:36
Yes, there are lots of different causal models which correspond to the very same equations. (The most obvious example here is the Ideal Gas Law; depending on which variables you’re allowed to control, you get very different causal pathways.) Different intervention freedoms = different models.
26.02.2026 18:35
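A minimal sketch of the Ideal Gas Law point above (not from the thread; function names are illustrative): the single equation PV = nRT supports different causal models depending on which variables count as inputs you’re allowed to set from outside.

```python
R = 8.314  # gas constant, J/(mol*K)

def pressure_from(V, n, T):
    """Causal model A: volume and temperature are the inputs; pressure responds."""
    return n * R * T / V

def volume_from(P, n, T):
    """Causal model B: pressure and temperature are the inputs; volume responds."""
    return n * R * T / P

# Same equation, different interventions: in model A, halving the
# volume (at fixed n, T) causes the pressure to double.
P1 = pressure_from(V=0.024, n=1.0, T=300.0)
P2 = pressure_from(V=0.012, n=1.0, T=300.0)
assert abs(P2 - 2 * P1) < 1e-9
```

Both functions encode the same correlations; only the choice of which variable is externally controllable distinguishes the causal pathways.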
You can't see this problem if all you look at is 'existence and uniqueness'. Instead, you have to ask questions about the input-output structure of the model -- and that often requires more analysis than just writing down the bare equations.
24.02.2026 03:54
In this case, with a restriction on (S) due to later events, the issue isn’t *forward* causation; it’s *backward*! If I can set something in the future that would constrain allowed past values of (S), that’s retrocausal. And without restricting access to S, such a model could signal back in time.
24.02.2026 03:52
In many models (1) and (2) are the same thing, because the inputs are assumed to be the initial state. But in some models, like the one in this paper, this isn’t the case. If there are rules telling you that the initial state (S) can’t be freely set, from outside the model, then (S) isn’t an input.
24.02.2026 03:51
This brings us to question (1), which is where the true causal analysis lies. But you can’t answer this from the bare equations; the model needs to specify the causal structure. What are the model’s “inputs”? What events are we allowed to “set”, from outside the model, independently?
24.02.2026 03:50
After all, correlation is not causation. Our causal instincts are “interventionist”. We ask ourselves: if I set event A to this value, instead of that (counterfactual) value, is there an effect at B? We’re not just asking about correlations, we’re asking about input-output relationships.
24.02.2026 03:50
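A toy common-cause simulation (not from the thread; all names illustrative) makes the interventionist point concrete: A and B can be strongly correlated, yet an outside intervention that sets A has no effect at B.

```python
import random

random.seed(0)

def sample(intervene_a=None):
    """One run of a common-cause model: C -> A and C -> B.
    If intervene_a is given, A is set from outside the model ("do(A=a)")."""
    c = random.gauss(0, 1)
    a = c + 0.1 * random.gauss(0, 1) if intervene_a is None else intervene_a
    b = c + 0.1 * random.gauss(0, 1)
    return a, b

# Observationally, A and B are strongly correlated (means are zero,
# so E[A*B] is the covariance)...
runs = [sample() for _ in range(10000)]
cov = sum(a * b for a, b in runs) / len(runs)
assert cov > 0.5

# ...but setting A by intervention leaves B's distribution unchanged.
def mean(xs):
    return sum(xs) / len(xs)

b_lo = [sample(intervene_a=-5.0)[1] for _ in range(10000)]
b_hi = [sample(intervene_a=+5.0)[1] for _ in range(10000)]
assert abs(mean(b_hi) - mean(b_lo)) < 0.1
```

The bare joint distribution answers question (2); only the extra specification of which variables can be “set” from outside answers question (1).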
The only nice thing about (2) is that you can analyze it in terms of the “bare equations”, without making any causal assumptions. Given the equations, the answer to (2) is easy to assess -- it’s just a question of which events are correlated. But does it have anything to do with “causation”?
24.02.2026 03:49
Fun stuff! Still, it’s important not to conflate these two questions, for any given model: (1) Are there inputs to the model at event A which have an effect at event B? vs. (2) Is there an initial event A that is correlated with some other event B? These questions come apart in cases like this.
24.02.2026 03:49
So my conclusion is almost the opposite of theirs. Instead of signaling-in-principle ruling out “all-at-once” models, it just means that any good all-at-once model must be able to generate its own “uncertainty principle” limit, to forbid any signaling-in-practice. A challenge, not a no-go theorem.
15.02.2026 20:51
And if “signaling-in-principle” were only possible beyond the model’s own restriction, then there’s nothing amiss. In fact, if one thinks there is a realistic way to explain Bell inequality violations, then of course there *would* be signaling if you could see all the hidden variables. Yet you can’t.
15.02.2026 20:50
I’ll have to think about it some more, but my gut tells me that while such arguments might work in some cases, I don’t see how you could generalize them. All-at-once models would generically lead to restrictions on what sort of states could be prepared, each with its own “uncertainty principle”.
15.02.2026 20:49
Their analysis of Schulman raises a very interesting question: Can you logically combine an all-at-once model (where the probabilities are assigned to entire histories) with conventional preparations (where the probabilities are assigned to initial instantaneous states)? How would that work?
15.02.2026 20:49
Their real target here isn’t Bohm, but rather the models I favor: models that break measurement independence by allowing the future setting to constrain the hidden past, solving everything “all at once”. And the framing they use here is the trusty Schulman model! I’m very happy to see this…
15.02.2026 20:48
Their discussion of Bohmian mechanics doesn’t make this point, but instead notes the possibility of a non-equilibrium initial distribution. Okay, maybe that’s fair, but only because (in that particular model) such a distribution seems (barely) possible. But what about a different model?
15.02.2026 20:47
But I’m not at all sure about the jump from “signaling in principle” to “testable in principle”. For instance, in Bohmian mechanics, you could signal at a spacelike distance if Alice and Bob could examine their states at a level below what the uncertainty principle allows. But that’s not testable.
15.02.2026 20:46
Interesting new paper by Guido Bacciagaluppi and colleagues, raising some deep issues with Bell’s Theorem. On one level I think they’re right that there’s a connection between realistic violations of Bell inequalities and “nonlocal signaling in principle”. They seem to go together.
15.02.2026 20:45
He was there, for sure, but I don't think anyone based at Chapman gave a talk.
09.02.2026 20:46
YouTube video by Institute for Quantum Studies: “John Templeton Foundation at Chapman IQS Conference, Day 1”
The talks from a recent quantum foundations conference at Chapman U. are now online... Each day is its own 5-hour video, I guess. My talk, "A realistic alternative to the wavefunction", happened to be the first on day 1 (skip to the 10 minute mark). www.youtube.com/watch?v=FPwj...
09.02.2026 19:04
"Is it wrong to wish on space hardware?"
(Billy Bragg really wants to know...)
06.02.2026 05:57
What did you think?
03.02.2026 21:23
Bold text reading 'CANCEL CHATGPT EDU.' with a red banner stating 'INVEST IN HUMANS' over a dark board background.
The CSU/OpenAI contract is set to expire June 30, 2026.
Sign this petition: https://actionnetwork.org/petitions/cancel-chatgpt-edu-invest-in-humans/ for the CSU NOT to renew the contract and to use the savings to protect jobs at CSU campuses facing layoffs.
31.01.2026 02:00
Right, and the moment you learn anything you didn't already know (even your own choice of measurement basis), it's logically incoherent *not* to Bayesian update your state of knowledge. (a state which includes the wavefunction, imho)
21.01.2026 22:42
One path forward, motivated by this analysis, would be to “collapse” the state *twice*: once when you decide on the measurement procedure (basis, timing, etc.), and then again when the outcome is learned. There’s no formal framework that does this in QM, but there is elsewhere. (Ising models, etc.)
21.01.2026 22:00
Good points all. Ideal measurement operators “live” on instantaneous hypersurfaces (in some special frame). But most real-life measurements don’t conform to this idealization. For an even more dramatic violation, consider arrival-time measurements, which “live” on timelike surfaces!
21.01.2026 21:58