My (heterodox) take on PBR is from the perspective of someone in the psi-epistemic camp, and therefore forced to give up one of PBR’s assumptions (#2 in your final list). But this is also what can be given up to resolve Bell’s Theorem, so for me, PBR just sharpens the open options.
Aha – so you *are* interested in this topic! :-)
This is also very well done. Only one technical nitpick: the definition of ontic/epistemic in this context. “Ontic” can still be compatible with certain hidden variables -- Bohmian mechanics, for instance, is considered psi-ontic, and therefore not subject to PBR.
Fair enough, but check back in every few years just in case it *gets* interesting. 150 years ago, ‘philosophical’ speculation about “what are heat and entropy, really?” also might have seemed pointless, until it turned out the answer to the question actually mattered quite a bit. ;-)
I also think there’s a loophole to #4. When there is no firm scientific consensus to overturn, you can introduce a new explanation and have it gain traction without new experimental tests. I have in mind here some future reformulation of quantum mechanics. www.nature.com/articles/d41...
“The standard explanation for the rejection of continental drift is the lack of a causal mechanism, but this explanation is false. There was a spirited and rigorous international debate over the possible mechanisms… which ultimately settled on the same explanation generally accepted today.”
Nice piece!
One nitpick: I’ve recently learned that the conventional-wisdom story you’re using about the rejection of continental drift isn’t correct. Here’s a quote from Naomi Oreskes (“Plate Tectonics”, 2001):
I just don’t see the “formal problem”, if we’re using subjective probability distributions. There’s no formal problem with having one rule when you move around probability bins, and another rule when you open a bin to see what’s inside. (*Not* having different rules would be a 'formal problem'! :-)
But consider: why focus on “states”, vs. “histories”? We know (instantaneous) states are frame-dependent. Histories don’t require a foliation or chosen hypersurfaces. If you’re only thinking about states, parameterized by time but not physical space, that’s essentially a pre-relativistic framing.
I think we’re all taking it for granted that we’re going to map our ontology onto some mathematics; if it’s a good map then distinguishing between the ‘physical stuff’ and the ‘mathematical states of that stuff’ is pretty much beside the point. One can (and should!) still ask what the stuff is.
Well, if it’s epistemic, having two rules is no longer contradictory. Using the Liouville equation to evolve a probability distribution is very different from collapsing down that distribution upon learning more information, and yet both are absolutely correct things to do in those circumstances.
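To make the two-rules point concrete, here’s a toy classical sketch (my own, not from the thread): a probability distribution over four boxes, updated either by deterministic dynamics (the Liouville-like rule) or by Bayesian conditioning on new information (the collapse-like rule). Both are correct in their respective circumstances, and neither contradicts the other.

```python
import numpy as np

# A subjective probability distribution over four boxes.
p = np.array([0.4, 0.3, 0.2, 0.1])

# Rule 1 ("Liouville-like"): deterministic dynamics just shuffles
# probability among the boxes -- here, a cyclic permutation.
def evolve(p):
    return np.roll(p, 1)

# Rule 2 ("collapse-like"): on learning the system is in the left
# half (boxes 0 or 1), condition via Bayes' rule.
def condition_on_left(p):
    mask = np.array([1.0, 1.0, 0.0, 0.0])
    q = p * mask
    return q / q.sum()

p1 = evolve(p)              # smooth, reversible, information-preserving
p2 = condition_on_left(p)   # discontinuous, driven by new information
```

Both rules keep the distribution normalized; they simply answer different questions.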
Well, the *interesting* parts of the discussion usually come down to the question of what might be physically happening between measurements. Granted, this does tend to get obscured by the anti-realists (unmeasured events don’t exist) and the futilists (it doesn’t matter/we’ll never know).
Yes, good key question! But if the answer were “no” -- say, if the wavefunction only represented epistemic knowledge of some very different underlying state -- what would then be the right way to frame the “measurement problem”? For that, I think you’d really need to pick an ontology.
I think one can’t even frame the problem before taking a stand on the ontology. For instance, I think we completely agree that the quantum system and the measurement device need to be essentially the same “stuff” -- but we likely have a vast disagreement about what that “stuff” actually is.
I need to get myself a new copy of Travis Norsen’s textbook (lost in a fire, sadly). I liked the way he laid out the “Measurement Problems”, and even more importantly, the way he identifies the closely-related “Ontology Problem”.
scholar.google.com/citations?vi...
Sure, you could set this aside as a “different problem”, but MWI has trouble with it, while other approaches don’t (Bohmian mechanics, for instance). Carving problems into categories (X,Y,Z), and claiming that one approach “solves problem X, so it’s better”, obscures more than it illuminates.
That’s not really fair -- usually people use the term “measurement problem” to refer to a set of interrelated problems concerning measurement. One of those problems is how to connect the empirical results of laboratory measurements with the mathematical objects of the theory.
The craft of writing, or the craft of romance? 😉
You mean, interventionist causation in general? That's a big blind spot for many physicists, sadly. We all "know" about it at a gut level, like when we tell students that external forces *cause* accelerations, rather than the other way around. But many physicists probably couldn't articulate why.
If there is a law-like restriction instead of an uncertainty principle, that’s like taking the retrocausal model and reinterpreting the hidden variables as a gauge, where the gauge itself seems to prevent signaling faster than light. But that would defeat the purpose of the paper, wouldn’t it?
The first model (2 posts back) is forward-causal, but with a strange restriction on what I’m allowed to do in the future. The last model is retrocausal, but with an epistemic restriction on the initial state, tuned just right to prevent me from being able to send a known signal faster than light.
If a different model allows those future experiments, it must in turn forbid my complete causal control of the initial state. Maybe, as you say, with some law-like restriction on the initial state itself, or maybe with some uncertainty principle, forbidding the sort of access needed to generate a paradox.
If I had control of a system that could signal faster than light, and Lorentz transformations are correct when boosting frames, then it’s pretty clear what sort of experiment I could set up to make a paradox. But if a model forbids my causal control of those future experiments, no more paradoxes.
Yes, there are lots of different causal models which correspond to the very same equations. (Most obvious example here is the Ideal Gas Law; depending on which variables you’re allowed to control, you get very different causal pathways.) Different intervention freedoms = different models.
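The Ideal Gas Law example can be sketched in a few lines (my own illustration, under the usual PV = nRT with n fixed): the same equation supports different causal models depending on which variables count as “inputs”.

```python
R = 8.314  # gas constant, J/(mol K)

# Model A: I control pressure and temperature; volume responds
# (think: piston free to move).
def volume(P, T, n=1.0):
    return n * R * T / P

# Model B: I control volume and temperature; pressure responds
# (think: rigid container).
def pressure(V, T, n=1.0):
    return n * R * T / V

# Same bare equation, but in Model A intervening on P changes V,
# while in Model B pressure isn't something I can set at all.
V = volume(P=101325.0, T=300.0)
P = pressure(V=0.0248, T=300.0)
```

Nothing in the equation itself picks out which causal reading is right; that comes from the intervention freedoms.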
You can't see this problem if all you look at is 'existence and uniqueness'. Instead, you have to ask questions about the input-output structure of the model -- and that often requires more analysis than just writing down the bare equations.
In this case, with a restriction on (S) due to later events, the issue isn’t *forward* causation; it’s *backward*! If I can set something in the future that would constrain allowed past values of (S), that’s retrocausal. And without restricting access to S, such a model could signal back in time.
In many models (1) and (2) are the same thing, because the inputs are assumed to be the initial state. But in some models, like the one in this paper, this isn’t the case. If there are rules telling you that the initial state (S) can’t be freely set, from outside the model, then (S) isn’t an input.
This brings us to question (1), which is where the true causal analysis lies. But you can’t answer this from the bare equations; the model needs to specify the causal structure. What are the model’s “inputs”? What events are we allowed to “set”, from outside the model, independently?
After all, correlation is not causation. Our causal instincts are “interventionist”. We ask ourselves, if I set event A to this value, instead of that (counterfactual) value, is there an effect at B? We’re not just asking about correlations, we’re asking about input-output relationships.
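Here’s a minimal structural-equation sketch of that interventionist question (a hypothetical toy model of my own, not from the discussion): A causes B, yet the two are symmetrically correlated; only interventions reveal the asymmetry.

```python
import random

# Structural model: A := noise, B := A + noise.  A causes B,
# not vice versa, but observationally they are just "correlated".
def sample(do_A=None, do_B=None, seed=0):
    rng = random.Random(seed)
    A = rng.gauss(0.0, 1.0) if do_A is None else do_A
    B = (A + rng.gauss(0.0, 0.1)) if do_B is None else do_B
    return A, B

# Intervening on A shifts B; intervening on B leaves A untouched --
# an asymmetry invisible to correlations alone.
a1, b1 = sample(do_A=5.0)   # B lands near the value we set for A
a2, b2 = sample(do_B=5.0)   # A is unaffected by setting B
```

The input-output structure (which variables accept a “do”) is extra information beyond the bare equations.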
The only nice thing about (2) is that you can analyze it in terms of the “bare equations”, without making any causal assumptions. Given the equations, the answer to (2) is easy to assess -- it’s just a question of which events are correlated. But does it have anything to do with “causation”?