I asked Dan about this and now have a better sense of how he's thinking, but it's kind of complicated -- not unrelated to @benholguin.bsky.social's idea of Knowledge by Constraint though
18.10.2025 00:02
Also, to the extent that the henchmen can't take for granted that they both remember things in exactly the same way, you can run a parallel argument with memory knowledge in place of perceptual knowledge.
17.10.2025 23:58
Good Q! @harveylederman.bsky.social's paper argues perception is insufficient for CK that the other person even exists. That's relying on perception again, but you might think that, if perception can't yield CK, then the henchmen can't have CK of each other's existence/plans to begin with.
17.10.2025 23:58
Thanks! I still need to read Cohen's paper.
(Dan Greco replies to Lederman in his book w/ a model that "doesn't include any possibilities where Alice and Bob are in different coarse-grained states of confidence" (p. 163). I don't see why that's legit, though, since it's a genuine possibility.)
These models demonstrate the surprisingly weak assumptions needed for Lederman's argument! (They are much weaker than the assumptions of Williamson's more famous "anti-luminosity" argument against the possibility of infinitely iterated *intrapersonal* knowledge.)
17.10.2025 18:09
For example, even though Alice *knows* that the temperature is at most y+1, for all she knows, for all Bob knows, for all she knows, for all he knows, it's y+4. And so on.
Formally, <x,y,z> R_A <y+1,y,y+1> R_B <y+2,y+2,y+1> R_A <y+3,y+2,y+3> R_B <y+4,y+4,y+3> ...
Here's the rub: from any possibility <x,y,z>, we can reach a possibility <x',y',z'> in which the temp x' is arbitrarily far from x using a finite number of steps of R_A and R_B. (The trick is to zig-zag between R_A and R_B.)
This means Alice and Bob have no non-trivial common knowledge of the temp!
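To make the zig-zag concrete, here's a minimal Python sketch (mine, not from Lederman's paper), using integer temperatures and readings for simplicity. Each step holds one agent's reading fixed while pushing the temperature up a degree:

```python
# Worlds are triples (temp, alice_reading, bob_reading).
# An R_A step keeps Alice's reading fixed; an R_B step keeps Bob's reading fixed.

def zigzag(w, n):
    """The chain w R_A w1 R_B w2 R_A w3 ...: each step raises the temp by one degree."""
    chain = [w]
    for i in range(n):
        x, y, z = chain[-1]
        if i % 2 == 0:
            nxt = (y + 1, y, y + 1)   # R_A step: Alice's reading y is held fixed
        else:
            nxt = (z + 1, z + 1, z)   # R_B step: Bob's reading z is held fixed
        assert abs(nxt[0] - nxt[1]) <= 1 and abs(nxt[0] - nxt[2]) <= 1  # stays in W
        chain.append(nxt)
    return chain

print(zigzag((0, 0, 0), 4))
# [(0, 0, 0), (1, 0, 1), (2, 2, 1), (3, 2, 3), (4, 4, 3)] -- the temp has drifted to 4
```

Stepping to y-1 and z-1 instead reaches arbitrarily low temperatures in the same way.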
So Alice and Bob know that both apps are at most one degree off, and that their phones might be one degree off in either direction. Each also knows what their own phone reads, which is encoded in the relations R_A and R_B:
<x,y,z> R_A <x',y',z'> iff y'=y
and
<x,y,z> R_B <x',y',z'> iff z'=z
Now to Lederman's argument. Let W = {<x,y,z>: |x-y|≤1 and |x-z|≤1}. Here <x,y,z> is the possibility in which the temperature is x, Alice's app reads y, and Bob's app reads z. We're ignoring possibilities where their phones are broken, and so on, as that would just make common knowledge even harder.
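For readers who like code, the whole setup fits in a few lines of Python. This is just a sketch of the definitions above; the helper names are mine:

```python
# A world is a triple (x, y, z): the temperature, Alice's app reading, Bob's app reading.

def in_W(w):
    """Genuine possibility: both apps read within one degree of the true temperature."""
    x, y, z = w
    return abs(x - y) <= 1 and abs(x - z) <= 1

def R_A(w, v):
    """At w, for all Alice knows she's in v: v is in W and shows her the same reading."""
    return in_W(w) and in_W(v) and v[1] == w[1]

def R_B(w, v):
    """At w, for all Bob knows he's in v: v is in W and shows him the same reading."""
    return in_W(w) and in_W(v) and v[2] == w[2]

print(R_A((20, 20, 21), (21, 20, 21)))   # True: Alice reads 20 in both, and both are in W
```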
17.10.2025 18:09
These models also allow us to characterize agents' knowledge about each other's knowledge, and hence to model common knowledge: w is a situation in which what Alice and Bob *commonly know* is that they're in some situation v that can be reached from w by some sequence of steps of R_A and R_B.
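So computing what's commonly known at a world is just a reachability search over R_A and R_B. Here's a sketch (my names, and a finite integer truncation of W so the search terminates); in the real, unbounded model the reachable set sweeps out every temperature, which is exactly the zig-zag point above:

```python
from itertools import product

# Finite truncation for illustration only: integer triples with coordinates in 0..N.
N = 10
W = [w for w in product(range(N + 1), repeat=3)
     if abs(w[0] - w[1]) <= 1 and abs(w[0] - w[2]) <= 1]

def one_step(w):
    """Worlds reachable from w by a single R_A or R_B step."""
    return {v for v in W if v[1] == w[1] or v[2] == w[2]}

def reachable(w):
    """Worlds reachable from w by finite chains of R_A/R_B steps: the strongest
    thing Alice and Bob commonly know at w is that they're in one of these."""
    seen, frontier = {w}, [w]
    while frontier:
        frontier = list({v for u in frontier for v in one_step(u)} - seen)
        seen.update(frontier)
    return seen

temps = {v[0] for v in reachable((5, 5, 5))}
print(sorted(temps))   # every temperature 0..N shows up, so no non-trivial CK about the temp
```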
17.10.2025 18:09
"w R_A v" means that, in w: for all Alice knows, she's in v. Likewise for Bob and R_B.
In this way we can model what different agents know in different situations: w is a situation in which Alice knows only that she's in some situation v such that w R_A v (and likewise for what Bob knows and R_B).
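In code (again just a sketch, with my own names and a small finite batch of integer worlds so the quantifiers can be checked by brute force): an agent knows a proposition at w iff it holds at every world they can't rule out.

```python
from itertools import product

# Small finite batch of integer worlds, for brute-force evaluation only.
W = [w for w in product(range(7), repeat=3)
     if abs(w[0] - w[1]) <= 1 and abs(w[0] - w[2]) <= 1]

def knows_A(p, w):
    """Alice knows p at w iff p holds at every v with w R_A v (same reading for her)."""
    return all(p(v) for v in W if v[1] == w[1])

def knows_B(p, w):
    """Bob knows p at w iff p holds at every v with w R_B v (same reading for him)."""
    return all(p(v) for v in W if v[2] == w[2])

w = (3, 3, 4)                                   # temp 3, Alice reads 3, Bob reads 4
print(knows_A(lambda v: v[0] <= 4, w))          # True: Alice knows the temp is at most 3+1
print(knows_A(lambda v: knows_B(lambda u: u[0] <= 4, v), w))
# False: for all Alice knows, for all Bob knows, the temp is above 4
```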
Here's a mathematical model that makes Lederman's argument formally precise, using tools from epistemic logic.
These models have three ingredients: the set W of possibilities, and two binary relations R_A and R_B on this set of possibilities, corresponding to Alice and Bob's respective knowledge.
In other words, even though Alice knows that the temperature is at most y+1, for all she knows, for all Bob knows, for all she knows, for all he knows, it's y+4. Etc.
17.10.2025 17:29
But they don't have any non-trivial common knowledge about the temperature. Why? Because we can zig-zag with R_A and R_B to access possibilities with arbitrarily extreme temperatures. Formally, for any world <x,y,z>, we have:
<x,y,z>R_A<y+1,y,y+1>R_B<y+2,y+2,y+1>R_A<y+3,y+2,y+3>R_B<y+4,y+4,y+3>...
Both R_A and R_B are equivalence relations, so individual knowledge obeys S5 (i.e., Alice knows exactly what she does and doesn't know, and Bob knows exactly what he does and doesn't know). They each know what their phone reads and that both apps are at most one degree off.
17.10.2025 17:29
Here's a model.
Let the space of possibilities be the set of triples <x,y,z> (representing temp, Alice's app, Bob's app) with |x-y| ≤ 1 and |x-z| ≤ 1.
We model each agent's knowledge using an accessibility relation:
<x,y,z>R_A<x',y',z'> iff y'=y
and
<x,y,z>R_B<x',y',z'> iff z'=z
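The S5 claim upthread rests on R_A and R_B being equivalence relations. That's immediate from the definitions (each just compares one coordinate), but here's a quick brute-force sanity check on a finite integer truncation, with my own helper names:

```python
from itertools import product

# Finite integer truncation of the space of possibilities, for checking only.
W = [w for w in product(range(6), repeat=3)
     if abs(w[0] - w[1]) <= 1 and abs(w[0] - w[2]) <= 1]

def R_A(w, v): return v[1] == w[1]   # same reading on Alice's app
def R_B(w, v): return v[2] == w[2]   # same reading on Bob's app

for R in (R_A, R_B):
    assert all(R(w, w) for w in W)                                    # reflexive
    assert all(R(v, w) for w in W for v in W if R(w, v))              # symmetric
    assert all(R(w, u) for w in W for v in W if R(w, v)
                       for u in W if R(v, u))                         # transitive
print("R_A and R_B are equivalence relations on (this slice of) W")
```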
[oops -- it actually doesn't make a difference, but I thought you were referring to Stewart Cohen! same issue though]
17.10.2025 17:06
No; at least, not in the Williamsonian sense which is relevant to KK/that Cohen is replying to. Our argument is perfectly compatible with "cliff-edge" knowledge: i.e., the thermometer reading n degrees, it actually being n+1 degrees, and you knowing that it's at most n+1 degrees.
17.10.2025 17:04
Hi Matt, there's no margin-for-error assumption in Lederman's argument. It's compatible with KK (and indeed with S5) for individual knowledge. The interpersonal case is very different from the intrapersonal case.
17.10.2025 16:58
Anthropic recently announced that Claude, its AI chatbot, can end conversations with users to protect "AI welfare." Simon Goldstein and @harveylederman.bsky.social argue that this policy commits a moral error by potentially giving AI the capacity to kill itself.
17.10.2025 15:43
You can read our full review (without a paywall) @ philpapers.org/archive/GOOK.... And you can also check out Harvey's paper that inspired us here: philpapers.org/archive/LEDU...
17.10.2025 02:43
Instead, our brief review draws on recent work by @harveylederman.bsky.social, which argues that people aren't ever in a position to know as much as common knowledge demands. If that's right, then common knowledge can't do the work that Pinker wants it to in explaining social coordination.
17.10.2025 02:43
Our worry isn't that infinite layers of knowledge can't fit in finite brains. (We agree with Pinker that that concern rests on a contentious picture of how the mind works, one which we are happy to reject.)
17.10.2025 02:43
Common knowledge, in the relevant technical sense, is infinitely iterated interpersonal knowledge -- hence the crucial ellipsis in the book's title. Do we ever manage such a feat?
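(Spelling out that ellipsis in the standard notation: writing "E p" for "everyone knows p", common knowledge of p requires E p, and E(E p), and E(E(E p)), and so on without end.)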
17.10.2025 02:43
Among the book's many virtues, we most appreciated how Pinker centers experimental psychology in what is often a highly abstract and theoretical literature. In that interdisciplinary spirit, our review draws on work in epistemology, on whether common knowledge is really possible.
17.10.2025 02:43
Now out in @science.org: @chazfirestone.bsky.social and I review Steven Pinker's new book "When Everyone Knows that Everyone Knows...". We learned a ton from it, but think its central thesis -- that common knowledge explains coordination -- faces a powerful challenge.
www.science.org/doi/10.1126/...
@harveylederman.bsky.social our old friend, the meta cube!!
05.09.2025 16:33
"Playing is Weak"
29.07.2025 19:40