
Matthew Larkum

@mattlark.bsky.social

Neuroscientist at the Humboldt University of Berlin, violinist and chamber music enthusiast

65 Followers  |  24 Following  |  15 Posts  |  Joined: 26.05.2025

Latest posts by mattlark.bsky.social on Bluesky

But now there are two kinds of “nothing”. With green light, the “feedback replay” doesn't need to do anything. If we simply turn the replay device off, it “can’t” do anything. According to theories that depend on causality (e.g. IIT), the two kinds of nothing are fundamentally different.

26.05.2025 10:13 — 👍 3    🔁 0    💬 1    📌 0
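The distinction above can be sketched in a few lines. This is my framing, not the paper's formalism, and the trace names are invented: on green input the clamped and unclamped systems produce the same actual run, yet they differ counterfactually, and only causal theories notice the difference.

```python
# A sketch of the two kinds of "nothing" (my framing; trace names invented):
# on green input the clamped and unclamped systems yield the SAME actual
# run, but they differ counterfactually -- only the clamp WOULD override
# a deviation.

def trace_of(input_color, clamp_on):
    recorded = "green_trace"                 # what the replay device holds
    live = "green_trace" if input_color == "green" else "red_trace"
    return recorded if clamp_on else live    # clamp overrides any deviation

# Actually identical: turning the clamp off changes nothing on green...
assert trace_of("green", clamp_on=True) == trace_of("green", clamp_on=False)
# ...but the counterfactuals differ, which causality-based theories track:
assert trace_of("red", clamp_on=True) != trace_of("red", clamp_on=False)
```

The two "nothings" are indistinguishable in the actual run; they come apart only under counterfactual inputs.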

A computational functionalist must decide:
Does consciousness require dynamic flexibility and counterfactuals?
Or is a perfect replay, mechanical and unresponsive, still enough?

26.05.2025 10:13 — 👍 1    🔁 0    💬 1    📌 0

So we ask: is consciousness just the path the system did take, or does it require the paths it could have taken?

26.05.2025 10:13 — 👍 2    🔁 0    💬 1    📌 0

In Turing terms: for the same input, the same state transitions occur. But if you change the input (e.g. shine red light), things break. Some states become unreachable. The program is intact but functionally inert. It can’t see colours anymore. Except arguably green - or can it?

26.05.2025 10:13 — 👍 0    🔁 0    💬 1    📌 0
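A toy version of the incongruent case (all state names are my invention): on red light, the live run would enter the "red states", but the clamp forces the green recording, so those states are never entered.

```python
# My illustration of the incongruent case (state names invented): under
# the clamp, the green recording wins, so the "red states" of the intact
# program become unreachable.

GREEN_RUN = ["start", "saw_green", "report", "halt"]  # the recorded states
RED_RUN   = ["start", "saw_red", "report", "halt"]    # what red input WOULD visit

clamped_run = GREEN_RUN                    # the clamp overrides every divergence
assert "saw_red" not in clamped_run        # red states: intact, but unreachable

# Steps the replay must now actively correct:
diverging = sum(a != b for a, b in zip(RED_RUN, GREEN_RUN))
print(diverging)
```

The program still contains the red transitions; the clamp just guarantees they are never exercised.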

For congruent input (here, the original green light), no corrections are needed. The replay “does nothing”. Everything flows causally just as before. Same input drives the same neurons to have the same activity for the same reasons. If the original system was conscious, should the re-run be, too?

26.05.2025 10:13 — 👍 0    🔁 0    💬 1    📌 0

Back to the new thought experiment extension, where we add a twist: “feedback replay”. As with patch clamping a cell, the system now monitors the activity of the neurons, intervening only when needed.

26.05.2025 10:13 — 👍 0    🔁 0    💬 1    📌 0
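A minimal sketch of the monitor-and-intervene idea (the step encodings are invented): like a patch clamp, the device compares each live step against the recording and counts how often it must intervene.

```python
# Minimal sketch of "feedback replay" (step encodings invented): the
# device watches the live run step by step and overrides a step only
# when it diverges from the recording.

def feedback_replay(live_steps, recorded_steps):
    """Return the number of steps the replay device had to override."""
    corrections = 0
    for live, recorded in zip(live_steps, recorded_steps):
        if live != recorded:
            corrections += 1   # clamp: impose the recorded step
    return corrections

GREEN = ["write G, move +1", "write E, move +1", "halt"]  # the recording
# Congruent input: the live run matches, so the replay "does nothing".
print(feedback_replay(GREEN, GREEN))
```

On congruent input the count is zero, which is exactly the "replay does nothing" case in the posts above.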

Could the head be feeling something? Is it still computation?

26.05.2025 10:13 — 👍 0    🔁 0    💬 1    📌 0

In the original thought experiment, we imagined “forward replay”. Here, the transition function (the program) is ignored, which amounts to a “dancing head”. This feels like a degenerate computation (cf. the Unfolding argument, doi.org/10.1016/j.co...).

26.05.2025 10:13 — 👍 0    🔁 0    💬 1    📌 0
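The "dancing head" can be made concrete (my illustration; the recorded trace is a hypothetical green run): the recording alone drives the writes and moves, and the transition function is never consulted.

```python
# Sketch of "forward replay" (my illustration): the recorded trace alone
# drives the head. Per step we have (state, transition, written_symbol,
# head_move); the program itself is never consulted -- the head "dances".

TRACE = [  # hypothetical recording of a "green" run
    ("start",     ("start", "g", "saw_green"),  "G", +1),
    ("saw_green", ("saw_green", "_", "report"), "E", +1),
    ("report",    ("report", "_", "halt"),      "_",  0),
]

def forward_replay(trace):
    """Re-enact writes and moves from the recording; no transition rules."""
    tape, head = {}, 0
    for _state, _transition, write, move in trace:
        tape[head] = write    # write what was recorded
        head += move          # move as recorded
    return tape

print(forward_replay(TRACE))
```

The tape ends up identical to the original run's, even though nothing was computed, which is what makes the replay feel degenerate.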
A standard Turing Machine cartoon showing the "green states" that the algorithm uses to compute green, and "red states" that are only necessary for seeing red. Additionally, a recording device recording 4 values, the current state, the state transition, what the head writes, and how the head moves (s, t, w, m), for each step.


To analyze this, we model it with a Universal Turing Machine. Input: “green light”. The machine follows its transition rules and outputs “experience of green”. At each step we record four values: the current state, the state transition, what the head writes, and how the head moves (s, t, w, m).

26.05.2025 10:13 — 👍 1    🔁 0    💬 1    📌 0
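The recording step can be sketched directly. The machine below is a hypothetical three-state "green" machine of my own construction, not the one in the paper; it just shows what logging (s, t, w, m) per step looks like.

```python
# A minimal sketch (hypothetical machine, not from the paper): run a
# deterministic Turing machine on "green" input and record, at each step,
# the tuple (s, t, w, m): state, transition taken, symbol written, head move.

# Transition table: (state, read_symbol) -> (next_state, write_symbol, move)
DELTA = {
    ("start", "g"):     ("saw_green", "G", +1),
    ("saw_green", "_"): ("report",    "E", +1),  # E = "experience of green"
    ("report", "_"):    ("halt",      "_",  0),
}

def run(tape_symbols, state="start", head=0):
    tape = dict(enumerate(tape_symbols))   # sparse tape, "_" is blank
    trace = []                             # the recorded (s, t, w, m) tuples
    while state != "halt":
        read = tape.get(head, "_")
        nxt, write, move = DELTA[(state, read)]
        trace.append((state, (state, read, nxt), write, move))
        tape[head] = write
        state, head = nxt, head + move
    return trace

trace = run("g")
for s, t, w, m in trace:
    print(s, t, w, m)
```

The resulting trace is everything the later replay devices need; the transition table itself never has to be consulted again.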


Then we replay it back into the same neurons. The system behaves identically. No intervention needed. So: is the replayed system still conscious? If everything unfolds the same way, does the conscious experience remain?

26.05.2025 10:13 — 👍 0    🔁 0    💬 1    📌 0

We record the entire sequence of what happens when “seeing green”. Then we replay it back into the same simulated neurons. If the computational functionalist is right, this drives the “right” brain activity for a first-person experience.

26.05.2025 10:13 — 👍 0    🔁 0    💬 1    📌 0
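In neural terms, the record-and-replay step looks like this toy version (neuron names and spike times are invented for illustration): log every spike of a "green" run, then inject the same spikes back and compare the activity.

```python
# Toy record-and-replay (all neuron names and spike times invented):
# record every spike of a "green" run, then drive the same simulated
# neurons with the recording and compare the resulting activity.

GREEN_SPIKES = [(0.1, "V1_a"), (0.3, "V4_g"), (0.7, "PFC_x")]  # (time, neuron)

def replay(spikes):
    """Inject recorded spikes; return per-neuron spike-time lists."""
    activity = {}
    for t, neuron in spikes:
        activity.setdefault(neuron, []).append(t)
    return activity

original = replay(GREEN_SPIKES)   # the "live" run, schematically
rerun    = replay(GREEN_SPIKES)   # the replay into the same neurons
assert original == rerun          # identical activity, step for step
```

The identity of the two activity dictionaries is the premise the thread's question rests on: the runs are indistinguishable from the inside of the recording.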

Now, imagine a person looking at a green light. If the computational functionalist is right, the correct brain simulation algorithm doesn't just process green, it experiences green. Here, we start by assuming some deterministic algorithm can simulate all crucial brain activity.

26.05.2025 10:13 — 👍 0    🔁 0    💬 1    📌 0
Does brain activity cause consciousness? A thought experiment
The authors of this Essay examine whether action potentials cause consciousness in a three-step thought experiment that assumes technology is advanced enough to fully manipulate our brains.

This extends a thought experiment from our earlier paper: doi.org/10.1371/jour...
We (Albert Gidon and @jaanaru.bsky.social) asked: does brain activity cause consciousness, or is something essential lost when the brain's dynamics are bypassed?

26.05.2025 10:13 — 👍 1    🔁 0    💬 1    📌 0
Frontiers | Does neural computation feel like something?
Artificial neural networks are becoming more advanced and human-like in detail and behavior. The notion that machines mimicking human brain computations migh...

Does neural computation feel like something? In our new paper, we explore a paradox: if you replay all the neural activity of a brain—every spike, every synapse—does it recreate conscious experience?
🧠 doi.org/10.3389/fnin...

26.05.2025 10:13 — 👍 15    🔁 7    💬 4    📌 1
