Ryan Smith

@rssmith.bsky.social

Associate Professor at the Laureate Institute for Brain Research. My lab focuses on computational neuroscience and psychiatry, emotion-cognition interactions, prospective planning, exploration, and interoception.

320 Followers  |  216 Following  |  64 Posts  |  Joined: 14.12.2024

Latest posts by rssmith.bsky.social on Bluesky

But I’ve seen other papers also use appraisal = conceptualization/interpretation of interoceptive sensations. And clearly our interpretations of emotions can also be appraised and generate other emotions (eg, being frustrated that you are sad), so there’s lots of circular inference and overlap

13.07.2025 15:28 — 👍 1    🔁 0    💬 1    📌 0

to how that term has been used in causal appraisal theories of emotion, where this means evaluation of one’s situation in the world along various dimensions (goal congruency, value consistency, etc) and that generate affective responses accordingly (ie, before they could be felt and interpreted).

13.07.2025 15:24 — 👍 2    🔁 0    💬 1    📌 0

Ya, that’s more or less exactly what I think. There’s jargon issues though. In papers with Richard we always talked about mapping body state representations to concepts (eg, interpreting heart palpitations as a feeling of panic or symptom of a heart attack). We tried to keep “appraisal” restricted…

13.07.2025 15:24 — 👍 2    🔁 0    💬 1    📌 0

Agreed. The video examples are super interesting. I wish the questions were asked in a more controlled way, but I can’t fault them for some limitations of this rarely possible type of work. It seemed quite somatic and abrupt. No spontaneous descriptions of emotion proper either. Only when given a word.

13.07.2025 15:16 — 👍 2    🔁 0    💬 0    📌 0

But I suppose I should be clear that interoception task performance does still seem affected in multiple disorders. So it is likely still relevant to psychopathology, even if not via direct impact on emotion itself.

13.07.2025 15:09 — 👍 1    🔁 0    💬 2    📌 0

I’ll be interested to see what you have in mind. I’m also skeptical that detection accuracy for things like heartbeats has much to do with emotion. But I think feeling the sensation of a racing heart or other internal sensations and interpreting their meaning is strongly linked to emotion.

13.07.2025 14:32 — 👍 7    🔁 0    💬 1    📌 0

Sure, that all seems reasonable. I think it wouldn’t be stable unless the right regularities are present between actions and observations (esp in development). But barring that, I guess I’m just prone to generalize because I can’t see why some experiences should be privileged over others

13.07.2025 12:53 — 👍 1    🔁 0    💬 0    📌 0

for psychology anything like we would find intuitive. But if you have any arguments you find convincing re brain stim induced experience, phantom limb etc that would still make an actual biological body necessary I’m all ears. The material just feels arbitrary, other than actual chem properties

13.07.2025 05:34 — 👍 2    🔁 0    💬 1    📌 0

Haha. Ya, for whatever reason I have the other bias. Like things like phantom limb, hallucinations, the ability to induce experiences with direct stimulation of the brain, etc etc, just convince me that the actual cause of a signal isn’t required. But I think the *as if* part is probably crucial…

13.07.2025 05:34 — 👍 0    🔁 0    💬 1    📌 0

then we’re back to something about having the right computational architecture to control a body like ours in the way we do.

12.07.2025 18:14 — 👍 1    🔁 0    💬 1    📌 0

Well I think one argument could start from a standard brain-in-a-vat (or matrix-style) premise. We know empirically that stimulating the brain or nerve inputs directly is sufficient to induce experience. So it follows that the brain only needs input signals *as if* it has a body. And if that’s true

12.07.2025 18:14 — 👍 0    🔁 0    💬 1    📌 0

computational (relation to inference, predictive control, etc.) leads me right back to some form of functionalism.

12.07.2025 14:38 — 👍 0    🔁 0    💬 1    📌 0

After all, there’s lots of carbon-based structures we definitely don’t think have mental properties. So then it needs to be about structure and dynamics, and the relevant structure and dynamics could (in principle) be realized by non-carbon systems. That + the clear relation between mental and…

12.07.2025 14:38 — 👍 0    🔁 0    💬 1    📌 0

So then I think we’re right back to having the right computational architecture needed for independently controlling and maintaining a body and that assigns high value to doing so. Otherwise it seems like the argument is for some kind of β€œcarbon essentialism”, which feels unmotivated.

12.07.2025 14:32 — 👍 0    🔁 0    💬 1    📌 0

It will have to maintain optimal energy levels, temperature levels, etc. to keep itself functioning just like any evolved system. This would benefit from having a generative model of those processes that predict future changes in those levels, supporting internal planning, and so forth.

12.07.2025 14:29 — 👍 0    🔁 0    💬 1    📌 0
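The self-maintaining-robot idea in the post above can be made concrete with a toy sketch. This is purely illustrative (all names, numbers, and set-points are my own assumptions, not anything from the thread): an agent holds a simple generative model of how its internal variables drift, predicts their future values under each candidate action, and picks the action that keeps them closest to homeostatic set-points.

```python
# Toy sketch (hypothetical names/values): a self-maintaining agent that
# predicts future drift in its internal variables and acts to keep them
# near set-points, i.e. a minimal homeostatic generative model.

SETPOINTS = {"energy": 1.0, "temperature": 37.0}
DRIFT = {"energy": -0.1, "temperature": -0.5}  # assumed passive decay per step
ACTIONS = {
    "rest":    {"energy": 0.0,   "temperature": 0.0},
    "feed":    {"energy": 0.3,   "temperature": 0.0},
    "warm_up": {"energy": -0.05, "temperature": 1.0},
}

def predict(state, action, horizon=3):
    """Roll the generative model forward: passive drift plus action effects."""
    s = dict(state)
    for _ in range(horizon):
        for k in s:
            s[k] += DRIFT[k] + ACTIONS[action][k]
    return s

def surprise(state):
    """Squared deviation from set-points (lower = more homeostatic)."""
    return sum((state[k] - SETPOINTS[k]) ** 2 for k in SETPOINTS)

def choose_action(state):
    """Pick the action whose predicted future state minimizes surprise."""
    return min(ACTIONS, key=lambda a: surprise(predict(state, a)))

print(choose_action({"energy": 0.6, "temperature": 36.0}))  # prints "warm_up"
```

The point of the sketch is only that once a system must keep variables like these within bounds, a predictive model of their future trajectories falls out naturally, as the post argues.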

Sorry, I missed a couple of the posts above before sending those last 2 messages. I’m with you on much of that. But I think what β€œalive” means becomes the main thing. I think the second you build a robot that’s self-maintaining, you have clear starting points for homeostasis, for example.

12.07.2025 14:25 — 👍 1    🔁 0    💬 2    📌 0

if those are the basic options on the table, I think there’s clear reasons (at least convincing to me) that some type of functionalism is most plausible to bet on.

12.07.2025 14:09 — 👍 0    🔁 0    💬 1    📌 0

Those are both naturalistic positions, which I’m scientifically committed to. But then there’s panpsychist views (all matter has some kind of mental aspect, even single particles) and dualist positions (mind is not implemented by physical stuff). I’m sure there’s others, and subtypes of each. But…

12.07.2025 14:09 — 👍 0    🔁 0    💬 1    📌 0

Sure. But we should also be clear about the options. There’s functionalism (mental phenomena are specific types of computations), which I’m advocating. There’s biological identity positions (mind requires implementation with lipids, proteins, etc., above and beyond the computations they implement).

12.07.2025 14:09 — 👍 0    🔁 0    💬 1    📌 0

These are just examples of clues to follow. All I’m saying is that we know some control architecture exists that has the right properties. It’s just a current puzzle to figure out what the necessary and sufficient conditions are for it.

12.07.2025 03:08 — 👍 0    🔁 0    💬 1    📌 0

scenarios and use that in model-based planning. Embodiment would ground multiple dynamically evolving needs to continuously track and prioritize to maintain long-run homeostasis. We know it’s a limited capacity system with serial processes, somehow attached to a massively parallel system, etc.

12.07.2025 03:08 — 👍 0    🔁 0    💬 1    📌 0

For example, it seems reasonable to expect the system will encode a generative model of its environment, including its body, reflecting multiple temporal scales that allow for retrospection and prospective control. It would likely require the capacity for internal simulation of counterfactual…

12.07.2025 03:08 — 👍 0    🔁 0    💬 1    📌 0
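The "internal simulation of counterfactual scenarios" used in model-based planning, mentioned in the two posts above, can also be sketched minimally. Everything here (the 1-D state, the goal, the action names) is a made-up assumption for illustration: the agent imagines short action sequences with a world model, without acting, and keeps the plan whose imagined outcome it values most.

```python
# Hypothetical sketch of counterfactual simulation for model-based planning:
# roll out imagined action sequences through a world model and pick the best.
from itertools import product

def world_model(state, action):
    """Toy deterministic transition model on a 1-D state."""
    return state + {"left": -1, "stay": 0, "right": +1}[action]

def value(state, goal=3):
    """Preference: closer to the (assumed) goal state is better."""
    return -abs(state - goal)

def plan(state, depth=3):
    """Exhaustively simulate counterfactual rollouts; return the best plan."""
    best_plan, best_val = None, float("-inf")
    for seq in product(["left", "stay", "right"], repeat=depth):
        s = state
        for a in seq:  # imagined rollout only; no real action is taken
            s = world_model(s, a)
        if value(s) > best_val:
            best_plan, best_val = list(seq), value(s)
    return best_plan

print(plan(0))  # prints ['right', 'right', 'right']
```

Exhaustive rollout is of course only tractable for toy depths; the serial, limited-capacity flavor of such simulation is exactly what the post contrasts with the massively parallel system it sits on top of.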

If we’re talking about extant artificial systems I agree. At the same time, the brain has a physical control architecture, which we know does feel. We just need to figure out what that architecture is. I think there are plenty of clues to work from, with much more than simple value signals.

12.07.2025 02:46 — 👍 1    🔁 0    💬 1    📌 0
The neural basis of one's own conscious and unconscious emotional states The study of emotional states has recently received considerable attention within the cognitive and neural sciences. However, limited work has been do…

Or at least that’s the kind of argument we’ve tried to make in a series of papers a while ago. This is one example, in case you’re interested: www.sciencedirect.com/science/arti...

11.07.2025 20:47 — 👍 1    🔁 0    💬 1    📌 0

I think it depends what you mean by β€˜robot’. If you define it as having the β€˜wrong’ kind of control system then the argument they don’t feel goes through by assumption. For me, the question comes down to what the right control architecture is.

11.07.2025 20:44 — 👍 1    🔁 0    💬 1    📌 0

Ya, I think it’s consistent. Strong positive valence is also prioritizing allocation of cognitive resources on that person. Where that prioritization comes from could be innate or learned. Positive valence seems more about approaching/maintaining states, while negative about avoiding/removing.

11.07.2025 20:39 — 👍 1    🔁 0    💬 1    📌 0

I’m sympathetic to the idea of it guiding simulation in a model-based setting under uncertainty. But I’m skeptical of valence per se as a model-free learning signal, in part bc punishment-based conditioning appears to be possible without awareness.

11.07.2025 14:27 — 👍 0    🔁 0    💬 1    📌 0

I see what you mean for sure. My first thought would be a prioritization function of some kind. β€œEven though I highly value x in general, right now motivation needs to be directed toward y”. So degree of valence reflects degree of current priority to deal with y instead of x.

11.07.2025 14:07 — 👍 3    🔁 0    💬 1    📌 0
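The "prioritization function" reading of valence in the post above admits a very small sketch. The function form and all numbers are my own assumptions, not the author's: stable long-run values get temporarily outweighed when strong valence attaches to a current concern, so motivation is directed toward y even though x is valued more in general.

```python
# Toy illustration (hypothetical names/weights) of valence as a
# prioritization function over competing goals.

def priority(base_value, valence_magnitude, urgency_weight=2.0):
    """Current priority = stable value plus valence-scaled urgency."""
    return base_value + urgency_weight * valence_magnitude

goals = {
    "x_long_term_project": {"base_value": 0.9, "valence_magnitude": 0.1},
    "y_current_threat":    {"base_value": 0.3, "valence_magnitude": 0.8},
}

def allocate_attention(goals):
    """Direct resources toward whichever goal has the highest current priority."""
    return max(goals, key=lambda g: priority(**goals[g]))

print(allocate_attention(goals))  # prints "y_current_threat"
```

Here the degree of valence maps directly onto the degree of current priority, which is all the post's "deal with y instead of x" example requires.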

other aspects of emotion could be experienced through this same sort of general mechanism. In other words, they don’t just motivate reflexive approach/avoidance. They guide internal simulation and planning about what would happen if I did this or that and assign value to different imagined outcomes

11.07.2025 12:42 — 👍 1    🔁 0    💬 1    📌 0

I tend to have sympathies toward neural models of explicit vs implicit perception in which a stimulus leads to an explicit (reportable) percept when its representation is made accessible to deep temporal offline planning where we can simulate the world as different than it currently is. Valence and

11.07.2025 12:42 — 👍 1    🔁 0    💬 1    📌 0
