
Jorge Morales

@jorge-morales.bsky.social

I'm a philosopher, psychologist and neuroscientist studying vision, mental imagery, consciousness and introspection. As S.S. Stevens said "there are numerous pitfalls in this business." https://www.subjectivitylab.org

3,778 Followers  |  3,221 Following  |  650 Posts  |  Joined: 30.07.2023

Latest posts by jorge-morales.bsky.social on Bluesky


🚨New preprint🚨
Across 42 countries, we tested whether national-level cultural tightness (how strongly societies enforce norms) shapes responses to climate-related norm messages. osf.io/preprints/psyarxiv/xzj7r_v1

20.10.2025 20:48 - 👍 8    🔁 1    💬 1    📌 0

This was a challenging week but this is how my day started so I’ll call it even

17.10.2025 23:39 - 👍 15    🔁 0    💬 0    📌 0

Become a Visiting Fellow!

Interested in joining the Center for Philosophy of Science? Applications for 2026-27 now open for Visiting Fellows! Postdoc applications will be available soon!

More info about our programs available on our website at https://www.centerphilsci.pitt.edu/programs/overview/

15.10.2025 16:01 - 👍 4    🔁 5    💬 0    📌 0
MIT Consciousness Club The MIT Consciousness Club aims to foster interdisciplinary research on consciousness at MIT and in the broader Boston area by organizing a monthly event featuring an expert talk on consciousness foll...

The next session of the MIT Consciousness Club is on Thursday the 16th, 12pm-1:30pm. Rachel Denison will present "Attentional Distortions of Subjective Perception". More information here: sites.google.com/view/mit-con....

10.10.2025 17:04 - 👍 36    🔁 6    💬 1    📌 1

New preprint!

"Non-commitment in mental imagery is distinct from perceptual inattention, and supports hierarchical scene construction"

(by Li, Hammond, & me)

link: doi.org/10.31234/osf...

-- the title's a bit of a mouthful, but the nice thing is that it's a pretty decent summary

14.10.2025 13:22 - 👍 66    🔁 22    💬 5    📌 0

... & most of all to my amazing partner who is willing to pick up his life & move across the world with me.

(PS Do you know any aerospace-ish companies in London? My partner will need a new job!! 🙏)

And THANK YOU to UCL EP for welcoming me -- here’s to the next big adventure in London 🥂

13.10.2025 16:29 - 👍 20    🔁 2    💬 1    📌 0

📣 BIG NEWS EVERYONE. I am so excited to announce…

🎉 I’m moving to University College London @ucl.ac.uk to join the Experimental Psychology department in @uclpals.bsky.social! 🎉

The big move happens in spring/summer. So I’m already exploring recruiting staff & students at UCL for fall 2026!

13.10.2025 16:29 - 👍 374    🔁 45    💬 53    📌 2

same vibe

12.10.2025 01:38 - 👍 2    🔁 0    💬 0    📌 0

Another day, another friend who's been traumatized and whose career is being derailed because a dude couldn't keep it in his pants (and the whole system ran to protect him). Ugh, everything sucks.

10.10.2025 00:45 - 👍 8    🔁 1    💬 1    📌 0

#sorrynotsorry

09.10.2025 23:10 - 👍 7    🔁 1    💬 0    📌 0

This is a big one! A 4-year writing project over many timezones, arguing for a reimagining of the influential "core knowledge" thesis.

Led by @daweibai.bsky.social, we argue that much of our innate knowledge of the world is not "conceptual" in nature, but rather wired into perceptual processing. 👇

09.10.2025 16:31 - 👍 122    🔁 48    💬 7    📌 7

It is a lot of fun! Xueyi said she couldn’t chop onions the next morning 😂

09.10.2025 13:31 - 👍 1    🔁 0    💬 0    📌 0

I’m scheduled for surgery today on my Achilles tendon, followed by 2 weeks of no weight bearing. 😵‍💫

So, like any good scientist, I got together 7 colleagues to study the consequences of limb disuse.

Introducing the HEALING study

with @laurelgd.bsky.social @sneuroble.bsky.social @briemreid.bsky.social

25.09.2025 11:47 - 👍 68    🔁 5    💬 5    📌 7

I had to google what that meant; I’m a complete newbie. But yeah, we were all puzzled, and the most experienced among us showed us how to do it (it was a foot jam, actually). It was a pretty good group activity indeed!

07.10.2025 18:35 - 👍 0    🔁 0    💬 1    📌 0

Our lab went climbing (yes, on a Tuesday morning, oops) and it was really fun! 🧗‍♂️ It was the first time for a few of us, and I can totally see why people get into it.

07.10.2025 18:03 - 👍 17    🔁 0    💬 2    📌 0

We’re recruiting a postdoctoral fellow to join our team! 🎉

I’m happy to share that I’ve reopened the search for this position (it was temporarily closed due to funding uncertainty).

See lab page and doc below for details!

07.10.2025 02:39 - 👍 63    🔁 37    💬 2    📌 1

Long time in the making: our preprint of a survey study on the diversity in how people seem to experience #mentalimagery. It suggests #aphantasia should be redefined as the absence of depictive thought, not merely "not seeing". Some more take-home messages:
#psychskysci #neuroscience

doi.org/10.1101/2025...

02.10.2025 18:10 - 👍 112    🔁 35    💬 11    📌 2
Subjective inflation: phenomenology’s get-rich-quick scheme How do we explain the seemingly rich nature of visual phenomenology while accounting for impoverished perception in the periphery? This apparent misma…

Interestingly, it may just be gaps all the way down. Our experiences themselves may be built out of impoverished signals. In other words, the richness of experience is not necessarily an illusion but a reconstruction. E.g.:

03.10.2025 13:16 - 👍 2    🔁 1    💬 1    📌 0

Absolutely! Hard to capture in words and introspective reports, and impossible to fully capture once it’s operationalized in an experiment.

03.10.2025 12:54 - 👍 1    🔁 0    💬 0    📌 0

This one explored differences in experience and eye movements during reading: "Aphantasia modulates immersion experience but not eye fixation patterns during story reading" osf.io/preprints/ps...

03.10.2025 12:08 - 👍 2    🔁 1    💬 1    📌 0

Shamelessly promoting my favorite paper. Everybody who was anybody in the history of science/philosophy/mathematics had a view on the moon illusion. frances-egan.org/uploads/3/5/...

02.10.2025 18:23 - 👍 48    🔁 16    💬 4    📌 2

This is part of what makes it interesting. They are so good at some examples but terrible at others (in almost any task but definitely in ours). This means they aren't doing it in any principled way (otherwise it should be trivial to get most of them right). But how are they getting some right then?

02.10.2025 16:42 - 👍 2    🔁 0    💬 1    📌 0

Thank you, Greg! This is very encouraging to hear from you. This project has been a lot of fun, and mentoring the undergrad who led it has been super rewarding. He's already working on new questions around the same topic, so hopefully there will be more to share in the coming months.

02.10.2025 16:31 - 👍 2    🔁 0    💬 1    📌 0

That's pretty good! Even with graphic design elements!

02.10.2025 16:17 - 👍 1    🔁 0    💬 1    📌 0

A few people have asked if the reason why some LLMs perform this visual imagery task successfully is only because the stimuli / task-type we used were in the models' training data. If this were so, data contamination would make our results uninteresting. See this thread for why this isn't the case.

02.10.2025 12:39 - 👍 4    🔁 1    💬 0    📌 0

We haven't, but that's a cool idea!

02.10.2025 15:08 - 👍 0    🔁 0    💬 0    📌 0

We have no clue what's going on under the hood. One thing we did explore was varying the reasoning effort parameter in the OpenAI reasoning models we tested. We found, perhaps unsurprisingly, that as reasoning token and time allocations decreased, so did the performance.

02.10.2025 15:08 - 👍 1    🔁 0    💬 0    📌 0
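The sweep described above can be sketched in code. This is a hypothetical illustration, not the authors' actual harness: the model name, prompts, and scoring are placeholder assumptions, and `answer_fn` stands in for a real OpenAI API call (e.g. `client.responses.create(**kwargs)`), which does accept a `reasoning={"effort": ...}` parameter on its reasoning models.

```python
# Hypothetical sketch: vary the reasoning-effort parameter of an OpenAI
# reasoning model and measure task accuracy at each level. The model name
# "o3" and the scoring scheme are illustrative assumptions only.
from typing import Callable

EFFORT_LEVELS = ["low", "medium", "high"]


def build_request(prompt: str, effort: str, model: str = "o3") -> dict:
    """Build kwargs for a hypothetical client.responses.create() call."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": model,
        "input": prompt,
        "reasoning": {"effort": effort},  # the parameter being swept
    }


def sweep_accuracy(items: list[tuple[str, str]],
                   answer_fn: Callable[[dict], str]) -> dict[str, float]:
    """Score accuracy on (prompt, expected) pairs at each effort level.

    `answer_fn` is a stand-in for an actual API call, so the sweep logic
    can be tested without network access.
    """
    results: dict[str, float] = {}
    for effort in EFFORT_LEVELS:
        correct = 0
        for prompt, expected in items:
            kwargs = build_request(prompt, effort)
            if answer_fn(kwargs).strip() == expected:
                correct += 1
        results[effort] = correct / len(items)
    return results
```

Plotting `results` against effort level would show the trend the post reports: lower reasoning-token and time allocations, lower accuracy.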

Sadly not at the time.

02.10.2025 12:42 - 👍 0    🔁 0    💬 0    📌 0


Now *that* is cool! I guess I’m not surprised it didn’t work back then. We tried with several small, open models and none of them got a single answer right. In fact, had we done our study six months ago (before o3 and GPT-5 were released), we wouldn’t have found performance above the human baseline.

02.10.2025 10:42 - 👍 2    🔁 0    💬 0    📌 0
