
Aidan Wright

@aidangcw.bsky.social

Psychopathology | Personality | Quant Methods
Professor of Psychology and Psychiatry | Eisenberg Family Depression Center | University of Michigan
Editor | Journal of Psychopathology and Clinical Science
Founder | www.smart-workshops.com

6,699 Followers  |  1,204 Following  |  1,729 Posts  |  Joined: 09.07.2023

Latest posts by aidangcw.bsky.social on Bluesky

Ah, haven't mastered the art of a sunny side up yet, I see

04.08.2025 19:19 | 👍 1  🔁 0  💬 0  📌 0

Has she tried the madame version?

04.08.2025 13:13 | 👍 1  🔁 0  💬 1  📌 0

Tempting, isn't it?

04.08.2025 13:08 | 👍 5  🔁 0  💬 0  📌 0
Post image

The timing on this post from Twitter…

03.08.2025 01:08 | 👍 7  🔁 0  💬 0  📌 0
Post image

This is peak lakehouse

03.08.2025 00:11 | 👍 7  🔁 0  💬 0  📌 0
Preview
ALT: a close-up of a man's face and the words "you are not alone".
30.07.2025 21:47 | 👍 1  🔁 0  💬 0  📌 0
Preview
Mechanistic Science | Infection and Immunity: "Science is the knowledge of consequences, and dependence of one fact upon another." – Thomas Hobbes

1:1 correspondence seems deeply unlikely. I really love this brief editorial that talks about it in terms of description and mechanism.

To use their example of a candle and light, there's no obvious link between the chemical combustion and the concept of illumination.

journals.asm.org/doi/full/10....

30.07.2025 21:46 | 👍 5  🔁 0  💬 1  📌 0

If I understand you correctly, I totally agree. This absolutely should be hard. Which is why I think it's so absurd when people act like it isn't.

30.07.2025 21:42 | 👍 2  🔁 0  💬 2  📌 0

Interesting way to say it. I guess that's probably how they think about it, as assumed. I think the best version of RDoC would be designed to address the multilevel/multimodal coherence issue instead of being a neural reductionism model.

30.07.2025 20:45 | 👍 2  🔁 0  💬 1  📌 0

These are good examples that illustrate the issue.

Though another way to frame it is who was audacious (hubristic?) enough to think they could capture a psychological construct with peripheral psychophys?

I still see this in the logic of some research studies.

30.07.2025 19:34 | 👍 8  🔁 1  💬 3  📌 0

For sure. I can't think of a more important issue.

30.07.2025 17:15 | 👍 5  🔁 1  💬 1  📌 0

This is the question of our times. Though I think answers are likely to be at least somewhat concept-specific.

30.07.2025 16:59 | 👍 13  🔁 0  💬 1  📌 0

I was thinking the other day about how much people's ignorance of survivor bias influences their perspectives (including politics), much to everyone's detriment.

30.07.2025 16:57 | 👍 3  🔁 0  💬 0  📌 0

IIRC the average was ~100 words/minute, which is what you expect when reading out loud, but there was real variability here.

Would be happy to chat more if you're interested in pursuing this collaboratively, but if not, we include our prompts in the manuscript, etc.

27.07.2025 16:50 | 👍 1  🔁 0  💬 1  📌 0
Post image Post image Post image

I don't have a great sense for the exact minimum. No doubt it will depend on the eliciting prompt, construct, and model. But we do see that absolute error is modestly associated with length, and at lower numbers of diaries/minutes (2 samples) we see things stabilize at some point.

27.07.2025 16:50 | 👍 1  🔁 0  💬 1  📌 0

You could also create multi-informant latent variables, with either self and LLM or multiple LLMs. Trait scores are highly correlated across models but not perfectly so.

27.07.2025 15:46 | 👍 1  🔁 0  💬 1  📌 0
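A minimal sketch of that multi-informant idea, with invented data and column names (self-report plus three hypothetical LLM raters), pooling the informants into a single factor score per trait:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical wide data: one row per participant, columns are extraversion
# ratings from the self-report and from three different LLM "raters".
# (Column names and data are invented for illustration.)
rng = np.random.default_rng(0)
true_trait = rng.normal(size=200)
df = pd.DataFrame({
    "extra_self":   true_trait + rng.normal(scale=0.6, size=200),
    "extra_llm_a":  true_trait + rng.normal(scale=0.5, size=200),
    "extra_llm_b":  true_trait + rng.normal(scale=0.5, size=200),
    "extra_llm_c":  true_trait + rng.normal(scale=0.5, size=200),
})

# Raters correlate highly but not perfectly, as described above.
print(df.corr().round(2))

# Pool the informants into a single latent-style factor score per person.
fa = FactorAnalysis(n_components=1, random_state=0)
latent_extraversion = fa.fit_transform(df)[:, 0]
print(latent_extraversion[:5])
```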

For sure. A limitation of the data we had available for this study is that we lacked non-self-report outcomes. So, due to method overlap, self-report almost always would "win". To show incremental validity, ideally we'd try to predict unprivileged outcomes.

27.07.2025 15:45 | 👍 0  🔁 0  💬 1  📌 0

It's already longer than we were shooting for at first. It would be good to add that, though, on revision or rejection.

27.07.2025 12:53 | 👍 2  🔁 0  💬 0  📌 0

I think this is fair criticism and easy to add. True story: we had that as a focus early on, when we first did this with just ChatGPT. Then we thought it would be better not to make it an ad for OpenAI and went for many models. Then the output started ballooning, it got sort of unwieldy, and we lost track of it.

27.07.2025 12:53 | 👍 3  🔁 1  💬 1  📌 0

Ha

27.07.2025 12:19 | 👍 0  🔁 0  💬 0  📌 0

To say nothing of the sand orcs

27.07.2025 12:08 | 👍 1  🔁 0  💬 0  📌 0

I draw this distinction because I think there are other ways to set this sort of thing up, where you train this or another model specifically to predict self-report. Here we ask the LLM to rate personality based on the text we give it. We discuss it in the context of the self-other agreement literature.

27.07.2025 12:07 | 👍 1  🔁 0  💬 1  📌 0
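A minimal sketch of that setup, assuming the OpenAI Python client and an invented prompt, model choice, and rating scale (the study's actual prompts are in the manuscript):

```python
# Give a pretrained model a text snippet and ask it to rate traits directly,
# rather than training a model to predict self-report scores.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def rate_big5(snippet: str) -> str:
    # Prompt wording and 1-5 scale are invented here for illustration.
    prompt = (
        "Read the following first-person text and rate the writer on each "
        "Big Five trait (openness, conscientiousness, extraversion, "
        "agreeableness, neuroticism) from 1 (very low) to 5 (very high). "
        "Return one 'trait: score' pair per line.\n\n"
        f"Text:\n{snippet}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content


print(rate_big5("Spent the whole weekend at a friend's lake house, met a ton of new people, and loved it."))
```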

Thanks. A nuance, perhaps, but we don't treat self-reports as "true" in an absolute sense. The personality lit dispelled the notion of self-as-true a long time ago. We think of this as a different rater based on a different modality of data. I think this nuance is critical; others might disagree.

27.07.2025 12:07 | 👍 6  🔁 1  💬 2  📌 0

There is also potential for getting info that isn't easily captured by questionnaire. Here we use the Big Five as the target because we had self-reports to compare against, but also because the info contained is so, so broad. I'm pretty convinced we can start extrapolating.

27.07.2025 12:02 | 👍 2  🔁 0  💬 0  📌 0

I particularly think this holds potential for taking narrative summaries collected in real- or near-time to events (as we do in Study 2), before they are filtered through days of interpretation and meaning-making by folks. Also, that is a context where we can't give long lists of questionnaire items.

27.07.2025 12:02 | 👍 2  🔁 0  💬 1  📌 0

If I may, this is sort of a double-barreled question. 1. I think we should typically trust our patients, at least that their reports reflect their experiences accurately. 2. The value of this method isn't in getting more trustworthy info, but in aggregating and distilling info from other data.

27.07.2025 12:02 | 👍 3  🔁 0  💬 1  📌 0

It is wild. That's an apt description. When we started doing this I almost fell out of my chair I was so surprised.

27.07.2025 11:53 | 👍 1  🔁 0  💬 0  📌 0

Thanks for engaging, Michael! Great idea. As you noted, one of the big takeaways/demonstrations here was how much this interface can democratize the use of LLMs. But we're also very interested in understanding this all better and figuring out what works best. Let me chat with the team and get back to you.

27.07.2025 11:51 | 👍 1  🔁 1  💬 0  📌 0

So, ironically perhaps, the spell-check AI didn't catch the error in the figure caption. As much as I wish we had, we did not have "sand elf-reports" in this study. As you all know, sand elves are impossible to work with. It was just human self-report.

H/t @gregdepow.bsky.social for catching this

27.07.2025 11:45 | 👍 15  🔁 0  💬 2  📌 0

The robots know us so well.

We show that pretrained LLM ratings of personality from brief snippets of natural language achieve agreement with self-reports comparable to that of family and friends.

27.07.2025 03:22 | 👍 16  🔁 2  💬 0  📌 0
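For "agreement" read cross-informant correlation, the same index used for family and friend informants; a minimal sketch with invented numbers:

```python
import numpy as np

# Invented example scores for one trait across ten participants.
self_report = np.array([3.2, 4.1, 2.5, 3.8, 4.6, 2.9, 3.3, 4.0, 2.2, 3.7])
llm_rating  = np.array([3.0, 4.3, 2.8, 3.5, 4.4, 3.1, 3.6, 3.9, 2.5, 3.8])

# Self-other agreement as the cross-informant correlation for the trait.
agreement = np.corrcoef(self_report, llm_rating)[0, 1]
print(round(agreement, 2))
```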
