Our study draws renewed attention to the distinction between beliefs and attitudes. It also showcases how LLMs can be used to peer into belief systems. We welcome any feedback!
02.04.2025 13:04 · @patrickpliu.bsky.social
Columbia Political Science | PhD Student
Across 2 studies, focal and distal counterarguments reduced focal and distal belief strength, respectively. But focal arguments had larger and more durable effects on downstream attitudes.
We explore mechanisms in the paper, e.g., ppl recalled focal args better than distal args a week later.
Ex: Respondent said they care about public infrastructure.
In the same wave, they held a text convo with an AI chatbot. After GPT synthesized a summary attitude, a focal belief, and a distal belief, they saw treatment/placebo text and answered pre- and post-treatment Qs.
Ordinarily, a design that a) elicits personally important issues + relevant beliefs through convos, b) uses tailored treatments, & c) measures persistence of effects would require 3 survey waves and immense resource/labor costs.
We overcome these issues (+ replicate) using LLMs.
We engaged ppl in direct dialogue to discuss an issue they care about and the reasons for their stance. We generated a "focal" belief from this text convo and a less relevant "distal" belief, then randomly assigned a focal counterargument, a distal counterargument, or placebo text.
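The assignment step can be sketched as follows — a minimal illustration only; the arm names, seed, and sample size are my assumptions, not the authors' actual materials:

```python
import random

# Three treatment arms, mirroring the design described above:
# a counterargument to the focal belief, one to the distal belief, or placebo.
# Arm labels are illustrative, not the paper's terminology.
ARMS = ["focal_counterargument", "distal_counterargument", "placebo"]

def assign_arm(rng: random.Random) -> str:
    """Randomly assign one respondent to a treatment arm."""
    return rng.choice(ARMS)

# Assign a small hypothetical batch of respondents with a fixed seed
# so the assignment is reproducible.
rng = random.Random(42)
assignments = [assign_arm(rng) for _ in range(6)]
```

Because each respondent's focal and distal beliefs are generated from their own convo, the treatment *content* is tailored even though the *arm* is assigned at random.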
Identifying relevant beliefs is challenging! Fact-checking studies rely on databases to identify prevalent misinfo, and network methods map mental associations at a group level, but the beliefs ppl personally treat as relevant on an issue are diverse and shaped by political preferences.
We build on classic psych models that represent attitudes as weighted sums of beliefs about an object. The impact of belief change on subsequent attitude change increases with the belief's weight, capturing its relevance. Low relevance = small effect of info on attitudes.
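A minimal numeric sketch of that weighted-sum idea — all weights and numbers here are made up for illustration, not taken from the paper:

```python
def attitude(weights, beliefs):
    """Attitude as a weighted sum of belief strengths."""
    return sum(w * b for w, b in zip(weights, beliefs))

# Two beliefs about the same object: one focal (high relevance weight)
# and one distal (low relevance weight).
weights = [0.8, 0.2]                      # relevance weights (focal, distal)
before = attitude(weights, [0.9, 0.9])    # baseline attitude, ~0.90

# An identical belief change (0.9 -> 0.4) shifts the attitude far more
# when it hits the high-weight focal belief...
after_focal = attitude(weights, [0.4, 0.9])   # ~0.50
# ...than when it hits the low-weight distal belief.
after_distal = attitude(weights, [0.9, 0.4])  # ~0.80
```

This is why correcting a low-weight belief can "work" (the belief moves) while leaving the attitude essentially unchanged.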
There is a tendency to conclude that attitudes (evaluations of an object) are stickier than beliefs (factual positions about the object), possibly b/c of motivations to preserve attitudes.
But this assumes beliefs targeted by the informational treatment matter for the attitude.
Puzzle: Studies widely find learning occurs w/o attitude change. Correcting vaccine misinformation fails to alter vax intentions, reducing misperceptions of the # of immigrants doesn't reduce hostility, learning about govt spending doesn't affect econ policy preferences… the list goes on.
Link: go.shr.lc/4j9My8H
We find arguments targeting relevant beliefs produce strong and durable attitude change, more than arguments targeting distal beliefs. To ID relevant beliefs, we elicited deeply held attitudes + interviewed ppl about their reasons using an LLM chatbot. More on why below!
🧵 Why do facts often change beliefs but not attitudes?
In a new WP with @yamilrvelez.bsky.social and @scottclifford.bsky.social, we caution against interpreting this as rigidity or motivated reasoning. Often, the beliefs *relevant* to people's attitudes are not what researchers expect.