
Jan Pfänder

@janpfa.bsky.social

PhD student https://janpfander.github.io/

173 Followers  |  166 Following  |  58 Posts  |  Joined: 20.10.2023

Latest posts by janpfa.bsky.social on Bluesky

This is an excellent point that generalizes.
Researchers often defend suboptimal practices by referring to future studies with better designs.

But: Why would anybody run those studies when you can just throw a bunch of variables into a regression and make sweeping "preliminary" claims?

28.10.2025 11:22 — 👍 69    🔁 24    💬 6    📌 2
Lucky Coincidences: Experiencing Serendipity in Museums and Beyond Serendipity is the unintentional, accidental discovery of something new or surprising that feels positive and meaningful for the individual. Four studies (N1 = 1638; N2 = 279; N3 = 520; N4 = 452) exa...

✨ LUCKY COINCIDENCES ✨ Have you ever come across a surprising, accidental discovery that felt meaningful and motivated you to further engage with it?
In our new paper now out in JASP (doi.org/10.1111/jasp...), we explore such serendipitous experiences in museums and beyond. 1/5 🧵

22.09.2025 08:47 — 👍 13    🔁 5    💬 1    📌 0

Thank you! Yes, we'll post updates here :)

18.10.2025 18:58 — 👍 1    🔁 0    💬 0    📌 0
RFK Jr. in interview with Scripps News: ‘Trusting the experts is not science’ HHS Secretary RFK Jr. sat down with Scripps News for a wide-ranging interview, discussing mRNA vaccine funding policy changes and a recent shooting at the Centers for Disease Control and Prevention.

1. "'Trusting the experts is not a feature of either a science or democracy," Kennedy said."

It's literally a vital feature of both science and representative democracy.

I've written a fair bit about trust in expertise as a vital mechanism in the collective epistemology of science.

12.08.2025 04:48 — 👍 10002    🔁 2860    💬 538    📌 480

Climate science is facing significant opposition in the US. Today we are launching the collaborative Strengthening Trust in Climate Scientists Megastudy 📈 Find out more and join our efforts 👇🧵

15.10.2025 09:37 — 👍 42    🔁 14    💬 1    📌 0

👉 Learn about our submission criteria and how to contribute on our Call for Collaborations page: janpfander.github.io/trust_climat...

🙌 I lead this project together with @colognaviktoria.bsky.social at @eawag.bsky.social, and @madalina.bsky.social and @smconstantino.bsky.social at Stanford.

15.10.2025 10:03 — 👍 0    🔁 0    💬 1    📌 0

🔍 What We’re Looking For
We are seeking short, text-based informational interventions that could increase trust in climate scientists. The most promising interventions will be selected by the study leads and an advisory board. Deadline for submission is November 11, 2025.

15.10.2025 10:03 — 👍 0    🔁 0    💬 1    📌 0

🤝 Collaborate with us
We invite researchers at any career stage as well as practitioners to submit intervention ideas to increase trust in climate scientists in the US. Successful contributors will receive co-authorship. Interventions can be submitted by individuals or teams.

15.10.2025 10:03 — 👍 0    🔁 0    💬 1    📌 0

🚀 What is a megastudy?
A megastudy is a large-scale online experiment that tests many candidate interventions at once, on one large sample and against a shared control condition, so their effects can be compared directly and estimated with robust, replicable precision.

15.10.2025 10:03 — 👍 0    🔁 0    💬 1    📌 0

🌎 Why trust in climate scientists?
Across 55 countries, trust in climate scientists was the strongest predictor of belief in climate change and support for climate policy (Todorova et al., 2024). Yet, climate scientists tend to be less trusted than scientists of other disciplines.

15.10.2025 10:03 — 👍 0    🔁 0    💬 1    📌 0

Help us strengthen trust in climate scientists in the US! Join our megastudy 👇

15.10.2025 10:03 — 👍 14    🔁 11    💬 2    📌 0
Preview
The threat of analytic flexibility in using large language models to simulate human data: A call to attention Social scientists are now using large language models to create "silicon samples" - synthetic datasets intended to stand in for human respondents, aimed at revolutionising human subjects research. How...

Can large language models stand in for human participants?
Many social scientists seem to think so, and are already using "silicon samples" in research.

One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.

THREAD 🧵

18.09.2025 07:56 — 👍 330    🔁 149    💬 12    📌 59

I'm sorry for your loss! Your dad's story is touching and gives hope that science rejection can be overcome. You were surely a great science explainer to him :)

14.09.2025 15:37 — 👍 0    🔁 0    💬 0    📌 0
Sage Journals: Discover world-class research Subscription and open access journals from Sage, the world's leading independent academic publisher.

"Acceptance of the scientific consensus was very high in the sample as a whole (95.1%), but also in every sub-sample (e.g. no trust in science: 87.3%) ... [P]eople are motivated to reject specific scientific beliefs, and not science as a whole."

journals.sagepub.com/doi/abs/10.1...

03.09.2025 20:13 — 👍 5    🔁 2    💬 0    📌 0
Quasi-universal acceptance of basic science in the United States - Jan Pfänder, Lou Kerzreho, Hugo Mercier, 2025 Substantial minorities of the population report a low degree of trust in science, or endorse conspiracy theories that violate basic scientific knowledge. This m...

Quasi-universal acceptance of basic science in the United States journals.sagepub.com/doi/abs/10.1...

Acceptance was high in the sample as a whole (95.1%) and "also in every sub-sample (e.g. no trust in science: 87.3%; complete endorsement of flat Earth theory: 87.2%)"

"motivated to reject specific scientific beliefs"

02.09.2025 21:20 — 👍 3    🔁 1    💬 0    📌 1
Research – Jan Pfänder

A big thank you to my amazing co-authors Lou Kerzreho and @hugoreasoning.bsky.social

For access to a version of the paper, please check out my website janpfander.github.io/research/

05.09.2025 11:17 — 👍 1    🔁 0    💬 0    📌 0

These results lead us to believe that, in many instances, science rejection might have nothing to do with the underlying science.

Instead, other factors (e.g. psychological traits or political ideology) are likely to be the key drivers of such rejections.

05.09.2025 11:17 — 👍 2    🔁 0    💬 1    📌 0

However, if people genuinely distrusted science, as some claim to, they should reject most or all basic science knowledge. But they don't.

05.09.2025 11:17 — 👍 1    🔁 0    💬 1    📌 0

Second, it suggests something about the psychology of science rejection.

One might think that the root of rejecting the scientific consensus on specific topics such as vaccines or climate change is genuine distrust of science.

05.09.2025 11:17 — 👍 1    🔁 0    💬 1    📌 0

Why does this quasi-universal acceptance of basic science matter?

First, it gives hope: science rejection does not appear to be wholesale.

Stressing the basic science underlying controversial topics such as vaccines or climate change might help science communicators convince skeptics.

05.09.2025 11:17 — 👍 4    🔁 0    💬 1    📌 0

On average, participants accepted the scientific consensus in 95% of cases.

Even participants who claimed they don’t trust science at all accepted the scientific consensus in 87% of cases.

Flat earthers accepted 87% of basic science claims.

Climate change deniers had an acceptance rate of 92%.

05.09.2025 11:17 — 👍 3    🔁 0    💬 1    📌 0

After each question, we showed participants the correct answer according to the scientific consensus, together with a short explanation and some links.

We then asked participants: Do you accept this answer?

05.09.2025 11:17 — 👍 1    🔁 0    💬 1    📌 0

We asked participants questions that are often used to test basic science knowledge, such as:

Are electrons smaller, larger, or the same size as atoms? [Smaller; Same size; Larger]

05.09.2025 11:17 — 👍 1    🔁 0    💬 1    📌 0
Quasi-universal acceptance of basic science in the United States - Jan Pfänder, Lou Kerzreho, Hugo Mercier, 2025 Substantial minorities of the population report a low degree of trust in science, or endorse conspiracy theories that violate basic scientific knowledge. This m...

How much do people really reject science?

New paper out doi.org/10.1177/0963...

In four studies, we asked Americans—including flat Earthers, climate change deniers and vaccine skeptics—whether they accepted basic scientific facts.

The result? A surprisingly high level of agreement. 👇

05.09.2025 11:17 — 👍 43    🔁 15    💬 3    📌 4
Models as Prediction Machines: How to Convert Confusing Coefficients into Clear Quantities

Abstract
Psychological researchers usually make sense of regression models by interpreting coefficient estimates directly. This works well enough for simple linear models, but is more challenging for more complex models with, for example, categorical variables, interactions, non-linearities, and hierarchical structures. Here, we introduce an alternative approach to making sense of statistical models. The central idea is to abstract away from the mechanics of estimation, and to treat models as “counterfactual prediction machines,” which are subsequently queried to estimate quantities and conduct tests that matter substantively. This workflow is model-agnostic; it can be applied in a consistent fashion to draw causal or descriptive inference from a wide range of models. We illustrate how to implement this workflow with the marginaleffects package, which supports over 100 different classes of models in R and Python, and present two worked examples. These examples show how the workflow can be applied across designs (e.g., observational study, randomized experiment) to answer different research questions (e.g., associations, causal effects, effect heterogeneity) while facing various challenges (e.g., controlling for confounders in a flexible manner, modelling ordinal outcomes, and interpreting non-linear models).


Figure illustrating model predictions. On the X-axis the predictor, annual gross income in Euro. On the Y-axis the outcome, predicted life satisfaction. A solid line marks the curve of predictions on which individual data points are marked as model-implied outcomes at incomes of interest. Comparing two such predictions gives us a comparison. We can also fit a tangent to the line of predictions, which illustrates the slope at any given point of the curve.

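As a rough sketch of the "prediction machine" workflow from the abstract and the figure above, here is a minimal Python example under stated assumptions: the data are simulated and the variable names (`income`, `satisfaction`) are hypothetical placeholders, not the paper's own. It hand-rolls the three quantities the figure shows (predictions at incomes of interest, a comparison between two of them, and a slope) from a plain statsmodels fit.

```python
# Minimal sketch of the "counterfactual prediction machine" workflow,
# using simulated data with hypothetical columns `income` and `satisfaction`.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate data: life satisfaction rising non-linearly with income.
rng = np.random.default_rng(1)
income = rng.uniform(10_000, 120_000, size=500)
satisfaction = 2 + 1.2 * np.log(income) + rng.normal(0, 0.5, size=500)
df = pd.DataFrame({"income": income, "satisfaction": satisfaction})

# Fit any model; here a simple OLS with a log transform of the predictor.
model = smf.ols("satisfaction ~ np.log(income)", data=df).fit()

# 1) Predictions: query the fitted model at incomes of interest.
grid = pd.DataFrame({"income": [20_000.0, 60_000.0]})
preds = np.asarray(model.predict(grid))

# 2) Comparison: the difference between two model-implied outcomes.
comparison = preds[1] - preds[0]

# 3) Slope: numerical derivative of the prediction curve at income = 60,000
#    (the tangent in the figure), via a small finite difference.
eps = 1.0
p_hi = np.asarray(model.predict(pd.DataFrame({"income": [60_000.0 + eps]})))[0]
p_lo = np.asarray(model.predict(pd.DataFrame({"income": [60_000.0 - eps]})))[0]
slope = (p_hi - p_lo) / (2 * eps)

print("predictions:", preds)
print("comparison (60k vs 20k):", round(comparison, 3))
print("slope at 60k:", slope)
```

In practice, the marginaleffects package described in the abstract wraps exactly these kinds of queries (predictions, comparisons, slopes) for many model classes in R and Python and adds standard errors and hypothesis tests; the hand-rolled version above is only meant to make the underlying logic concrete.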

A figure illustrating various ways to include age as a predictor in a model. On the x-axis age (predictor), on the y-axis the outcome (model-implied importance of friends, including confidence intervals).

Illustrated are 
1. age as a categorical predictor, resulting in predictions that bounce around a lot with wide confidence intervals,
2. age as a linear predictor, which forces a straight line through the data points and has a very tight confidence band, and
3. age splines, which lie somewhere in between: the curve smoothly follows the data but has more uncertainty than the straight line.

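To make the three specifications in this figure concrete, here is a minimal, hypothetical sketch (not code from the paper) of how age could enter a model as a categorical, linear, or spline term using statsmodels/patsy formulas; the data and the variable names `age` and `friends_importance` are invented placeholders.

```python
# Sketch of three ways to include age as a predictor, assuming a hypothetical
# data frame with columns `age` and `friends_importance`.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({"age": rng.integers(18, 80, size=1_000)})
df["friends_importance"] = (
    3 + 0.5 * np.sin(df["age"] / 15) + rng.normal(0, 1, size=len(df))
)

# 1) Age as a categorical predictor: one dummy per observed age value.
m_cat = smf.ols("friends_importance ~ C(age)", data=df).fit()

# 2) Age as a linear predictor: forces a straight line.
m_lin = smf.ols("friends_importance ~ age", data=df).fit()

# 3) Age splines: a smooth, flexible curve (B-spline basis via patsy's bs()).
m_spl = smf.ols("friends_importance ~ bs(age, df=4)", data=df).fit()

# Query all three models over the observed ages, as in the figure.
grid = pd.DataFrame({"age": np.sort(df["age"].unique())})
preds = pd.DataFrame({
    "age": grid["age"],
    "categorical": np.asarray(m_cat.predict(grid)),
    "linear": np.asarray(m_lin.predict(grid)),
    "spline": np.asarray(m_spl.predict(grid)),
})
print(preds.head())
```

Plotting the three prediction columns against age reproduces the qualitative pattern the figure describes: the categorical fit bounces around, the linear fit forces a rigid straight line, and the spline sits in between.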

Ever stared at a table of regression coefficients & wondered what you're doing with your life?

Very excited to share this gentle introduction to another way of making sense of statistical models (w @vincentab.bsky.social)
Preprint: doi.org/10.31234/osf...
Website: j-rohrer.github.io/marginal-psy...

25.08.2025 11:49 — 👍 953    🔁 283    💬 48    📌 20

Happy to share that my first paper is out in Thinking & Reasoning! 📄📢
With Aikaterini Voudouri, @boissinesther.bsky.social & @wimdeneys.bsky.social we show that deliberate reasoning helps not just to correct but also to justify intuitive judgments.

🔗Full paper: shorturl.at/JTeTi
Quick thread below!

21.08.2025 07:46 — 👍 14    🔁 6    💬 1    📌 0
Extreme weather event attribution predicts climate policy support across the world - Nature Climate Change Literature produced inconsistent findings regarding the links between extreme weather events and climate policy support across regions, populations and events. This global study offers a holistic asse...

🚨 How do exposure to extreme weather events and the subjective attribution of these events to climate change relate to climate policy support across the world? 🔥🌎

Find out more in our OA article published in Nature Climate Change 👇 1/7 🧵 www.nature.com/articles/s41...

01.07.2025 10:38 — 👍 75    🔁 43    💬 1    📌 8
Introducing Papercheck: An Automated Tool to Check for Best Practices in Scientifi...

Very excited to publicly share news about a new tool, Papercheck, that @debruine.bsky.social and I started to develop more than a year ago! In an introductory blog post, we explain our philosophy for automatically checking scientific papers for best practices. daniellakens.blogspot.com/2025/06/intr...

17.06.2025 11:15 — 👍 178    🔁 79    💬 5    📌 6
How laypeople evaluate scientific explanations containing jargon - Nature Human Behaviour Cruz and Lombrozo examine how laypeople make sense of scientific explanations and find that although jargon reduces understanding, for short explanations, jargon makes the explanation more satisfying.

How do laypeople make sense of scientific explanations? This new Article from @cruzf.bsky.social & @tanialombrozo.bsky.social looks at the role of jargon. They find that people judge explanations with jargon to be satisfying, even though jargon reduces understanding.
www.nature.com/articles/s41...

16.06.2025 01:40 — 👍 23    🔁 7    💬 0    📌 1
5-panel comic. (1) [teacher with long hair next to whiteboard] TEACHER: I’m supposed to give you the tools to do good science. (2) [teacher addressing students] But what *are* those tools? Methodology is hard and there are so many ways to get incorrect results. What is the magic ingredient that makes for good science? (3) TEACHER: To figure it out, I ran a regression with all the factors people say are important: [embedded list in sub-panel, cut off at end] Outcome variable: correct scientific results. Predictors: collaboration; skepticism of others’ claims; questioning your own beliefs; trying to falsify hypotheses; checking citations; statistical rigor; blinded analysis; financial disclosure; open data (4) TEACHER: The regression says two ingredients are the most crucial: 1) genuine curiosity about the answer to a question, and 2) ammonium hydroxide. (5) STUDENT: Wait, why did *ammonia* score so high? How did it even get on the list? LONG HAIR: ...And now you’re doing good science!


Good Science

xkcd.com/3101/

12.06.2025 20:28 — 👍 3526    🔁 633    💬 24    📌 34
