
Jan Pfänder

@janpfa.bsky.social

phd student https://janpfander.github.io/

187 Followers  |  184 Following  |  63 Posts  |  Joined: 20.10.2023

Latest posts by janpfa.bsky.social on Bluesky


Breaking news: The #NCCR #CLIM+ on "Climate Extremes & Society" will be funded by the @snf-fns.ch in the coming 4 years! With 47 PIs, 20 institutions & 22 stakeholders from #health to #finance & #agriculture, it will unite climate expertise from both natural & social science!
nccr-climplus.ch

30.01.2026 08:47 — 👍 30    🔁 19    💬 2    📌 2

What motivates people to engage in climate advocacy?

In a new PNAS Nexus megastudy [https://doi.org/10.1093/pnasnexus/pgaf400] led by @dgoldwert.bsky.social we tested 17 theoretical interventions on a large US sample (N=31,324) to increase public, political, and financial climate advocacy.

1/5

27.01.2026 18:23 — 👍 35    🔁 19    💬 1    📌 1
Causion interface showing a front-door DAG

Played around with Causion by @isager.bsky.social
It's an impressive teaching tool for causal inference... and it's also really pretty. Amazing color scheme and slick interface.
Shown below is a demonstration of an extended front-door DAG in Causion.

29.01.2026 02:28 — 👍 17    🔁 5    💬 2    📌 0
Call for Proposals: Data Collection for
Replication+Novel Political Science Survey Experiments
Alexander Coppock and Mary McGrath
January 27, 2026
We invite proposals for a survey experiment replication+novel design competition. Selected replication+novel design survey experiments will be conducted on large samples of American respondents, quota sampled to match U.S. Census margins and filtered for quality and attention by the survey sample provider Rep Data (repdata.com).
Each proposal consists of two parts: (1) a replication study of an existing, previously published survey experiment, and (2) a novel experimental design on a topic of the authors' choosing.
The replication studies and reanalyses of the existing studies will be combined into a meta-paper to be co-authored by all authors of accepted proposals along with the principal investigators (Coppock and McGrath). As a condition for acceptance, authors commit to sharing the data and producing a write-up of the findings from their novel design for submission to a scholarly journal, and public posting of a working paper pre-publication.


🎺 Call for proposals 🎺

1️⃣ replicate an existing experiment
2️⃣ run a novel experiment

on repdata.com

3️⃣ coauthor with Mary McGrath and me to meta-analyze the replications and existing studies
4️⃣ publish your study

details: alexandercoppock.com/replication_...
applications open Feb 1

please repost!

27.01.2026 22:16 — 👍 68    🔁 59    💬 0    📌 1
RegCheck RegCheck is an AI tool to compare preregistrations with papers instantly.

Comparing registrations to published papers is essential to research integrity - and almost no one does it routinely because it's slow, messy, and time-consuming.

RegCheck was built to help make this process easier.

Today, we launch RegCheck V2.

🧵

regcheck.app

22.01.2026 11:05 — 👍 173    🔁 91    💬 7    📌 6
Industry Influence in High-Profile Social Media Research To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to indus...

Ever wonder what proportion of high profile social media research is tied to the tech industry?

New from me, @cailinmeister.bsky.social, @jevinwest.bsky.social and @carlbergstrom.com.

Thread tomorrow.

arxiv.org/abs/2601.11507

19.01.2026 02:17 — 👍 100    🔁 35    💬 3    📌 8

We've got ISSUES. Literally.

We scraped >100k special issues & over 1 million articles to bring you a PISS-poor paper. We quantify just how many excess papers are published by guest editors abusing special issues to boost their CVs. How bad is it & what can we do?

arxiv.org/abs/2601.07563

A 🧵 1/n

13.01.2026 08:24 — 👍 500    🔁 314    💬 17    📌 50

Too many significance tests!!

Made this little graphic for my #stats class, showing the various kinds of (N)HST and how interpreting confidence intervals can replace all of them.

Made with #rstats #ggplot (duh)

12.01.2026 20:54 — 👍 104    🔁 27    💬 6    📌 3
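The claim in the post above — that reading off a confidence interval carries the same reject/don't-reject information as the corresponding two-sided significance test — can be sketched in a few lines. This is a hypothetical illustration, not code from the post: it uses a large-sample normal approximation with z = 1.96, and the function name and samples are made up.

```python
import math
import statistics

def ci_and_test(sample, null_value=0.0, z=1.96):
    """95% normal-approximation CI for the mean, plus the equivalent two-sided test.

    The interval excludes `null_value` exactly when |mean - null_value| / se > z,
    i.e. exactly when the two-sided test at alpha = 0.05 rejects.
    """
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    lo, hi = mean - z * se, mean + z * se
    rejects = abs(mean - null_value) / se > z   # the (N)HST decision
    excludes = not (lo <= null_value <= hi)     # the CI decision
    assert rejects == excludes                  # same decision, either route
    return (lo, hi), rejects
```

The interval is strictly more informative than the test: it encodes the decision for every null value at once, plus an effect-size range, which is the sense in which it "can replace" the significance test.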
A registered report megastudy on the persuasiveness of the most-cited climate messages - Nature Climate Change How to effectively communicate climate change to the public has long been studied and debated. Through a registered report megastudy, researchers tested the ten most-cited climate change messaging str...

Interesting megastudy on the (in)effectiveness of climate messaging: tiny effects on attitudes, no effects on donations www.nature.com/articles/s41...
"Persuasiveness varied little across party lines", another win for Persuasion in Parallel @aecoppock.bsky.social

07.01.2026 11:56 — 👍 22    🔁 6    💬 0    📌 1

Kids: Inspect your data (yikes)
www.nature.com/articles/s41...
www.nytimes.com/2025/12/03/b...

03.12.2025 14:01 — 👍 37    🔁 8    💬 2    📌 1

1. Transparency is necessary for credibility
2. Transparency is hard to change
3. Require transparency*
4. Transparency is not magic
5. Journals are part of problem
6. Expect more from journals
7. Peer review is not magic
8. A crisis can look a lot like "normal" science
9. Meta-analysis is not magic

03.12.2025 09:40 — 👍 91    🔁 39    💬 4    📌 1
Title page of our paper

🚨 Mapping climate change coverage

In a new preprint, Simon Wimmer, @jmbh.bsky.social, and I analyzed over 50,000 articles about climate change from major German newspapers across the political spectrum (2010-2024) using large language models 🧵

🔗 Link: osf.io/preprints/so...

02.12.2025 09:37 — 👍 25    🔁 9    💬 2    📌 1

Scientific Reports has a ⬆️ Impact Inflation score: a very high IF given its citation network (self-citing, citation cartels, etc.).

They'll even typeset & publish AI slop for a fee!

Strain: bit.ly/StrainQSS
Strain explorer β: pagoba.shinyapps.io/strain_explo...

#SciPub #ResearchIntegrity #AcademicSky

28.11.2025 07:20 — 👍 31    🔁 13    💬 2    📌 0

We have a new preprint: osf.io/preprints/so...

What have we learned about social media - the constantly moving target of empirical research - over the past decade?

30.10.2025 10:53 — 👍 84    🔁 39    💬 2    📌 4

Experimental participants to us

12.11.2025 14:08 — 👍 136    🔁 29    💬 3    📌 1
Global studies on trust in science suggest new theoretical and methodological directions Public trust in science is vital for tackling global challenges. Recently, global surveys and Many Labs collaborations have begun to broaden the scope…

www.sciencedirect.com/science/arti...

25.11.2025 12:31 — 👍 5    🔁 0    💬 0    📌 0

In a new short review piece, we argue that advancing research on trust in science requires:

- clearer concepts

- harmonized measures

- a normative discussion about what levels and forms of trust are desirable.

Huge thanks to my co-authors @nielsmede.bsky.social and @colognaviktoria.bsky.social.

25.11.2025 12:31 — 👍 4    🔁 1    💬 1    📌 0

Trust in science is increasingly being studied across the globe—which is good news. However, expanding geographic coverage alone isn’t enough.

doi.org/10.1016/j.co...

25.11.2025 12:31 — 👍 9    🔁 1    💬 1    📌 0
This paper provides guidance and tools for conducting open and reproducible systematic reviews in psychology. It emphasizes the importance of systematic reviews for evidence-based decision-making and the growing adoption of open science practices. Open science enhances transparency, reproducibility, and minimizes bias in systematic reviews by sharing data, materials, and code. It also fosters collaborations and enables involvement of non-academic stakeholders. The paper is designed for beginners, offering accessible guidance to navigate the many standards and resources that may not obviously align with specific areas of psychology. It covers systematic review conduct standards, pre-registration, registered reports, reporting standards, and open data, materials and code. The paper is concluded with a glimpse of recent innovations like Community Augmented Meta-Analysis and independent reproducibility checks.


There is no reason why systematic reviews can't be open. The data used for synthesis is *already* open and there are many excellent open source tools that can facilitate the easy sharing of analysis scripts.

Here's a nice guide for performing open systematic reviews doi.org/10.1525/coll...

24.11.2025 12:10 — 👍 119    🔁 40    💬 0    📌 0

new paper by Sean Westwood:

With current technology, it is impossible to tell whether survey respondents are real or bots. Among other things, this makes it easy for bad actors to manipulate outcomes. No good news here for the future of online survey research.

18.11.2025 19:15 — 👍 777    🔁 390    💬 41    📌 127
Title page of the paper "Techno-optimistic scientists take fewer climate actions"

🚨Techno-optimistic scientists take fewer climate actions

In a new preprint, @colognaviktoria.bsky.social, @maiensachis.bsky.social, @jmbh.bsky.social & I examine techno-optimism among 9,199 scientists and how it relates to their civic engagement and lifestyle choices🧵

🔗 Link: tinyurl.com/hh94huzv

14.11.2025 09:20 — 👍 74    🔁 31    💬 2    📌 6

🎉 New preprint: Bayesian Competence Inference Guides Knowledge Attribution and Information Search

If someone knows that Venus is the only planet in the Solar System that rotates clockwise, will they also know what Earth’s only natural satellite is? What about which planets have no moons at all?

13.11.2025 17:16 — 👍 27    🔁 11    💬 1    📌 0

🎉 You’ve exceeded even our most optimistic expectations — we received 107 intervention proposals.

THANK YOU!

🕵 Our advisory board will now begin reviewing all interventions.

🔗 More information: janpfander.github.io/trust_climat...

13.11.2025 10:52 — 👍 8    🔁 2    💬 0    📌 1
A table showing profit margins of major publishers. A snippet of text related to this table is below.

1. The four-fold drain
1.1 Money
Currently, academic publishing is dominated by profit-oriented, multinational companies for whom scientific knowledge is a commodity to be sold back to the academic community who created it. The dominant four are Elsevier, Springer Nature, Wiley and Taylor & Francis, which collectively generated over US$7.1 billion in revenue from journal publishing in 2024 alone, and over US$12 billion in profits between 2019 and 2024 (Table 1A). Their profit margins have always been over 30% in the last five years, and for the largest publisher (Elsevier) always over 37%.
Against many comparators, across many sectors, scientific publishing is one of the most consistently profitable industries (Table S1). These financial arrangements make a substantial difference to science budgets. In 2024, 46% of Elsevier revenues and 53% of Taylor & Francis revenues were generated in North America, meaning that North American researchers were charged over US$2.27 billion by just two for-profit publishers. The Canadian research councils and the US National Science Foundation were allocated US$9.3 billion in that year.


A figure detailing the drain on researcher time.

1. The four-fold drain

1.2 Time
The number of papers published each year is growing faster than the scientific workforce, with the number of papers per researcher almost doubling between 1996 and 2022 (Figure 1A). This reflects the fact that publishers' commercial desire to publish (sell) more material has aligned well with the competitive prestige culture in which publications help secure jobs, grants, promotions, and awards. To the extent that this growth is driven by a pressure for profit, rather than scholarly imperatives, it distorts the way researchers spend their time.
The publishing system depends on unpaid reviewer labour, estimated to be over 130 million unpaid hours annually in 2020 alone (9). Researchers have complained about the demands of peer review for decades, but the scale of the problem is now worse, with editors reporting widespread difficulties recruiting reviewers. The growth in publications involves not only the authors' time, but that of academic editors and reviewers who are dealing with so many review demands.
Even more seriously, the imperative to produce ever more articles reshapes the nature of scientific inquiry. Evidence across multiple fields shows that more papers result in 'ossification', not new ideas (10). It may seem paradoxical that more papers can slow progress until one considers how it affects researchers' time. While rewards remain tied to volume, prestige, and impact of publications, researchers will be nudged away from riskier, local, interdisciplinary, and long-term work. The result is a treadmill of constant activity with limited progress, whereas core scholarly practices – such as reading, reflecting and engaging with others' contributions – are de-prioritized. What looks like productivity often masks intellectual exhaustion built on a demoralizing, narrowing scientific vision.


A table of profit margins across industries, accompanying the same section of text ("1. The four-fold drain", "1.1 Money") quoted above.


The costs of inaction are plain: wasted public funds, lost researcher time, compromised scientific integrity and eroded public trust. Today, the system rewards commercial publishers first, and science second. Without bold action from the funders, we risk continuing to pour resources into a system that prioritizes profit over the advancement of scientific knowledge.


We wrote the Strain on scientific publishing to highlight the problems of time & trust. With a fantastic group of co-authors, we present The Drain of Scientific Publishing:

a 🧵 1/n

Drain: arxiv.org/abs/2511.04820
Strain: direct.mit.edu/qss/article/...
Oligopoly: direct.mit.edu/qss/article/...

11.11.2025 11:52 — 👍 637    🔁 453    💬 8    📌 65

"For instance, randomized controlled trials could explicitly manipulate multilingualism"

11.11.2025 08:28 — 👍 80    🔁 11    💬 13    📌 3

Fascinating economics job market paper by Jens Oehlen on the effects of Enigma codebreaking and how it interacted with military/intelligence strategy jensoehlen.github.io/uploads/Enig...

06.11.2025 01:31 — 👍 45    🔁 8    💬 3    📌 1
Computational Turing Test Reveals Systematic Differences Between Human and AI Language Large language models (LLMs) are increasingly used in the social sciences to simulate human behavior, based on the assumption that they can generate realistic, human-like text. Yet this assumption rem...

LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text

But... can they? We don’t actually know.

In our new study, we develop a Computational Turing Test.

And our findings are striking:
LLMs may be far less human-like than we think.🧵

07.11.2025 11:13 — 👍 334    🔁 134    💬 14    📌 38
An aerial photograph of the Tilburg University campus.

I am hiring PhD candidates to study the psychology of attention & technology use at @tilburg-university.bsky.social.

We're looking for motivated & curious scholars with expertise in cognitive psychology and statistics, and offer a friendly work environment with great terms & benefits.

tiu.nu/22989

23.10.2025 15:04 — 👍 49    🔁 53    💬 1    📌 3
psyarxiv coauthorship network from https://vuorre.com/psyarxiv-dashboard/coauthorship showing Matti Vuorre's coauthors

Wrote a little interactive webapp for creating psyarxiv coauthorship networks, try it out: vuorre.com/psyarxiv-das...

Feedback welcome: github.com/mvuorre/psya...

05.11.2025 10:55 — 👍 11    🔁 4    💬 1    📌 0

⏰ 1 week left to submit your intervention to strengthen trust in climate scientists in the U.S. — come join us in this megastudy!

04.11.2025 17:25 — 👍 2    🔁 0    💬 0    📌 0
