Brian Nosek

@briannosek.bsky.social

Co-founder of Project Implicit, Society for Improving Psychological Science, and the Center for Open Science; Professor at the University of Virginia

11,140 Followers · 355 Following · 5,512 Posts · Joined Oct 2023
1 day ago

Nothing in COS's impact report would have happened without the partnerships, collaborations, and support of thousands of researchers, hundreds of research-supporting organizations, and dozens of grassroots efforts. Thanks to all who are genuinely working to improve the research culture.

1 day ago
Preview
How bioRxiv changed the way biologists share ideas – in numbers Four million articles are now downloaded from bioRxiv every month, according to an analysis of the life-science preprint server’s first 13 years of existence.

“bioRxiv is exactly the type of living experiment we all should be striving for, to achieve our shared goals of optimizing how research evidence is shared and used.”
via @chrisnsimms.bsky.social www.nature.com/articles/d41...

2 days ago

Not to brag, but they offered me Macallan 25.

3 days ago

Round 2 results of the predicting replicability challenge are released!

Teams improved quite a bit from Round 1. The results provide an appetizer for release of the SCORE program outcomes. And, you can join to participate in Round 3 of the competition.

www.cos.io/blog/predict...

3 days ago
Preview
Predicting Replicability Challenge: Round 2 Results Growth trends are producing a "strain on scientific publishing." The rates of scientific output have risen from just under 4 million publications in 2000 to over 10 million in 2024 and the total size of the reviewer pool has not kept pace, leading to a shortage in evaluation capacity. The recent explosion of AI-assisted and fully AI-generated research threatens to overwhelm journals’ ability to review submissions in a timely and sustainable way, such that evaluation rather than production will become the limiting factor in the growth of knowledge.

Research output is growing faster than the systems used to evaluate it. The Predicting Replicability Challenge invites teams to use automated approaches to predict whether research findings will hold up when replicated.

Read about Round 2 results and how to participate:

3 days ago

If you would like to see a deeper dive into the results, here is a preprint: bsky.app/profile/meta...

4 days ago

@plosbiology.org is formalizing its long‑standing practice of asking authors to share research code, introducing a mandatory #code‑sharing policy and clarifying what is meant by code sharing.

Learn more and find guidance on best practice: plos.io/4reyX3v

3 days ago

Each discipline will also show reasonable internal consensus that its placement here is a good thing.

4 days ago

Wonderful remembrance of a wonderful person.

4 days ago
A graphic promoting a book review. On the left is the cover of the book Inside an Academic Scandal: A Story of Fraud and Betrayal by Max H. Bazerman. The cover has large orange and black text and an abstract horizontal paint‑stroke graphic beneath the word “Scandal.” On the right, black text reads: “Jennifer Byrne reviews Inside an Academic Scandal: A Story of Fraud and Betrayal.” At the bottom right is small text that reads “Vol. 46 No. 1 (2026): February,” alongside a small circular logo with the letters “PIR.”

Jennifer Byrne (@jabyrnesci.bsky.social) reviews Max H. Bazerman’s 'Inside an Academic Scandal', a narrative of research misconduct, institutional response, and the ethical challenges surrounding fraud in academia.
journals.uvic.ca/index.php/pi...

6 days ago
Post image

My bug and bun are standing up for science today.

6 days ago
Preview
Sage Journals: Discover world-class research Subscription and open access journals from Sage, the world's leading independent academic publisher.

I finally gave "Is open science neoliberal?" by @uyguntunc.bsky.social @mntunc.bsky.social and Eper a close read. It made sense of critiques that did not resonate w/ how I understand the reform movement.

Are there rejoinders defending the neoliberal attribution or challenging this description?

6 days ago
Preview
BYD just killed your EV argument with a battery that competes with gas engines The Chinese carmaker's new batteries feature a 5-minute charge and 621-mile range, plus a 620,000-mile lifespan and lower prices.

If this turns out as described, it's massive news.

www.fastcompany.com/91503415/byd...

1 week ago
Preview
Welcome! You are invited to join a webinar: TOP 101: An Overview of the Transparency and Openness Promotion Guidelines. After registering, you will receive a confirmation email about joining the webinar. The Transparency and Openness Promotion (TOP) Guidelines are a policy framework that provides specific recommendations for journals, research funders, universities, and researchers about practices designed to increase the verifiability of empirical research claims. TOP was updated in 2025 and contains three main parts: Research Practices, Verification Practices, and Verification Studies. This webinar will include an overview of these components and of how TOP’s tiered and modular structure makes it applicable to a diverse community of researchers and policymakers. This webinar is part of COS’s ongoing campaign to demonstrate implementations of the TOP guidelines (https://bit.ly/top-guidelines), grounding the policy framework in real-world examples. If you have feedback, implementation stories, or lessons learned from working with TOP, get in touch with us at top@cos.io.

Happening March 11! TOP 101: An Overview of the Transparency & Openness Promotion Guidelines

Learn about the TOP framework, which provides recommendations for journals, funders, universities, & researchers on practices that can increase verifiability of research claims.

💻

1 week ago
Preview
SIPS 2026 – June 8-10, 2026 The submission portal for the online conference is still open! We invite researchers to submit their contributions for online SIPS 2026. We look forward to your participation. Please note that the…

Register for #SIPS2026 Now!
Join us online and/or in DC for a high-energy, interactive meeting where YOU shape the future of psychological science.

✅ Collaborative hackathons & workshops
✅ Engaging unconferences & roundtables
✅ Inspiring keynotes & lightning talks

Register:

1 week ago
Preview
The Report of Stereotype Threat's Demise Has Been Greatly Exaggerated And why I'm feeling a certain way about it

Today Dominic Packer & @jayvanbavel.bsky.social republished @minzlicht.bsky.social's post about "The Downfall of Stereotype Threat" in their widely-read The Power Of Us newsletter. I felt a certain way about that & the evidence presented there & thought I'd respond:

1 week ago

Fun to read this after my post on stereotype threat was republished by @jayvanbavel.bsky.social et al. I don't agree with all that Mary wrote, but it's worth a read. One place I'll push back: if stereotype threat only really affects feelings (and not performance), would it have made the same impact?

1 week ago

I admit being fascinated by the idea of receiving a critique from LLM-Nosek. I doubt it would dislike my drafts as much as I do.

2 weeks ago

📣 Introducing ManyLabsDACH!

We are delighted to announce our new large-scale crowd-science study spanning Germany (D), Austria (A), and Switzerland (CH). Each participating lab will submit one design proposal, and all participating labs will then jointly select the design to be implemented.

2 weeks ago
Preview
Building a Publishing Model for Replication: Q&A with the Senior Editors of Replication Research COS spoke with the senior editors of Replication Research—a community-led Diamond Open Access journal that supports reproduction and replication studies.

Replication Research (R2), a 🆕 community-led Diamond OA journal, makes replication studies more discoverable, publishable & rigorously evaluated—without subscription barriers or author fees. Ahead of #LoveReplicationsWeek, R2's senior editors shared their vision in our Q&A:

2 weeks ago
Post image

Clicking on both the title and the pdf links worked just fine for me here. They didn't for you?

2 weeks ago

Thanks for the heads up. I just checked on Google Scholar and found and downloaded several pdfs from OSF. Can you provide some specific examples so that we can share them with the product team? @cos.io @olsonscholcomm.bsky.social

2 weeks ago
Preview
Five ways to spot when a paper is a fraud Science sleuths share their common-sense tips for sniffing out fishy articles.

Science sleuths share their common-sense tips for sniffing out fishy articles

go.nature.com/4ceKgVF

2 weeks ago
Preview
Building a More Resilient Ecosystem for Publicly-Funded Research Data COS previously announced efforts to develop a community-driven strategic plan for long-term preservation, accessibility, and usability of federally-funded data. This post shares updates on the…

As a member of the @cos.io effort to build a more resilient ecosystem (see link below if you missed it), we want to draw attention to the call for feedback. Be sure to share your thoughts with us by March 2, ahead of our next planning meeting, using the survey in the post!
www.cos.io/blog/buildin...

2 weeks ago
Preview
Global Research Initiative on Open Science establishes its Academic Advisory Board GRIOS, the Global Research Initiative on Open Science, is pleased to announce the establishment of its Academic Advisory Board (AAB), following a highly competitive selection process. The newly appoin...

GRIOS has established its Academic Advisory Board: 11 leading scholars from 9 countries across Europe, Africa & the Americas, with expertise in #metascience, #reproducibility, #ResearchPolicy & #ScholComm and strong interest in evidence-based #OpenScience policies
🔗 www.grios.org/global-resea...

1 month ago
Post image

2025 #EinsteinFoundationAward winners' colloquium w/
+ @simine.com, psychologist @unimelb.edu.au
+ Olavo Amaral, coordinator BRI/@redebrrepro.bsky.social
+ Max Sprang, bioinformatician @unimainz.bsky.social

March 13 at @bihatcharite.bsky.social & online
www.einsteinfoundation.de/en/insights/...

2 weeks ago
Preview
Input on Development of a Community-Led Strategy for Ensuring the Preservation, Accessibility, and Usability of Publicly-Funded Research Data The Center for Open Science (COS) is leading an effort to develop a community-driven strategic plan for ensuring the long-term preservation, accessibility, and usability of federally-funded scientific...

💡 Rather than reinventing the wheel, we're aiming to coordinate and amplify existing efforts, while identifying any gaps to galvanize more action.

Ahead of the committee's in-person meeting in early March, we want to hear from you. Share your thoughts with us through this survey 👇
(3/3)

2 weeks ago
Post image

Join us on Thurs for funder perspectives on open science policy development. Register: cos-io.zoom.us/webinar/regi.... This session explores how funding orgs create, roll out, & manage open science policies that increase transparency in the research they support, aligned with their missions & values

2 weeks ago
Preview
Q&A: How does Psychology Professor Brian Nosek Think We Can Improve Scientific Research? Co-founder of the Center for Open Science, Prof. Nosek is the invited speaker for this week's Page-Barbour Lectures at UVA.

Attention: People in the Charlottesville region. I will be giving the Page-Barbour Lectures on Tue, Wed, and Thu evenings this week in Nau Hall at UVA. Open to the public.

More information: as.virginia.edu/news/qa-how-...
