A new PNAS paper finds that polarization increased immediately after the release of Lady Gaga's "Just Dance" and the advent of the late-2000s electro-pop era, which both appeared around the same year, 2008.
15.12.2025 08:21
5 Things We Learned About Peer Review in 2024 - Absolutely Maybe
Back in 2019 I wrote a couple of posts summarizing what we had learned from research about peer review at journals. Since…
My belated annual roundup of things we learned about peer review in 2024. Including structured peer review, reviewers' uncertainty, & "uselessly elongated review bias." Now online at @plos.org
absolutelymaybe.plos.org/2025/04/28/5...
Featuring @mariomalicki.bsky.social @aidybarnett.bsky.social
28.04.2025 07:04
On 27th November 2024, TIER2 and Taylor & Francis Group hosted a workshop at the 19th Munin Conference on Scholarly Publishing in Tromsø, Norway, addressing research reproducibility. A report on the workshop is now available on TIER2's website.
Learn more here: tier2-project.eu/news/tier2-a...
24.04.2025 11:21
Sadly spent most of this very sunny day writing a narrative CV for a grant application. Jeez, that's a lot of extra work that I'm not sure anyone is likely to read too deeply.
What do others think of them?
05.04.2025 15:34
A new TIER2 preprint, authored by Serge Horbach, Nicki Lisa Cole, Simone Kopeinik, Barbara Leitner, Tony Ross-Hellauer and Joeri Tijdink, explores the barriers and enablers for reproducibility in research.
Learn more: tier2-project.eu/news/new-tie...
14.03.2025 09:56
GenAI synthetic data create ethical challenges for scientists. Hereβs how to address them. | PNAS
"Conflation of synthetic #GenAI and real data could corrupt the research record; degrade the #quality and #reproducibility of scientific data and analytical methods; and, ironically, sabotage the training of AI models." www.pnas.org/doi/10.1073/... #synthetic_data #research_integrity
26.02.2025 19:42
Was reminded of this ultimate Venn today.
17.02.2025 15:01
New paper by Thomas Klebel, "Investigating patterns of knowledge production in research on three UN sustainable development goals", just published in Online Information Review! doi.org/10.1108/OIR-...
31.01.2025 10:20
Since I see (again) many new people here, here is the meta-research and open science starter pack 2.
29.01.2025 06:01
New Preprint! "Reproducibility and replicability of qualitative research: An integrative review of concepts, barriers and enablers" - osf.io/preprints/me...
A nice output from our TIER2 project, led by Nicki Lisa Cole :)
07.01.2025 10:42
5/ A Call for Responsible Use
To ensure GenAI aligns with Open Science values:
- Researchers must integrate GenAI with care and scrutiny.
- Developers need to create transparent, unbiased tools.
- Policymakers must balance innovation and risk.
17.12.2024 10:44
4/ The Risks
Despite the potential, there are challenges:
- Opaque "black box" models undermine transparency
- Bias in training data risks reinforcing inequalities
- High computational demands raise sustainability concerns.
17.12.2024 10:35
3/ The Opportunity
GenAI can:
- Increase the efficiency of documentation
- Simplify complex science into accessible language
- Break language barriers through translation
- Enable public participation in research
- Promote inclusivity, accessibility, and understanding.
17.12.2024 10:35
2/ TL;DR. Mohammad Hosseini, Serge Horbach, @kristiholmes.bsky.social and I explore GenAI's enormous potential to enhance accessibility and efficiency in science. But we emphasise that to do so, GenAI must embody Open Science principles of openness, fairness, and transparency.
17.12.2024 10:35
Screenshot of paper "Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities" by Mohammad Hosseini, Serge P. J. M. Horbach, Kristi Holmes and Tony Ross-Hellauer
Quantitative Science Studies 1–24.
https://doi.org/10.1162/qss_a_00337
Abstract
Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaboration and sharing results. Due to this relationship, the characteristics of the employed technologies directly impact OS objectives. Generative Artificial Intelligence (GenAI) is increasingly used by researchers for tasks such as text refining, code generation/editing, reviewing literature, and data curation/analysis. Nevertheless, concerns about openness, transparency, and bias suggest that GenAI may benefit from greater engagement with OS. GenAI promises substantial efficiency gains but is currently fraught with limitations that could negatively impact core OS values, such as fairness, transparency, and integrity, and may harm various social actors. In this paper, we explore the possible positive and negative impacts of GenAI on OS. We use the taxonomy within the UNESCO Recommendation on Open Science to systematically explore the intersection of GenAI and OS. We conclude that using GenAI could advance key OS objectives by broadening meaningful access to knowledge, enabling efficient use of infrastructure, improving engagement of societal actors, and enhancing dialogue among knowledge systems. However, due to GenAI's limitations, it could also compromise the integrity, equity, reproducibility, and reliability of research. Hence, sufficient checks, validation, and critical assessments are essential when incorporating GenAI into research workflows.
1/ NEW PAPER! "Open Science at the Generative AI Turn"
In a new study just published in Quantitative Science Studies, we explore how GenAI both enables and challenges Open Science, and why GenAI will benefit from adopting Open Science values.
doi.org/10.1162/qss_...
#OpenScience #AI #GenAI
17.12.2024 10:34
It's presumptuous, but I don't mind that so much. What I really dislike is when I need to use those details and validate the new account just in order to decline.
11.12.2024 11:18
8/ Read the full paper here for insights on how to reshape research evaluation systems for fairness and effectiveness: doi.org/10.1093/rese...
05.12.2024 11:22
7/ We close with recommendations: clarify core purposes of research assessment, use shared frameworks, train assessors on bias, reduce over-frequent assessments, and move beyond binary thinking on qualitative/quantitative methods.
05.12.2024 11:22
6/ We examine the "performativity of assessment criteria," revealing a tension between rigid/flexible criteria and how transparently they are communicated. Transparent, equitable frameworks are vital to align formal criteria with the realities of research evaluation.
05.12.2024 11:22
5/ Respondents noted that beyond metrics, informal factors (social dynamics, politics, and demographics) play key roles in assessment outcomes. These hidden criteria emerge in opaque processes, granting assessors significant flexibility.
05.12.2024 11:22
4/ Through qualitative analysis of free-text responses from 121 international researchers, we highlight a major gap between formal evaluation criteria and their practical application.
05.12.2024 11:22
3/ How do current systems enable "hidden factors" like cronyism or evaluator biases, and how might these change under proposed reforms? Our study examines researchers' perceptions of social and political influences on assessment processes.
05.12.2024 11:22
2/ Reform of research assessment, especially to avoid over-quantification and empower qualitative assessment, is a hot topic. Change is coming. But how do we balance broader criteria that value activities beyond publishing and funding, reliance on peer review, and merit-based rewards?
05.12.2024 11:22
1/ The full paper is available at: doi.org/10.1093/rese...
05.12.2024 11:22
New Paper! "Understanding the social and political dimensions of research(er) assessment: evaluative flexibility and hidden criteria in promotion processes at research institutes", just published in Research Evaluation by me, @naubertbonn.bsky.social & Serge Horbach. Thread below!
05.12.2024 11:22
Beware of the #streetlight effect in measuring #openscience #impact @tonyrh.bsky.social #munin2024 conference
28.11.2024 14:04
Brilliant overview of #openscience by @tonyrh.bsky.social #munin2024 conference: particularly liked the "unintended consequences" side, (me, not Tony) thinking about the $12,290 APC asked by #SpringerNature for one single article to be #openaccess - not equitable at all.
28.11.2024 13:56
Fantastic thread, thank you Bart!
02.04.2024 13:52
The LSE Impact Blog covering our recent scoping review on evidence regarding the uptake, attitudes and efficacy of Open Peer Review.
Conclusion: many persistent and crucial questions remain poorly explored. We suggest ways to move forward and invite collaboration!
blogs.lse.ac.uk/impactofsoci...
21.03.2024 12:16