tonyRH.bsky.social

@tonyrh.bsky.social

450 Followers  |  112 Following  |  23 Posts  |  Joined: 30.01.2024

Posts by tonyRH.bsky.social (@tonyrh.bsky.social)

A new PNAS paper finds that polarization increased immediately after the release of Lady Gaga’s β€œJust Dance” and the advent of the late-2000s electro-pop era, which both appeared around the same year, 2008.

15.12.2025 08:21 β€” πŸ‘ 196    πŸ” 37    πŸ’¬ 12    πŸ“Œ 3
5 Things We Learned About Peer Review in 2024 - Absolutely Maybe: Back in 2019 I wrote a couple of posts summarizing what we had learned from research about peer review at journals. Since…

My belated annual roundup of things we learned about peer review in 2024. Including structured peer review, reviewers' uncertainty, & "uselessly elongated review bias." Now online at @plos.org

absolutelymaybe.plos.org/2025/04/28/5...

Featuring @mariomalicki.bsky.social @aidybarnett.bsky.social

28.04.2025 07:04 β€” πŸ‘ 13    πŸ” 9    πŸ’¬ 1    πŸ“Œ 0

πŸ“…On 27th November 2024, TIER2 and Taylor & Francis Group hosted a workshop at the 19th Munin Conference on Scholarly Publishing in TromsΓΈ, Norway, addressing research reproducibility. A report on the workshop is now available on TIER2's website.πŸ’‘
πŸ”—Learn more here: tier2-project.eu/news/tier2-a...

24.04.2025 11:21 β€” πŸ‘ 2    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

Sadly spent most of this very sunny day writing a narrative CV for a grant application. Jeez, that's a lot of extra work that I'm not sure anyone is likely to read too deeply.

What do others think of them?

05.04.2025 15:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

A new TIER2 preprint, authored by Serge Horbach, Nicki Lisa Cole, Simone Kopeinik, Barbara Leitner, Tony Ross-Hellauer and Joeri Tijdink, explores the barriers and enablers for reproducibility in research.
πŸ”—Learn more: tier2-project.eu/news/new-tie...

14.03.2025 09:56 β€” πŸ‘ 2    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0
GenAI synthetic data create ethical challenges for scientists. Here’s how to address them. | PNAS

"Conflation of synthetic #GenAI and real data could corrupt the research record; degrade the #quality and #reproducibility of scientific data and analytical methods; and, ironically, sabotage the training of AI models." www.pnas.org/doi/10.1073/... #synthetic_data #research_integrity

26.02.2025 19:42 β€” πŸ‘ 5    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

Was reminded of this ultimate Venn today.

17.02.2025 15:01 β€” πŸ‘ 135    πŸ” 31    πŸ’¬ 4    πŸ“Œ 1

New paper by Thomas Klebel "Investigating patterns of knowledge production in research on three UN sustainable development goals", just published in Online Information Review! doi.org/10.1108/OIR-...

31.01.2025 10:20 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Since I see (again) many new people here, here is the meta-research and open science starter pack 2.

29.01.2025 06:01 β€” πŸ‘ 29    πŸ” 13    πŸ’¬ 2    πŸ“Œ 0

New Preprint! "Reproducibility and replicability of qualitative research: An integrative review of concepts, barriers and enablers" - osf.io/preprints/me...

A nice output from our TIER2 project, led by Nicki Lisa Cole :)

07.01.2025 10:42 β€” πŸ‘ 18    πŸ” 8    πŸ’¬ 0    πŸ“Œ 0
Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities - Abstract: Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaboration and sharing re...

6/ πŸ“š Read the full study:

πŸ”— β€œOpen Science at the Generative AI Turn”
Published in Quantitative Science Studies (MIT Press):
πŸ‘‰ doi.org/10.1162/qss_...

Let’s work together to ensure that #AI & #GenAI align with the values of #OpenScience!

17.12.2024 10:45 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

5/ 🌍 A Call for Responsible Use
To ensure GenAI aligns with Open Science values:
- Researchers must integrate GenAI with care and scrutiny.
- Developers need to create transparent, unbiased tools.
- Policymakers must balance innovation and risk.

17.12.2024 10:44 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

4/ πŸ” The Risk
Despite the potential, there are challenges:
❌ Opaque β€œblack box” models undermine transparency
❌ Bias in training data risks reinforcing inequalities
❌ High computational demands raise sustainability concerns.

17.12.2024 10:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

3/ ✨ The Opportunity
GenAI can:
βœ… Increase efficiency of enhanced documentation
βœ… Simplify complex science into accessible language
βœ… Break language barriers through translation
βœ… Enable public participation in research
βœ… Promote inclusivity, accessibility, and understanding.

17.12.2024 10:35 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

2/ TL;DR. Mohammad Hosseini, Serge Horbach, @kristiholmes.bsky.social and I explore GenAI's enormous potential to enhance accessibility and efficiency in science. But we emphasise that to do so, GenAI must embody Open Science principles of openness, fairness, and transparency.

17.12.2024 10:35 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0
Screenshot of paper "Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities" by Mohammad Hosseini, Serge P. J. M. Horbach, Kristi Holmes and Tony Ross-Hellauer 

Quantitative Science Studies 1–24.
https://doi.org/10.1162/qss_a_00337

Abstract
Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaboration and sharing results. Due to this relationship, the characteristics of the employed technologies directly impact OS objectives. Generative Artificial Intelligence (GenAI) is increasingly used by researchers for tasks such as text refining, code generation/editing, reviewing literature, and data curation/analysis. Nevertheless, concerns about openness, transparency, and bias suggest that GenAI may benefit from greater engagement with OS. GenAI promises substantial efficiency gains but is currently fraught with limitations that could negatively impact core OS values, such as fairness, transparency, and integrity, and may harm various social actors. In this paper, we explore the possible positive and negative impacts of GenAI on OS. We use the taxonomy within the UNESCO Recommendation on Open Science to systematically explore the intersection of GenAI and OS. We conclude that using GenAI could advance key OS objectives by broadening meaningful access to knowledge, enabling efficient use of infrastructure, improving engagement of societal actors, and enhancing dialogue among knowledge systems. However, due to GenAI’s limitations, it could also compromise the integrity, equity, reproducibility, and reliability of research. Hence, sufficient checks, validation, and critical assessments are essential when incorporating GenAI into research workflows.


1/ 🚨 NEW PAPER! β€œOpen Science at the Generative AI Turn”
In a new study just published in Quantitative Science Studies, we explore how GenAI both enables and challenges Open Science, and why GenAI will benefit from adopting Open Science values. 🧡
doi.org/10.1162/qss_...
#OpenScience #AI #GenAI

17.12.2024 10:34 β€” πŸ‘ 16    πŸ” 11    πŸ’¬ 1    πŸ“Œ 1

It's presumptuous, but I don't mind that so much. What I really dislike is when I need to use those details and validate the new account just in order to decline.

11.12.2024 11:18 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

8/ Read the full paper here for insights on how to reshape research evaluation systems for fairness and effectiveness: doi.org/10.1093/rese...

05.12.2024 11:22 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

7/ We close with recommendations: clarify core purposes of research assessment, use shared frameworks, train assessors on bias, reduce over-frequent assessments, and move beyond binary thinking on qualitative/quantitative methods.

05.12.2024 11:22 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

6/ We examine the β€œperformativity of assessment criteria,” revealing a tension between rigid/flexible criteria and how transparently they are communicated. Transparent, equitable frameworks are vital to align formal criteria with the realities of research evaluation.

05.12.2024 11:22 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

5/ Respondents noted that beyond metrics, informal factorsβ€”social dynamics, politics, and demographicsβ€”play key roles in assessment outcomes. These hidden criteria emerge in opaque processes, granting assessors significant flexibility.

05.12.2024 11:22 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

4/ Through qualitative analysis of free-text responses from 121 international researchers, we highlight a major gap between formal evaluation criteria and their practical application.

05.12.2024 11:22 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

3/ How do current systems enable β€œhidden factors” like cronyism or evaluator biases, and how might these change under proposed reforms? Our study examines researchers' perceptions of social and political influences on assessment processes.

05.12.2024 11:22 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

2/ Reform of research assessment, especially to avoid over-quantification and empower qualitative assessment, is a hot topic. Change is coming. But how do we balance broader criteria to value activities beyond publishing/funding, peer review reliance, and merit-based rewards?

05.12.2024 11:22 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

1/ The full paper is available at: doi.org/10.1093/rese...

05.12.2024 11:22 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

New Paper! β€œUnderstanding the social and political dimensions of research(er) assessment: evaluative flexibility and hidden criteria in promotion processes at research institutes”, just published in Research Evaluation by me, @naubertbonn.bsky.social & Serge Horbach. Thread below!

05.12.2024 11:22 β€” πŸ‘ 7    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0

Beware of the #streetlight effect in measuring #openscience #impact @tonyrh.bsky.social #munin2024 conference

28.11.2024 14:04 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Brilliant overview of #openscience by @tonyrh.bsky.social at the #munin2024 conference: particularly liked the β€œunintended consequences” side. (Me, not Tony) thinking about the $12,290 APC asked by #SpringerNature for one single article to be #openaccess - not equitable at all.

28.11.2024 13:56 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Fantastic thread, thank you Bart!

02.04.2024 13:52 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The LSE Impact Blog covering our recent scoping review on evidence regarding the uptake, attitudes and efficacy of Open Peer Review.

Conclusion: many persistent and crucial questions remain poorly explored. We suggest ways to move forward and invite collaboration!

blogs.lse.ac.uk/impactofsoci...

21.03.2024 12:16 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0