Sven E. Hug

@svenhug.bsky.social

Research Evaluation. Scientometrics. Peer Review. Science Policy. Advisor & Evaluator Swiss Science Council πŸ‡¨πŸ‡­πŸ›οΈ. Associated Member Robert K. Merton Center for Science Studies πŸ”­πŸ§ͺ. Personal account.

1,396 Followers  |  1,052 Following  |  20 Posts  |  Joined: 07.11.2023

Latest posts by svenhug.bsky.social on Bluesky

We’re now OpenAlex - OpenAlex blog For years, we’ve been working under the name OurResearch. That name sat at the top of our org chart, with three child projects under it: OpenAlex, Unpaywall, and Unsub. Starting today, things are simp...

OurResearch rebrands to OpenAlex.

blog.openalex.org/were-now-ope...

29.09.2025 20:47 β€” πŸ‘ 15    πŸ” 9    πŸ’¬ 0    πŸ“Œ 1
Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation Social media platforms have been widely linked to societal harms, including rising polarization and the erosion of constructive debate. Can these problems be mitigated through prosocial interventions?...

We built the simplest possible social media platform. No algorithms. No ads. Just LLM agents posting and following.

It still became a polarization machine.

Then we tried six interventions to fix social media.

The results were… not what we expected.

arxiv.org/abs/2508.03385

06.08.2025 08:24 β€” πŸ‘ 287    πŸ” 99    πŸ’¬ 13    πŸ“Œ 42

That being said, I'm looking forward to the insights from the 'referee consensus model'.
2/2

28.07.2025 09:38 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Not very surprising to me: in traditional journal peer review, the editor(s) are expected to reconcile the views of the individual referees. This provides an additional perspective while keeping the decision-making power with the editor(s).
1/2

28.07.2025 09:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

😊

04.07.2025 08:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We often have to judge who is knowledgeableβ€”precisely when we are not. Can humans really do that? Our new paper in Psychological Science shows that, surprisingly, we can. drive.google.com/file/d/1b15E...

02.06.2025 11:42 β€” πŸ‘ 101    πŸ” 30    πŸ’¬ 5    πŸ“Œ 2

There is a large literature on grant peer review but afaik nobody has looked at review scores like you have. Interesting!

09.05.2025 19:52 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

who says that science doesn't generate profit?

24.04.2025 01:38 β€” πŸ‘ 51    πŸ” 24    πŸ’¬ 5    πŸ“Œ 7

Thanks for the quick and clear answer! πŸ‘

08.03.2025 10:59 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Why would one want a package solution? Why not?

08.03.2025 09:14 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Principles of Evaluative Bibliometrics in a DORA/CoARA Context The document, "Principles of Evaluative Bibliometrics in a DORA/CoARA Context," provides a comprehensive examination of evaluative bibliometrics, exploring its role within research evaluation. It begi...

Advocates of research assessment reforms and bibliometricians sometimes have a rocky and heated relationship. πŸ”₯

This paper, written by three bibliometricians, attempts to reconcile the two camps.

What are your thoughts on this issue?

#CoARA
#DORA

zenodo.org/records/1467...

31.01.2025 10:40 β€” πŸ‘ 5    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

Out now in Nature Human Behaviour: Our 68-country #survey on public attitudes to #science πŸ“£
It shows: People still #trust scientists and support an active role of scientists in society and policy-making. #OpenAccess available here: www.nature.com/articles/s41... @natureportfolio.bsky.social
(1/13)

20.01.2025 10:27 β€” πŸ‘ 363    πŸ” 164    πŸ’¬ 7    πŸ“Œ 21
Screenshot of paper "Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities" by Mohammad Hosseini, Serge P. J. M. Horbach, Kristi Holmes and Tony Ross-Hellauer 

Crossmark: Check for Updates
Author and Article Information
Quantitative Science Studies 1–24.
https://doi.org/10.1162/qss_a_00337

Abstract
Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaboration and sharing results. Due to this relationship, the characteristics of the employed technologies directly impact OS objectives. Generative Artificial Intelligence (GenAI) is increasingly used by researchers for tasks such as text refining, code generation/editing, reviewing literature, and data curation/analysis. Nevertheless, concerns about openness, transparency, and bias suggest that GenAI may benefit from greater engagement with OS. GenAI promises substantial efficiency gains but is currently fraught with limitations that could negatively impact core OS values, such as fairness, transparency, and integrity, and may harm various social actors. In this paper, we explore the possible positive and negative impacts of GenAI on OS. We use the taxonomy within the UNESCO Recommendation on Open Science to systematically explore the intersection of GenAI and OS. We conclude that using GenAI could advance key OS objectives by broadening meaningful access to knowledge, enabling efficient use of infrastructure, improving engagement of societal actors, and enhancing dialogue among knowledge systems. However, due to GenAI’s limitations, it could also compromise the integrity, equity, reproducibility, and reliability of research. Hence, sufficient checks, validation, and critical assessments are essential when incorporating GenAI into research workflows.


1/ 🚨 NEW PAPER! β€œOpen Science at the Generative AI Turn”
In a new study just published in Quantitative Science Studies, we explore how GenAI both enables and challenges Open Science, and why GenAI will benefit from adopting Open Science values. 🧡
doi.org/10.1162/qss_...
#OpenScience #AI #GenAI

17.12.2024 10:34 β€” πŸ‘ 16    πŸ” 11    πŸ’¬ 1    πŸ“Œ 1
Renovating the Theatre of Persuasion. ManyLabs as Collaborative Prototypes for the Production of Credible Knowledge | Preprint screenshot


Renovating the Theatre of Persuasion. ManyLabs as Collaborative Prototypes for the Production of Credible Knowledge; a new preprint & thread.
In it, I'll say a little about theatres of persuasion, and why new collaborative structures change how they look. osf.io/preprints/me... #sts #metascience 1/

03.12.2024 09:12 β€” πŸ‘ 41    πŸ” 23    πŸ’¬ 2    πŸ“Œ 0

πŸ˜‚πŸ€£

28.11.2024 21:36 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Does training of peer reviewers work?

"Evidence from 10 RCTs suggests that training peer reviewers may lead to little or no improvement in the quality of peer review."

Cochrane systematic review πŸ”“: www.cochranelibrary.com/cdsr/doi/10....

28.11.2023 18:04 β€” πŸ‘ 27    πŸ” 15    πŸ’¬ 2    πŸ“Œ 0

With the influx of new users, I’d love to see a community around #sciencepolicy #scipol #bibliometrics #scientometrics #scisci #metascience. Please share if you want to be part of it, use the tags to find others, or just say hi πŸ‘‹

21.11.2024 21:13 β€” πŸ‘ 35    πŸ” 16    πŸ’¬ 4    πŸ“Œ 0
The forced battle between peer-review and scientometric research assessment: Why the CoARA initiative is unsound Abstract. Endorsed by the European Research Area, a Coalition for Advancing Research Assessment (CoARA), primarily composed of research institutions and fu

The perennial dispute between quantitative and qualitative research assessment is once again heating up.

The match-up this time:
Evaluative scientometrics
vs
CoARA

πŸ”“The forced battle between peer-review and scientometric research assessment

academic.oup.com/rev/advance-...

17.05.2024 14:43 β€” πŸ‘ 8    πŸ” 4    πŸ’¬ 0    πŸ“Œ 0
Challenges in Research Policy This open access volume examines significant challenges in research policy, offering expert insights and policy recommendations on critical issues.

New edited volume:
Challenges in Research Policy
And it's open access! πŸ‘

link.springer.com/book/10.1007...

15.11.2024 13:25 β€” πŸ‘ 17    πŸ” 9    πŸ’¬ 0    πŸ“Œ 0

It's raining preprints, hallelujah 🎢

Here is my latest preprint (review article) >> Sustaining the β€˜frozen footprints’ of scholarly communication through open citations

osf.io/preprints/so...

21.11.2024 10:34 β€” πŸ‘ 8    πŸ” 1    πŸ’¬ 0    πŸ“Œ 1
20 things you didn’t know about Google Scholar Google Scholar celebrates two decades of breaking down barriers to academic research and making it accessible to everyone, everywhere.

Google Scholar 20th Anniversary: 20 things you didn't know about Google Scholar
blog.google/outreach-ini...

19.11.2024 19:54 β€” πŸ‘ 15    πŸ” 10    πŸ’¬ 1    πŸ“Œ 0

Hello Bart! πŸ‘‹πŸ»
Taking the opportunity to express my appreciation for your research - and for contributing to the beautiful blue place! πŸ¦‹

14.11.2024 19:06 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

(6) design and test interventions to make peer review less conservative,
(7) assess whether these interventions make a real difference to scientific progress.

15.02.2024 22:12 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

To make progress, IMVHO we should
(1) rigorously define what we mean by risky/conservative research,
(2) create valid measures for it,
(3) provide robust evidence for conservatism,
(4) theorize mechanisms that may cause conservatism,
(5) test these theories,

15.02.2024 22:12 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

If we continue like this, peer review research will forever remain highly fragmented, and there will be little progress.

15.02.2024 22:11 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Third, the authors seem to ignore research done outside their own field (economics). For the record, research on peer review began in the 1970s, and not in economics.

15.02.2024 22:11 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Second, the Gross-Bergstrom theory, which offers a powerful explanation for conservatism in grant peer review, is nowhere mentioned.

15.02.2024 22:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
