I’m given to understand the technology can be and has been harnessed to useful effect. Ok. But in aggregate it is also increasing the relative volume of garbage that editors and reviewers must now contend with. With limited time and resources, that leads invariably to inefficiencies. Not good! (2/2)
03.03.2026 20:42 —
👍 7
🔁 2
💬 1
📌 0
As a journal editor, the most noticeable consequence of LLMs I’ve observed is an increase in slop submissions. While this might show up on someone’s ledger as greater productivity, what it has meant for me is more toil at the desk review stage and challenges with workflow. (1/2)
03.03.2026 20:31 —
👍 13
🔁 2
💬 1
📌 0
Now on @socarxiv.bsky.social !
osf.io/preprints/so...
26.02.2026 17:18 —
👍 11
🔁 6
💬 1
📌 0
New version of our preprint on bioRxiv about bioRxiv up. Now that’s what I call a revision – 6 years after the first version!
It has new data about our progress and highlights from a massive user survey. 1/n
www.biorxiv.org/content/10.1...
26.02.2026 16:05 —
👍 78
🔁 43
💬 1
📌 4
Yet.
25.02.2026 20:28 —
👍 1
🔁 0
💬 0
📌 0
Prof. Cohen is just venting. This is not an official policy statement.
25.02.2026 20:28 —
👍 0
🔁 0
💬 1
📌 0
Where should SocArXiv draw the AI line?
A case that helps sort out the questions for an AI policy at SocArXiv.
It's one thing to ask authors to disclose how they used #AI. But it's another to have an articulate policy on what degrees and kinds of use make a work unacceptable, esp at a #preprint repository. Thoughtful take by @philipncohen.com, director of @socarxiv.bsky.social.
socopen.org/2026/02/22/w...
24.02.2026 14:23 —
👍 4
🔁 4
💬 0
📌 0
Nope.
23.02.2026 16:55 —
👍 512
🔁 43
💬 56
📌 84
It is definitely a risk factor for poor quality work of various kinds (one pattern we have observed is some folks who do this produce papers on very different topics very quickly...)
23.02.2026 03:11 —
👍 0
🔁 0
💬 1
📌 0
I'm not convinced by the case here - GenAI can't think, and writing is part of the thought process / refinement of ideas. Outsourcing all the "thinking" to GenAI leaves a gap where the core intellectual endeavour must be.
23.02.2026 00:50 —
👍 5
🔁 2
💬 1
📌 0
This is actually my take on the @socarxiv.bsky.social question of whether to set policy banning fully AI-generated submissions - literally writing the prose is an important part of connecting with readers. As in faith-based fellowship, the act of being in community is an important part of Science.
22.02.2026 23:19 —
👍 1
🔁 2
💬 1
📌 0
"no one should be looking at the corpus of SocArXiv as a repository of the best research. (...) There's a lot of bad work on it, which, unlike most journals & some preprint servers, we are not shy about admitting, because it doesn’t hurt the good work that is here, & we’re not trying to make money"
22.02.2026 15:47 —
👍 5
🔁 3
💬 0
📌 1
Where Should #SocArXiv Draw the #AI Line? (via @socarxiv.bsky.social) socopen.org/2026/02/22/w... #scholcomm #preprints #publishing
22.02.2026 16:15 —
👍 1
🔁 1
💬 0
📌 0
Raises important questions: how will #scholcomm adapt, which norms around publishing will emerge? How will research assessment work in the future?
What do we want "research" to look like?
Curious to see where @socarxiv.bsky.social will end up with their policy.
22.02.2026 06:37 —
👍 1
🔁 2
💬 0
📌 0
The blog post and the thread suggest that it was a well-written paper that, however, only marginally contributed to the scientific discourse. If not for the author's (authors'?) own declaration, it wouldn't have been removed.
The degree of AI use by the author(s) is imho a bad reason for removal.
22.02.2026 02:44 —
👍 6
🔁 1
💬 1
📌 0
I'm the moderator who flagged the submission Philip is talking about here. The goal is a conversation about balance of "AI" and human work in papers. I think I may write a post about my own opinions, but we social scientists need to have a broader conversation about this. In particular, 1/2
22.02.2026 01:49 —
👍 7
🔁 5
💬 1
📌 1
So it passes the minimal quality bar. And apparently non-hallucinatory. The basis for rejection would be the disclosed AI use. But we don't have that policy (yet). What should our policy be?
/4
22.02.2026 01:59 —
👍 1
🔁 1
💬 0
📌 0
If not for the AI, we would accept this. It's boring, unoriginal, and superficial. Its literature review is deficient. It has models, apparently done competently, and graphs and tables. Citations and in-text quotations appeared to be real. As a whole, it was coherent and relevant to existing research /3
22.02.2026 01:59 —
👍 1
🔁 1
💬 1
📌 1
A paper submission says the authors used AI tools to generate code, search the literature, consult on statistics, and draft text, but claims they formulated the question and theory, chose the variables and models, interpreted the results, verified all data, citations, and quotations, and shared the code /2
22.02.2026 01:59 —
👍 0
🔁 1
💬 1
📌 0
Where should SocArXiv draw the AI line?
A case that helps sort out the questions for an AI policy at SocArXiv.
From @philipncohen.com, a case for discussing our policy on AI-generated research.
22.02.2026 01:09 —
👍 13
🔁 9
💬 1
📌 5
[Bar chart: "SocArXiv submissions: 30 days to 16 Feb 2026. As seen by Notebook LM." Y-axis: Total Abstracts Count. Topics (with abstract counts) nested within disciplines: Sociology & Demography, Technology & Media, Health & Medicine, Political Science, Urban & Environmental Studies, Broad Disciplines, Economics & Finance, Education, Humanities, Uncategorized. Topic counts: Public Health & Epidemiology: 90; Gender & Identity: 54; Family & Life Course: 46; Communication & Digital Media: 42; Data Science & Research Methods: 40; Social Inequality & Class: 40; Governance & Democratic Institutions: 36; Climate Policy & Conservation: 36; Higher Ed, Careers & Management: 28; Artificial Intelligence & Ethics: 24; Philosophy & Theory: 23; Urban Planning & Infrastructure: 22; Macroeconomics & Development: 21; International Relations & Conflict: 18; Pedagogy & Instruction: 16; Mental Health & Wellbeing: 15; Migration & Integration: 14; Finance & Investment: 14; Archaeology & History: 14; Public Policy & Administration: 13; Behavioral & Microeconomics: 12; Social Networks & Community: 10; General/Misc: 7.]
Subject coding on @socarxiv.bsky.social is very messy - people choose any subject(s) and we don't police it. I had Notebook LM categorize the last ~500 titles+abstracts into 15-30 topics nested within disciplines. It made this. (Didn't check it beyond a few word frequencies, which look reasonable.)
21.02.2026 00:07 —
👍 7
🔁 1
💬 2
📌 0
Introducing COS’s 2026–2028 Strategic Plan: Advancing Lifecycle Open Science
The Center for Open Science (COS) has released its 2026–2028 Strategic Plan, outlining a focused, three-year direction for advancing openness, integrity, and trustworthiness in research.
This plan aligns our work around Lifecycle Open Science (LOS): research with publicly accessible plans, contents, and outcomes.
19.02.2026 14:13 —
👍 6
🔁 6
💬 0
📌 2
SocArXiv New Submissions at Record Pace socopen.org/2026/02/17/s... #preprints #repositories #scholcomm @socarxiv.bsky.social
18.02.2026 15:05 —
👍 3
🔁 4
💬 0
📌 0
In the old days, our hosts (Center for Open Science) used to provide regular data on uploads, downloads, and views. Now that they don't, I've been meaning to come up with a regular report mechanism. I don't know about APIs and JSON, but I am semi-fluent in Stata, so I tried using ChatGPT...
/1
17.02.2026 19:09 —
👍 6
🔁 2
💬 2
📌 1
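The report mechanism described above could also be scripted directly against OSF's public API. A minimal sketch in Python, assuming the v2 `/preprints/` endpoint and its `filter[provider]` parameter (see docs.osf.io); the JSON page structure below is illustrative, based on JSON:API conventions, not a verified response:

```python
# OSF v2 API endpoint for SocArXiv preprints (assumed; check docs.osf.io).
# A live run would fetch pages with urllib.request or requests, e.g.:
#   https://api.osf.io/v2/preprints/?filter[provider]=socarxiv&page[size]=100
BASE_URL = "https://api.osf.io/v2/preprints/?filter[provider]=socarxiv"


def count_preprints(pages):
    """Tally preprint records across paginated API responses.

    `pages` is a list of decoded JSON pages, each with a `data` list
    of records and a `links.next` cursor (None on the last page).
    """
    total = 0
    for page in pages:
        total += len(page.get("data", []))
    return total


# Illustrative two-page response (structure assumed, ids hypothetical).
sample_pages = [
    {"data": [{"id": "abc12"}, {"id": "def34"}],
     "links": {"next": BASE_URL + "&page=2"}},
    {"data": [{"id": "ghi56"}],
     "links": {"next": None}},
]

print(count_preprints(sample_pages))  # 3
```

A real monthly report would loop on `links.next` until it is null, then write the totals out (e.g. to a CSV that Stata can read).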
line graph showing cumulative socarxiv papers, starting in 2016, with an upward kink in 2025
SocArXiv accepted 3,162 new papers in 2025, an increase of 20% over 2024. As we are now accepting only social science papers, and turning away new papers on technical AI topics, this seems to reflect an increase in (human) social science. Thanks for sharing, social scientists!
17.02.2026 18:29 —
👍 17
🔁 6
💬 0
📌 1
Congratulations! The best paper-acceptance skeets are SocArXiv preprint paper acceptance skeets.
17.02.2026 18:26 —
👍 3
🔁 0
💬 0
📌 0
🚨My first preprint is out on @socarxiv.bsky.social!
How do AI-generated content labels shape what people see as authentic on social media — and do labels have unintended side effects? osf.io/preprints/so...
A thread 🧵
12.02.2026 14:06 —
👍 17
🔁 7
💬 7
📌 0
OSF
Very excited that @lauralindberg.bsky.social and I just posted a preprint of a paper that we've been working on for a while, looking at the (lack of) comparability of the most recent NSFG with prior waves of data. Feedback extremely welcome!
osf.io/preprints/so...
16.02.2026 15:02 —
👍 4
🔁 5
💬 3
📌 3
Surveys are important for miscarriage research, yet questionnaire design and sample characteristics can influence reported miscarriage prevalence
We compared miscarriage reporting across British surveys with @mccompans.bsky.social & @heinivaisanen.bsky.social - take a look at our pre-print here ⬇️
13.02.2026 09:55 —
👍 11
🔁 6
💬 1
📌 0
If they do we won't tell. We only share threads that reject the null, so to speak
11.02.2026 22:37 —
👍 3
🔁 0
💬 0
📌 0