"I got my PhD by writing prompts instead of doing research, I'm winning"
got some bad news, there are still no jobs and now you also know nothing
Palantir CEO promises that his technology will reduce educated women's economic and political power newrepublic.com/post/207693/...
Join us for a webinar on Cultivating #FAIRdata across the disciplines on Mar 5th with colleagues @fairsharing.bsky.social and @researchdataall.bsky.social #RDAambassadors @allysonlister.bsky.social and Daniel Manrique-Castano. #OxFOS26 @ox.ac.uk
➡️ Register at go.glam.ox.ac.uk/OxFOS26_Regi...
“Helsinki hasn’t registered a single traffic-related fatality in the past year…Citing data that shows the risk of pedestrian fatality is cut in half by reducing a car’s speed from 40 to 30km/hr, city officials imposed the lower limit in most of Helsinki’s residential areas and city center in 2021.”
The inevitable next stage of academic publishers profiting from academics' work is here: scraping it for AI, then charging subscriptions for access to the AI summaries, and then again for the citations. Academic content assetization, as we called it in a recent paper. www.science.org/content/arti...
Microsoft said the bug meant that its Copilot AI chatbot was reading and summarizing paying customers' confidential emails, bypassing data protection policies.
1/ Finally wrote up “The Story of Mendeley”! Most people know the tool, few know about its rise and fall. The Mendeley story provides important clues for how to build self-sustaining AND non-extractive knowledge commons, which is why I think it deserves more attention 🧵
A review of the proceedings from four major computer-science conferences showed that none of those from 2021 contained fake citations, while all of those from 2025 did.
arxiv.org/abs/2602.058...
#AI #LLMs #Hallucinations #Misconduct #ScholComm
State of Open Data talk
Brian Nosek brings up something I've been thinking about.
Pre-AI - Benefits of data sharing often exceeded the costs (most people use your data for good)
Post-AI - People have real concerns about how their open data will be used for things they don't ethically agree with
Is the Group Chat coming back soon? Hope it isn't dropped forever :(
It's not unheard of to find errors in your data after publishing it. While it's not fun when this happens, this one-pager can help guide you through the process of updating data, code, and publications when errors are found.
osf.io/q4jre/files/...
Have you registered for Thursday's webinar? Huge interest in this one.
Still time to register.
Confronting the Challenges of Sensitive Open Data
#OpenScience #OpenData
katinamagazine.org/content/arti...
Ex-Meta chief AI scientist Yann LeCun has Lunch with the FT and in one of those instances so rare that you know he didn't sign an NDA, says exactly why as.ft.com/r/e503690d-8...
THIS THIS THIS. ALL OF THIS
THIS is why faculty resist technological strategies for teaching. There is no engaging with Edtech without this context
When we say "no, everything hasn't been digitized," I need you to understand that what we really mean is that virtually nothing has been digitized. This is because the realm of primary sources that historians use is incomprehensibly large.
"An academic discovers a paper attributed to him that does not exist has been cited 42 times" is a sentence with an actual referent in 2025.
One of the many reasons AI can't produce good writing is it can't hate its own writing. It can't think to itself "Maybe I'm illiterate" during the writing process. And that's essential
'Critical washing' is a very useful phrase I've just learned thanks to this paper. And while it relates to AI, we can apply it in other areas. Key for me would be safety, wellbeing and mental distress in education settings, where there's a huge amount of discussion and comparatively little action.
How do I get people to understand that high quality data collected with intention and analyzed by experts have even more potential to revolutionize health care?
In light of record submission rates and a large volume of AI-generated slop, SocArXiv recently implemented a policy requiring ORCIDs linked in the OSF profile of submitting authors, and narrowing our focus to social science subjects. Today we are taking two more steps:
/1
From @aip-publishing.bsky.social: cost of a peer reviewed article is $2700 (*before* you start giving back to the community). Would like to see a more detailed split, but it does align with estimates from eLife and EMBO #ScientificPublishing
www.stm-publishing.com/cost-transpa...
take me seriously
"[T]he root problem is arguably that #ChatGPT still pretends to be a person—a consistent entity that knows you... It assumes the mantle of human emotion and acts like it understands you and sympathizes with what you’re going through" arstechnica.com/ai/2025/11/o... #ethics #tech #design #business
After Coalition S disrupted scientific publishing, new plan retreats from strict requirements
#ScholarlyPublishing #OpenAccess
science.org/content/article/after-coalition-s-disrupted-scientific-publishing-new-plan-retreats-strict-requirements
“Like misinformation, misconduct is nothing new. But it’s become easier for authors to execute it with the aid of artificial intelligence and ‘paper mills,’ while being harder for publishers to contend with, given the volume of potential misconduct cases…” @bmj.com
If you ever wonder why your artist friend is so pissed off whenever they see AI slop used instead of real, human-created art, it's exactly what you think: