The thing I really don't get about academics using LLMs to generate ideas/write papers is: that's the part of my work I 💯 love: creative, challenging & fulfilling. I pay the piper by sitting in committees, doing loads of paperwork & abiding by lots of procedures. Why give up the best part of the job?
03.03.2026 20:27
a truly remarkable series of graphs
01.03.2026 20:38
from now on will be treating all funding proposal and paper rejections as "definite wins"
28.02.2026 11:55
UK govs, current and recent past, presiding over irreversible damage to the research infrastructure of the country - capacity building is infinitely harder than capacity culling
just spending their time pointing fingers at each other, as if any of that matters any more
25.02.2026 20:35
there is a perfectly good summary of all papers I have authored or co-authored
this is commonly known as the 'abstract'
the abstract is guaranteed to convey authorial voice *by definition* because it was written by the paper's authors
/6
24.02.2026 18:02
second, why is 'factual error' the only relevant criterion for authors objecting to AI summaries?
all AI summaries by definition take agency and authorial voice away from authors
I am not sure why the ACM seems to be intent on diluting this authorial voice
/5
24.02.2026 18:02
taking the new policy as read, I'm not sure it should be my job to locate factual errors in LLM-generated summaries
the ACM is now asking authors to do the job of correcting LLM output, which is a fool's errand given that LLMs are stochastic and therefore cannot be guaranteed free of error
/4
24.02.2026 18:01
however, it's now *no longer possible to opt out of or remove AI summarisation* from any of your ACM published papers, because apparently the ACM has changed the policy
per a recent response from dl-team@hq.acm.org, I was told "The policy has been revised to allow only corrections to the summary"
/3
24.02.2026 18:00
"I'm an author. How do I opt out of my content being summarized?
To have a summary removed from an article, please email your request to dl-team@hq.acm.org using your institutional email address."
/2
24.02.2026 18:00
🧵
in Dec 2025 the ACM Digital Library released its so-called AI tools, which include "AI summarisation" (with ~100- and ~400-word versions) and "AI podcasts"
in Dec, their webpage (dl.acm.org/generative-a...) on these new "features" stated a basic opt-out method
/1
24.02.2026 17:59
I'd put money on the FAQ itself being generated text, with content-free phrases like that surfacing
23.02.2026 22:57
"emphasizes information that researchers are likely to value, including an article's results and limitations"
astounding insights - researchers are desperate to get their hands on limitations and results because that information is absolutely never contained in the papers - praise be AI summaries
23.02.2026 22:56
Artificial Intelligence Tools | ACM Digital Library
this is a change from what was on the FAQ in Dec 2025, and I quote (copied from dl.acm.org/generative-a...):
"To have a summary removed from an article, please email your request to dl-team@hq.acm.org using your institutional email address."
23.02.2026 19:53
they emailed me back, and are now stating that their policy is as follows:
"ACM does not provide the option to opt-out of the AI summaries"
23.02.2026 19:53
well that's interesting, because I got nothing back via email (sent my request to them end of Dec)
I still appear to have garbage AI slop summaries associated with my papers
are you suggesting we have to specify a list of DOIs for them to act on?
cc @acm-sigchi.bsky.social
23.02.2026 13:14
how long before "AI tool use" is just another e-learning task, where you have to click thru pages of videos featuring "Dave" and "Sarah", complete a questionnaire at min 80% correct, while your line manager constantly emails everyone begging to please get the department compliance totals up?
23.02.2026 07:36
YouTube video by Adam Neely
Suno, AI Music, and the Bad Future
I made a long video about technocapitalism and AI music
youtu.be/U8dcFhF0Dlk?...
02.02.2026 20:23
it would be a real shame if the people involved in the original decisions around purchase were held to account by the media @resprofnews.bsky.social ...
03.02.2026 17:16
Screenshot of a THE story with the headline "Nottingham posts £85 million deficit as value of campus plummets"
Screenshot of a NottinghamshireLive news story with the headline "University of Nottingham's disastrous 'vanity project' campus could be sold for just £14."
Now I'm not an economist, but I feel like this is not great economic news.
03.02.2026 15:34
the idea that AI criticism has had much concrete effect beyond perhaps some minor chin-stroking from those in power about 'regulation' is absurd when you consider the enormous power differentials between AI's promoters and everyone else
02.02.2026 12:49
what exactly is going on at UKRI?
02.02.2026 08:25
none of this says anything about how previous cross-disciplinary calls worked out in practice - the pros, and the unintended side effects and inequities
if UKRI is to make an argument for more UK research being "bucketised" like this, it had better be clear about those practical features
01.02.2026 19:44
bye
30.01.2026 07:53
exactly the same process is happening with uni managers
29.01.2026 11:42
AI Skills Boost - AI Skills Hub
I think I might take one of these "under 20 minutes" AI skills courses the UK Govt. seems very keen on everyone doing and live-post it here... Maybe we can all learn something together! The press release sends me to aiskillshub.org.uk/aiskillsboost/ - let's go and see!
28.01.2026 12:07
Phillipson is framing the shoveling of money into AI edutech binfire companies as 'progressive'
29.01.2026 11:36
this is just the present iteration of "everyone should learn to code"
28.01.2026 17:04