01.03.2026 20:38
a truly remarkable series of graphs
01.03.2026 20:38
from now on will be treating all funding proposal and paper rejections as "definite wins"
28.02.2026 11:55
UK govts, current and recent past, presiding over irreversible damage to the research infrastructure of the country - capacity building is infinitely harder than capacity culling
just spending their time pointing fingers at each other, as if any of that matters any more
there is a perfectly good summary of all papers I have authored or co-authored
this is commonly known as the 'abstract'
the abstract is guaranteed to convey authorial voice *by definition* because it was written by the paper's authors
/6
second, why is 'factual error' the only relevant criterion for authors objecting to AI summaries?
all AI summaries by definition take agency and authorial voice away from authors
I am not sure why the ACM seems to be intent on diluting this authorial voice
/5
taking the new policy as read, I'm not sure it should be my job to locate factual errors in LLM generated summaries
ACM is asking authors now to do the job of correcting LLM output, which is a fool's errand given that LLMs are stochastic and therefore cannot be guaranteed free of error
/4
however, it's now *no longer possible to opt-out or remove AI summarisation* from any of your ACM published papers, because apparently the ACM has changed the policy
per a recent response from dl-team@hq.acm.org, I was told "The policy has been revised to allow only corrections to the summary"
/3
"I'm an author. How do I opt out of my content being summarized?
To have a summary removed from an article, please email your request to dl-team@hq.acm.org using your institutional email address."
/2
🧵
in Dec 2025 the ACM Digital Library released its so-called AI tools, which include "AI summarisation" (with ~100- and ~400-word versions) and "AI podcasts"
in Dec, their webpage (dl.acm.org/generative-a...) on these new "features" stated a basic opt-out method
/1
I'd put money on the FAQ itself being generated text, with content-free phrases like that surfacing
23.02.2026 22:57
"emphasizes information that researchers are likely to value, including an article's results and limitations"
astounding insights - researchers are desperate to get their hands on limitations and results because that information is absolutely never contained in the papers - praise be AI summaries
this is a change from what was on the FAQ in Dec 2025, and I quote (copied from dl.acm.org/generative-a...):
"To have a summary removed from an article, please email your request to dl-team@hq.acm.org using your institutional email address."
they emailed me back, and are now stating that their policy is as follows:
"ACM does not provide the option to opt-out of the AI summaries"
well that's interesting because I got nothing back via email (sent request to them end of Dec)
I still appear to have garbage AI slop summaries associated with my papers
are you suggesting we have to specify a list of DOIs for them to act on?
cc @acm-sigchi.bsky.social
how long before "AI tool use" is just another e-learning task, where you have to click thru pages of videos featuring "Dave" and "Sarah", complete a questionnaire at min 80% correct, while your line manager constantly emails everyone begging to please get the department compliance totals up?
23.02.2026 07:36
two interesting, related pieces by @tante.cc about how we treat technology (in this case LLMs specifically, but it's more general than that) and also (IMO) how social media is a terrible venue for the doing of that discourse / critique
tante.cc/2026/02/20/a...
tante.cc/2026/02/20/o...
the only constant is how utterly self-important and self-absorbed these people are
www.bbc.co.uk/news/article...
I made a long video about technocapitalism and AI music
youtu.be/U8dcFhF0Dlk?...
I watched a bunch of AI sessions at the WEF so you wouldn't have to.
But you should read this.
it would be a real shame if the people involved in the original decisions around purchase were held to account by the media @resprofnews.bsky.social ...
03.02.2026 17:16
Screenshot of a THE story with the headline "Nottingham posts £85 million deficit as value of campus plummets"
Screenshot of a NottinghamshireLive news story with the headline "University of Nottingham's disastrous 'vanity project' campus could be sold for just £14."
Now I'm not an economist, but I feel like this is not great economic news.
03.02.2026 15:34
the idea that AI criticism is / has had much concrete effect beyond perhaps some minor chin stroking from those in power about 'regulation' is absurd when you consider the enormous power differentials between AI's promoters and everyone else
02.02.2026 12:49
what exactly is going on at UKRI?
02.02.2026 08:25
none of this says anything about how previous cross-disciplinary calls worked out in practice - the pros, and the unintended side effects and inequities
if UKRI is to make an argument for more UK research being "bucketised" like this, it had better be clear about those practical features
bye
30.01.2026 07:53
exactly the same process is happening with uni managers
29.01.2026 11:42
I think I might take one of these "under 20 minutes" AI skills courses the UK Govt. seems very keen on everyone doing and live-post it here... Maybe we can all learn something together! The press release sends me to aiskillshub.org.uk/aiskillsboost/ - let's go and see!
28.01.2026 12:07
Phillipson is framing the shoveling of money into AI edutech binfire companies as 'progressive'
29.01.2026 11:36
this is just the present iteration of "everyone should learn to code"
28.01.2026 17:04 β π 1 π 0 π¬ 0 π 0basically all of LinkedIn
26.01.2026 11:55