
James Garforth

@jamesgarforth.bsky.social

Lecturer in Ethics, Society and Professionalism in the School of Informatics at the University of Edinburgh. Proud associate of the Centre for Technomoral Futures.

115 Followers  |  252 Following  |  37 Posts  |  Joined: 29.08.2023

Latest posts by jamesgarforth.bsky.social on Bluesky

You heard it here first folks, Rachel only ever psychically levitates her many premium bonds!

27.11.2025 10:41 — 👍 0    🔁 0    💬 0    📌 0

Sorry, but why does every single person who thinks of this believe they’re the only person in the world to think of this? FWIW, as others have said, yes, students can and will use LLMs to reflect on the LLM’s initial response. (That’s also not what a primary source is but whatever.)

24.11.2025 15:11 — 👍 104    🔁 14    💬 8    📌 1

Two years? Does that mean you went for the golden retriever? :D

24.11.2025 12:40 — 👍 0    🔁 0    💬 1    📌 0
Edinburgh university staff back industrial action again in longstanding dispute over cuts and redundancies Staff at the University of Edinburgh have today (Tuesday) backed industrial action for a second time in a longstanding dispute over £140million cuts, job losses and compulsory redundancies.

There was a moment I thought perhaps we had movement here, but the @ucuedinburgh.bsky.social strike is on. You can read more about why here:

www.ucu.org.uk/article/1423...

Please write and call your MSPs on Monday: www.ucuedinburgh.org.uk/lobby-your-msp

14.11.2025 19:11 — 👍 39    🔁 27    💬 1    📌 0

Maybe we can get a hologram to go on tour.

14.11.2025 18:19 — 👍 0    🔁 0    💬 0    📌 0

Identifying flaws in GenAI unfortunately offers a pretext for claims that perfecting the product is just a matter of time & money. So pointing to chatbots’ role in, say, suicides can only go so far if we don’t also identify the systemic, irresolvable lack of GenAI’s human commitment bc math has none

08.11.2025 01:51 — 👍 246    🔁 75    💬 2    📌 8

📣 I am hiring a postdoc! aial.ie/hiring/postd...

applications from suitable candidates who are passionate about investigating the use of genAI in public service operations with the aim of keeping governments transparent and accountable are welcome

pls share with your networks

30.10.2025 19:51 — 👍 144    🔁 162    💬 3    📌 9
Ruha Benjamin on a stage standing in front of a projected screen which reads:

UNLEARNING 

INTELLIGENCE as Smartness
INNOVATION as Social Progress
TECHNOLOGY as Self-Propelled
DEEP LEARNING as Statistical Strength
POWER as Subjugation
IMAGINATION as Superfluous
HUMAN NATURE as Self-Interested


@ruha9.bsky.social asking us to unlearn some of the bullshit silicon valley and western thinking is predicated on

30.10.2025 14:23 — 👍 166    🔁 48    💬 4    📌 1

“what radicalized you” idk paying attention

29.10.2025 17:03 — 👍 12972    🔁 4252    💬 103    📌 126
‘Change course now’: humanity has missed 1.5C climate target, says UN head Exclusive: ‘Devastating consequences’ now inevitable but emissions cuts still vital, says António Guterres in sole interview before Cop30

As the single most important news story this year, I can't wait to see the detailed and central coverage this will get in every media outlet we have

28.10.2025 01:00 — 👍 2066    🔁 906    💬 43    📌 68

I didn't know we had a call scheduled

27.10.2025 18:22 — 👍 1    🔁 0    💬 1    📌 0

😢

27.10.2025 18:22 — 👍 1    🔁 0    💬 0    📌 0

A stopped train is on time twice a day?

27.10.2025 18:20 — 👍 1    🔁 0    💬 1    📌 0

If students believe they are paying universities for “information,” they shouldn’t go

16.10.2025 18:49 — 👍 57    🔁 6    💬 7    📌 1

This is amazing and very well deserved!

16.10.2025 12:18 — 👍 1    🔁 0    💬 0    📌 0

We're hiring at BRAID UK!

Exciting opportunity for a Research Associate to lead work around the social return on AI investment, working closely with @ewaluger.bsky.social and @shannonvallor.bsky.social

www.jobs.ac.uk/job/DPA010/r...

Applications open. Closes 7 Nov, Interviews 9 Dec.

13.10.2025 10:03 — 👍 3    🔁 5    💬 1    📌 0

I enjoyed our previous conversation about this, and would happily have another. I offer as tribute the following puns: "Impromptu Engineering" and "Pomp Engineering"

09.10.2025 22:58 — 👍 1    🔁 0    💬 1    📌 0

the good news is, it's almost the weekend, the bad news is that i've once again realised that i picked a topic of research that requires me to be at least mildly infuriated about 70% of the time

03.10.2025 15:46 — 👍 26    🔁 2    💬 1    📌 0

Damn, sorry for sapping this bit of magic from the world!

01.10.2025 08:56 — 👍 1    🔁 0    💬 0    📌 0

AI has gone too far

01.10.2025 08:21 — 👍 0    🔁 0    💬 0    📌 0

On behalf of other universities: it's nice for Oxford to be tanking its own respectability.

19.09.2025 16:13 — 👍 16    🔁 0    💬 0    📌 0
Anti-Trump Protesters Take Aim at ‘Naive’ US-UK AI Deal Thousands marched in London to protest President Donald Trump’s second state visit. Among them were many environmental activists unhappy with Britain’s new AI deal with the US.

I covered the protests in London against Trump for @wired.com. Protesters aren't convinced by the AI deal from US tech giants: They want to know what the UK is giving them in exchange for up to $45bn in investment and where the power for data centers will come from www.wired.com/story/climat...

18.09.2025 11:12 — 👍 193    🔁 55    💬 9    📌 4

Very likely I'm doing the same thing next semester, we should share notes!

14.09.2025 08:09 — 👍 1    🔁 0    💬 0    📌 0
Abstract: Under the banner of progress, products have been uncritically adopted or
even imposed on users — in past centuries with tobacco and combustion engines, and in
the 21st with social media. For these collective blunders, we now regret our involvement or
apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we
are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not
considered a valid position to reject AI technologies in our teaching and research. This
is why in June 2025, we co-authored an Open Letter calling on our employers to reverse
and rethink their stance on uncritically adopting AI technologies. In this position piece,
we expound on why universities must take their role seriously to a) counter the technology
industry’s marketing, hype, and harm; and to b) safeguard higher education, critical
thinking, expertise, academic freedom, and scientific integrity. We include pointers to
relevant work to further inform our colleagues.


Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI
(black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are
in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are
both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and
Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf.
Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al.
2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).


Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms
are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 — 👍 3534    🔁 1801    💬 105    📌 340

"I love you"?

06.09.2025 09:02 — 👍 0    🔁 0    💬 0    📌 0

I do, tomorrow evening! Do you?

03.09.2025 07:44 — 👍 1    🔁 0    💬 1    📌 0

I'm going to be in the same struggle, if you want to form a support group :D

03.09.2025 07:34 — 👍 1    🔁 0    💬 1    📌 0
Slide: There are no shortcuts--'ugly social realities' 
Fear of being left out a fabricated & marketing public relations rhetoric 
Evidence, rigorous testing and evaluation 
AI in education = commercialization of a collective responsibility 
Outsourcing a social, civic, and democratic process of cultivating the coming generation to commercial and capitalist enterprise whose priority is profit


Mic drop from @abeba.bsky.social at UNESCO Digital Learning Week

02.09.2025 15:07 — 👍 181    🔁 84    💬 2    📌 8

The world needs more apple crumble. Be that hero for us.

31.08.2025 17:41 — 👍 1    🔁 0    💬 0    📌 0
