
Chris Moran

@chrismoranuk.bsky.social

Guardian's head of editorial innovation. Focusing on AI in the newsroom

3,692 Followers  |  153 Following  |  106 Posts  |  Joined: 04.10.2023

Latest posts by chrismoranuk.bsky.social on Bluesky

MythBusting Large Language Models. Chatbots can be deceptive. How do LLMs actually work under the hood?

I love this piece from our brilliant lead engineer on newsroom AI, Joseph Smith. If you're non-technical but looking for a deeper understanding of how LLMs work beyond the basics, it's a fantastically clear and useful place to start medium.com/@joelochlann...

31.07.2025 09:42 — 👍 11    🔁 1    💬 0    📌 1
Andrej Karpathy: Software Is Changing (Again) - YouTube video by Y Combinator

This 40m video is a brilliant articulation of where we are now with LLMs and a thoughtful reflection on autonomy, agency, software and where we go next. I love the phrase "jagged intelligence" and using Memento's protagonist as an articulation of context windows. Do watch youtu.be/LCEmiRjPEtQ?...

19.06.2025 08:53 — 👍 2    🔁 0    💬 0    📌 0
A.I. Is Poised to Rewrite History. Literally.

This is an excellent, thoughtful, nuanced piece on the role AI can play in writing www.nytimes.com/2025/06/16/m...

16.06.2025 21:51 — 👍 2    🔁 0    💬 1    📌 0
Sam Coates Sky on X: "How AI lied and gaslit me: 🤯💣🔥 Won't lie: what happened to me this week made my jaw drop Do watch: https://t.co/mgmkvR9XSx" / X

So much energy is being put into the long-term impact of GenAI on journalism and into short-term issues like whether we're using the latest models. But this video illustrates the immediate, crucial issue: do journalists have a basic grasp of the ways LLMs work? x.com/SamCoatesSky...

09.06.2025 08:05 — 👍 5    🔁 1    💬 0    📌 0
In a dangerous era for journalism – a powerful new tool to help protect sources. Today, the Guardian, in collaboration with the University of Cambridge, launches Secure Messaging, a world-first from a media organisation.

"Secure Messaging is not just a tool for the Guardian. As part of our commitment to protecting the media and the public interest globally, the Guardian has published the source code for the technology that enables this system" www.theguardian.com/membership/2...

09.06.2025 07:01 — 👍 2    🔁 0    💬 0    📌 0

Eeeessh. This really, really isn't good. If user behaviour is changing so much and search is changing so much, and actually the changes are not really affecting clickthrough… why not let us see all that?

21.05.2025 17:29 — 👍 2    🔁 1    💬 0    📌 0
The 15 biggest announcements at Google I/O 2025. Catch up on what you missed.

Google I/O underlines that the battleground has moved on from models as the competitive edge. This second age is about integration, and with Workspace, Chrome, Search, Android and more (and the appetite to push Gemini everywhere), Google has a clear advantage www.theverge.com/news/669408/...

21.05.2025 08:40 — 👍 8    🔁 0    💬 1    📌 1
Google's AI Adventure - One Year On. A year after the introduction of AI Overviews, let's look at the impact on publishers' traffic and visibility and what's next in store for AI in search.

As usual, @polemicdigital.com is essential reading: "The theory is that extensive usage of LLMs drives increased Google usage with users wanting to verify the generative AI output with actual trusted sources, turning to Google to find those sources." www.seoforgooglenews.com/p/googles-ai...

08.05.2025 11:39 — 👍 4    🔁 2    💬 0    📌 0
Post image

I wrote this piece on three serious challenges journalism faces right now and how AI may make them worse if we don't change course

It is based on panels and private conversations in Perugia and includes my favourite quote from @chrismoranuk.bsky.social #ijf25
www.linkedin.com/pulse/journa...

14.04.2025 20:40 — 👍 24    🔁 9    💬 2    📌 0
Now the hard part: evaluating and integrating AI in newsrooms. As the integration of (generative) AI technologies into news organisations continues apace, the need for robust, transparent, and consistent evaluation frameworks has become increasingly pressing…

And I should add a plug for the panel I organise with @chrismoranuk.bsky.social, @rubinafillion.nytimes.com & Tess Jeffers.

Now the hard part: evaluating and integrating AI in newsrooms
14:00 - 14:50, Saturday 12/04/2025 – Teatro del Pavone
buff.ly/NCJkUCN

08.04.2025 09:05 — 👍 0    🔁 1    💬 1    📌 0

This is a thread of recent articles written for @techpolicypress.bsky.social that, I think, tell something of a story about AI and how we got to where we are.

27.03.2025 23:34 — 👍 51    🔁 22    💬 3    📌 3
A "mind map" in NotebookLM showing a breakdown of the concepts in the UK government AI and Copyright consultation

A "mind map" in NotebookLM showing a breakdown of the concepts in the UK government AI and Copyright consultation

Only just started to play with NotebookLM's new Mind Maps feature, but for journalists trying to parse large documents or collections of them, it seems an interesting way of seeing the broad landscape and then getting into the detail by clicking a node, as an alternative to asking specific questions

19.03.2025 12:32 — 👍 4    🔁 0    💬 0    📌 0
Gemini Canvas - write, code, & create in one space with AI. Gemini Canvas is your interactive space to write, code, and create. Go from idea to creation in minutes.

Innovation continues apace in the LLM world (where innovation = ruthlessly Sherlocking everyone else's features). Canvas is, in effect, Claude's Artifacts feature. For those without access to Claude, it's undoubtedly a useful and versatile tool, especially for product ideation gemini.google/overview/can...

19.03.2025 11:30 — 👍 2    🔁 1    💬 0    📌 0
AI Slop Is a Brute Force Attack on the Algorithms That Control Reality. Generative AI spammers are brute forcing the internet, and it is working.

Two things to highlight: the inevitability of this outcome (as predicted in GPT-4's System Card) and the shrewd observation that LLMs, for all their benefits, are the perfect tool for endlessly voracious social platforms that prioritise novelty, triviality and outrage www.404media.co/ai-slop-is-a...

17.03.2025 17:10 — 👍 4    🔁 1    💬 0    📌 1
University students describe how they adopt AI for writing and research in a general education course - Scientific Reports

"Students are not merely passive recipients of AI-generated content; instead, they are engaging actively with AI-based tools to augment their research processes, enhance comprehension, and construct well-informed, analytically-robust academic texts." www.nature.com/articles/s41...

17.03.2025 14:30 — 👍 0    🔁 0    💬 0    📌 0

This post is misleading. We were testing specifically to see whether the chatbots accurately identified the sources of excerpts from news articles. We did not intend to extrapolate these findings to the overall accuracy of the chatbots.

13.03.2025 16:38 — 👍 5    🔁 4    💬 2    📌 1
Why AI-assisted Literature Reviews Currently Fall Short. The tale of the long tail.

This is worth reading if you're interested in deep research AI tools: "Used uncritically, AI research assistants risk perpetuating a cycle where only easily discoverable sources are read and cited in research products" theimportantwork.substack.com/p/why-ai-ass...

13.03.2025 18:10 — 👍 2    🔁 1    💬 0    📌 0
Post image

Excellent to see this gap covered. Previously, saving your NotebookLM responses lost the elegant citations that make the tool as a whole so useful.

13.03.2025 14:54 — 👍 3    🔁 0    💬 0    📌 0
Revealed: How the UK tech secretary uses ChatGPT for policy advice. New Scientist has used freedom of information laws to obtain the ChatGPT records of Peter Kyle, the UK's technology secretary, in what is believed to be a world-first use of such legislation.

🚨 @newscientist.com SCOOP: I've exclusively obtained Peter Kyle's interactions with ChatGPT using FOI laws - in what I believe may be a world-first transparency release. The chatbot said "Lack of Government or Institutional Support" slowed UK AI adoption www.newscientist.com/article/2472...

13.03.2025 12:14 — 👍 182    🔁 117    💬 18    📌 93
Expanding AI Overviews and introducing AI Mode. AI Mode is a new generative AI experiment in Google Search.

I'm posting quite a lot less about incremental AI changes these days. But this is where most people's eyes should be in terms of impact, risk and corporate power struggles. With models becoming less distinct and moats shrinking, it all becomes about integration and scale blog.google/products/sea...

07.03.2025 08:15 — 👍 1    🔁 0    💬 0    📌 0
Post image

I am delighted to share this new paper on AI collaboration in Chinese news organisations, led by @qingxiaohci.bsky.social, which has just been accepted at #CHI25.

https://buff.ly/4gCc7hb

14.02.2025 18:11 — 👍 19    🔁 6    💬 2    📌 0
Post image

New from 404 Media: anyone can push updates to the Doge.gov site. Two sources independently found the issue; one made their own decision to deface the site. "THESE 'EXPERTS' LEFT THEIR DATABASE OPEN."

www.404media.co/anyone-can-p...

14.02.2025 07:06 — 👍 1206    🔁 432    💬 37    📌 84
The human in the loop. Human assessment in the age of AI.

This is a really thoughtful, balanced and interesting piece on practical, responsible ways that LLMs might make marking easier while retaining the core human skills that make it a valuable exercise substack.nomoremarking.com/p/the-human-...

01.02.2025 10:27 — 👍 6    🔁 1    💬 1    📌 1
Post image

Please define 'chutzpah'

29.01.2025 08:52 — 👍 19    🔁 3    💬 1    📌 0
Shake up of tech and AI usage across NHS and other public services to deliver plan for change. The government has announced a new plan to leverage technology and AI tools like

My main take from the UK government's AI announcement as a journalist and a British sitcom obsessive is that tech teams always have a wicked sense of humour when naming their products www.gov.uk/government/n...

21.01.2025 08:44 — 👍 10    🔁 4    💬 2    📌 0
Apple is pulling its AI-generated notifications for news after generating fake headlines | CNN Business. Apple is temporarily pulling its newly introduced artificial intelligence feature that summarizes news notifications after it repeatedly sent users error-filled headlines, sparking backlash from a new...

Good call from Apple edition.cnn.com/2025/01/16/m...

17.01.2025 07:21 — 👍 7    🔁 1    💬 1    📌 3

The 'gift' that just keeps giving. Here's what I wrote about this the time before last www.linkedin.com/pulse/alerts...

15.01.2025 19:41 — 👍 3    🔁 0    💬 0    📌 0
Behind the scenes, the company was also quietly dismantling a system to prevent the spread of misinformation. When the company announced on Jan. 7 that it would end its fact-checking partnerships, the company also instructed teams responsible for ranking content in the company's apps to stop penalizing misinformation, according to sources and an internal document obtained by Platformer.

The result is that the sort of viral hoaxes that ran roughshod over the platform during the 2016 US presidential election — "Pope Francis endorses Trump," Pizzagate, and all the rest — are now just as eligible for free amplification on Facebook, Instagram, and Threads as true stories.

NEW: Meta has quietly dismantled the system that prevented misinformation from spreading in the United States. Machine-learning classifiers that once identified viral hoaxes and limited their reach have now been switched off, Platformer has learned www.platformer.news/meta-ends-mi...

15.01.2025 00:51 — 👍 26273    🔁 9385    💬 1383    📌 981

OpenAI's economic blueprint: "Automobiles weren't invented [in America]. But in the UK where some of the earliest cars were introduced, growth was stunted by regulation." cdn.openai.com/global-affai...

14.01.2025 08:20 — 👍 3    🔁 1    💬 1    📌 1
There were several questions about whether it was really safe to hand NHS data to tech companies, to which the answer was always that we needed to stop worrying. Why we should trust people who built their previous AI engines by simply stealing the text of 183,000 books wasn't explained.

Afterwards I asked Matt Clifford, Starmer's AI adviser, about this, and he claimed ignorance of the biggest copyright theft in history, which seems a strange knowledge gap. Perhaps Clifford was trained on a different dataset. The suggestion that authors and artists might not be delighted to see their work stolen seemed to rile him, but just as he was becoming interestingly snippy about the way in which cynics were holding the country back, he was dragged away by a civil servant, so we never found out why it is so vital to human progress that my books are ripped off by a bunch of billionaires.

Hoping for answers, I turned to the report on the subject that Clifford has just produced, which contains a single reference to copyright, and complains that "uncertainty" over intellectual property rules are holding back the data miners. The government proposes to deal with this in much the same way that the Metropolitan Police has dealt with the "uncertainty" over actual property laws that was holding back central London bike thieves.

This was not, from the perspective of content creators, a terribly reassuring exchange. thecritic.co.uk/robot-dreams/

14.01.2025 07:32 — 👍 107    🔁 58    💬 4    📌 8
